Posted to user-zh@flink.apache.org by Chenyu Zheng <ch...@hulu.com.INVALID> on 2021/06/19 10:32:38 UTC
Flink v1.12.2 Kubernetes Session Mode fails to mount log4j.properties from the ConfigMap
Hello developers,

I have recently been trying to start Flink in Kubernetes Session Mode, but found that the log4j.properties in my ConfigMap is not mounted. Is this a bug? Is there a way to work around it and mount log4j.properties dynamically?

My YAML:
apiVersion: v1
data:
  flink-conf.yaml: |-
    taskmanager.numberOfTaskSlots: 1
    blob.server.port: 6124
    kubernetes.rest-service.exposed.type: ClusterIP
    kubernetes.jobmanager.cpu: 1.00
    high-availability.storageDir: s3p://hulu-caposv2-flink-s3-bucket/session-cluster-test/ha-backup/
    queryable-state.proxy.ports: 6125
    kubernetes.service-account: stream-app
    high-availability: org.apache.flink.kubernetes.highavailability.KubernetesHaServicesFactory
    jobmanager.memory.process.size: 1024m
    taskmanager.memory.process.size: 1024m
    kubernetes.taskmanager.annotations: cluster-autoscaler.kubernetes.io/safe-to-evict:false
    kubernetes.namespace: test123
    restart-strategy: fixed-delay
    restart-strategy.fixed-delay.attempts: 5
    kubernetes.taskmanager.cpu: 1.00
    state.backend: filesystem
    parallelism.default: 4
    kubernetes.container.image: cubox.prod.hulu.com/proxy/flink:1.12.2-scala_2.11-java8-stdout7
    kubernetes.taskmanager.labels: capos_id:session-cluster-test,stream-component:jobmanager
    state.checkpoints.dir: s3p://hulu-caposv2-flink-s3-bucket/session-cluster-test/checkpoints/
    kubernetes.cluster-id: session-cluster-test
    kubernetes.jobmanager.annotations: cluster-autoscaler.kubernetes.io/safe-to-evict:false
    state.savepoints.dir: s3p://hulu-caposv2-flink-s3-bucket/session-cluster-test/savepoints/
    restart-strategy.fixed-delay.delay: 15s
    taskmanager.rpc.port: 6122
    jobmanager.rpc.address: session-cluster-test-flink-jobmanager
    kubernetes.jobmanager.labels: capos_id:session-cluster-test,stream-component:jobmanager
    jobmanager.rpc.port: 6123
  log4j.properties: |-
    logger.kafka.name = org.apache.kafka
    logger.hadoop.level = INFO
    appender.rolling.type = RollingFile
    appender.rolling.filePattern = ${sys:log.file}.%i
    appender.rolling.layout.pattern = %d{yyyy-MM-dd HH:mm:ss,SSS} %-5p %-60c %x - %m%n
    logger.netty.name = org.apache.flink.shaded.akka.org.jboss.netty.channel.DefaultChannelPipeline
    rootLogger = INFO, rolling
    logger.akka.name = akka
    appender.rolling.strategy.type = DefaultRolloverStrategy
    logger.akka.level = INFO
    appender.rolling.append = false
    logger.hadoop.name = org.apache.hadoop
    appender.rolling.fileName = ${sys:log.file}
    appender.rolling.policies.type = Policies
    rootLogger.appenderRef.rolling.ref = RollingFileAppender
    logger.kafka.level = INFO
    appender.rolling.name = RollingFileAppender
    appender.rolling.layout.type = PatternLayout
    appender.rolling.policies.size.type = SizeBasedTriggeringPolicy
    appender.rolling.policies.size.size = 100MB
    appender.rolling.strategy.max = 10
    logger.netty.level = OFF
    logger.zookeeper.name = org.apache.zookeeper
    logger.zookeeper.level = INFO
kind: ConfigMap
metadata:
  labels:
    app: session-cluster-test
    capos_id: session-cluster-test
  name: session-cluster-test-flink-config
  namespace: test123
---
apiVersion: batch/v1
kind: Job
metadata:
  labels:
    capos_id: session-cluster-test
  name: session-cluster-test-flink-startup
  namespace: test123
spec:
  backoffLimit: 6
  completions: 1
  parallelism: 1
  template:
    metadata:
      annotations:
        caposv2.prod.hulu.com/streamAppSavepointId: "0"
        cluster-autoscaler.kubernetes.io/safe-to-evict: "false"
      creationTimestamp: null
      labels:
        capos_id: session-cluster-test
        stream-component: start-up
    spec:
      containers:
      - command:
        - ./bin/kubernetes-session.sh
        - -Dkubernetes.cluster-id=session-cluster-test
        image: cubox.prod.hulu.com/proxy/flink:1.12.2-scala_2.11-java8-stdout7
        imagePullPolicy: IfNotPresent
        name: flink-startup
        resources: {}
        securityContext:
          runAsUser: 9999
        terminationMessagePath: /dev/termination-log
        terminationMessagePolicy: File
        volumeMounts:
        - mountPath: /opt/flink/conf
          name: flink-config-volume
      dnsPolicy: ClusterFirst
      restartPolicy: Never
      schedulerName: default-scheduler
      securityContext: {}
      serviceAccount: stream-app
      serviceAccountName: stream-app
      terminationGracePeriodSeconds: 30
      volumes:
      - configMap:
          defaultMode: 420
          items:
          - key: flink-conf.yaml
            path: flink-conf.yaml
          - key: log4j.properties
            path: log4j.properties
          name: session-cluster-test-flink-config
        name: flink-config-volume
  ttlSecondsAfterFinished: 86400
The jobmanager container that gets started has no log4j.properties in its volume mount:
volumes:
- configMap:
    defaultMode: 420
    items:
    - key: flink-conf.yaml
      path: flink-conf.yaml
    name: flink-config-session-cluster-test
  name: flink-config-volume
And the conf directory is indeed missing the log configuration:
root@session-cluster-test-689b595f8f-dg4h6:/opt/flink# ls -l $FLINK_HOME/conf/
total 0
lrwxrwxrwx 1 root root 22 Jun 19 09:23 flink-conf.yaml -> ..data/flink-conf.yaml
After reading the source code, the problem seems to come from https://github.com/apache/flink/blob/release-1.13.1/flink-kubernetes/src/main/java/org/apache/flink/kubernetes/kubeclient/decorators/FlinkConfMountDecorator.java#L104
Here, only flink-conf.yaml is added to the container volume mount.
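If I am reading FlinkConfMountDecorator correctly, the list of ConfigMap keys projected into the pod is built from the log config files the client can see in its local conf directory, and only flink-conf.yaml is added unconditionally. A simplified, standalone sketch of what I believe the logic does (my own paraphrase with made-up names, not the actual Flink source):

```java
import java.io.File;
import java.util.ArrayList;
import java.util.List;

// Standalone sketch (paraphrased, NOT the real Flink code) of how the
// decorator appears to choose which ConfigMap keys become volume items.
public class ConfMountSketch {

    static final String FLINK_CONF_FILENAME = "flink-conf.yaml";
    static final String[] LOG_CONFIG_FILES = {"log4j.properties", "logback.xml"};

    // Returns the ConfigMap keys that would be projected into the container.
    static List<String> resolveMountedKeys(File clientConfDir) {
        List<String> keys = new ArrayList<>();
        // Log config files are included only if they exist on the client
        // side at the moment the pod spec is built.
        for (String name : LOG_CONFIG_FILES) {
            if (new File(clientConfDir, name).exists()) {
                keys.add(name);
            }
        }
        // flink-conf.yaml is always included.
        keys.add(FLINK_CONF_FILENAME);
        return keys;
    }

    public static void main(String[] args) {
        File confDir = new File(args.length > 0 ? args[0] : "/opt/flink/conf");
        System.out.println(resolveMountedKeys(confDir));
    }
}
```

If that reading is right, it would match the generated volume items above, which contain only flink-conf.yaml.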
I hope you can help with this, thank you!

Best regards!