Posted to issues@flink.apache.org by "David Morávek (Jira)" <ji...@apache.org> on 2021/10/08 16:03:00 UTC

[jira] [Commented] (FLINK-21383) Docker image does not play well together with ConfigMap based flink-conf.yamls

    [ https://issues.apache.org/jira/browse/FLINK-21383?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17426252#comment-17426252 ] 

David Morávek commented on FLINK-21383:
---------------------------------------

A possible workaround is to tweak the Kubernetes deployment manifest so that an init container creates a mutable copy of the mounted ConfigMap. I think this may actually be the correct approach rather than a workaround. Example:
{code:yaml}
apiVersion: apps/v1
kind: Deployment
metadata:
  name: flink-taskmanager
spec:
  replicas: 2
  selector:
    matchLabels:
      app: flink
      component: taskmanager
  template:
    metadata:
      labels:
        app: flink
        component: taskmanager
    spec:
      containers:
      - name: taskmanager
        image: apache/flink:1.14.0-scala_2.12-java8
        env:
        - name: JOB_MANAGER_RPC_ADDRESS
          value: flink-jobmanager
        - name: TASK_MANAGER_NUMBER_OF_TASK_SLOTS
          value: "2"
        args: ["taskmanager"]
        ports:
        - containerPort: 6122
          name: rpc
        - containerPort: 6125
          name: query-state
        livenessProbe:
          tcpSocket:
            port: 6122
          initialDelaySeconds: 30
          periodSeconds: 60
        volumeMounts:
        - name: flink-config-volume
          mountPath: /opt/flink/conf
        - name: job-artifacts-volume
          mountPath: /opt/flink/usrlib
        securityContext:
          runAsUser: 9999
      initContainers:
      # Copy the read-only ConfigMap contents into the writable emptyDir volume.
      # `cp -L` dereferences the symlinks Kubernetes creates for ConfigMap keys,
      # so the result is plain files that docker-entrypoint.sh can modify.
      - name: init-conf-directory
        image: busybox:stable
        command: ['sh', '-c', 'cp -L /opt/flink/conf-readonly/* /opt/flink/conf && chown -R 9999:9999 /opt/flink/conf']
        volumeMounts:
        - name: flink-config-volume
          mountPath: /opt/flink/conf
        - name: flink-config-readonly-volume
          mountPath: /opt/flink/conf-readonly
      volumes:
      - name: flink-config-volume
        emptyDir: {}
      - name: flink-config-readonly-volume
        configMap:
          name: flink-config
          items:
          - key: flink-conf.yaml
            path: flink-conf.yaml
          - key: log4j-console.properties
            path: log4j-console.properties
      - name: job-artifacts-volume
        hostPath:
          path: /opt/usrlib
{code}
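The key detail is that Kubernetes mounts each ConfigMap key as a symlink into a read-only volume, so the entrypoint's in-place edits fail. A minimal local sketch of what the init container does (paths under /tmp are illustrative, not the real mount points):

{code:bash}
# Simulate the ConfigMap mount: the "file" is actually a symlink.
mkdir -p /tmp/conf-readonly /tmp/conf
echo "taskmanager.numberOfTaskSlots: 2" > /tmp/flink-conf.source.yaml
ln -sf /tmp/flink-conf.source.yaml /tmp/conf-readonly/flink-conf.yaml

# The init container's copy step: `cp -L` follows the symlink and
# writes a regular, writable file into the emptyDir-backed directory.
cp -L /tmp/conf-readonly/* /tmp/conf

# The copy is a plain file, so the entrypoint can now modify it in place.
test -f /tmp/conf/flink-conf.yaml && ! test -L /tmp/conf/flink-conf.yaml
{code}

A plain `cp` without `-L` would copy the symlink itself, which would still point back at the read-only source, so the dereferencing flag is what makes the copy mutable.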

> Docker image does not play well together with ConfigMap based flink-conf.yamls
> ------------------------------------------------------------------------------
>
>                 Key: FLINK-21383
>                 URL: https://issues.apache.org/jira/browse/FLINK-21383
>             Project: Flink
>          Issue Type: Bug
>          Components: Deployment / Kubernetes, flink-docker
>    Affects Versions: 1.11.3, 1.12.1, 1.13.0
>            Reporter: Till Rohrmann
>            Priority: Minor
>              Labels: auto-deprioritized-major, usability
>
> Flink's Docker image does not play well together with ConfigMap based flink-conf.yamls. The {{docker-entrypoint.sh}} script offers a few env variables to overwrite configuration values (e.g. {{FLINK_PROPERTIES}}, {{JOB_MANAGER_RPC_ADDRESS}}, etc.). The problem is that the entrypoint script assumes that it can modify the existing {{flink-conf.yaml}}. This is not the case if the {{flink-conf.yaml}} is based on a {{ConfigMap}}.
> Making things worse, failures to update the {{flink-conf.yaml}} are not reported. Moreover, the called {{jobmanager.sh}} and {{taskmanager.sh}} scripts don't support passing dynamic configuration properties into the processes.
> I think the problem is that our assumption that we can modify the {{flink-conf.yaml}} does not always hold true. If we updated the final configuration from within the Flink process (dynamic properties and env variables), then this problem could be avoided.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)