Posted to user@flink.apache.org by "Liting Liu (litiliu)" <li...@cisco.com> on 2022/10/12 08:10:32 UTC

fail to mount hadoop-config-volume when using flink-k8s-operator

Hi, community:
  I'm using flink-k8s-operator v1.2.0 to deploy a Flink job. The "HADOOP_CONF_DIR" environment variable is set in the image that I built from flink:1.15. I found that the TaskManager pod was trying to mount a volume named "hadoop-config-volume" from a ConfigMap, but no ConfigMap with the name "hadoop-config-volume" had been created.

Do I need to remove the "HADOOP_CONF_DIR" environment variable in the Dockerfile?
If so, how should I specify the Hadoop conf instead?


Re: fail to mount hadoop-config-volume when using flink-k8s-operator

Posted by Yang Wang <da...@gmail.com>.
Currently, exporting the env "HADOOP_CONF_DIR" only works for the native
K8s integration. The Flink client will try to create the
hadoop-config-volume automatically if the Hadoop env is found.

If you want to set the HADOOP_CONF_DIR in the docker image, please also
make sure the specified hadoop conf directory exists in the image.
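For example, a minimal Dockerfile sketch along these lines (the /opt/hadoop/conf path and the local hadoop-conf/ directory are assumptions for illustration, not anything the thread specifies):

```dockerfile
# Hypothetical sketch: bake the Hadoop client configuration files
# (core-site.xml, hdfs-site.xml, ...) into the image, and point
# HADOOP_CONF_DIR at that directory so the path the env var names
# actually exists inside the container.
FROM flink:1.15
COPY hadoop-conf/ /opt/hadoop/conf/
ENV HADOOP_CONF_DIR=/opt/hadoop/conf
```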

For flink-k8s-operator, another feasible solution is to create a
Hadoop config ConfigMap manually and then use
"kubernetes.hadoop.conf.config-map.name" to mount it into the JobManager
and TaskManager pods.
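Concretely, that might look like the following sketch (the ConfigMap name "hadoop-config" and the local ./hadoop-conf/ directory are assumptions; "kubernetes.hadoop.conf.config-map.name" is the actual Flink config key):

```yaml
# First create the ConfigMap from the local Hadoop client config files, e.g.:
#   kubectl create configmap hadoop-config --from-file=./hadoop-conf/
# Then reference it from the FlinkDeployment's flinkConfiguration so the
# operator mounts it into the JobManager and TaskManager pods:
spec:
  flinkConfiguration:
    kubernetes.hadoop.conf.config-map.name: hadoop-config
```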


Best,
Yang
