Posted to user@flink.apache.org by Geng Biao <bi...@gmail.com> on 2022/05/18 13:08:18 UTC

Re: Flink Job Execution issue at Yarn

Hi Anitha,

If I understand correctly, your JM/TM process memory is larger than the maximum physical memory of a worker node (i.e. 40000m > 32*1024 = 32768m). So for a normally configured YARN cluster, it should be impossible to launch the Flink JM/TM on the worker nodes, due to the limits of `yarn.scheduler.maximum-allocation-mb` and `yarn.nodemanager.resource.memory-mb`. It would be better to post the JM/TM logs, if any of them exist, to provide more information.
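For reference, the two YARN limits mentioned above live in yarn-site.xml. The sketch below is illustrative only; the 28 GB values are an assumption for a 32 GB worker node (leaving headroom for the OS and daemons), not the actual settings of this Dataproc cluster:

```xml
<!-- yarn-site.xml (illustrative values, assuming ~32 GB worker nodes) -->
<property>
  <!-- Largest single container YARN will grant; a 40000m request exceeds this -->
  <name>yarn.scheduler.maximum-allocation-mb</name>
  <value>28672</value>
</property>
<property>
  <!-- Total memory each NodeManager offers to containers on its node -->
  <name>yarn.nodemanager.resource.memory-mb</name>
  <value>28672</value>
</property>
```

A container request larger than `yarn.scheduler.maximum-allocation-mb` is rejected by the ResourceManager, which would explain why no TaskManager containers ever appear.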

Best,
Biao Geng

From: Anitha Thankappan <an...@quantiphi.com>
Date: Wednesday, May 18, 2022, 8:26 PM
To: user@flink.apache.org <us...@flink.apache.org>
Subject: Flink Job Execution issue at Yarn

Hi,

We are using the command below to submit a Flink application job to a GCP Dataproc cluster using YARN:
flink run-application -t yarn-application <jarname>.jar

Our cluster has 1 master node with 64 GB and 10 worker nodes with 32 GB each.
The Flink configurations given are:
jobmanager.memory.process.size: 40000m
taskmanager.memory.process.size: 40000m
taskmanager.memory.network.max: 10000m
taskmanager.numberOfTaskSlots: 3
parallelism.default: 10
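For comparison, a configuration whose process sizes fit within a 32 GB worker might look like the sketch below. The specific sizes are hypothetical examples, not recommendations for this workload:

```yaml
# flink-conf.yaml — illustrative sizes only; assumes ~28 GB of each
# 32 GB worker node is actually available to YARN containers
jobmanager.memory.process.size: 4096m     # well under the per-container limit
taskmanager.memory.process.size: 24576m   # fits on a single 32 GB worker
taskmanager.numberOfTaskSlots: 3
parallelism.default: 10
```

The key constraint is that each `*.memory.process.size` must not exceed YARN's maximum container size on the worker nodes.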

The issue we are facing is that the job terminates without any error. We also noticed that only one container gets created at the YARN level, so it seems the job is running only on the JobManager.

Please help with this.

Thanks and Regards,
Anitha Thankappan
