Posted to issues@flink.apache.org by "Flink Jira Bot (Jira)" <ji...@apache.org> on 2022/07/23 22:38:00 UTC

[jira] [Updated] (FLINK-27750) The configuration option JobManagerOptions.TOTAL_PROCESS_MEMORY (jobmanager.memory.process.size) does not take effect

     [ https://issues.apache.org/jira/browse/FLINK-27750?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Flink Jira Bot updated FLINK-27750:
-----------------------------------
    Labels: TOTAL_PROCESS_MEMORY jobmanager.memory.process.size stale-major  (was: TOTAL_PROCESS_MEMORY jobmanager.memory.process.size)

I am the [Flink Jira Bot|https://github.com/apache/flink-jira-bot/] and I help the community manage its development. I see this issue has been marked as Major but is unassigned, and neither it nor its Sub-Tasks have been updated for 60 days. I have gone ahead and added a "stale-major" label to the issue. If this ticket is still Major, please either assign yourself or give an update. Afterwards, please remove the label, or in 7 days the issue will be deprioritized.


> The configuration option JobManagerOptions.TOTAL_PROCESS_MEMORY (jobmanager.memory.process.size) does not take effect
> ----------------------------------------------------------------------------------------------------------------------
>
>                 Key: FLINK-27750
>                 URL: https://issues.apache.org/jira/browse/FLINK-27750
>             Project: Flink
>          Issue Type: Bug
>          Components: Deployment / Kubernetes
>    Affects Versions: 1.14.4
>            Reporter: dong
>            Priority: Major
>              Labels: TOTAL_PROCESS_MEMORY, jobmanager.memory.process.size, stale-major
>         Attachments: image-2022-05-24-14-00-39-255.png, image-2022-05-24-14-18-30-063.png
>
>
> I deploy a job in Kubernetes application mode by constructing a KubernetesClusterDescriptor and a Fabric8FlinkKubeClient; the code is shown below.
> {code:java}
> //Initialize flinkConfiguration and set options including TOTAL_PROCESS_MEMORY
> Configuration flinkConfiguration = GlobalConfiguration.loadConfiguration();
> flinkConfiguration.set(DeploymentOptions.TARGET, KubernetesDeploymentTarget.APPLICATION.getName())
> .set(PipelineOptions.JARS, Collections.singletonList(flinkDistJar))
> .set(KubernetesConfigOptions.CLUSTER_ID, "APPLICATION1")
> .set(KubernetesConfigOptions.CONTAINER_IMAGE, "img_url")
> .set(KubernetesConfigOptions.CONTAINER_IMAGE_PULL_POLICY, KubernetesConfigOptions.ImagePullPolicy.Always)
> .set(JobManagerOptions.TOTAL_PROCESS_MEMORY, MemorySize.parse("1024M"))
> .set...;
> //Construct kubernetesClusterDescriptor and Fabric8FlinkKubeClient
> KubernetesClusterDescriptor kubernetesClusterDescriptor = new KubernetesClusterDescriptor(
>         flinkConfiguration,
>         new Fabric8FlinkKubeClient(
>                 flinkConfiguration,
>                 new DefaultKubernetesClient(),
>                 Executors.newFixedThreadPool(2)));
> ApplicationConfiguration applicationConfiguration = new ApplicationConfiguration(execArgs, null);
> //deploy kubernetes application mode of job
> ClusterClient<String> clusterClient = kubernetesClusterDescriptor
>         .deployApplicationCluster(
>                 new ClusterSpecification.ClusterSpecificationBuilder().createClusterSpecification(),
>                 applicationConfiguration)
>         .getClusterClient();
> String clusterId = clusterClient.getClusterId(); {code}
> As shown above, I set TOTAL_PROCESS_MEMORY to 1024M. The Flink UI displays the following memory configuration, which is clearly correct (448 + 128 + 256 + 192 = 1024).
> !image-2022-05-24-14-00-39-255.png|width=759,height=255!
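> For reference, this breakdown can also be reproduced from the Configuration alone. The sketch below uses Flink's internal JobManagerProcessUtils from flink-runtime; the method names are assumed from the 1.14 sources, so treat it as illustrative rather than authoritative:
> {code:java}
> // Illustrative only: derive the JobManager memory spec from the same
> // Configuration that the UI uses, and print the individual components.
> JobManagerProcessSpec spec = JobManagerProcessUtils.processSpecFromConfig(flinkConfiguration);
> System.out.println("JVM Heap:      " + spec.getJvmHeapMemorySize());      // 448M here
> System.out.println("Off-Heap:      " + spec.getJvmDirectMemorySize());    // 128M (default)
> System.out.println("JVM Metaspace: " + spec.getJvmMetaspaceSize());       // 256M (default)
> System.out.println("JVM Overhead:  " + spec.getJvmOverheadSize());        // 192M (default minimum)
> System.out.println("Total Process: " + spec.getTotalProcessMemorySize()); // 1024M {code}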
> But when I inspect the JobManager deployment using {_}kubectl describe deployment{_}, I find that the JobManager pod memory is always 768M, although it should equal TOTAL_PROCESS_MEMORY (1024M). And no matter how I adjust the TOTAL_PROCESS_MEMORY parameter, the pod size does not change.
> !image-2022-05-24-14-18-30-063.png!
> As a result, the pod is OOMKilled as soon as JobManager memory usage exceeds 768M.
> I expect the JobManager pod memory to equal TOTAL_PROCESS_MEMORY, so that I can adjust the memory to suit my needs.
> Is there something wrong with my configuration, or should the JobManager's pod use the same amount of memory as TOTAL_PROCESS_MEMORY?
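> A possible cause I am still investigating: the pod's resource request appears to be taken from the ClusterSpecification passed to deployApplicationCluster, not from the Configuration, and ClusterSpecification.ClusterSpecificationBuilder defaults masterMemoryMB to 768, which matches the pod size I observe. If that is the case, setting it explicitly from TOTAL_PROCESS_MEMORY should align the pod with the Flink memory model. This is an untested sketch, assuming the builder's setMasterMemoryMB setter behaves as in the 1.14 sources:
> {code:java}
> // Untested sketch: derive the ClusterSpecification's master memory from the
> // configured TOTAL_PROCESS_MEMORY instead of relying on the 768M builder default.
> MemorySize jmTotalMemory = flinkConfiguration.get(JobManagerOptions.TOTAL_PROCESS_MEMORY);
> ClusterSpecification clusterSpecification =
>         new ClusterSpecification.ClusterSpecificationBuilder()
>                 .setMasterMemoryMB(jmTotalMemory.getMebiBytes())
>                 .createClusterSpecification();
> ClusterClient<String> clusterClient = kubernetesClusterDescriptor
>         .deployApplicationCluster(clusterSpecification, applicationConfiguration)
>         .getClusterClient(); {code}
> For comparison, the flink CLI path appears to build its ClusterSpecification from the Configuration (see KubernetesClusterClientFactory), which would explain why the mismatch only shows up when the descriptor is constructed manually.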



--
This message was sent by Atlassian Jira
(v8.20.10#820010)