Posted to issues@flink.apache.org by "Fan Xinpu (JIRA)" <ji...@apache.org> on 2019/01/02 09:57:00 UTC

[jira] [Commented] (FLINK-10848) Flink's Yarn ResourceManager can allocate too many excess containers

    [ https://issues.apache.org/jira/browse/FLINK-10848?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16731900#comment-16731900 ] 

Fan Xinpu commented on FLINK-10848:
-----------------------------------

Yes, there is no perfect point at which to remove a ContainerRequest immediately after it has been satisfied by the RM.

The scenario is as follows:
1. At time T, Flink requests n containers.
2. At time T+1, a container fails; because the n earlier requests are still pending in the AMRMClient, Flink now asks the RM for n+1 containers.
3. At time T+k (k > 1), Flink receives the allocated containers and removes the corresponding ContainerRequests.
4. At time T+(k+1), another container fails; Flink requests just 1 container.

In other words, step 4 is the ideal case, while step 2 requests excess containers.

Anyway, as mentioned above, there is no way to remove a ContainerRequest the instant it is satisfied. The workaround provided in this PR is an acceptable solution.
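
To make this concrete, here is a minimal sketch in plain Hadoop YARN client terms (not the actual code in this PR) of removing the matching ContainerRequest inside onContainersAllocated(), so the AMRMClient stops re-sending already-satisfied requests on later heartbeats. The class name RemoveOnAllocateHandler and the TaskManager resource values are illustrative assumptions; only the AMRMClientAsync / ContainerRequest calls are from the Hadoop YARN API.

{code:java}
import java.util.Collection;
import java.util.List;

import org.apache.hadoop.yarn.api.records.Container;
import org.apache.hadoop.yarn.api.records.ContainerStatus;
import org.apache.hadoop.yarn.api.records.NodeReport;
import org.apache.hadoop.yarn.api.records.Priority;
import org.apache.hadoop.yarn.api.records.Resource;
import org.apache.hadoop.yarn.api.records.ResourceRequest;
import org.apache.hadoop.yarn.client.api.AMRMClient.ContainerRequest;
import org.apache.hadoop.yarn.client.api.async.AMRMClientAsync;

// Illustrative handler, not Flink's YarnResourceManager itself.
public class RemoveOnAllocateHandler implements AMRMClientAsync.CallbackHandler {

    private final AMRMClientAsync<ContainerRequest> rmClient;

    public RemoveOnAllocateHandler(AMRMClientAsync<ContainerRequest> rmClient) {
        this.rmClient = rmClient;
    }

    @Override
    public void onContainersAllocated(List<Container> containers) {
        for (Container container : containers) {
            // Find pending requests matching the allocated container's
            // priority and capability, and remove exactly one of them, so it
            // is not sent to the RM again on the next heartbeat (step 3).
            List<? extends Collection<ContainerRequest>> matching =
                rmClient.getMatchingRequests(
                    container.getPriority(),
                    ResourceRequest.ANY,
                    container.getResource());
            if (!matching.isEmpty() && !matching.get(0).isEmpty()) {
                ContainerRequest satisfied = matching.get(0).iterator().next();
                rmClient.removeContainerRequest(satisfied);
            }
            // ... launch the TaskManager in this container ...
        }
    }

    @Override
    public void onContainersCompleted(List<ContainerStatus> statuses) {
        // Simplified: a real handler would inspect the exit status first.
        // Because satisfied requests were removed above, only the delta is
        // outstanding here (step 4 of the scenario rather than step 2).
        for (ContainerStatus status : statuses) {
            rmClient.addContainerRequest(new ContainerRequest(
                Resource.newInstance(1024, 1),  // illustrative TM resources
                null, null,
                Priority.newInstance(0)));
        }
    }

    @Override public void onShutdownRequest() {}
    @Override public void onNodesUpdated(List<NodeReport> updatedNodes) {}
    @Override public void onError(Throwable e) {}
    @Override public float getProgress() { return 0; }
}
{code}

Even with this, the window between the RM satisfying a request and the AM removing it (between steps 2 and 3 above) cannot be eliminated, which is why some excess containers can still be allocated and must be returned.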

> Flink's Yarn ResourceManager can allocate too many excess containers
> --------------------------------------------------------------------
>
>                 Key: FLINK-10848
>                 URL: https://issues.apache.org/jira/browse/FLINK-10848
>             Project: Flink
>          Issue Type: Bug
>          Components: YARN
>    Affects Versions: 1.3.3, 1.4.2, 1.5.5, 1.6.2
>            Reporter: Shuyi Chen
>            Assignee: Shuyi Chen
>            Priority: Major
>              Labels: pull-request-available
>
> Currently, both the YarnFlinkResourceManager and YarnResourceManager do not call removeContainerRequest() on container allocation success. Because the YARN AM-RM protocol is not a delta protocol (please see YARN-1902), AMRMClient will keep all ContainerRequests that are added and send them to RM.
> In production, we observe the following that verifies the theory: 16 containers are allocated and used upon cluster startup; when a TM is killed, 17 containers are allocated, 1 container is used, and 16 excess containers are returned; when another TM is killed, 18 containers are allocated, 1 container is used, and 17 excess containers are returned.


