Posted to issues@flink.apache.org by "Till Rohrmann (JIRA)" <ji...@apache.org> on 2019/05/07 15:28:00 UTC

[jira] [Resolved] (FLINK-12342) Yarn Resource Manager Acquires Too Many Containers

     [ https://issues.apache.org/jira/browse/FLINK-12342?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Till Rohrmann resolved FLINK-12342.
-----------------------------------
       Resolution: Fixed
    Fix Version/s: 1.8.1
                   1.9.0
                   1.7.3
     Release Note: With Flink 1.9.0 the Yarn heartbeat configuration parameter has been renamed from `yarn.heartbeat-delay` to `yarn.heartbeat.interval`.

Fixed via
1.9.0: 3871d4d2bf19d904252ed2fe8fe7cbad9af2c634
1.8.1: 1e25889796f1fb5e857b51158005e89d7a462595
1.7.3: f221542031a4725040c952c09a5c012b8fed2efb
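
For users picking up 1.9.0, the renamed key simply replaces the old one wherever the heartbeat interval is configured, e.g. in flink-conf.yaml or programmatically. A minimal sketch, assuming the value is still interpreted in seconds as with the old key and using Flink's Configuration API (the class name YarnHeartbeatConfigExample is purely illustrative):

    import org.apache.flink.configuration.Configuration;

    public class YarnHeartbeatConfigExample {
        public static void main(String[] args) {
            Configuration config = new Configuration();
            // Flink 1.9.0 and later: the renamed key.
            config.setString("yarn.heartbeat.interval", "5");
            // Flink 1.8.x and earlier used "yarn.heartbeat-delay" for the same setting.
            System.out.println(config.getString("yarn.heartbeat.interval", "unset"));
        }
    }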

> Yarn Resource Manager Acquires Too Many Containers
> --------------------------------------------------
>
>                 Key: FLINK-12342
>                 URL: https://issues.apache.org/jira/browse/FLINK-12342
>             Project: Flink
>          Issue Type: Bug
>          Components: Deployment / YARN
>    Affects Versions: 1.6.4, 1.7.2, 1.8.0
>         Environment: We run jobs on Flink release 1.6.3.
>            Reporter: Zhenqiu Huang
>            Assignee: Zhenqiu Huang
>            Priority: Major
>              Labels: pull-request-available
>             Fix For: 1.7.3, 1.9.0, 1.8.1
>
>         Attachments: Screen Shot 2019-04-29 at 12.06.23 AM.png, container.log, flink-1.4.png, flink-1.6.png
>
>          Time Spent: 20m
>  Remaining Estimate: 0h
>
> In the current implementation of YarnFlinkResourceManager, containers are acquired one by one as requests arrive from the SlotManager. This mechanism works when the job is small, say fewer than 32 containers. If the job needs 256 containers, the containers cannot be allocated immediately, and the pending requests in the AMRMClient are not removed accordingly. We observed that the AMRMClient then asks for the current number of pending requests + 1 (the new request from the SlotManager) containers. As a result, during the startup of such a job, 4000+ containers were requested. If an external dependency issue occurs, for example slow HDFS access, the whole job is blocked without getting enough resources and is finally killed with a SlotManager request timeout.
> Thus, we should use the total number of containers already asked for, rather than the pending requests in the AMRMClient, as the threshold for deciding whether one more resource request needs to be added (a sketch of this accounting follows below).
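
The fix described in the last paragraph amounts to bookkeeping on the resource manager side: track how many containers have already been requested (pending plus allocated) and only add a new container request to the AMRMClient when that total falls short of the demand. Below is a minimal, hypothetical sketch of this accounting; the class and method names are illustrative and are not Flink's actual YarnResourceManager code.

    public final class ContainerRequestTracker {

        // Containers requested from YARN but not yet allocated.
        private int numPendingContainerRequests = 0;

        // Containers that YARN has already allocated to us.
        private int numAllocatedContainers = 0;

        /**
         * Called for each slot request coming from the SlotManager. A new container
         * is requested only if the containers we have already asked for (pending plus
         * allocated) do not cover the demand; that is, the "total number of containers
         * asked" is the threshold, not the possibly stale pending collection held
         * inside the AMRMClient.
         */
        public synchronized boolean maybeRequestContainer(int requiredContainers) {
            int totalRequested = numPendingContainerRequests + numAllocatedContainers;
            if (totalRequested >= requiredContainers) {
                return false; // demand already covered by outstanding requests
            }
            numPendingContainerRequests++;
            // In the real resource manager this is where
            // AMRMClientAsync#addContainerRequest(...) would be called.
            return true;
        }

        /** Called when YARN reports a newly allocated container. */
        public synchronized void onContainerAllocated() {
            if (numPendingContainerRequests > 0) {
                numPendingContainerRequests--;
                // The matching request should also be removed from the AMRMClient
                // (removeContainerRequest) so it is not asked for again.
            }
            numAllocatedContainers++;
        }

        public static void main(String[] args) {
            ContainerRequestTracker tracker = new ContainerRequestTracker();
            // 256 slot requests arrive before any container has been allocated.
            for (int i = 0; i < 256; i++) {
                tracker.maybeRequestContainer(256);
            }
            // Prints 256: exactly one outstanding request per required container.
            System.out.println("pending requests = " + tracker.numPendingContainerRequests);
        }
    }

With this check, 256 slot requests arriving before any allocation result in exactly 256 outstanding container requests, instead of the request count snowballing as the lagging pending collection inside the AMRMClient is re-counted.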



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)