Posted to yarn-issues@hadoop.apache.org by "Junping Du (JIRA)" <ji...@apache.org> on 2017/03/17 09:07:41 UTC

[jira] [Updated] (YARN-3126) FairScheduler: queue's usedResource is always more than the maxResource limit

     [ https://issues.apache.org/jira/browse/YARN-3126?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Junping Du updated YARN-3126:
-----------------------------
    Fix Version/s:     (was: 2.8.0)

> FairScheduler: queue's usedResource is always more than the maxResource limit
> -----------------------------------------------------------------------------
>
>                 Key: YARN-3126
>                 URL: https://issues.apache.org/jira/browse/YARN-3126
>             Project: Hadoop YARN
>          Issue Type: Bug
>          Components: fairscheduler
>    Affects Versions: 2.3.0
>         Environment: Hadoop 2.3.0, fair scheduler, Spark 1.1.0
>            Reporter: Xia Hu
>            Assignee: Yufei Gu
>              Labels: BB2015-05-TBR, assignContainer, fairscheduler, resources
>             Fix For: trunk-win
>
>         Attachments: resourcelimit-02.patch, resourcelimit.patch, resourcelimit-test.patch
>
>
> When submitting a Spark application (in both spark-on-yarn-cluster and spark-on-yarn-client mode), the queue's usedResources assigned by FairScheduler can end up exceeding the queue's maxResources limit.
> From reading the FairScheduler code, I believe this happens because the requested resources are not checked when a container is assigned.
> Here is the detail:
> 1. Choose a queue. In this step, assignContainerPreCheck verifies whether the queue's usedResource already exceeds its max.
> 2. Then choose an app in the chosen queue.
> 3. Then choose a container. And here is the problem: there is no check whether assigning this container would push the queue's resources over its max limit. If a queue's usedResource is 13G and its maxResource limit is 16G, a container asking for 4G of resources can still be assigned successfully (see the sketch after this description).
> This problem always shows up with Spark applications, because we can ask for different container resources in different applications.
> By the way, I have already applied the patch from YARN-2083.
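
The missing check in step 3 can be illustrated with a minimal, self-contained Java sketch. This is an editor's illustration, not the actual FairScheduler code: the Resource class and fitsWithinMax helper below are hypothetical simplifications of org.apache.hadoop.yarn.api.records.Resource and the queue accounting, using the 13G/16G/4G numbers from the report above.

    // Hypothetical sketch of the missing pre-assignment check.
    // Not the actual FairScheduler types or method names.
    public class QueueLimitCheck {

        // Simplified stand-in for org.apache.hadoop.yarn.api.records.Resource.
        static final class Resource {
            final long memoryMb;
            final int vcores;
            Resource(long memoryMb, int vcores) {
                this.memoryMb = memoryMb;
                this.vcores = vcores;
            }
        }

        // Returns true only if assigning 'request' keeps the queue within 'max'.
        static boolean fitsWithinMax(Resource used, Resource request, Resource max) {
            return used.memoryMb + request.memoryMb <= max.memoryMb
                && used.vcores + request.vcores <= max.vcores;
        }

        public static void main(String[] args) {
            Resource used    = new Resource(13 * 1024, 4);  // 13G already used by the queue
            Resource max     = new Resource(16 * 1024, 16); // 16G queue maximum
            Resource request = new Resource(4 * 1024, 1);   // incoming 4G container request
            // Prints false: 13G + 4G = 17G exceeds the 16G limit, so the
            // container should be rejected rather than assigned.
            System.out.println(fitsWithinMax(used, request, max));
        }
    }

With a check like this applied in the container-assignment path (step 3), the 4G request in the example would be rejected instead of pushing the queue to 17G, which is the behavior the attached patches aim for.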



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

---------------------------------------------------------------------
To unsubscribe, e-mail: yarn-issues-unsubscribe@hadoop.apache.org
For additional commands, e-mail: yarn-issues-help@hadoop.apache.org