Posted to yarn-issues@hadoop.apache.org by "Wangda Tan (JIRA)" <ji...@apache.org> on 2018/08/15 20:35:00 UTC

[jira] [Comment Edited] (YARN-8668) Inconsistency between capacity and fair scheduler in the aspect of computing node available resource

    [ https://issues.apache.org/jira/browse/YARN-8668?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16581574#comment-16581574 ] 

Wangda Tan edited comment on YARN-8668 at 8/15/18 8:34 PM:
-----------------------------------------------------------

Thanks [~Cyl] for reporting the issue; this is by design in CS.

Using computeAvailableContainers gives the correct result with either DominantResourceCalculator or DefaultResourceCalculator enabled. Using fitsIn(res, res) only works when DominantResourceCalculator is enabled. To me, the correct solution is to use fitsIn(resourceCalculator, res, res).
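A minimal sketch of the difference (hypothetical values; the overload names follow the discussion above and should be treated as assumptions):

{code}
// Hypothetical node state: plenty of memory, no free vcores.
Resource available = Resource.newInstance(8192, 0);          // 8192 MB, 0 vcores
Resource minimumAllocation = Resource.newInstance(1024, 1);  // 1024 MB, 1 vcore

// DefaultResourceCalculator looks at memory only:
//   computeAvailableContainers(available, minimumAllocation) == 8
// DominantResourceCalculator looks at every dimension:
//   computeAvailableContainers(available, minimumAllocation) == 0

// Resources.fitsIn(smaller, bigger) compares every dimension, so it
// matches DominantResourceCalculator semantics only:
//   Resources.fitsIn(minimumAllocation, available) == false
// Delegating to the configured calculator keeps both cases consistent:
//   Resources.fitsIn(resourceCalculator, minimumAllocation, available)
{code}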

I don't think a fix is required in CS.


was (Author: leftnoteasy):
Thanks [~Cyl] for reporting the issue; this is by design in CS.

Using computeAvailableContainers gives the correct result with either DominantResourceCalculator or DefaultResourceCalculator enabled. Using fitsIn only works when DominantResourceCalculator is enabled.

I don't think a fix is required in CS.

> Inconsistency between capacity and fair scheduler in the aspect of computing node available resource
> ----------------------------------------------------------------------------------------------------
>
>                 Key: YARN-8668
>                 URL: https://issues.apache.org/jira/browse/YARN-8668
>             Project: Hadoop YARN
>          Issue Type: Bug
>            Reporter: Yeliang Cang
>            Assignee: Yeliang Cang
>            Priority: Major
>              Labels: capacityscheduler
>         Attachments: YARN-8668.001.patch
>
>
> We have observed that, with CapacityScheduler and DefaultResourceCalculator, when a node has a large amount of memory and is running a heavy workload, the node's available vcores can become negative!
> I noticed that CapacityScheduler.java uses the code below to calculate the available resources for allocating containers:
> {code}
> if (calculator.computeAvailableContainers(Resources
>     .add(node.getUnallocatedResource(), node.getTotalKillableResources()),
>     minimumAllocation) <= 0) {
>   if (LOG.isDebugEnabled()) {
>     LOG.debug("This node or this node partition doesn't have available or"
>         + " killable resource");
>   }
> {code}
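> A minimal illustration of why the vcores go negative (hypothetical numbers, assuming the standard DefaultResourceCalculator behaviour):
> {code}
> // Hypothetical node: 100 GB memory, 4 vcores.
> Resource unallocated = Resource.newInstance(102400, 4);
> Resource minimumAllocation = Resource.newInstance(1024, 1);
>
> // DefaultResourceCalculator divides memory only:
> //   102400 / 1024 = 100, regardless of vcores.
> long available = new DefaultResourceCalculator()
>     .computeAvailableContainers(unallocated, minimumAllocation);
> // available == 100 > 0, so the check above keeps passing even after
> // all 4 vcores are allocated; each further container pushes the
> // node's available vcores below zero.
> {code}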
> while in the FairScheduler's FSAppAttempt.java, similar code is found:
> {code}
> // Can we allocate a container on this node?
> if (Resources.fitsIn(capability, available)) {
>   ...
> }
> {code}
> Why the inconsistency? I think we should use Resources.fitsIn(smaller, bigger) in CapacityScheduler instead!
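> A sketch of what that replacement could look like (hypothetical; the attached YARN-8668.001.patch is authoritative):
> {code}
> // Hypothetical sketch only, not the attached patch.
> Resource totalAvailable = Resources.add(
>     node.getUnallocatedResource(), node.getTotalKillableResources());
> // fitsIn(smaller, bigger) checks every dimension (memory and vcores),
> // so a node with free memory but no free vcores is skipped too.
> if (!Resources.fitsIn(minimumAllocation, totalAvailable)) {
>   if (LOG.isDebugEnabled()) {
>     LOG.debug("This node or this node partition doesn't have available or"
>         + " killable resource");
>   }
> }
> {code}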


