Posted to yarn-issues@hadoop.apache.org by "Szilard Nemeth (JIRA)" <ji...@apache.org> on 2018/11/03 17:21:00 UTC

[jira] [Comment Edited] (YARN-8059) Resource type is ignored when FS decide to preempt

    [ https://issues.apache.org/jira/browse/YARN-8059?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16674127#comment-16674127 ] 

Szilard Nemeth edited comment on YARN-8059 at 11/3/18 5:20 PM:
---------------------------------------------------------------

Thanks [~wilfreds] and [~haibochen] for your comments!

[~wilfreds]: Indeed, in most of the test cases memory was defining the number of containers and was the dominant resource.

With the new patch I fixed this behaviour by adding 5 times the amount of memory to each node, making sure that the custom resource becomes the dominant one.
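For illustration, here is a tiny standalone sketch (plain Java, made-up numbers, not the actual test code) of why scaling the node memory up flips the dominant resource: the dominant resource is the one with the largest request/capacity share.
{code:java}
// Made-up numbers, only to illustrate the idea behind the test change:
// the dominant resource is the one with the largest request/capacity share.
public class DominantResourceExample {

  static String dominant(long reqMem, long nodeMem, long reqCustom, long nodeCustom) {
    double memShare = (double) reqMem / nodeMem;
    double customShare = (double) reqCustom / nodeCustom;
    return memShare >= customShare ? "memory" : "custom resource";
  }

  public static void main(String[] args) {
    // Original node: 8 GB memory, 10 units of the custom resource.
    System.out.println(dominant(4096, 8192, 4, 10));     // memory (0.5 vs 0.4)
    // Same request against a node with 5x the memory: the custom resource wins.
    System.out.println(dominant(4096, 5 * 8192, 4, 10)); // custom resource (0.1 vs 0.4)
  }
}
{code}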

[~haibochen]: I didn't like the boolean check either, so I separated the custom resource type test cases completely from the others. This generated a lot of changes, so the size of the patch increased, but I guess we have to choose between the "boolean flag" way and the "more changes" way of solving the issue.

It took me a lot of effort to finally figure out what caused the tests to fail when the custom resource became the dominant resource, so I'm summarizing my findings here:

In {{FSPreemptionThread#identifyContainersToPreempt}}, the loop goes through the results of {{FSAppAttempt#getStarvedResourceRequests}}.
 This is all initiated periodically from the Preemption thread. 
 With some debug logs added, I realized that {{FSAppAttempt#getStarvedResourceRequests}} returns an empty list in too many cases, which means containers are not getting preempted. 
 In this method, {{numContainersThatFit}} is calculated by this call:
{code:java}
int numContainersThatFit = (int) Math.floor(
    Resources.ratio(scheduler.getResourceCalculator(),
        pending, rr.getCapability()));{code}
{{FairScheduler.getResourceCalculator()}} simply returns an instance of {{DefaultResourceCalculator}}, which computes the ratio by looking only at the memory values of the 2 Resource objects passed in. This means that custom resources are not checked at all!
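If I read the code correctly, {{DefaultResourceCalculator#ratio}} boils down to a memory-only division, roughly like this (simplified from memory, not a verbatim copy):
{code:java}
// Roughly what DefaultResourceCalculator#ratio does (simplified):
// only the memory component is compared; vCores and custom resources are ignored.
public float ratio(Resource a, Resource b) {
  return (float) a.getMemorySize() / b.getMemorySize();
}
{code}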
So the first idea was to use the {{DominantResourceCalculator}} instance of FS.
It had some issues, too: in some cases, where the Resource's custom resource value was null, a division by zero happened, ruining the previously calculated results for the other resource types.
So {{DominantResourceCalculator}} never checked whether the value it divides by is zero. It is debatable whether this is worth a new jira or not, but I included the fix in my patch.
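A minimal sketch of the kind of guard I mean (the array-based shape and method name are mine; the real code works on {{Resource}}/{{ResourceInformation}} objects):
{code:java}
// Sketch only: the point is the zero check before dividing, so an unset
// custom resource value no longer produces a division by zero that poisons
// the already-computed ratios of the other resource types.
static float maxRatioWithZeroGuard(long[] numerators, long[] denominators) {
  float max = 0.0f;
  for (int i = 0; i < numerators.length; i++) {
    if (denominators[i] == 0) {
      continue; // nothing to compare against for this resource type
    }
    max = Math.max(max, (float) numerators[i] / denominators[i]);
  }
  return max;
}
{code}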

After my fix in {{DominantResourceCalculator}}, I still had issues with the calculations, so not all of the tests were green yet.
 There is one special case of the ratio calculation: 
 Let's suppose we have 2 Resources:
 - {{Resource a: <memory:6144, vCores:1>}}
 - {{Resource b: <memory:10240, vCores:1>}}

Let's take an example of a call that computes {{numContainersThatFit}} (pasted above):
 {{pending}} has the value {{Resource<memory:6144, vCores:1>}} and
 {{rr.getCapability()}} has the value {{Resource<memory:10240, vCores:1>}}.
 In this case, {{DominantResourceCalculator}} returns 1: the ratio of the vCores is 1/1 = 1, dividing 6144 by 10240 gives 0.6, and since 1 is the larger of these two numbers, 1 is returned. I think this is wrong for the preemption calculation, as we want to take all the resources into account.
 I added a very simple check for this, please see the implementation of the newly added {{PreemptionResourceCalculator}}.
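For illustration, here is how the example numbers behave with a max-based ratio versus a min-based one. This is only my sketch of the idea; the method name and shape are mine, and the actual {{PreemptionResourceCalculator}} in the patch may look different:
{code:java}
// Sketch only: for preemption we want the request to fit in *every* resource
// dimension, so the minimum of the per-resource ratios is taken instead of
// the maximum that DominantResourceCalculator uses.
static float preemptionRatio(long[] pending, long[] capability) {
  float min = Float.MAX_VALUE;
  for (int i = 0; i < pending.length; i++) {
    if (capability[i] == 0) {
      continue; // this resource type is not part of the container capability
    }
    min = Math.min(min, (float) pending[i] / capability[i]);
  }
  return min == Float.MAX_VALUE ? 0.0f : min;
}

// With the example above, pending = <memory:6144, vCores:1> and
// capability = <memory:10240, vCores:1>:
//   max(6144/10240, 1/1) = 1.0 -> floor(1.0) = 1 container "fits"
//   min(6144/10240, 1/1) = 0.6 -> floor(0.6) = 0 containers fit
{code}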

 

Please share your thoughts about the latest patch! 



> Resource type is ignored when FS decide to preempt
> --------------------------------------------------
>
>                 Key: YARN-8059
>                 URL: https://issues.apache.org/jira/browse/YARN-8059
>             Project: Hadoop YARN
>          Issue Type: Bug
>          Components: fairscheduler
>    Affects Versions: 3.0.0
>            Reporter: Yufei Gu
>            Assignee: Szilard Nemeth
>            Priority: Major
>         Attachments: YARN-8059.001.patch, YARN-8059.002.patch, YARN-8059.003.patch, YARN-8059.004.patch, YARN-8059.005.patch, YARN-8059.006.patch
>
>
> Method Fairscheduler#shouldAttemptPreemption doesn't consider resources other than vcore and memory. We may need to rethink it in the resource type scenario. cc [~miklos.szegedi@cloudera.com], [~wilfreds] and [~snemeth].
> {code}
>     if (context.isPreemptionEnabled()) {
>       return (context.getPreemptionUtilizationThreshold() < Math.max(
>           (float) rootMetrics.getAllocatedMB() /
>               getClusterResource().getMemorySize(),
>           (float) rootMetrics.getAllocatedVirtualCores() /
>               getClusterResource().getVirtualCores()));
>     }
> {code}


