Posted to yarn-issues@hadoop.apache.org by "Hadoop QA (JIRA)" <ji...@apache.org> on 2015/08/19 03:10:45 UTC

[jira] [Commented] (YARN-4059) Preemption should delay assignments back to the preempted queue

    [ https://issues.apache.org/jira/browse/YARN-4059?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14702258#comment-14702258 ] 

Hadoop QA commented on YARN-4059:
---------------------------------

\\
\\
| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | pre-patch |  18m  2s | Pre-patch trunk compilation is healthy. |
| {color:green}+1{color} | @author |   0m  0s | The patch does not contain any @author tags. |
| {color:green}+1{color} | tests included |   0m  0s | The patch appears to include 2 new or modified test files. |
| {color:green}+1{color} | javac |   8m 31s | There were no new javac warning messages. |
| {color:red}-1{color} | javadoc |  10m 33s | The applied patch generated 1 additional warning message. |
| {color:green}+1{color} | release audit |   0m 24s | The applied patch does not increase the total number of release audit warnings. |
| {color:red}-1{color} | checkstyle |   1m  2s | The applied patch generated  4 new checkstyle issues (total was 184, now 188). |
| {color:red}-1{color} | whitespace |   0m  1s | The patch has 13 lines that end in whitespace. Use git apply --whitespace=fix. |
| {color:green}+1{color} | install |   1m 29s | mvn install still works. |
| {color:green}+1{color} | eclipse:eclipse |   0m 35s | The patch built with eclipse:eclipse. |
| {color:green}+1{color} | findbugs |   1m 37s | The patch does not introduce any new Findbugs (version 3.0.0) warnings. |
| {color:red}-1{color} | yarn tests |  53m 29s | Tests failed in hadoop-yarn-server-resourcemanager. |
| | |  95m 47s | |
\\
\\
|| Reason || Tests ||
| Failed unit tests | hadoop.yarn.server.resourcemanager.scheduler.capacity.TestContainerAllocation |
\\
\\
|| Subsystem || Report/Notes ||
| Patch URL | http://issues.apache.org/jira/secure/attachment/12751097/YARN-4059.patch |
| Optional Tests | javadoc javac unit findbugs checkstyle |
| git revision | trunk / 71aedfa |
| javadoc | https://builds.apache.org/job/PreCommit-YARN-Build/8878/artifact/patchprocess/diffJavadocWarnings.txt |
| checkstyle |  https://builds.apache.org/job/PreCommit-YARN-Build/8878/artifact/patchprocess/diffcheckstylehadoop-yarn-server-resourcemanager.txt |
| whitespace | https://builds.apache.org/job/PreCommit-YARN-Build/8878/artifact/patchprocess/whitespace.txt |
| hadoop-yarn-server-resourcemanager test log | https://builds.apache.org/job/PreCommit-YARN-Build/8878/artifact/patchprocess/testrun_hadoop-yarn-server-resourcemanager.txt |
| Test Results | https://builds.apache.org/job/PreCommit-YARN-Build/8878/testReport/ |
| Java | 1.7.0_55 |
| uname | Linux asf904.gq1.ygridcore.net 3.13.0-36-lowlatency #63-Ubuntu SMP PREEMPT Wed Sep 3 21:56:12 UTC 2014 x86_64 x86_64 x86_64 GNU/Linux |
| Console output | https://builds.apache.org/job/PreCommit-YARN-Build/8878/console |


This message was automatically generated.

> Preemption should delay assignments back to the preempted queue
> ---------------------------------------------------------------
>
>                 Key: YARN-4059
>                 URL: https://issues.apache.org/jira/browse/YARN-4059
>             Project: Hadoop YARN
>          Issue Type: Improvement
>            Reporter: Chang Li
>            Assignee: Chang Li
>         Attachments: YARN-4059.patch
>
>
> When preempting containers from a queue, it can take a while for the other queues to fully consume the resources that were freed up, due to delays such as waiting for better locality. Those delays can cause the resources to be assigned back to the preempted queue, and then the preemption cycle repeats.
> We should consider adding a delay, either based on node heartbeat counts or time, to avoid granting containers to a queue that was recently preempted. The delay should be sufficient to cover the cycles of the preemption monitor, so we won't try to assign containers in-between preemption events for a queue.
> The worst-case scenario for assigning freed resources to other queues is when all the other queues want no locality. No locality means only one container is assigned per node heartbeat, so we need to wait for the entire cluster to heartbeat in, multiplied by the number of containers that could run on a single node.
> So the "penalty time" for a queue should be the maximum of the preemption monitor cycle time and the amount of time it takes to fill the cluster at one container per heartbeat. We are guessing this will be somewhere around 2 minutes.
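The penalty-time rule described above can be sketched as a small calculation. This is a hypothetical illustration, not actual YARN code: the class, method, and parameter names are invented, and the example interval values are assumptions, not real cluster settings.

```java
// Hypothetical sketch of the proposed "penalty time" for a recently
// preempted queue. Assumption: in the worst case the scheduler assigns
// only one container per node heartbeat, so filling every node takes
// (containers per node) heartbeat rounds across the cluster.
public class PreemptionPenalty {

    /**
     * @param monitorIntervalMs   preemption monitor cycle time, in ms
     * @param heartbeatIntervalMs node heartbeat interval, in ms
     * @param containersPerNode   containers that could run on a single node
     * @return suggested delay before assigning back to the preempted queue
     */
    static long penaltyMs(long monitorIntervalMs,
                          long heartbeatIntervalMs,
                          long containersPerNode) {
        // Worst-case time to fill the cluster at one container per heartbeat:
        // every node must heartbeat in containersPerNode times.
        long worstCaseAllocationMs = heartbeatIntervalMs * containersPerNode;
        // Penalty is the max of the monitor cycle and that allocation time.
        return Math.max(monitorIntervalMs, worstCaseAllocationMs);
    }

    public static void main(String[] args) {
        // Illustrative (assumed) values: a 15s monitor cycle, 1s heartbeats,
        // and ~100 containers per node give a penalty in the same ballpark
        // as the ~2 minute guess in the description.
        System.out.println(penaltyMs(15_000L, 1_000L, 100L)); // prints 100000
    }
}
```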



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)