Posted to yarn-issues@hadoop.apache.org by "Hadoop QA (JIRA)" <ji...@apache.org> on 2016/10/06 01:10:22 UTC

[jira] [Commented] (YARN-5139) [Umbrella] Move YARN scheduler towards global scheduler

    [ https://issues.apache.org/jira/browse/YARN-5139?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15550501#comment-15550501 ] 

Hadoop QA commented on YARN-5139:
---------------------------------

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 15s {color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s {color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green} 0m 0s {color} | {color:green} The patch appears to include 11 new or modified test files. {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 7m 21s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 35s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 39s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 0m 40s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 17s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 1m 2s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 21s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 0m 33s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 32s {color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} javac {color} | {color:red} 0m 32s {color} | {color:red} hadoop-yarn-project_hadoop-yarn_hadoop-yarn-server_hadoop-yarn-server-resourcemanager generated 2 new + 1 unchanged - 2 fixed = 3 total (was 3) {color} |
| {color:red}-1{color} | {color:red} checkstyle {color} | {color:red} 0m 38s {color} | {color:red} hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager: The patch generated 148 new + 1421 unchanged - 157 fixed = 1569 total (was 1578) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 0m 38s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 15s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 0s {color} | {color:green} The patch has no whitespace issues. {color} |
| {color:red}-1{color} | {color:red} findbugs {color} | {color:red} 1m 10s {color} | {color:red} hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager generated 11 new + 0 unchanged - 0 fixed = 11 total (was 0) {color} |
| {color:red}-1{color} | {color:red} javadoc {color} | {color:red} 0m 20s {color} | {color:red} hadoop-yarn-project_hadoop-yarn_hadoop-yarn-server_hadoop-yarn-server-resourcemanager generated 3 new + 937 unchanged - 1 fixed = 940 total (was 938) {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 34m 57s {color} | {color:red} hadoop-yarn-server-resourcemanager in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 15s {color} | {color:green} The patch does not generate ASF License warnings. {color} |
| {color:black}{color} | {color:black} {color} | {color:black} 51m 20s {color} | {color:black} {color} |
\\
\\
|| Reason || Tests ||
| FindBugs | module:hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager |
|  |  Nullcheck of node at line 1414 of value previously dereferenced in org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.CapacityScheduler.allocateContainersToNode(PlacementSet, boolean)  At CapacityScheduler.java:[line 1414] |
|  |  org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.CapacityScheduler$ResourceCommitterService.run() does not release lock on all exception paths  At CapacityScheduler.java:[line 532] |
|  |  Unread field:ContainerAllocation.java:[line 61] |
|  |  Unused field:ContainerAllocation.java |
|  |  Read of unwritten field demandingHostLocalNodes in org.apache.hadoop.yarn.server.resourcemanager.scheduler.placement.LocalityPlacementSet.getPreferredNodeIterator(PlacementSet)  At LocalityPlacementSet.java:[line 119] |
|  |  Read of unwritten field demandingOffswitchNodes in org.apache.hadoop.yarn.server.resourcemanager.scheduler.placement.LocalityPlacementSet.getPreferredNodeIterator(PlacementSet)  At LocalityPlacementSet.java:[line 111] |
|  |  Read of unwritten field demandingRackLocalNodes in org.apache.hadoop.yarn.server.resourcemanager.scheduler.placement.LocalityPlacementSet.getPreferredNodeIterator(PlacementSet)  At LocalityPlacementSet.java:[line 85] |
|  |  Unwritten field:LocalityPlacementSet.java:[line 84] |
|  |  Unwritten field:LocalityPlacementSet.java:[line 86] |
|  |  Unwritten field:LocalityPlacementSet.java:[line 85] |
|  |  Unwritten field:LocalityPlacementSet.java:[line 73] |
| Failed junit tests | hadoop.yarn.server.resourcemanager.applicationsmanager.TestAMRestart |
| Timed out junit tests | org.apache.hadoop.yarn.server.resourcemanager.webapp.TestRMWebServicesSchedulerActivities |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:9560f25 |
| JIRA Patch URL | https://issues.apache.org/jira/secure/attachment/12831848/YARN-5139.000.patch |
| JIRA Issue | YARN-5139 |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  unit  findbugs  checkstyle  |
| uname | Linux 154251729e6a 3.13.0-36-lowlatency #63-Ubuntu SMP PREEMPT Wed Sep 3 21:56:12 UTC 2014 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh |
| git revision | trunk / e68c7b9 |
| Default Java | 1.8.0_101 |
| findbugs | v3.0.0 |
| javac | https://builds.apache.org/job/PreCommit-YARN-Build/13301/artifact/patchprocess/diff-compile-javac-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-server_hadoop-yarn-server-resourcemanager.txt |
| checkstyle | https://builds.apache.org/job/PreCommit-YARN-Build/13301/artifact/patchprocess/diff-checkstyle-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-server_hadoop-yarn-server-resourcemanager.txt |
| findbugs | https://builds.apache.org/job/PreCommit-YARN-Build/13301/artifact/patchprocess/new-findbugs-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-server_hadoop-yarn-server-resourcemanager.html |
| javadoc | https://builds.apache.org/job/PreCommit-YARN-Build/13301/artifact/patchprocess/diff-javadoc-javadoc-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-server_hadoop-yarn-server-resourcemanager.txt |
| unit | https://builds.apache.org/job/PreCommit-YARN-Build/13301/artifact/patchprocess/patch-unit-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-server_hadoop-yarn-server-resourcemanager.txt |
| unit test logs |  https://builds.apache.org/job/PreCommit-YARN-Build/13301/artifact/patchprocess/patch-unit-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-server_hadoop-yarn-server-resourcemanager.txt |
|  Test Results | https://builds.apache.org/job/PreCommit-YARN-Build/13301/testReport/ |
| modules | C: hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager U: hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager |
| Console output | https://builds.apache.org/job/PreCommit-YARN-Build/13301/console |
| Powered by | Apache Yetus 0.3.0   http://yetus.apache.org |


This message was automatically generated.



> [Umbrella] Move YARN scheduler towards global scheduler
> -------------------------------------------------------
>
>                 Key: YARN-5139
>                 URL: https://issues.apache.org/jira/browse/YARN-5139
>             Project: Hadoop YARN
>          Issue Type: New Feature
>            Reporter: Wangda Tan
>            Assignee: Wangda Tan
>         Attachments: Explanantions of Global Scheduling (YARN-5139) Implementation.pdf, YARN-5139-Concurrent-scheduling-performance-report.pdf, YARN-5139-Global-Schedulingd-esign-and-implementation-notes-v2.pdf, YARN-5139-Global-Schedulingd-esign-and-implementation-notes.pdf, YARN-5139.000.patch, wip-1.YARN-5139.patch, wip-2.YARN-5139.patch, wip-3.YARN-5139.patch, wip-4.YARN-5139.patch, wip-5.YARN-5139.patch
>
>
> The existing YARN scheduler is driven by node heartbeats. This can lead to sub-optimal decisions because the scheduler can only look at one node at a time when scheduling resources.
> Pseudo code of the existing scheduling logic looks like:
> {code}
> for node in allNodes:
>    Go to parentQueue
>       Go to leafQueue
>         for application in leafQueue.applications:
>            for resource-request in application.resource-requests:
>               try to schedule on node
> {code}
> Considering future complex resource placement requirements, such as node constraints (give me "a && b || c") or anti-affinity (do not allocate HBase RegionServers and Storm workers on the same host), we may need to consider moving the YARN scheduler towards global scheduling. A simplified sketch of such a request-driven loop is included below.
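> For contrast, here is a minimal, hypothetical sketch (in Python, not taken from any YARN-5139 patch) of what a request-driven global scheduling loop could look like: each request is matched against all candidate nodes and the best placement is committed, instead of only considering whichever node happened to heartbeat. The Node and Request classes and the "most free memory" scoring rule are simplified placeholders for illustration.
> {code}
> # Hypothetical illustration only, not the YARN-5139 patch: a request-driven
> # loop that evaluates every node per request, so placement constraints such
> # as anti-affinity can be honoured while still picking a good node.
> from dataclasses import dataclass, field
>
> @dataclass
> class Node:
>     name: str
>     available_mb: int
>     services: set = field(default_factory=set)   # e.g. {"hbase-regionserver"}
>
> @dataclass
> class Request:
>     memory_mb: int
>     anti_affinity: set = field(default_factory=set)  # services to keep away from
>
> def schedule_globally(requests, nodes):
>     placements = {}
>     for i, req in enumerate(requests):
>         # Global view: filter ALL nodes by capacity and anti-affinity ...
>         candidates = [n for n in nodes
>                       if n.available_mb >= req.memory_mb
>                       and not (n.services & req.anti_affinity)]
>         if candidates:
>             # ... then commit the request on the best-scoring candidate.
>             best = max(candidates, key=lambda n: n.available_mb)
>             best.available_mb -= req.memory_mb
>             placements[i] = best.name
>     return placements
>
> # Example: a Storm worker request avoids the RegionServer node and lands on n2.
> nodes = [Node("n1", 4096, {"hbase-regionserver"}), Node("n2", 8192)]
> print(schedule_globally([Request(2048, {"hbase-regionserver"})], nodes))  # {0: 'n2'}
> {code}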



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

---------------------------------------------------------------------
To unsubscribe, e-mail: yarn-issues-unsubscribe@hadoop.apache.org
For additional commands, e-mail: yarn-issues-help@hadoop.apache.org