Posted to mapreduce-issues@hadoop.apache.org by "Xianghao Lu (JIRA)" <ji...@apache.org> on 2019/01/05 04:54:00 UTC

[jira] [Comment Edited] (MAPREDUCE-6944) MR job got hanged forever when some NMs unstable for some time

    [ https://issues.apache.org/jira/browse/MAPREDUCE-6944?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16734773#comment-16734773 ] 

Xianghao Lu edited comment on MAPREDUCE-6944 at 1/5/19 4:53 AM:
----------------------------------------------------------------

[~Jack-Lee] Thanks for your work. As far as I can tell, your pull request is similar to my early fix (please see the code below, and the simplified sketch after it). It only covers the first case, where a container request or container assignment actually happens. In the second case nothing container-related happens at all, so the job will still hang when that case occurs; my patch above covers both cases. Am I wrong? What do you think?
 # a container with PRIORITY_MAP is allocated to a rescheduled failed map (it should be PRIORITY_FAST_FAIL_MAP)
 # a rescheduled failed map is killed or fails before any container is assigned

{code:java}
diff --git a/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-app/src/main/java/org/apache/hadoop/mapreduce/v2/app/rm/RMContainerAllocator.java b/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-app/src/main/java/org/apache/hadoop/mapreduce/v2/app/rm/RMContainerAllocator.java
index 40f62a0..b3f1b33 100644
--- a/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-app/src/main/java/org/apache/hadoop/mapreduce/v2/app/rm/RMContainerAllocator.java
+++ b/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-app/src/main/java/org/apache/hadoop/mapreduce/v2/app/rm/RMContainerAllocator.java
@@ -933,7 +933,7 @@ public class RMContainerAllocator extends RMContainerRequestor
   @VisibleForTesting
   class ScheduledRequests {
     
-    private final LinkedList<TaskAttemptId> earlierFailedMaps = 
+    private final LinkedList<TaskAttemptId> earlierFailedMaps =
       new LinkedList<TaskAttemptId>();
     
     /** Maps from a host to a list of Map tasks with data on the host */
@@ -1138,6 +1138,12 @@ public class RMContainerAllocator extends RMContainerRequestor
 
       assignedRequests.add(allocated, assigned.attemptID);
 
+      // Fix the ask accounting when a container with PRIORITY_MAP is allocated
+      // to an earlier failed map (which should have been PRIORITY_FAST_FAIL_MAP).
+      if (earlierFailedMaps.size() > 0 && earlierFailedMaps.remove(assigned.attemptID)) {
+        LOG.info("Remove " + assigned.attemptID + " from earlierFailedMaps");
+      }
+
       if (LOG.isDebugEnabled()) {
         LOG.info("Assigned container (" + allocated + ") "
             + " to task " + assigned.attemptID + " on node "
@@ -1233,7 +1239,7 @@ public class RMContainerAllocator extends RMContainerRequestor
             new JobCounterUpdateEvent(assigned.attemptID.getTaskId().getJobId());
           jce.addCounterUpdate(JobCounter.OTHER_LOCAL_MAPS, 1);
           eventHandler.handle(jce);
-          LOG.info("Assigned from earlierFailedMaps");
+          LOG.info("Assigned from earlierFailedMaps: " + tId);
           break;
         }
       }
{code}
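
For illustration only (a hedged sketch, not the attached patch and not real RMContainerAllocator code): the class and method names below (FailedMapTracker, rescheduleFailedMap, assignContainer, attemptKilledWithoutContainer) are hypothetical. The point is that an earlierFailedMaps entry and its outstanding ask must be cleaned up on both paths, the assignment path (case 1) and the killed/failed-without-container path (case 2); if the second path is missed, the ask never drains and the AM keeps reporting ScheduledMaps:1 forever.
{code:java}
import java.util.LinkedList;

// Simplified, self-contained model of the earlierFailedMaps bookkeeping; not Hadoop code.
public class FailedMapTracker {
  private final LinkedList<String> earlierFailedMaps = new LinkedList<>();
  private int outstandingFastFailAsks = 0;

  // A failed map is rescheduled: it is remembered and an ask stays open
  // until it is cleaned up by one of the two paths below.
  void rescheduleFailedMap(String attemptId) {
    earlierFailedMaps.add(attemptId);
    outstandingFastFailAsks++;
  }

  // Case 1: a container is assigned to the attempt, possibly at the normal
  // map priority instead of the fast-fail priority. Clean up either way.
  void assignContainer(String attemptId) {
    if (earlierFailedMaps.remove(attemptId)) {
      outstandingFastFailAsks--;
    }
  }

  // Case 2: the rescheduled attempt is killed or fails before any container
  // is assigned. Without this cleanup the ask never drains and the job hangs.
  void attemptKilledWithoutContainer(String attemptId) {
    if (earlierFailedMaps.remove(attemptId)) {
      outstandingFastFailAsks--;
    }
  }

  boolean allAsksDrained() {
    return outstandingFastFailAsks == 0;
  }

  public static void main(String[] args) {
    FailedMapTracker tracker = new FailedMapTracker();
    tracker.rescheduleFailedMap("attempt_1502793246072_73922_m_012103_5");
    // Case 2 happens: the attempt dies without ever getting a container.
    tracker.attemptKilledWithoutContainer("attempt_1502793246072_73922_m_012103_5");
    System.out.println("all asks drained: " + tracker.allAsksDrained()); // prints true
  }
}
{code}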


was (Author: luxianghao):
[~Jack-Lee] Thanks for your work. As far as I can tell, your pull request is similar to my early fix (please see the photo). It only covers the first case, where a container request or container assignment actually happens. In the second case nothing container-related happens at all, so the job will still hang when that case occurs; my patch above covers both cases. Am I wrong? What do you think?

# a container with PRIORITY_MAP is allocated to a rescheduled failed map (it should be PRIORITY_FAST_FAIL_MAP)
# a rescheduled failed map is killed or fails before any container is assigned

!image-2019-01-05-12-03-19-887.png!

> MR job got hanged forever when some NMs unstable for some time
> --------------------------------------------------------------
>
>                 Key: MAPREDUCE-6944
>                 URL: https://issues.apache.org/jira/browse/MAPREDUCE-6944
>             Project: Hadoop Map/Reduce
>          Issue Type: Bug
>          Components: applicationmaster, resourcemanager
>            Reporter: YunFan Zhou
>            Priority: Critical
>         Attachments: screenshot-1.png
>
>
> We encountered several jobs in our production environment where some unstable NMs caused one *MAP* of the job to get stuck, so the job could not finish properly.
> However, the problem we encountered is different from the one described in [https://issues.apache.org/jira/browse/MAPREDUCE-6513], because in our scenario none of the *MR REDUCEs* has started executing.
> But when I manually kill the hung *MAP*, the job finishes normally.
> {noformat}
> 2017-08-17 12:25:06,548 INFO [RMCommunicator Allocator] org.apache.hadoop.mapreduce.v2.app.rm.RMContainerAllocator: Reduce slow start threshold not met. completedMapsForReduceSlowstart 15564
> 2017-08-17 12:25:07,555 INFO [RMCommunicator Allocator] org.apache.hadoop.mapreduce.v2.app.rm.RMContainerAllocator: Received completed container container_e84_1502793246072_73922_01_015700
> 2017-08-17 12:25:07,556 INFO [RMCommunicator Allocator] org.apache.hadoop.mapreduce.v2.app.rm.RMContainerAllocator: Recalculating schedule, headroom=<memory:2218677, vCores:2225>
> 2017-08-17 12:25:07,556 INFO [RMCommunicator Allocator] org.apache.hadoop.mapreduce.v2.app.rm.RMContainerAllocator: Reduce slow start threshold not met. completedMapsForReduceSlowstart 15564
> 2017-08-17 12:25:07,556 INFO [RMCommunicator Allocator] org.apache.hadoop.mapreduce.v2.app.rm.RMContainerAllocator: After Scheduling: PendingReds:1009 ScheduledMaps:1 ScheduledReds:0 AssignedMaps:0 AssignedReds:0 CompletedMaps:15563 CompletedReds:0 ContAlloc:15723 ContRel:26 HostLocal:4575 RackLocal:8121
> {noformat}
> {noformat}
> 2017-08-17 14:49:41,793 INFO [RMCommunicator Allocator] org.apache.hadoop.mapreduce.v2.app.rm.RMContainerAllocator: Before Scheduling: PendingReds:1009 ScheduledMaps:1 ScheduledReds:0 AssignedMaps:1 AssignedReds:0 CompletedMaps:15563 CompletedReds:0 ContAlloc:15724 ContRel:26 HostLocal:4575 RackLocal:8121
> 2017-08-17 14:49:41,794 INFO [RMCommunicator Allocator] org.apache.hadoop.mapreduce.v2.app.rm.RMContainerRequestor: Applying ask limit of 1 for priority:5 and capability:<memory:1024, vCores:1>
> 2017-08-17 14:49:41,799 INFO [RMCommunicator Allocator] org.apache.hadoop.mapreduce.v2.app.rm.RMContainerRequestor: getResources() for application_1502793246072_73922: ask=1 release= 0 newContainers=0 finishedContainers=0 resourcelimit=<memory:1711989, vCores:1688> knownNMs=4236
> 2017-08-17 14:49:41,799 INFO [RMCommunicator Allocator] org.apache.hadoop.mapreduce.v2.app.rm.RMContainerAllocator: Recalculating schedule, headroom=<memory:1711989, vCores:1688>
> 2017-08-17 14:49:41,799 INFO [RMCommunicator Allocator] org.apache.hadoop.mapreduce.v2.app.rm.RMContainerAllocator: Reduce slow start threshold not met. completedMapsForReduceSlowstart 15564
> 2017-08-17 14:49:42,805 INFO [RMCommunicator Allocator] org.apache.hadoop.mapreduce.v2.app.rm.RMContainerAllocator: Got allocated containers 1
> 2017-08-17 14:49:42,805 INFO [RMCommunicator Allocator] org.apache.hadoop.mapreduce.v2.app.rm.RMContainerAllocator: Assigning container Container: [ContainerId: container_e84_1502793246072_73922_01_015726, NodeId: bigdata-hdp-apache1960.xg01.diditaxi.com:8041, NodeHttpAddress: bigdata-hdp-apache1960.xg01.diditaxi.com:8042, Resource: <memory:1024, vCores:1>, Priority: 5, Token: Token { kind: ContainerToken, service: 10.93.111.36:8041 }, ] to fast fail map
> 2017-08-17 14:49:42,805 INFO [RMCommunicator Allocator] org.apache.hadoop.mapreduce.v2.app.rm.RMContainerAllocator: Assigned from earlierFailedMaps
> 2017-08-17 14:49:42,805 INFO [RMCommunicator Allocator] org.apache.hadoop.mapreduce.v2.app.rm.RMContainerAllocator: Assigned container container_e84_1502793246072_73922_01_015726 to attempt_1502793246072_73922_m_012103_5
> 2017-08-17 14:49:42,805 INFO [RMCommunicator Allocator] org.apache.hadoop.mapreduce.v2.app.rm.RMContainerAllocator: Recalculating schedule, headroom=<memory:1727349, vCores:1703>
> 2017-08-17 14:49:42,805 INFO [RMCommunicator Allocator] org.apache.hadoop.mapreduce.v2.app.rm.RMContainerAllocator: Reduce slow start threshold not met. completedMapsForReduceSlowstart 15564
> 2017-08-17 14:49:42,805 INFO [RMCommunicator Allocator] org.apache.hadoop.mapreduce.v2.app.rm.RMContainerAllocator: After Scheduling: PendingReds:1009 ScheduledMaps:0 ScheduledReds:0 AssignedMaps:2 AssignedReds:0 CompletedMaps:15563 CompletedReds:0 ContAlloc:15725 ContRel:26 HostLocal:4575 RackLocal:8121
> {noformat}
> !screenshot-1.png!


