Posted to mapreduce-issues@hadoop.apache.org by "Jason Lowe (JIRA)" <ji...@apache.org> on 2012/10/19 05:40:04 UTC

[jira] [Updated] (MAPREDUCE-4733) Reducer can fail to make progress during shuffle if too many reducers complete consecutively

     [ https://issues.apache.org/jira/browse/MAPREDUCE-4733?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Jason Lowe updated MAPREDUCE-4733:
----------------------------------

    Attachment: MAPREDUCE-4733.patch

Patch to add a new method to the AM's Job interface so map task completions can be reliably iterated by reducers.
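For context, here is a minimal sketch of the shape of that change. It is
illustrative only: the method name getMapAttemptCompletionEvents, its
parameter names, and the import shown are assumptions made for this sketch;
the attached patch defines the real interface change.

    // Illustrative sketch only -- the real change is in the attached patch.
    // TaskAttemptCompletionEvent is assumed to be the existing MRv2 record
    // type returned by Job.getTaskAttemptCompletionEvents.
    import org.apache.hadoop.mapreduce.v2.api.records.TaskAttemptCompletionEvent;

    public interface Job {
      // Existing: a window over ALL task attempt completion events (map and
      // reduce interleaved), indexed by position in the full event list.
      TaskAttemptCompletionEvent[] getTaskAttemptCompletionEvents(
          int fromEventId, int maxEvents);

      // New (hypothetical name): a window over ONLY the map attempt completion
      // events, indexed by position in the map-only sequence, so a reducer can
      // walk it with a simple (startIndex, maxEvents) cursor and never skip or
      // re-fetch events.
      TaskAttemptCompletionEvent[] getMapAttemptCompletionEvents(
          int startIndex, int maxEvents);
    }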
                
> Reducer can fail to make progress during shuffle if too many reducers complete consecutively
> --------------------------------------------------------------------------------------------
>
>                 Key: MAPREDUCE-4733
>                 URL: https://issues.apache.org/jira/browse/MAPREDUCE-4733
>             Project: Hadoop Map/Reduce
>          Issue Type: Bug
>          Components: applicationmaster, mrv2
>    Affects Versions: 0.23.3
>            Reporter: Jason Lowe
>            Assignee: Jason Lowe
>         Attachments: MAPREDUCE-4733.patch
>
>
> TaskAttemptListenerImpl implements getMapCompletionEvents by calling Job.getTaskAttemptCompletionEvents with the same fromEvent and maxEvents passed in from the reducer and then filtering the result down to just the map events. We can't filter the task completion event list and still expect the caller's "window" into the list to line up: the reducer advances its cursor by the number of events it receives, not by the number of events the window actually covered. As soon as a reduce event appears in the window, the next request re-covers events we already returned, so the reducer is redundantly sent map completion events it has already seen.
> In the worst case the reducer will hang: if every event in the requested window is a reduce event, zero events are reported back to the caller, so it never bumps up fromEvent on subsequent calls and never sees the final map completion events needed to finish the shuffle. This can happen when all maps complete, more than MAX_EVENTS reducers then complete consecutively, and some straggling reducers hit fetch failures that cause a map to be restarted.
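To make the windowing mismatch concrete, the following is a small
self-contained simulation, not Hadoop code: the event list, the 'M'/'R'
markers, and the class name are stand-ins for Job.getTaskAttemptCompletionEvents
plus the map-only filtering described above.

    // Simulates the bug: the window is taken over the full event list, but
    // reduce events are filtered out of the window before it is returned, and
    // the reducer advances its cursor only by the number of events it got back.
    public class ShuffleWindowBugDemo {
      // 'M' = map completion event, 'R' = reduce completion event.
      // Four maps finish, then more than MAX_EVENTS reducers finish
      // consecutively, then a fetch failure causes one map to be rerun
      // (the trailing 'M').
      static final String ALL_EVENTS = "MMMMRRRRRM";
      static final int MAX_EVENTS = 4;

      // Mimics TaskAttemptListenerImpl.getMapCompletionEvents: slice a window
      // out of the FULL list, then drop the reduce events from that window.
      static String getMapCompletionEvents(int fromEvent, int maxEvents) {
        int to = Math.min(fromEvent + maxEvents, ALL_EVENTS.length());
        return ALL_EVENTS.substring(fromEvent, to).replace("R", "");
      }

      public static void main(String[] args) {
        int fromEvent = 0;
        // Mimics the reducer: it bumps fromEvent only by the number of events
        // it received. Once the window lands on a run of 'R' events the
        // response is empty, fromEvent stops moving, and the rerun map's event
        // at the end of the list is never delivered.
        for (int call = 1; call <= 4; call++) {
          String events = getMapCompletionEvents(fromEvent, MAX_EVENTS);
          System.out.println("call " + call + ": fromEvent=" + fromEvent
              + " returned=\"" + events + "\"");
          fromEvent += events.length();
        }
      }
    }

After the first call returns the four map events, every later window covers
only reduce events, so every later response is empty and fromEvent stays stuck
at 4; the restarted map's completion event is never reported.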

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators.
For more information on JIRA, see: http://www.atlassian.com/software/jira