Posted to mapreduce-issues@hadoop.apache.org by "Karthik Kambatla (JIRA)" <ji...@apache.org> on 2014/01/03 18:25:59 UTC

[jira] [Updated] (MAPREDUCE-5689) MRAppMaster does not preempt reducer when scheduled maps cannot be fulfilled

     [ https://issues.apache.org/jira/browse/MAPREDUCE-5689?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Karthik Kambatla updated MAPREDUCE-5689:
----------------------------------------

    Summary: MRAppMaster does not preempt reducer when scheduled maps cannot be fulfilled  (was: MRAppMaster does not preempt reducer when scheduled Maps cannot be full filled)

> MRAppMaster does not preempt reducer when scheduled maps cannot be fulfilled
> ----------------------------------------------------------------------------
>
>                 Key: MAPREDUCE-5689
>                 URL: https://issues.apache.org/jira/browse/MAPREDUCE-5689
>             Project: Hadoop Map/Reduce
>          Issue Type: Bug
>    Affects Versions: 3.0.0, 2.2.0
>            Reporter: Lohit Vijayarenu
>            Assignee: Lohit Vijayarenu
>            Priority: Critical
>         Attachments: MAPREDUCE-5689.1.patch, MAPREDUCE-5689.2.patch
>
>
> We saw a corner case where jobs running on the cluster were hung. The scenario was as follows: a job was running within a pool that was at capacity. All available containers were occupied by reducers and the last 2 mappers, and a few more reducers were waiting in the pipeline to be scheduled.
> At this point the two running mappers failed and went back to the scheduled state. The two freed containers were assigned to reducers, so the whole pool was now full of reducers waiting on the two maps to complete. Those 2 maps never got scheduled because the pool was full.
> Ideally, reducer preemption should have kicked in to make room for the mappers via this code in RMContainerAllocator:
> {code}
>     int completedMaps = getJob().getCompletedMaps();
>     int completedTasks = completedMaps + getJob().getCompletedReduces();
>     if (lastCompletedTasks != completedTasks) {
>       lastCompletedTasks = completedTasks;
>       recalculateReduceSchedule = true;
>     }
>     if (recalculateReduceSchedule) {
>       preemptReducesIfNeeded();
> {code}
> But in this scenario lastCompletedTasks always equals completedTasks because no maps ever complete, so preemptReducesIfNeeded() is never reached and the job hangs forever. As a workaround, if we kill a few reducers, the mappers get scheduled and the job completes.
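
A minimal standalone sketch of the trigger logic described above (not the actual RMContainerAllocator code; the method and parameter names here are hypothetical). It shows why keying recalculation solely off the completed-task count can deadlock, and one hedged way out: also requesting recalculation when maps sit in the scheduled state with no headroom, as in the reported hang.

```java
// Hypothetical, simplified model of the heartbeat check in the report.
// In the original code, preemptReducesIfNeeded() runs only when the
// completed-task count changes; if no task ever completes, it is skipped.
public class ReduceScheduleTrigger {
    private int lastCompletedTasks = -1;

    /** Returns true when the reduce schedule should be recalculated. */
    public boolean shouldRecalculate(int completedMaps, int completedReduces,
                                     int scheduledMaps, int availableHeadroom) {
        int completedTasks = completedMaps + completedReduces;
        boolean recalculate = false;
        if (lastCompletedTasks != completedTasks) {
            lastCompletedTasks = completedTasks;
            recalculate = true;
        }
        // Sketch of an extra condition: maps are stuck waiting to be
        // scheduled while no resources are free, so preemption must be
        // considered even though nothing completed since the last check.
        if (scheduledMaps > 0 && availableHeadroom <= 0) {
            recalculate = true;
        }
        return recalculate;
    }

    public static void main(String[] args) {
        ReduceScheduleTrigger t = new ReduceScheduleTrigger();
        // First heartbeat: completed count changed, so the original
        // check alone already fires.
        System.out.println(t.shouldRecalculate(8, 0, 2, 0));
        // Later heartbeat with the same completed count: the original
        // check would return false and the job would hang, but the
        // scheduled-maps condition still requests recalculation.
        System.out.println(t.shouldRecalculate(8, 0, 2, 0));
    }
}
```

With only the count-based condition, the second call would return false and the stuck state would persist across every subsequent heartbeat.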



--
This message was sent by Atlassian JIRA
(v6.1.5#6160)