Posted to mapreduce-issues@hadoop.apache.org by "Arun C Murthy (JIRA)" <ji...@apache.org> on 2012/08/31 00:54:07 UTC
[jira] [Commented] (MAPREDUCE-4613) Scheduling of reduce tasks results in starvation
[ https://issues.apache.org/jira/browse/MAPREDUCE-4613?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13445384#comment-13445384 ]
Arun C Murthy commented on MAPREDUCE-4613:
------------------------------------------
Which scheduler are you using? If you are using the default FifoScheduler, you'll run into MAPREDUCE-4299, which isn't committed to hadoop-0.23.1.
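(For reference: if switching off the FifoScheduler is an option, the scheduler is selected via the `yarn.resourcemanager.scheduler.class` property in yarn-site.xml. The snippet below is a sketch, not taken from the reporter's cluster; verify the class name against your build before using it.)

```xml
<!-- yarn-site.xml: use the CapacityScheduler instead of the default FifoScheduler -->
<property>
  <name>yarn.resourcemanager.scheduler.class</name>
  <value>org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.CapacityScheduler</value>
</property>
```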
> Scheduling of reduce tasks results in starvation
> ------------------------------------------------
>
> Key: MAPREDUCE-4613
> URL: https://issues.apache.org/jira/browse/MAPREDUCE-4613
> Project: Hadoop Map/Reduce
> Issue Type: Bug
> Components: scheduler
> Affects Versions: 0.23.1
> Environment: 16 machine cluster
> Reporter: Vasco
>
> If a job has more reduce tasks than there are containers available, then the reduce tasks can occupy all containers causing starvation.
> I understand that the correct behaviour, when all containers are occupied by reducers while map tasks are still pending, is for the running reducers to be pre-empted. However, pre-emption does not occur.
> A work-around is to set the number of reducers to be less than the number of available containers.
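(The work-around's arithmetic can be sketched as follows. The helper name, the `reserved_for_maps` parameter, and the idea of reserving exactly one container are illustrative assumptions, not part of the reported configuration; the actual reducer count would be passed to the job, e.g. via `mapreduce.job.reduces` or `Job.setNumReduceTasks`.)

```python
def safe_reduce_count(requested_reduces, total_containers, reserved_for_maps=1):
    """Clamp the reducer count so at least `reserved_for_maps` containers
    stay free for pending map tasks, avoiding the starvation described
    above. Hypothetical helper for illustration only."""
    # Never request fewer than zero reducers, and never let reducers
    # occupy every container on the cluster.
    return min(requested_reduces, max(0, total_containers - reserved_for_maps))
```

For example, on the reporter's 16-machine cluster with one container per machine, a job asking for 100 reducers would be clamped to 15, leaving one container free for maps.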
--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira