Posted to mapreduce-issues@hadoop.apache.org by "Wangda Tan (JIRA)" <ji...@apache.org> on 2015/04/01 20:41:53 UTC
[jira] [Moved] (MAPREDUCE-6302) deadlock in a job between map and reduce cores allocation
[ https://issues.apache.org/jira/browse/MAPREDUCE-6302?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]
Wangda Tan moved YARN-3416 to MAPREDUCE-6302:
---------------------------------------------
Component/s: (was: fairscheduler)
Affects Version/s: (was: 2.6.0)
2.6.0
Key: MAPREDUCE-6302 (was: YARN-3416)
Project: Hadoop Map/Reduce (was: Hadoop YARN)
> deadlock in a job between map and reduce cores allocation
> ----------------------------------------------------------
>
> Key: MAPREDUCE-6302
> URL: https://issues.apache.org/jira/browse/MAPREDUCE-6302
> Project: Hadoop Map/Reduce
> Issue Type: Bug
> Affects Versions: 2.6.0
> Reporter: mai shurong
> Priority: Critical
> Attachments: AM_log_head100000.txt.gz, AM_log_tail100000.txt.gz, queue_with_max163cores.png, queue_with_max263cores.png, queue_with_max333cores.png
>
>
> I submit a big job, with 500 maps and 350 reduces, to a queue (fair scheduler) with a 300-core maximum. Once the job has finished 100% of its maps, 300 running reduces occupy all 300 cores of the queue. Then a map fails and is retried: the retry waits for a core, while the 300 reduces wait for the failed map's output. So a deadlock occurs. As a result, the job is blocked, and later jobs in the queue cannot run because no cores are available in the queue.
> I think there is a similar issue with the memory limit of a queue.
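The circular wait described above can be sketched with a minimal model (plain Python, not Hadoop code; the function name and the one-core-per-task assumption are illustrative only): reducers hold every core up to the queue's cap, so a retried map task can never be scheduled, while the reducers in turn cannot finish without that map's output.

```python
# Minimal sketch of the reported deadlock: a queue with a hard core cap,
# reducers holding every core, and a retried map task waiting for a core.

def can_schedule_map_retry(max_cores, running_reduces, cores_per_task=1):
    """A map retry fits only if the queue still has a free core."""
    free_cores = max_cores - running_reduces * cores_per_task
    return free_cores >= cores_per_task

# Reported scenario: 300-core queue cap, all 300 cores held by reduces.
# The map retry waits on a core held by the reduces; the reduces wait on
# the failed map's output -> circular wait, i.e. deadlock.
deadlocked = not can_schedule_map_retry(max_cores=300, running_reduces=300)
print(deadlocked)  # True

# If even one reduce released its core, the retry could be scheduled
# and the job would make progress again.
print(can_schedule_map_retry(max_cores=300, running_reduces=299))  # True
```

This suggests the scheduler (or the MR ApplicationManager) would need to reclaim at least one reducer container whenever a map task is starved under the queue cap, rather than letting reducers hold the entire allocation indefinitely.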
--
This message was sent by Atlassian JIRA
(v6.3.4#6332)