Posted to mapreduce-issues@hadoop.apache.org by "Wangda Tan (JIRA)" <ji...@apache.org> on 2015/05/08 01:47:00 UTC

[jira] [Commented] (MAPREDUCE-6302) deadlock in a job between map and reduce cores allocation

    [ https://issues.apache.org/jira/browse/MAPREDUCE-6302?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14533618#comment-14533618 ] 

Wangda Tan commented on MAPREDUCE-6302:
---------------------------------------

[~kasha],
Thanks for working on this.

Just took a look at your patch; the overall approach looks good. Some comments about the configuration:
{{MR_JOB_REDUCER_FORCE_PREEMPT_DELAY_SEC}}
It is not actually a REDUCER_FORCE_PREEMPT_DELAY; it is the timeout on mapper allocation before reducer preemption starts. I suggest renaming it to: mapreduce.job.mapper.timeout-to-start-reducer-preemption-ms. I also think it is better to use ms instead of sec, for finer control.

In addition, do you think we should add a special value, such as -1, to let users disable this?
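To make the suggestion concrete, here is a minimal sketch of how the renamed property could be read and checked. The key name follows the proposal above; the class and method names are illustrative only and do not come from the actual patch:

```java
// Illustrative sketch, not the actual patch: a ms-based timeout key,
// with -1 as the sentinel value that disables forced reducer preemption.
public class ReducerPreemptionConfig {
    // Hypothetical key, following the naming suggested above
    static final String KEY =
        "mapreduce.job.mapper.timeout-to-start-reducer-preemption-ms";
    static final long DISABLED = -1L;

    final long timeoutMs;

    ReducerPreemptionConfig(long timeoutMs) {
        this.timeoutMs = timeoutMs;
    }

    /** Preempt reducers only when enabled and the mapper has waited past the timeout. */
    boolean shouldPreemptReducers(long mapperWaitMs) {
        return timeoutMs != DISABLED && mapperWaitMs >= timeoutMs;
    }
}
```

With a ms-granularity key, a user could set a sub-second threshold for aggressive preemption, or -1 to opt out entirely.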

And could you add some tests?

> deadlock in a job between map and reduce cores allocation 
> ----------------------------------------------------------
>
>                 Key: MAPREDUCE-6302
>                 URL: https://issues.apache.org/jira/browse/MAPREDUCE-6302
>             Project: Hadoop Map/Reduce
>          Issue Type: Bug
>    Affects Versions: 2.6.0
>            Reporter: mai shurong
>            Assignee: Karthik Kambatla
>            Priority: Critical
>         Attachments: AM_log_head100000.txt.gz, AM_log_tail100000.txt.gz, log.txt, mr-6302-prelim.patch, queue_with_max163cores.png, queue_with_max263cores.png, queue_with_max333cores.png
>
>
> I submitted a big job, with 500 maps and 350 reduces, to a queue (fair scheduler) with a maximum of 300 cores. When the job's maps had all run, the running reduces occupied all 300 cores in the queue. Then a map failed and was retried, waiting for a core, while the reduces were waiting for the failed map to finish. So a deadlock occurred. As a result, the job was blocked, and later jobs in the queue could not run because no cores were available in the queue.
> I think there is a similar issue for the memory of a queue.
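The reported deadlock condition can be sketched as a simple invariant over the queue state: every core is held by reducers, at least one map is pending, and the reducers cannot finish without that map. The class and field names below are illustrative, not taken from the scheduler code:

```java
// Illustrative model of the deadlock described in the report:
// reducers hold every core in the queue, a retried map needs a core,
// and the reducers in turn wait on that map's output.
public class QueueState {
    final int maxCores;
    int coresHeldByReducers;
    int pendingMaps;

    QueueState(int maxCores, int coresHeldByReducers, int pendingMaps) {
        this.maxCores = maxCores;
        this.coresHeldByReducers = coresHeldByReducers;
        this.pendingMaps = pendingMaps;
    }

    /** True when the queue can make no progress without preempting a reducer. */
    boolean isDeadlocked() {
        int headroom = maxCores - coresHeldByReducers;
        return headroom <= 0 && pendingMaps > 0 && coresHeldByReducers > 0;
    }
}
```

In the reported scenario (300 max cores, 300 held by reduces, 1 retried map pending) this condition holds, which is exactly the state the proposed timeout-based reducer preemption is meant to break.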



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)