Posted to mapreduce-issues@hadoop.apache.org by "Subroto Sanyal (Commented) (JIRA)" <ji...@apache.org> on 2011/11/18 05:35:54 UTC
[jira] [Commented] (MAPREDUCE-2324) Job should fail if a reduce task can't be scheduled anywhere
[ https://issues.apache.org/jira/browse/MAPREDUCE-2324?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13152642#comment-13152642 ]
Subroto Sanyal commented on MAPREDUCE-2324:
-------------------------------------------
Hi Todd, Murthy, Robert
This issue targets fixing the problem only for the Reducer case.
As per the fix that was committed, I can see that the check against *ResourceEstimator.getEstimatedReduceInputSize* has been removed from *findNewReduceTask*.
I have the following questions about the committed fix:
* What about the same problem occurring in the case of Mappers?
* Say, for example, only one TaskTracker has low disk space. As per the fix, we go ahead and assign the reduce task to it, which ends up in a failure; that is one failure the removed check could have prevented.
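To make the second point concrete, here is a rough sketch of the kind of guard that was removed. All names and signatures below are illustrative, not the actual JobInProgress/ResourceEstimator code: the idea is simply that a tracker whose free local-disk space is below the estimated reduce input size gets skipped rather than assigned a task that will fail at runtime.

```java
// Hypothetical sketch of the removed disk-space check; not real Hadoop code.
public class ReduceSchedulingSketch {

    /** True if the tracker's free local disk can hold the estimated reduce input. */
    static boolean hasEnoughSpace(long availableBytes, long estimatedReduceInputBytes) {
        return availableBytes >= estimatedReduceInputBytes;
    }

    /**
     * Picks the first tracker with enough space, or -1 if none qualifies.
     * With the check removed, the scheduler would instead hand the reduce
     * to a low-disk tracker and let the attempt fail during execution.
     */
    static int pickTracker(long[] freeSpacePerTracker, long estimatedInputBytes) {
        for (int i = 0; i < freeSpacePerTracker.length; i++) {
            if (hasEnoughSpace(freeSpacePerTracker[i], estimatedInputBytes)) {
                return i;
            }
        }
        return -1; // no tracker can host the reduce
    }

    public static void main(String[] args) {
        long[] trackers = {500L, 2000L, 100L};
        System.out.println(pickTracker(trackers, 1000L)); // prints 1
        System.out.println(pickTracker(trackers, 5000L)); // prints -1
    }
}
```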
Regards,
Subroto Sanyal
> Job should fail if a reduce task can't be scheduled anywhere
> ------------------------------------------------------------
>
> Key: MAPREDUCE-2324
> URL: https://issues.apache.org/jira/browse/MAPREDUCE-2324
> Project: Hadoop Map/Reduce
> Issue Type: Bug
> Affects Versions: 0.20.2, 0.20.205.0
> Reporter: Todd Lipcon
> Assignee: Robert Joseph Evans
> Fix For: 0.20.205.0
>
> Attachments: MR-2324-disable-check-v2.patch, MR-2324-security-v1.txt, MR-2324-security-v2.txt, MR-2324-security-v3.patch, MR-2324-secutiry-just-log-v1.patch
>
>
> If there's a reduce task that needs more disk space than is available on any mapred.local.dir in the cluster, that task will stay pending forever. For example, we produced this in a QA cluster by accidentally running terasort with one reducer - since no mapred.local.dir had 1T free, the job remained in pending state for several days. The reason for the "stuck" task wasn't clear from a user perspective until we looked at the JT logs.
> Probably better to just fail the job if a reduce task goes through all TTs and finds that there isn't enough space.
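The fail-fast behavior suggested in the description above could look roughly like the following. This is a sketch under assumed names only (the method and parameters are hypothetical, not the JobTracker API): once every TaskTracker has been considered and none has enough local disk for the estimated reduce input, fail the job instead of leaving the task pending indefinitely.

```java
// Illustrative sketch of the proposed fail-fast decision; names are hypothetical.
import java.util.List;

public class FailFastSketch {

    /**
     * Returns true when the job should be failed because no TaskTracker
     * can provide the estimated reduce input size on its local dirs.
     */
    static boolean shouldFailJob(List<Long> freeSpacePerTracker,
                                 long estimatedReduceInputBytes) {
        for (long free : freeSpacePerTracker) {
            if (free >= estimatedReduceInputBytes) {
                return false; // at least one tracker can host the reduce
            }
        }
        return true; // exhausted all trackers; surface the failure to the user
    }
}
```

In the terasort scenario above, no tracker's mapred.local.dir had 1 TB free, so this check would have failed the job promptly instead of leaving it pending for days.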
--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira