Posted to mapreduce-issues@hadoop.apache.org by "Allen Wittenauer (JIRA)" <ji...@apache.org> on 2014/07/30 01:36:39 UTC
[jira] [Resolved] (MAPREDUCE-1521) Protection against incorrectly configured reduces
[ https://issues.apache.org/jira/browse/MAPREDUCE-1521?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]
Allen Wittenauer resolved MAPREDUCE-1521.
-----------------------------------------
Resolution: Fixed
> Protection against incorrectly configured reduces
> -------------------------------------------------
>
> Key: MAPREDUCE-1521
> URL: https://issues.apache.org/jira/browse/MAPREDUCE-1521
> Project: Hadoop Map/Reduce
> Issue Type: Improvement
> Components: jobtracker
> Reporter: Arun C Murthy
> Assignee: Mahadev konar
> Fix For: 0.22.1
>
> Attachments: MAPREDUCE-1521-0.20-yahoo.patch, MAPREDUCE-1521-0.20-yahoo.patch, MAPREDUCE-1521-0.20-yahoo.patch, MAPREDUCE-1521-0.20-yahoo.patch, MAPREDUCE-1521-0.20-yahoo.patch, MAPREDUCE-1521-trunk.patch, resourceestimator-threshold.txt, resourcestimator-overflow.txt
>
>
> We've seen a fair number of instances where naive users process huge data-sets (>10TB) with a badly mis-configured number of reduces, e.g. a single reduce.
> This is a significant problem on large clusters, since each attempt of the reduce takes a long time to shuffle and then runs into problems such as running out of local disk space; the job only fails after 4 such attempts.
> Proposal: Come up with heuristics/configs to fail such jobs early.
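> A rough sketch of what one such heuristic might look like (the class, method, and config names below are illustrative only, not the actual patch or a real Hadoop API): extrapolate total map output bytes from the maps completed so far, divide by the configured number of reduces, and fail the job if any single reduce would have to shuffle more than a configurable threshold.
>
>   // Illustrative sketch; all identifiers here are hypothetical.
>   public final class ReduceSanityCheck {
>     // Hypothetical knob, analogous to a mapred.* property.
>     static final long DEFAULT_MAX_BYTES_PER_REDUCE =
>         10L * 1024 * 1024 * 1024; // 10 GB per reduce
>
>     /** Returns true if the job should be failed early. */
>     static boolean shouldFailEarly(long estimatedMapOutputBytes,
>                                    int numReduces,
>                                    long maxBytesPerReduce) {
>       if (numReduces <= 0) {
>         return false; // map-only job: no shuffle to protect against
>       }
>       // Divide rather than multiply so huge estimates can't overflow a long.
>       return estimatedMapOutputBytes / numReduces > maxBytesPerReduce;
>     }
>
>     public static void main(String[] args) {
>       long tenTb = 10L * 1024 * 1024 * 1024 * 1024; // 10 TB of map output
>       // A single reduce for 10 TB trips the threshold and would be rejected.
>       System.out.println(
>           shouldFailEarly(tenTb, 1, DEFAULT_MAX_BYTES_PER_REDUCE)); // true
>     }
>   }
>
> Rejecting the job at submission, or once enough maps have finished for the estimate to be credible, avoids burning 4 long reduce attempts before the job finally dies.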
> Thoughts?
--
This message was sent by Atlassian JIRA
(v6.2#6252)