Posted to mapreduce-issues@hadoop.apache.org by "Joseph Niemiec (JIRA)" <ji...@apache.org> on 2014/01/03 18:59:50 UTC

[jira] [Created] (MAPREDUCE-5705) mapreduce.task.io.sort.mb hardcoded cap at 2047

Joseph Niemiec created MAPREDUCE-5705:
-----------------------------------------

             Summary: mapreduce.task.io.sort.mb hardcoded cap at 2047
                 Key: MAPREDUCE-5705
                 URL: https://issues.apache.org/jira/browse/MAPREDUCE-5705
             Project: Hadoop Map/Reduce
          Issue Type: Bug
    Affects Versions: 2.2.0
         Environment: Multinode Dell XD720 cluster Centos6 running HDP2
            Reporter: Joseph Niemiec


mapreduce.task.io.sort.mb is hardcoded to disallow values larger than 2047. If you set a larger value, the map tasks will always crash at this line:

https://github.com/apache/hadoop-mapreduce/blob/HDFS-641/src/java/org/apache/hadoop/mapred/MapTask.java?source=cc#L746
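For reference, the guard at that line is (roughly) a bit-mask check. This is a paraphrased sketch from memory, so the exact field and constant names may differ from the linked source:

    // Sketch of the validation in MapTask's MapOutputBuffer (names approximate).
    // Masking with 0x7FF keeps only the low 11 bits, so any sortmb above 2047
    // changes under the mask, the comparison fails, and the task throws.
    final int sortmb = job.getInt(JobContext.IO_SORT_MB, 100);
    if ((sortmb & 0x7FF) != sortmb) {
      throw new IOException(
          "Invalid \"" + JobContext.IO_SORT_MB + "\": " + sortmb);
    }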

The nodes at our dev site have over 380 GB of RAM each, but we are not able to make the best use of large mappers (15 GB mappers) because of the hardcoded buffer max. Is there a reason this value has been hardcoded?
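One plausible explanation: the in-memory sort buffer is allocated as a single Java byte[], and Java arrays are indexed by int, so the buffer cannot exceed 2^31 - 1 bytes, i.e. just under 2048 MB. A minimal, self-contained sketch of the arithmetic (SortMbCap is a made-up class name for illustration):

    public class SortMbCap {
      public static void main(String[] args) {
        // MapTask sizes its sort buffer as sortmb megabytes, i.e. sortmb << 20 bytes.
        int sortmb = 2048;
        int bufBytes = sortmb << 20;   // 2048 << 20 == 2^31, which overflows int
        System.out.println(bufBytes);  // prints -2147483648
        // new byte[bufBytes] would throw NegativeArraySizeException, so 2047 MB
        // is the largest buffer that fits in an int-indexed byte[].
      }
    }

So the 2047 cap looks like a consequence of the single-array buffer design rather than an arbitrary tuning choice.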


--
Also validated on my dev VM: setting io.sort.mb to 2047 works, but 2048 fails.



--