Posted to mapreduce-issues@hadoop.apache.org by "Vinod Kumar Vavilapalli (JIRA)" <ji...@apache.org> on 2015/09/03 00:30:46 UTC

[jira] [Updated] (MAPREDUCE-5649) Reduce cannot use more than 2G memory for the final merge

     [ https://issues.apache.org/jira/browse/MAPREDUCE-5649?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Vinod Kumar Vavilapalli updated MAPREDUCE-5649:
-----------------------------------------------
    Fix Version/s: 2.6.1

Pulled this into 2.6.1. Ran compilation and TestMergeManager before the push. Patch applied cleanly.

> Reduce cannot use more than 2G memory for the final merge
> ----------------------------------------------------------
>
>                 Key: MAPREDUCE-5649
>                 URL: https://issues.apache.org/jira/browse/MAPREDUCE-5649
>             Project: Hadoop Map/Reduce
>          Issue Type: Bug
>          Components: mrv2
>            Reporter: stanley shi
>            Assignee: Gera Shegalov
>              Labels: 2.6.1-candidate, 2.7.2-candidate
>             Fix For: 2.6.1, 2.8.0
>
>         Attachments: MAPREDUCE-5649.001.patch, MAPREDUCE-5649.002.patch, MAPREDUCE-5649.003.patch
>
>
> In org.apache.hadoop.mapreduce.task.reduce.MergeManagerImpl, the finalMerge method computes:
>  int maxInMemReduce = (int)Math.min(
>         Runtime.getRuntime().maxMemory() * maxRedPer, Integer.MAX_VALUE);
>  
> Because the result is cast to int and capped at Integer.MAX_VALUE, no matter how much memory the user gives the JVM, the reducer will not retain more than 2 GB of data in memory before the reduce phase starts.
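The cap can be reproduced with a small standalone sketch. The method names and the 8 GiB heap value below are illustrative, not taken from the patch; the actual fix in the attached patches widens the computation so it is no longer truncated to int:

```java
public class FinalMergeCap {
    // Mirrors the capped computation quoted above: the (int) cast
    // clamps the in-memory merge budget at Integer.MAX_VALUE bytes
    // (~2 GiB), regardless of the JVM heap size.
    static long cappedBudget(long maxMemory, float maxRedPer) {
        return (int) Math.min(maxMemory * maxRedPer, Integer.MAX_VALUE);
    }

    // Sketch of the fixed approach: keep the value as long so heaps
    // larger than 2 GiB remain usable for the final merge.
    static long longBudget(long maxMemory, float maxRedPer) {
        return (long) (maxMemory * (double) maxRedPer);
    }

    public static void main(String[] args) {
        long heap = 8L << 30;   // hypothetical 8 GiB max heap
        float maxRedPer = 0.9f; // illustrative buffer fraction
        System.out.println(cappedBudget(heap, maxRedPer)); // capped at 2147483647
        System.out.println(longBudget(heap, maxRedPer));   // well above 2 GiB
    }
}
```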



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)