Posted to mapreduce-issues@hadoop.apache.org by "Harsh J (Resolved) (JIRA)" <ji...@apache.org> on 2011/12/31 09:54:30 UTC

[jira] [Resolved] (MAPREDUCE-13) Mapper failed due to out of memory

     [ https://issues.apache.org/jira/browse/MAPREDUCE-13?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Harsh J resolved MAPREDUCE-13.
------------------------------

    Resolution: Not A Problem

This isn't a problem anymore. Compressed inputs work well at present and are hardly ever the cause of mappers failing with OOMEs.
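
For context, a minimal sketch (assuming the org.apache.hadoop.mapreduce API and Text key/value types; class and path arguments are illustrative) of a map-only job consuming block-compressed sequence files. The SequenceFile record reader decompresses and hands records to the mapper one at a time, so the fully expanded input never has to sit in the task heap at once:

    import java.io.IOException;

    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.fs.Path;
    import org.apache.hadoop.io.Text;
    import org.apache.hadoop.mapreduce.Job;
    import org.apache.hadoop.mapreduce.Mapper;
    import org.apache.hadoop.mapreduce.lib.input.SequenceFileInputFormat;
    import org.apache.hadoop.mapreduce.lib.output.TextOutputFormat;

    public class ReadCompressedSeqFile {

      // Identity mapper: each record arrives individually from the record
      // reader, regardless of how the file is compressed on disk.
      public static class PassThroughMapper
          extends Mapper<Text, Text, Text, Text> {
        @Override
        protected void map(Text key, Text value, Context ctx)
            throws IOException, InterruptedException {
          ctx.write(key, value);
        }
      }

      public static void main(String[] args) throws Exception {
        Job job = Job.getInstance(new Configuration(), "read-seqfile");
        job.setJarByClass(ReadCompressedSeqFile.class);
        job.setMapperClass(PassThroughMapper.class);
        job.setNumReduceTasks(0);
        job.setOutputKeyClass(Text.class);
        job.setOutputValueClass(Text.class);
        // The compression type is read from the sequence file header;
        // no extra configuration is needed to consume compressed input.
        job.setInputFormatClass(SequenceFileInputFormat.class);
        SequenceFileInputFormat.addInputPath(job, new Path(args[0]));
        TextOutputFormat.setOutputPath(job, new Path(args[1]));
        System.exit(job.waitForCompletion(true) ? 0 : 1);
      }
    }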
                
> Mapper failed due to out of memory
> ----------------------------------
>
>                 Key: MAPREDUCE-13
>                 URL: https://issues.apache.org/jira/browse/MAPREDUCE-13
>             Project: Hadoop Map/Reduce
>          Issue Type: Bug
>            Reporter: Runping Qi
>
> When a map/reduce job takes block-compressed sequence files as input,
> the input data may expand significantly in size (a few times to tens of times,
> depending on the compression ratio of the particular data blocks in the files).
> This may cause out-of-memory problems in mappers.
> In my case, I set the heap space to 1GB.
> The mappers started to fail once the accumulated expanded input size rose above 300MB.
>  
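
For reference, a minimal sketch of how a block-compressed sequence file like the input described above could be produced; this uses the classic SequenceFile.createWriter overload, and the output path and record contents are hypothetical:

    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.fs.FileSystem;
    import org.apache.hadoop.fs.Path;
    import org.apache.hadoop.io.SequenceFile;
    import org.apache.hadoop.io.SequenceFile.CompressionType;
    import org.apache.hadoop.io.Text;

    public class WriteBlockCompressedSeqFile {
      public static void main(String[] args) throws Exception {
        Configuration conf = new Configuration();
        FileSystem fs = FileSystem.get(conf);
        Path out = new Path("/tmp/block-compressed.seq"); // hypothetical path

        // CompressionType.BLOCK batches many records into each compressed
        // block, so repetitive data can shrink severalfold on disk and
        // expand by the same factor when decompressed at read time.
        SequenceFile.Writer writer = SequenceFile.createWriter(
            fs, conf, out, Text.class, Text.class, CompressionType.BLOCK);
        try {
          // Hypothetical, highly compressible records.
          String payload = new String(new char[1024]).replace('\0', 'x');
          for (int i = 0; i < 100000; i++) {
            writer.append(new Text("key-" + i), new Text(payload));
          }
        } finally {
          writer.close();
        }
      }
    }

The reporter's 1GB heap corresponds to setting mapred.child.java.opts to -Xmx1024m in the job configuration of that era.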
