Posted to common-dev@hadoop.apache.org by "Runping Qi (JIRA)" <ji...@apache.org> on 2007/10/02 23:26:51 UTC

[jira] Created: (HADOOP-1987) Mapper failed due to out of memory

Mapper failed due to out of memory
----------------------------------

                 Key: HADOOP-1987
                 URL: https://issues.apache.org/jira/browse/HADOOP-1987
             Project: Hadoop
          Issue Type: Bug
          Components: mapred
            Reporter: Runping Qi



When a map/reduce job takes block-compressed sequence files as input,
the input data may expand significantly in size once decompressed (by a factor of a few to tens of X,
depending on the compression ratio of the particular data blocks in the files).
This may cause out-of-memory problems in mappers.
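
As a rough way to see the expansion at work, here is a minimal sketch (not part of the original report)
that reads a block-compressed SequenceFile with SequenceFile.Reader, re-serializes each key/value pair
uncompressed into a DataOutputBuffer, and compares the accumulated record bytes against the on-disk file
size. The class name ExpansionRatio and the command-line path argument are illustrative, and the measured
ratio is only an approximation of how much the records grow when deserialized.

  import org.apache.hadoop.conf.Configuration;
  import org.apache.hadoop.fs.FileSystem;
  import org.apache.hadoop.fs.Path;
  import org.apache.hadoop.io.DataOutputBuffer;
  import org.apache.hadoop.io.SequenceFile;
  import org.apache.hadoop.io.Writable;
  import org.apache.hadoop.util.ReflectionUtils;

  public class ExpansionRatio {
    public static void main(String[] args) throws Exception {
      Configuration conf = new Configuration();
      FileSystem fs = FileSystem.get(conf);
      Path path = new Path(args[0]);   // a block-compressed sequence file, e.g. one map input split

      // Compressed size as stored in the file system.
      long onDiskBytes = fs.getFileStatus(path).getLen();

      SequenceFile.Reader reader = new SequenceFile.Reader(fs, path, conf);
      Writable key = (Writable) ReflectionUtils.newInstance(reader.getKeyClass(), conf);
      Writable value = (Writable) ReflectionUtils.newInstance(reader.getValueClass(), conf);

      // Re-serialize each record without compression to count its expanded size.
      DataOutputBuffer buf = new DataOutputBuffer();
      long expandedBytes = 0;
      while (reader.next(key, value)) {
        buf.reset();
        key.write(buf);
        value.write(buf);
        expandedBytes += buf.getLength();
      }
      reader.close();

      System.out.println("on disk:  " + onDiskBytes + " bytes");
      System.out.println("expanded: " + expandedBytes + " bytes");
      System.out.println("ratio:    " + (double) expandedBytes / onDiskBytes);
    }
  }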

In my case, I set the heap space to 1GB.
The mappers started to fail once the accumulated expanded input size rose above 300MB.
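
For reference, a 1GB child heap is typically configured through the mapred.child.java.opts property;
the report does not say how the heap was actually set, so the snippet below is only an illustrative
sketch using JobConf.

  import org.apache.hadoop.mapred.JobConf;

  public class HeapConfig {
    public static void main(String[] args) {
      JobConf job = new JobConf();
      // Mapper and reducer child JVMs are launched with these options.
      job.set("mapred.child.java.opts", "-Xmx1024m");
    }
  }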
 


-- 
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.