Posted to mapreduce-issues@hadoop.apache.org by "Allen Wittenauer (JIRA)" <ji...@apache.org> on 2014/07/21 19:54:41 UTC

[jira] [Resolved] (MAPREDUCE-154) Mapper runs out of memory

     [ https://issues.apache.org/jira/browse/MAPREDUCE-154?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Allen Wittenauer resolved MAPREDUCE-154.
----------------------------------------

    Resolution: Cannot Reproduce

> Mapper runs out of memory
> -------------------------
>
>                 Key: MAPREDUCE-154
>                 URL: https://issues.apache.org/jira/browse/MAPREDUCE-154
>             Project: Hadoop Map/Reduce
>          Issue Type: Bug
>         Environment: Amazon EC2 Extra Large instance (4 cores, 15 GB RAM), Sun Java 6 (1.6.0_10); 1 Master, 4 Slaves (all the same); each Java process takes the argument "-Xmx700m" (2 Java processes per Instance)
>            Reporter: Richard J. Zak
>
> The Hadoop job has the task of processing 4 directories in HDFS, each with 15 files.  This is sample data, a test run before I move to the needed 5 directories of about 800 documents each.  The mapper takes in nearly 200 pages (not files) and throws an OutOfMemoryError.  The largest file is 17 MB.
> If this problem is something on my end and not truly a bug, I apologize.  However, after Googling a bit, I did see many threads about people running out of memory with small data sets.
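
A common workaround suggested for reports like this (not part of the original thread) was to raise the heap given to the child task JVMs, which in the Hadoop version current at the time was controlled by the mapred.child.java.opts property. A sketch of that configuration, assuming the reporter's slaves could spare more than the 700 MB per process described in the environment:

```xml
<!-- mapred-site.xml (or a per-job configuration override).
     mapred.child.java.opts sets the JVM flags for each spawned
     map/reduce task; the default of the era was -Xmx200m, well
     below the -Xmx700m the reporter gave the daemon processes. -->
<property>
  <name>mapred.child.java.opts</name>
  <value>-Xmx1024m</value>
</property>
```

Note that this property applies to the task JVMs forked by the TaskTracker, not to the daemons themselves, so the -Xmx700m passed to the master/slave processes would not have reached the mapper unless a setting like this was also in place.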



--
This message was sent by Atlassian JIRA
(v6.2#6252)