Posted to mapreduce-user@hadoop.apache.org by Bai Shen <ba...@gmail.com> on 2012/01/10 16:25:28 UTC

Out of Memory during Reduce Merge

I appear to be having the same problem as this poster.

http://mail-archives.apache.org/mod_mbox/hadoop-mapreduce-user/201006.mbox/%3C312639.9108.qm@web114409.mail.gq1.yahoo.com%3E

Unfortunately, the thread seems to end with no resolution.  Does anyone
know if this has been resolved?

I'm using SCM 3.7.1 if that makes a difference.

Re: Out of Memory during Reduce Merge

Posted by Bai Shen <ba...@gmail.com>.
I set the following flags, as suggested, but I'm not seeing any output in
the specified directory.

-XX:+HeapDumpOnOutOfMemoryError -XX:HeapDumpPath=/data
-XX:+UseConcMarkSweepGC -XX:+PrintGCTimeStamps -XX:+PrintGCDetails
-XX:MaxPermSize=512m -XX:+PrintTenuringDistribution
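(Editor's note: these JVM options only affect the process they are passed to. If they were set on the client shell or the tasktracker daemon rather than on the task JVMs themselves, the reduce tasks would never write a heap dump. In Hadoop of this era they would typically be propagated to child task JVMs via mapred.child.java.opts — a minimal sketch, assuming a mapred-site.xml deployment; the -Xmx1024m heap size is illustrative, not from this thread:)

```xml
<!-- mapred-site.xml: pass JVM options to every map/reduce child JVM.
     -Xmx1024m is an illustrative heap size, not taken from the original post. -->
<property>
  <name>mapred.child.java.opts</name>
  <value>-Xmx1024m -XX:+HeapDumpOnOutOfMemoryError -XX:HeapDumpPath=/data -XX:+UseConcMarkSweepGC -XX:+PrintGCDetails -XX:+PrintGCTimeStamps</value>
</property>
```

(/data must exist and be writable by the task user on every node running reduce tasks, or the dump is silently skipped.)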

Why is the merge trying to load all of the data into memory?
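(Editor's note: the reduce-side merge in 1.x-era Hadoop deliberately buffers fetched map outputs in the reducer heap, governed by a few configuration knobs. A sketch using the standard Hadoop 1.x property names and their usual defaults — hedged, since the exact distribution/version in this thread is unclear; lowering the percentages spills segments to disk earlier instead of holding them in memory:)

```xml
<!-- mapred-site.xml: reduce-side shuffle/merge memory knobs (Hadoop 1.x names).
     Values shown are the customary defaults, for illustration. -->
<property>
  <name>mapred.job.shuffle.input.buffer.percent</name>
  <value>0.70</value> <!-- fraction of reducer heap used to hold fetched map outputs -->
</property>
<property>
  <name>mapred.job.shuffle.merge.percent</name>
  <value>0.66</value> <!-- buffer fill level that triggers an in-memory merge to disk -->
</property>
<property>
  <name>mapred.job.reduce.input.buffer.percent</name>
  <value>0.0</value>  <!-- heap fraction that may retain map outputs during the reduce itself -->
</property>
```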

On Tue, Jan 10, 2012 at 10:25 AM, Bai Shen <ba...@gmail.com> wrote:

> I appear to be having the same problem as this poster.
>
>
> http://mail-archives.apache.org/mod_mbox/hadoop-mapreduce-user/201006.mbox/%3C312639.9108.qm@web114409.mail.gq1.yahoo.com%3E
>
> Unfortunately, the thread seems to end with no resolution.  Does anyone
> know if this has been resolved?
>
> I'm using SCM 3.7.1 if that makes a difference.
>