Posted to common-user@hadoop.apache.org by siddharth raghuvanshi <tr...@gmail.com> on 2010/10/26 16:44:19 UTC

GC overhead limit exceeded while running Terrier on Hadoop

Hi,

While running Terrier on Hadoop, I keep getting the following error.
Can someone please point out where the problem is?

attempt_201010252225_0001_m_000009_2: WARN - Error running child
attempt_201010252225_0001_m_000009_2: java.lang.OutOfMemoryError: GC overhead limit exceeded
attempt_201010252225_0001_m_000009_2: at org.terrier.structures.indexing.singlepass.hadoop.HadoopRunWriter.writeTerm(HadoopRunWriter.java:78)
attempt_201010252225_0001_m_000009_2: at org.terrier.structures.indexing.singlepass.MemoryPostings.writeToWriter(MemoryPostings.java:151)
attempt_201010252225_0001_m_000009_2: at org.terrier.structures.indexing.singlepass.MemoryPostings.finish(MemoryPostings.java:112)
attempt_201010252225_0001_m_000009_2: at org.terrier.indexing.hadoop.Hadoop_BasicSinglePassIndexer.forceFlush(Hadoop_BasicSinglePassIndexer.java:308)
attempt_201010252225_0001_m_000009_2: at org.terrier.indexing.hadoop.Hadoop_BasicSinglePassIndexer.closeMap(Hadoop_BasicSinglePassIndexer.java:419)
attempt_201010252225_0001_m_000009_2: at org.terrier.indexing.hadoop.Hadoop_BasicSinglePassIndexer.close(Hadoop_BasicSinglePassIndexer.java:236)
attempt_201010252225_0001_m_000009_2: at org.apache.hadoop.mapred.MapRunner.run(MapRunner.java:50)
attempt_201010252225_0001_m_000009_2: at org.apache.hadoop.mapred.MapTask.run(MapTask.java:227)
attempt_201010252225_0001_m_000009_2: at org.apache.hadoop.mapred.TaskTracker$Child.main(TaskTracker.java:2198)

Thanks

Regards
Siddharth

Re: GC overhead limit exceeded while running Terrier on Hadoop

Posted by Hemanth Yamijala <yh...@gmail.com>.
Hi,

On Tue, Oct 26, 2010 at 8:14 PM, siddharth raghuvanshi
<tr...@gmail.com> wrote:
> Hi,
>
> While running Terrier on Hadoop, I keep getting the following error.
> Can someone please point out where the problem is?
>
> attempt_201010252225_0001_m_000009_2: WARN - Error running child
> attempt_201010252225_0001_m_000009_2: java.lang.OutOfMemoryError: GC overhead limit exceeded

This error generally means that your MapReduce program requires more
JVM heap space than is configured by default. You could refer to the
map/reduce documentation at http://bit.ly/9VAHCT and see if that
helps. In short, you may have to set specific configuration
parameters so that your map/reduce tasks run with more JVM heap than
the default. Depending on which version of Hadoop you are using, the
parameter names vary a little, but they should be covered in the
relevant documentation.
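
For example, here is a minimal sketch of raising the task heap through
the old mapred API (the org.apache.hadoop.mapred classes in your stack
trace suggest a 0.20-era release, where the relevant property is
mapred.child.java.opts; the 1024m figure below is only an illustration,
not a tuned recommendation):

    import org.apache.hadoop.mapred.JobConf;

    public class RaiseTaskHeap {
        public static void main(String[] args) {
            // JobConf from the old mapred API, matching the stack trace above.
            JobConf conf = new JobConf();

            // mapred.child.java.opts holds the JVM options each task child is
            // launched with; the 0.20-era default (-Xmx200m) is often too small
            // for memory-hungry single-pass indexers such as Terrier's.
            // 1024m is an illustrative value, not a recommendation.
            conf.set("mapred.child.java.opts", "-Xmx1024m");

            // ...then configure and submit the job as usual with this conf.
        }
    }

The same property can also be set cluster-wide in mapred-site.xml, and
newer releases split it into separate map-side and reduce-side variants,
so check the documentation for your exact version.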

Thanks
hemanth
