Posted to common-user@hadoop.apache.org by Saptarshi Guha <sa...@gmail.com> on 2009/11/30 17:53:15 UTC

GC overhead limit reached when tasktrackers start

Hello,
While trying to start the task tracker I get the following error in
the logs (see below).
I'm guessing it's trying to clean up an aborted job (a badly coded one)
and there are too many files to clean up.

Does anyone know which directory it's looking into, so that I can manually
clean it up?
Regards
S

==Error==

2009-11-30 11:39:47,989 ERROR org.apache.hadoop.mapred.TaskTracker:
Can not start task tracker because java.lang.OutOfMemoryError: GC
overhead limit exceeded
        at java.util.Arrays.copyOf(Arrays.java:2882)
        at java.lang.AbstractStringBuilder.expandCapacity(AbstractStringBuilder.java:100)
        at java.lang.AbstractStringBuilder.append(AbstractStringBuilder.java:572)
        at java.lang.StringBuilder.append(StringBuilder.java:203)
        at java.io.UnixFileSystem.resolve(UnixFileSystem.java:93)
        at java.io.File.<init>(File.java:207)
        at java.io.File.listFiles(File.java:1056)
        at org.apache.hadoop.fs.FileUtil.fullyDelete(FileUtil.java:73)
        at org.apache.hadoop.fs.FileUtil.fullyDelete(FileUtil.java:91)
        at org.apache.hadoop.fs.FileUtil.fullyDelete(FileUtil.java:91)
        at org.apache.hadoop.fs.FileUtil.fullyDelete(FileUtil.java:91)
        at org.apache.hadoop.fs.FileUtil.fullyDelete(FileUtil.java:91)
        at org.apache.hadoop.fs.FileUtil.fullyDelete(FileUtil.java:91)
        at org.apache.hadoop.fs.RawLocalFileSystem.delete(RawLocalFileSystem.java:269)
        at org.apache.hadoop.fs.ChecksumFileSystem.delete(ChecksumFileSystem.java:438)
        at org.apache.hadoop.fs.FilterFileSystem.delete(FilterFileSystem.java:143)
        at org.apache.hadoop.mapred.JobConf.deleteLocalFiles(JobConf.java:270)
        at org.apache.hadoop.mapred.TaskTracker.initialize(TaskTracker.java:441)
        at org.apache.hadoop.mapred.TaskTracker.<init>(TaskTracker.java:934)
        at org.apache.hadoop.mapred.TaskTracker.main(TaskTracker.java:2833)

Re: GC overhead limit reached when tasktrackers start

Posted by Saptarshi Guha <sa...@gmail.com>.
Yes, this is it, because the TTs were running fine before the bad job.
I cleared the directory (which took forever) and it worked.
Thanks
Saptarshi
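
A rough standalone sketch of the cleanup Saptarshi describes, not taken
from the thread: it reuses the same FileUtil.fullyDelete helper that shows
up in the stack trace, so it can be pointed at the local directory and run
in its own JVM with a larger heap. The path below is a placeholder; use
whatever your mapred.local.dir actually points at, and only run this while
the TaskTracker is stopped.

import java.io.File;

import org.apache.hadoop.fs.FileUtil;

public class ClearMapredLocal {
    public static void main(String[] args) throws Exception {
        // Placeholder: replace with the directory (or directories) listed
        // under mapred.local.dir in your mapred-site.xml.
        File localDir = new File("/tmp/hadoop/mapred/local");
        File[] children = localDir.listFiles();
        if (children == null) {
            System.out.println("Nothing to clean under " + localDir);
            return;
        }
        for (File child : children) {
            // Same recursive delete the TaskTracker runs at startup, but in
            // a standalone JVM that can be given more heap, e.g.
            // java -Xmx1g ClearMapredLocal
            FileUtil.fullyDelete(child);
        }
    }
}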

On Mon, Nov 30, 2009 at 12:48 PM, Todd Lipcon <to...@cloudera.com> wrote:
> That looks like the gc time overhead limit, not an actual out of memory
> error.
>
> It's probably trying to rm -rf the mapred.local.dir contents. If your TT is
> stopped, feel free to remove everything from in there and try to start
> again.
>
> -Todd

Re: GC overhead limit reached when tasktrackers start

Posted by Bill Au <bi...@gmail.com>.
The GC overhead limit exceeded error is caused by the heap being almost out
of space: the JVM is spending more than 98% of the total time doing garbage
collection while less than 2% of the heap is recovered.

Bill
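
To make the failure mode concrete, here is a tiny toy program (not from
the thread) that recreates the condition Bill describes: every object it
allocates stays reachable, so once the heap fills up the collector spends
nearly all of its time recovering almost nothing. Run it with a small heap
(for example java -Xmx32m GcOverheadDemo); depending on the collector it
dies with either "GC overhead limit exceeded" or a plain "Java heap space"
OutOfMemoryError, and HotSpot's -XX:-UseGCOverheadLimit flag merely trades
the former for the latter.

import java.util.ArrayList;
import java.util.List;

public class GcOverheadDemo {
    public static void main(String[] args) {
        List<String> live = new ArrayList<String>();
        int i = 0;
        while (true) {
            // Everything added here stays reachable, so each GC cycle
            // recovers almost nothing while allocation keeps going.
            live.add("entry-" + (i++));
        }
    }
}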

On Mon, Nov 30, 2009 at 12:48 PM, Todd Lipcon <to...@cloudera.com> wrote:

> That looks like the gc time overhead limit, not an actual out of memory
> error.
>
> It's probably trying to rm -rf the mapred.local.dir contents. If your TT is
> stopped, feel free to remove everything from in there and try to start
> again.
>
> -Todd

Re: GC overhead limit reached when tasktrackers start

Posted by Todd Lipcon <to...@cloudera.com>.
That looks like the gc time overhead limit, not an actual out of memory
error.

It's probably trying to rm -rf the mapred.local.dir contents. If your TT is
stopped, feel free to remove everything from in there and try to start
again.

-Todd
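
Since the original question was which directory is being cleaned, here is
a small sketch (added for illustration, not Todd's) that simply prints the
configured value. It assumes the Hadoop conf directory is on the classpath
so that JobConf picks up mapred-site.xml; the fallback string is the stock
default of ${hadoop.tmp.dir}/mapred/local.

import org.apache.hadoop.mapred.JobConf;

public class ShowLocalDirs {
    public static void main(String[] args) {
        JobConf conf = new JobConf();
        // mapred.local.dir may be a comma-separated list of directories.
        String localDirs = conf.get("mapred.local.dir",
                "${hadoop.tmp.dir}/mapred/local");
        for (String dir : localDirs.split(",")) {
            System.out.println(dir.trim());
        }
    }
}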

On Mon, Nov 30, 2009 at 9:40 AM, Bill Au <bi...@gmail.com> wrote:

> Your JVM is running out of heap space, so you will need to run it with a
> bigger max heap size.
>
> Bill

Re: GC overhead limit reached when tasktrackers start

Posted by Bill Au <bi...@gmail.com>.
Your JVM is running out of heap space, so you will need to run it with a
bigger max heap size.

Bill
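
Note that for the TaskTracker daemon itself the heap is fixed at launch
time (in 0.20-era setups usually via HADOOP_HEAPSIZE in conf/hadoop-env.sh),
so raising the heap of the child task JVMs will not help here. The trivial
snippet below, a sanity check rather than anything Hadoop-specific, prints
the maximum heap of whatever JVM it runs in, which is a quick way to see
what a given -Xmx or HADOOP_HEAPSIZE value translates to.

public class MaxHeap {
    public static void main(String[] args) {
        // Maximum heap the current JVM will attempt to use, in megabytes.
        long maxMb = Runtime.getRuntime().maxMemory() >> 20;
        System.out.println("Max heap: " + maxMb + " MB");
    }
}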
