Posted to user@hadoop.apache.org by haihong lu <un...@gmail.com> on 2014/03/07 07:34:03 UTC

GC overhead limit exceeded

Hi,

     I have a problem when running HiBench with Hadoop 2.2.0; the error
messages are listed below:

14/03/07 13:54:53 INFO mapreduce.Job:  map 19% reduce 0%
14/03/07 13:54:54 INFO mapreduce.Job:  map 21% reduce 0%
14/03/07 14:00:26 INFO mapreduce.Job: Task Id :
attempt_1394160253524_0010_m_000020_0, Status : FAILED
Error: GC overhead limit exceeded
14/03/07 14:00:27 INFO mapreduce.Job:  map 20% reduce 0%
14/03/07 14:00:40 INFO mapreduce.Job: Task Id :
attempt_1394160253524_0010_m_000008_0, Status : FAILED
Error: GC overhead limit exceeded
14/03/07 14:00:41 INFO mapreduce.Job:  map 19% reduce 0%
14/03/07 14:00:59 INFO mapreduce.Job:  map 20% reduce 0%
14/03/07 14:00:59 INFO mapreduce.Job: Task Id :
attempt_1394160253524_0010_m_000015_0, Status : FAILED
Error: GC overhead limit exceeded
14/03/07 14:01:00 INFO mapreduce.Job:  map 19% reduce 0%
14/03/07 14:01:03 INFO mapreduce.Job: Task Id :
attempt_1394160253524_0010_m_000023_0, Status : FAILED
Error: GC overhead limit exceeded
14/03/07 14:01:11 INFO mapreduce.Job: Task Id :
attempt_1394160253524_0010_m_000026_0, Status : FAILED
Error: GC overhead limit exceeded
14/03/07 14:01:35 INFO mapreduce.Job:  map 20% reduce 0%
14/03/07 14:01:35 INFO mapreduce.Job: Task Id :
attempt_1394160253524_0010_m_000019_0, Status : FAILED
Error: GC overhead limit exceeded
14/03/07 14:01:36 INFO mapreduce.Job:  map 19% reduce 0%
14/03/07 14:01:43 INFO mapreduce.Job: Task Id :
attempt_1394160253524_0010_m_000007_0, Status : FAILED
Error: GC overhead limit exceeded
14/03/07 14:02:00 INFO mapreduce.Job: Task Id :
attempt_1394160253524_0010_m_000000_0, Status : FAILED
Error: GC overhead limit exceeded
14/03/07 14:02:01 INFO mapreduce.Job:  map 18% reduce 0%
14/03/07 14:02:23 INFO mapreduce.Job: Task Id :
attempt_1394160253524_0010_m_000021_0, Status : FAILED
Error: Java heap space
14/03/07 14:02:24 INFO mapreduce.Job:  map 17% reduce 0%
14/03/07 14:02:31 INFO mapreduce.Job:  map 18% reduce 0%
14/03/07 14:02:33 INFO mapreduce.Job: Task Id :
attempt_1394160253524_0010_m_000029_0, Status : FAILED
Error: GC overhead limit exceeded
14/03/07 14:02:34 INFO mapreduce.Job:  map 17% reduce 0%
14/03/07 14:02:38 INFO mapreduce.Job: Task Id :
attempt_1394160253524_0010_m_000010_0, Status : FAILED
Error: GC overhead limit exceeded
14/03/07 14:02:41 INFO mapreduce.Job: Task Id :
attempt_1394160253524_0010_m_000018_0, Status : FAILED
Error: GC overhead limit exceeded
14/03/07 14:02:43 INFO mapreduce.Job: Task Id :
attempt_1394160253524_0010_m_000014_0, Status : FAILED
Error: GC overhead limit exceeded
14/03/07 14:02:47 INFO mapreduce.Job: Task Id :
attempt_1394160253524_0010_m_000028_0, Status : FAILED
Error: Java heap space
14/03/07 14:02:50 INFO mapreduce.Job: Task Id :
attempt_1394160253524_0010_m_000002_0, Status : FAILED
Error: GC overhead limit exceeded
14/03/07 14:02:51 INFO mapreduce.Job:  map 16% reduce 0%
14/03/07 14:02:51 INFO mapreduce.Job: Task Id :
attempt_1394160253524_0010_m_000005_0, Status : FAILED
Error: GC overhead limit exceeded
14/03/07 14:02:52 INFO mapreduce.Job:  map 15% reduce 0%
14/03/07 14:02:55 INFO mapreduce.Job: Task Id :
attempt_1394160253524_0010_m_000006_0, Status : FAILED
Error: GC overhead limit exceeded
14/03/07 14:02:57 INFO mapreduce.Job: Task Id :
attempt_1394160253524_0010_m_000027_0, Status : FAILED
Error: GC overhead limit exceeded
14/03/07 14:02:58 INFO mapreduce.Job:  map 14% reduce 0%
14/03/07 14:03:04 INFO mapreduce.Job: Task Id :
attempt_1394160253524_0010_m_000009_0, Status : FAILED
Error: GC overhead limit exceeded
14/03/07 14:03:05 INFO mapreduce.Job: Task Id :
attempt_1394160253524_0010_m_000017_0, Status : FAILED
Error: GC overhead limit exceeded
14/03/07 14:03:05 INFO mapreduce.Job: Task Id :
attempt_1394160253524_0010_m_000022_0, Status : FAILED
Error: GC overhead limit exceeded
14/03/07 14:03:06 INFO mapreduce.Job:  map 12% reduce 0%
14/03/07 14:03:10 INFO mapreduce.Job: Task Id :
attempt_1394160253524_0010_m_000001_0, Status : FAILED
Error: GC overhead limit exceeded
14/03/07 14:03:11 INFO mapreduce.Job:  map 13% reduce 0%
14/03/07 14:03:11 INFO mapreduce.Job: Task Id :
attempt_1394160253524_0010_m_000024_0, Status : FAILED

I then added the parameter "mapred.child.java.opts" to the file
"mapred-site.xml":
  <property>
        <name>mapred.child.java.opts</name>
        <value>-Xmx1024m</value>
  </property>
after which another error occurred, as shown below:

14/03/07 11:21:51 INFO mapreduce.Job:  map 0% reduce 0%
14/03/07 11:21:59 INFO mapreduce.Job: Task Id :
attempt_1394160253524_0003_m_000002_0, Status : FAILED
Container [pid=5592,containerID=container_1394160253524_0003_01_000004] is
running beyond virtual memory limits. Current usage: 112.6 MB of 1 GB
physical memory used; 2.7 GB of 2.1 GB virtual memory used. Killing container.
Dump of the process-tree for container_1394160253524_0003_01_000004 :
|- PID PPID PGRPID SESSID CMD_NAME USER_MODE_TIME(MILLIS)
SYSTEM_TIME(MILLIS) VMEM_USAGE(BYTES) RSSMEM_USAGE(PAGES) FULL_CMD_LINE
|- 5598 5592 5592 5592 (java) 563 14 2778632192 28520
/usr/java/jdk1.7.0_45/bin/java -Djava.net.preferIPv4Stack=true
-Dhadoop.metrics.log.level=WARN -Xmx2048m
-Djava.io.tmpdir=/home/hadoop/tmp/nm-local-dir/usercache/hadoop/appcache/application_1394160253524_0003/container_1394160253524_0003_01_000004/tmp
-Dlog4j.configuration=container-log4j.properties
-Dyarn.app.container.log.dir=/var/log/hadoop/yarn/userlogs/application_1394160253524_0003/container_1394160253524_0003_01_000004

-Dyarn.app.container.log.filesize=0 -Dhadoop.root.logger=INFO,CLA
org.apache.hadoop.mapred.YarnChild 10.239.44.34 46837
attempt_1394160253524_0003_m_000002_0 4
|- 5592 4562 5592 5592 (bash) 0 0 108650496 300 /bin/bash -c
/usr/java/jdk1.7.0_45/bin/java -Djava.net.preferIPv4Stack=true
-Dhadoop.metrics.log.level=WARN -Xmx2048m
-Djava.io.tmpdir=/home/hadoop/tmp/nm-local-dir/usercache/hadoop/appcache/application_1394160253524_0003/container_1394160253524_0003_01_000004/tmp
-Dlog4j.configuration=container-log4j.properties
-Dyarn.app.container.log.dir=/var/log/hadoop/yarn/userlogs/application_1394160253524_0003/container_1394160253524_0003_01_000004

-Dyarn.app.container.log.filesize=0 -Dhadoop.root.logger=INFO,CLA
org.apache.hadoop.mapred.YarnChild 10.239.44.34 46837
attempt_1394160253524_0003_m_000002_0 4

1>/var/log/hadoop/yarn/userlogs/application_1394160253524_0003/container_1394160253524_0003_01_000004/stdout

2>/var/log/hadoop/yarn/userlogs/application_1394160253524_0003/container_1394160253524_0003_01_000004/stderr


Container killed on request. Exit code is 143
14/03/07 11:22:02 INFO mapreduce.Job: Task Id :
attempt_1394160253524_0003_m_000001_0, Status : FAILED
Container [pid=5182,containerID=container_1394160253524_0003_01_000003] is
running beyond virtual memory limits. Current usage: 118.5 MB of 1 GB
physical memory used; 2.7 GB of 2.1 GB virtual memory used. Killing container.
Dump of the process-tree for container_1394160253524_0003_01_000003 :
|- PID PPID PGRPID SESSID CMD_NAME USER_MODE_TIME(MILLIS)
SYSTEM_TIME(MILLIS) VMEM_USAGE(BYTES) RSSMEM_USAGE(PAGES) FULL_CMD_LINE
|- 5182 4313 5182 5182 (bash) 0 0 108650496 303 /bin/bash -c
/usr/java/jdk1.7.0_45/bin/java -Djava.net.preferIPv4Stack=true
-Dhadoop.metrics.log.level=WARN -Xmx2048m
-Djava.io.tmpdir=/home/hadoop/tmp/nm-local-dir/usercache/hadoop/appcache/application_1394160253524_0003/container_1394160253524_0003_01_000003/tmp
-Dlog4j.configuration=container-log4j.properties
-Dyarn.app.container.log.dir=/var/log/hadoop/yarn/userlogs/application_1394160253524_0003/container_1394160253524_0003_01_000003

-Dyarn.app.container.log.filesize=0 -Dhadoop.root.logger=INFO,CLA
org.apache.hadoop.mapred.YarnChild 10.239.44.34 46837
attempt_1394160253524_0003_m_000001_0 3

1>/var/log/hadoop/yarn/userlogs/application_1394160253524_0003/container_1394160253524_0003_01_000003/stdout

2>/var/log/hadoop/yarn/userlogs/application_1394160253524_0003/container_1394160253524_0003_01_000003/stderr

|- 5187 5182 5182 5182 (java) 616 19 2783928320 30028
/usr/java/jdk1.7.0_45/bin/java -Djava.net.preferIPv4Stack=true
-Dhadoop.metrics.log.level=WARN -Xmx2048m
-Djava.io.tmpdir=/home/hadoop/tmp/nm-local-dir/usercache/hadoop/appcache/application_1394160253524_0003/container_1394160253524_0003_01_000003/tmp
-Dlog4j.configuration=container-log4j.properties
-Dyarn.app.container.log.dir=/var/log/hadoop/yarn/userlogs/application_1394160253524_0003/container_1394160253524_0003_01_000003

-Dyarn.app.container.log.filesize=0 -Dhadoop.root.logger=INFO,CLA
org.apache.hadoop.mapred.YarnChild 10.239.44.34 46837
attempt_1394160253524_0003_m_000001_0 3

Container killed on request. Exit code is 143

In the end, the task failed.
Thanks for any help!
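The numbers in the kill message line up with YARN's defaults: a container may use virtual memory up to yarn.nodemanager.vmem-pmem-ratio (default 2.1) times its physical allocation, which here is the default map-task allocation of 1024 MB. A JVM started with -Xmx2048m easily exceeds that ceiling. A small sketch of the arithmetic, using the VMEM_USAGE figure from the process dump above:

```python
# YARN kills a container when its virtual memory exceeds
# physical_allocation * vmem_pmem_ratio. Hadoop 2.2.0 defaults:
container_pmem_mb = 1024       # mapreduce.map.memory.mb default
vmem_pmem_ratio = 2.1          # yarn.nodemanager.vmem-pmem-ratio default

vmem_limit_gb = container_pmem_mb * vmem_pmem_ratio / 1024
print(round(vmem_limit_gb, 1))   # the 2.1 GB ceiling reported in the log

# The task JVM alone, from the dump: VMEM_USAGE(BYTES) = 2778632192
jvm_vmem_gb = 2778632192 / 1024**3
print(round(jvm_vmem_gb, 1))     # ~2.6 GB; with the bash wrapper, ~2.7 GB total
```

So the container is killed not because the heap is full, but because the whole process tree overruns the virtual-memory allowance derived from the 1 GB physical allocation.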

Re: GC overhead limit exceeded

Posted by haihong lu <un...@gmail.com>.
Thanks a lot, the answer is helpful.


On Wed, Mar 12, 2014 at 2:20 PM, divye sheth <di...@gmail.com> wrote:

> Hi Haihong,
>
> Please check out the link below, I believe it should solve your problem.
>
>
> http://stackoverflow.com/questions/21005643/container-is-running-beyond-memory-limits
>
> Thanks
> Divye Sheth
>
>
> On Wed, Mar 12, 2014 at 11:33 AM, haihong lu <un...@gmail.com> wrote:
>
>> Thanks, but even though I had added this parameter, it had no effect.
>>
>>
>> On Tue, Mar 11, 2014 at 12:11 PM, unmesha sreeveni <unmeshabiju@gmail.com
>> > wrote:
>>
>>> Try increasing the memory for the datanode and see. This requires
>>> restarting hadoop:
>>> export HADOOP_DATANODE_OPTS="-Xmx10g"
>>> This will set the heap to 10 GB.
>>> You can also add this at the start of the hadoop-env.sh file.
>>>
>>>
>>> On Tue, Mar 11, 2014 at 9:02 AM, haihong lu <un...@gmail.com> wrote:
>>>
>>>> I have tried both of the methods you suggested, but the problem still
>>>> exists. Thanks all the same. By the way, my hadoop version is 2.2.0, so the
>>>> parameter "mapreduce.map.memory.mb=3072" added to mapred-site.xml
>>>> may have no effect. I have looked for this parameter in the hadoop
>>>> documentation, but did not find it.
>>>>
>>>>
>>>> On Fri, Mar 7, 2014 at 4:57 PM, shashwat shriparv <
>>>> dwivedishashwat@gmail.com> wrote:
>>>>
>>>>> Check this out
>>>>>
>>>>>
>>>>> http://ask.gopivotal.com/hc/en-us/articles/201850408-Namenode-fails-with-java-lang-OutOfMemoryError-GC-overhead-limit-exceeded
>>>>>
>>>>>
>>>>>
>>>>> * Warm Regards_**∞_*
>>>>> * Shashwat Shriparv*
>>>>>
>>>>>
>>>>>
>>>>> On Fri, Mar 7, 2014 at 12:04 PM, haihong lu <un...@gmail.com> wrote:
>>>>>
>>>>>> [original message quoted in full; identical to the message above]
>>>>>>
>>>>>
>>>>>
>>>>
>>>
>>>
>>> --
>>> *Thanks & Regards*
>>>
>>> Unmesha Sreeveni U.B
>>> Junior Developer
>>>
>>> http://www.unmeshasreeveni.blogspot.in/
>>>
>>>
>>>
>>
>
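Following the Stack Overflow answer linked in the accepted reply, the usual fix on Hadoop 2.x is to size the container allocation and the task JVM heap together, keeping the heap comfortably below the container's physical allocation so the vmem/pmem checks are satisfied. A sketch for mapred-site.xml; the 2048/1536 values are illustrative, not taken from the thread:

```xml
<!-- Illustrative sizing: container allocation first, then a smaller JVM heap -->
<property>
  <name>mapreduce.map.memory.mb</name>
  <value>2048</value>
</property>
<property>
  <name>mapreduce.map.java.opts</name>
  <value>-Xmx1536m</value>
</property>
<property>
  <name>mapreduce.reduce.memory.mb</name>
  <value>2048</value>
</property>
<property>
  <name>mapreduce.reduce.java.opts</name>
  <value>-Xmx1536m</value>
</property>
```

Keeping -Xmx a few hundred MB below the *.memory.mb allocation leaves room for the JVM's non-heap overhead, which is what tripped the virtual-memory check in the original post.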

Re: GC overhead limit exceeded

Posted by haihong lu <un...@gmail.com>.
Thanks a lot, the answer is helpful.


On Wed, Mar 12, 2014 at 2:20 PM, divye sheth <di...@gmail.com> wrote:

> Hi Haihong,
>
> Please check out the link below, I believe it should solve your problem.
>
>
> http://stackoverflow.com/questions/21005643/container-is-running-beyond-memory-limits
>
> Thanks
> Divye Sheth
>
>
> On Wed, Mar 12, 2014 at 11:33 AM, haihong lu <un...@gmail.com> wrote:
>
>> Thanks, even if i had added this parameter, but had no effect.
>>
>>
>> On Tue, Mar 11, 2014 at 12:11 PM, unmesha sreeveni <unmeshabiju@gmail.com
>> > wrote:
>>
>>> Try to increase the memory for datanode and see.This need to restart
>>> hadoop
>>> export HADOOP_DATANODE_OPTS="-Xmx10g"
>>> This will set the heap to 10gb
>>> You can also add this in start of hadoop-env.sh file
>>>
>>>
>>> On Tue, Mar 11, 2014 at 9:02 AM, haihong lu <un...@gmail.com> wrote:
>>>
>>>> i have tried both of the methods you side, but the problem still
>>>> exists. Thanks all the same. by the way, my hadoop version is 2.2.0, so the
>>>> parameter  "mapreduce.map.memory.mb =3072" added to mapred-site.xml
>>>> maybe has no effect. I have looked for this parameter in the document of
>>>> hadoop, but did not found it.
>>>>
>>>>
>>>> On Fri, Mar 7, 2014 at 4:57 PM, shashwat shriparv <
>>>> dwivedishashwat@gmail.com> wrote:
>>>>
>>>>> Check this out
>>>>>
>>>>>
>>>>> http://ask.gopivotal.com/hc/en-us/articles/201850408-Namenode-fails-with-java-lang-OutOfMemoryError-GC-overhead-limit-exceeded
>>>>>
>>>>>
>>>>>
>>>>> * Warm Regards_**∞_*
>>>>> * Shashwat Shriparv*
>>>>>  [image: http://www.linkedin.com/pub/shashwat-shriparv/19/214/2a9]<http://www.linkedin.com/pub/shashwat-shriparv/19/214/2a9>[image:
>>>>> https://twitter.com/shriparv] <https://twitter.com/shriparv>[image:
>>>>> https://www.facebook.com/shriparv] <https://www.facebook.com/shriparv>[image:
>>>>> http://google.com/+ShashwatShriparv]<http://google.com/+ShashwatShriparv>[image:
>>>>> http://www.youtube.com/user/sShriparv/videos]<http://www.youtube.com/user/sShriparv/videos>[image:
>>>>> http://profile.yahoo.com/SWXSTW3DVSDTF2HHSRM47AV6DI/]<sh...@yahoo.com>
>>>>>
>>>>>
>>>>>
>>>>> On Fri, Mar 7, 2014 at 12:04 PM, haihong lu <un...@gmail.com> wrote:
>>>>>
>>>>>> Hi:
>>>>>>
>>>>>>      i have a problem when run Hibench with hadoop-2.2.0, the wrong
>>>>>> message list as below
>>>>>>
>>>>>> 14/03/07 13:54:53 INFO mapreduce.Job:  map 19% reduce 0%
>>>>>>  14/03/07 13:54:54 INFO mapreduce.Job:  map 21% reduce 0%
>>>>>> 14/03/07 14:00:26 INFO mapreduce.Job: Task Id :
>>>>>> attempt_1394160253524_0010_m_000020_0, Status : FAILED
>>>>>> Error: GC overhead limit exceeded
>>>>>> 14/03/07 14:00:27 INFO mapreduce.Job:  map 20% reduce 0%
>>>>>> 14/03/07 14:00:40 INFO mapreduce.Job: Task Id :
>>>>>> attempt_1394160253524_0010_m_000008_0, Status : FAILED
>>>>>> Error: GC overhead limit exceeded
>>>>>> 14/03/07 14:00:41 INFO mapreduce.Job:  map 19% reduce 0%
>>>>>> 14/03/07 14:00:59 INFO mapreduce.Job:  map 20% reduce 0%
>>>>>> 14/03/07 14:00:59 INFO mapreduce.Job: Task Id :
>>>>>> attempt_1394160253524_0010_m_000015_0, Status : FAILED
>>>>>> Error: GC overhead limit exceeded
>>>>>> 14/03/07 14:01:00 INFO mapreduce.Job:  map 19% reduce 0%
>>>>>> 14/03/07 14:01:03 INFO mapreduce.Job: Task Id :
>>>>>> attempt_1394160253524_0010_m_000023_0, Status : FAILED
>>>>>> Error: GC overhead limit exceeded
>>>>>> 14/03/07 14:01:11 INFO mapreduce.Job: Task Id :
>>>>>> attempt_1394160253524_0010_m_000026_0, Status : FAILED
>>>>>> Error: GC overhead limit exceeded
>>>>>> 14/03/07 14:01:35 INFO mapreduce.Job:  map 20% reduce 0%
>>>>>> 14/03/07 14:01:35 INFO mapreduce.Job: Task Id :
>>>>>> attempt_1394160253524_0010_m_000019_0, Status : FAILED
>>>>>> Error: GC overhead limit exceeded
>>>>>> 14/03/07 14:01:36 INFO mapreduce.Job:  map 19% reduce 0%
>>>>>> 14/03/07 14:01:43 INFO mapreduce.Job: Task Id :
>>>>>> attempt_1394160253524_0010_m_000007_0, Status : FAILED
>>>>>> Error: GC overhead limit exceeded
>>>>>> 14/03/07 14:02:00 INFO mapreduce.Job: Task Id :
>>>>>> attempt_1394160253524_0010_m_000000_0, Status : FAILED
>>>>>> Error: GC overhead limit exceeded
>>>>>> 14/03/07 14:02:01 INFO mapreduce.Job:  map 18% reduce 0%
>>>>>> 14/03/07 14:02:23 INFO mapreduce.Job: Task Id :
>>>>>> attempt_1394160253524_0010_m_000021_0, Status : FAILED
>>>>>> Error: Java heap space
>>>>>> 14/03/07 14:02:24 INFO mapreduce.Job:  map 17% reduce 0%
>>>>>> 14/03/07 14:02:31 INFO mapreduce.Job:  map 18% reduce 0%
>>>>>> 14/03/07 14:02:33 INFO mapreduce.Job: Task Id :
>>>>>> attempt_1394160253524_0010_m_000029_0, Status : FAILED
>>>>>> Error: GC overhead limit exceeded
>>>>>> 14/03/07 14:02:34 INFO mapreduce.Job:  map 17% reduce 0%
>>>>>> 14/03/07 14:02:38 INFO mapreduce.Job: Task Id :
>>>>>> attempt_1394160253524_0010_m_000010_0, Status : FAILED
>>>>>> Error: GC overhead limit exceeded
>>>>>> 14/03/07 14:02:41 INFO mapreduce.Job: Task Id :
>>>>>> attempt_1394160253524_0010_m_000018_0, Status : FAILED
>>>>>> Error: GC overhead limit exceeded
>>>>>> 14/03/07 14:02:43 INFO mapreduce.Job: Task Id :
>>>>>> attempt_1394160253524_0010_m_000014_0, Status : FAILED
>>>>>> Error: GC overhead limit exceeded
>>>>>> 14/03/07 14:02:47 INFO mapreduce.Job: Task Id :
>>>>>> attempt_1394160253524_0010_m_000028_0, Status : FAILED
>>>>>> Error: Java heap space
>>>>>> 14/03/07 14:02:50 INFO mapreduce.Job: Task Id :
>>>>>> attempt_1394160253524_0010_m_000002_0, Status : FAILED
>>>>>> Error: GC overhead limit exceeded
>>>>>> 14/03/07 14:02:51 INFO mapreduce.Job:  map 16% reduce 0%
>>>>>> 14/03/07 14:02:51 INFO mapreduce.Job: Task Id :
>>>>>> attempt_1394160253524_0010_m_000005_0, Status : FAILED
>>>>>> Error: GC overhead limit exceeded
>>>>>> 14/03/07 14:02:52 INFO mapreduce.Job:  map 15% reduce 0%
>>>>>> 14/03/07 14:02:55 INFO mapreduce.Job: Task Id :
>>>>>> attempt_1394160253524_0010_m_000006_0, Status : FAILED
>>>>>> Error: GC overhead limit exceeded
>>>>>> 14/03/07 14:02:57 INFO mapreduce.Job: Task Id :
>>>>>> attempt_1394160253524_0010_m_000027_0, Status : FAILED
>>>>>> Error: GC overhead limit exceeded
>>>>>> 14/03/07 14:02:58 INFO mapreduce.Job:  map 14% reduce 0%
>>>>>> 14/03/07 14:03:04 INFO mapreduce.Job: Task Id :
>>>>>> attempt_1394160253524_0010_m_000009_0, Status : FAILED
>>>>>> Error: GC overhead limit exceeded
>>>>>> 14/03/07 14:03:05 INFO mapreduce.Job: Task Id :
>>>>>> attempt_1394160253524_0010_m_000017_0, Status : FAILED
>>>>>> Error: GC overhead limit exceeded
>>>>>> 14/03/07 14:03:05 INFO mapreduce.Job: Task Id :
>>>>>> attempt_1394160253524_0010_m_000022_0, Status : FAILED
>>>>>> Error: GC overhead limit exceeded
>>>>>> 14/03/07 14:03:06 INFO mapreduce.Job:  map 12% reduce 0%
>>>>>> 14/03/07 14:03:10 INFO mapreduce.Job: Task Id :
>>>>>> attempt_1394160253524_0010_m_000001_0, Status : FAILED
>>>>>> Error: GC overhead limit exceeded
>>>>>> 14/03/07 14:03:11 INFO mapreduce.Job:  map 13% reduce 0%
>>>>>> 14/03/07 14:03:11 INFO mapreduce.Job: Task Id :
>>>>>> attempt_1394160253524_0010_m_000024_0, Status : FAILED
>>>>>>
>>>>>> and then i add a parameter "mapred.child.java.opts" to the file
>>>>>> "mapred-site.xml",
>>>>>>   <property>
>>>>>>         <name>mapred.child.java.opts</name>
>>>>>>         <value>-Xmx1024m</value>
>>>>>>   </property>
>>>>>> then another error occurs as below
>>>>>>
>>>>>> 14/03/07 11:21:51 INFO mapreduce.Job:  map 0% reduce 0%
>>>>>> 14/03/07 11:21:59 INFO mapreduce.Job: Task Id :
>>>>>> attempt_1394160253524_0003_m_000002_0, Status : FAILED
>>>>>> Container
>>>>>> [pid=5592,containerID=container_1394160253524_0003_01_000004] is running
>>>>>> beyond virtual memory limits. Current usage: 112.6 MB of 1 GB physical
>>>>>> memory used; 2.7 GB of
>>>>>>
>>>>>> 2.1 GB virtual memory used. Killing container.
>>>>>> Dump of the process-tree for container_1394160253524_0003_01_000004 :
>>>>>> |- PID PPID PGRPID SESSID CMD_NAME USER_MODE_TIME(MILLIS)
>>>>>> SYSTEM_TIME(MILLIS) VMEM_USAGE(BYTES) RSSMEM_USAGE(PAGES) FULL_CMD_LINE
>>>>>>  |- 5598 5592 5592 5592 (java) 563 14 2778632192 28520
>>>>>> /usr/java/jdk1.7.0_45/bin/java -Djava.net.preferIPv4Stack=true
>>>>>> -Dhadoop.metrics.log.level=WARN -Xmx2048m -
>>>>>>
>>>>>> Djava.io.tmpdir=/home/hadoop/tmp/nm-local-dir/usercache/hadoop/appcache/application_1394160253524_0003/container_1394160253524_0003_01_000004/tmp
>>>>>> -
>>>>>>
>>>>>> Dlog4j.configuration=container-log4j.properties
>>>>>> -Dyarn.app.container.log.dir=/var/log/hadoop/yarn/userlogs/application_1394160253524_0003/container_1394160253524_0003_01_000004
>>>>>>
>>>>>> -Dyarn.app.container.log.filesize=0 -Dhadoop.root.logger=INFO,CLA
>>>>>> org.apache.hadoop.mapred.YarnChild 10.239.44.34 46837
>>>>>> attempt_1394160253524_0003_m_000002_0 4
>>>>>> |- 5592 4562 5592 5592 (bash) 0 0 108650496 300 /bin/bash -c
>>>>>> /usr/java/jdk1.7.0_45/bin/java -Djava.net.preferIPv4Stack=true
>>>>>> -Dhadoop.metrics.log.level=WARN  -Xmx2048m -
>>>>>>
>>>>>> Djava.io.tmpdir=/home/hadoop/tmp/nm-local-dir/usercache/hadoop/appcache/application_1394160253524_0003/container_1394160253524_0003_01_000004/tmp
>>>>>> -
>>>>>>
>>>>>> Dlog4j.configuration=container-log4j.properties
>>>>>> -Dyarn.app.container.log.dir=/var/log/hadoop/yarn/userlogs/application_1394160253524_0003/container_1394160253524_0003_01_000004
>>>>>>
>>>>>> -Dyarn.app.container.log.filesize=0 -Dhadoop.root.logger=INFO,CLA
>>>>>> org.apache.hadoop.mapred.YarnChild 10.239.44.34 46837
>>>>>> attempt_1394160253524_0003_m_000002_0 4
>>>>>>
>>>>>>
>>>>>> 1>/var/log/hadoop/yarn/userlogs/application_1394160253524_0003/container_1394160253524_0003_01_000004/stdout
>>>>>>
>>>>>> 2>/var/log/hadoop/yarn/userlogs/application_1394160253524_0003/container_1394160253524_0003_01_000004/stderr
>>>>>>
>>>>>>
>>>>>> Container killed on request. Exit code is 143
>>>>>> 14/03/07 11:22:02 INFO mapreduce.Job: Task Id :
>>>>>> attempt_1394160253524_0003_m_000001_0, Status : FAILED
>>>>>> Container
>>>>>> [pid=5182,containerID=container_1394160253524_0003_01_000003] is running
>>>>>> beyond virtual memory limits. Current usage: 118.5 MB of 1 GB physical
>>>>>> memory used; 2.7 GB of
>>>>>>
>>>>>> 2.1 GB virtual memory used. Killing container.
>>>>>> Dump of the process-tree for container_1394160253524_0003_01_000003 :
>>>>>> |- PID PPID PGRPID SESSID CMD_NAME USER_MODE_TIME(MILLIS)
>>>>>> SYSTEM_TIME(MILLIS) VMEM_USAGE(BYTES) RSSMEM_USAGE(PAGES) FULL_CMD_LINE
>>>>>>  |- 5182 4313 5182 5182 (bash) 0 0 108650496 303 /bin/bash -c
>>>>>> /usr/java/jdk1.7.0_45/bin/java -Djava.net.preferIPv4Stack=true
>>>>>> -Dhadoop.metrics.log.level=WARN  -Xmx2048m -
>>>>>>
>>>>>> Djava.io.tmpdir=/home/hadoop/tmp/nm-local-dir/usercache/hadoop/appcache/application_1394160253524_0003/container_1394160253524_0003_01_000003/tmp
>>>>>> -
>>>>>>
>>>>>> Dlog4j.configuration=container-log4j.properties
>>>>>> -Dyarn.app.container.log.dir=/var/log/hadoop/yarn/userlogs/application_1394160253524_0003/container_1394160253524_0003_01_000003
>>>>>>
>>>>>> -Dyarn.app.container.log.filesize=0 -Dhadoop.root.logger=INFO,CLA
>>>>>> org.apache.hadoop.mapred.YarnChild 10.239.44.34 46837
>>>>>> attempt_1394160253524_0003_m_000001_0 3
>>>>>>
>>>>>>
>>>>>> 1>/var/log/hadoop/yarn/userlogs/application_1394160253524_0003/container_1394160253524_0003_01_000003/stdout
>>>>>>
>>>>>> 2>/var/log/hadoop/yarn/userlogs/application_1394160253524_0003/container_1394160253524_0003_01_000003/stderr
>>>>>>
>>>>>> |- 5187 5182 5182 5182 (java) 616 19 2783928320 30028
>>>>>> /usr/java/jdk1.7.0_45/bin/java -Djava.net.preferIPv4Stack=true
>>>>>> -Dhadoop.metrics.log.level=WARN -Xmx2048m -
>>>>>>
>>>>>> Djava.io.tmpdir=/home/hadoop/tmp/nm-local-dir/usercache/hadoop/appcache/application_1394160253524_0003/container_1394160253524_0003_01_000003/tmp
>>>>>> -
>>>>>>
>>>>>> Dlog4j.configuration=container-log4j.properties
>>>>>> -Dyarn.app.container.log.dir=/var/log/hadoop/yarn/userlogs/application_1394160253524_0003/container_1394160253524_0003_01_000003
>>>>>>
>>>>>> -Dyarn.app.container.log.filesize=0 -Dhadoop.root.logger=INFO,CLA
>>>>>> org.apache.hadoop.mapred.YarnChild 10.239.44.34 46837
>>>>>> attempt_1394160253524_0003_m_000001_0 3
>>>>>>
>>>>>> Container killed on request. Exit code is 143
>>>>>>
>>>>>> In the end, the task failed.
>>>>>> Thanks for any help!
>>>>>>
>>>>>
>>>>>
>>>>
>>>
>>>
>>> --
>>> *Thanks & Regards*
>>>
>>> Unmesha Sreeveni U.B
>>> Junior Developer
>>>
>>> http://www.unmeshasreeveni.blogspot.in/
>>>
>>>
>>>
>>
>

Re: GC overhead limit exceeded

Posted by haihong lu <un...@gmail.com>.
Thanks a lot, the answer is helpful.
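For readers landing on this thread with the same exit-code-143 kills: the fix behind the linked answer generally amounts to sizing the YARN container to fit the JVM heap, or relaxing the virtual-memory check. A sketch of the relevant properties (standard Hadoop 2.x names; the values here are illustrative, not tuned for any particular cluster):

```xml
<!-- mapred-site.xml: request a container large enough for the JVM heap -->
<property>
  <name>mapreduce.map.memory.mb</name>
  <value>1536</value>
</property>
<property>
  <name>mapreduce.map.java.opts</name>
  <value>-Xmx1228m</value>  <!-- keep the heap below the container size -->
</property>

<!-- yarn-site.xml: raise the virtual/physical memory ratio (default 2.1);
     alternatively, yarn.nodemanager.vmem-check-enabled can be set to
     false to skip the virtual-memory check entirely -->
<property>
  <name>yarn.nodemanager.vmem-pmem-ratio</name>
  <value>4</value>
</property>
```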


On Wed, Mar 12, 2014 at 2:20 PM, divye sheth <di...@gmail.com> wrote:

> Hi Haihong,
>
> Please check out the link below; I believe it should solve your problem.
>
>
> http://stackoverflow.com/questions/21005643/container-is-running-beyond-memory-limits
>
> Thanks
> Divye Sheth
>
>
> On Wed, Mar 12, 2014 at 11:33 AM, haihong lu <un...@gmail.com> wrote:
>
>> Thanks, but even though I added this parameter, it had no effect.
>>
>>
>> On Tue, Mar 11, 2014 at 12:11 PM, unmesha sreeveni <unmeshabiju@gmail.com
>> > wrote:
>>
>>> Try increasing the memory for the datanode and see; this requires a
>>> Hadoop restart:
>>> export HADOOP_DATANODE_OPTS="-Xmx10g"
>>> This sets the heap to 10 GB.
>>> You can also add this at the start of the hadoop-env.sh file.
>>>
>>>
>>> On Tue, Mar 11, 2014 at 9:02 AM, haihong lu <un...@gmail.com> wrote:
>>>
>>>> I have tried both of the methods you said, but the problem still
>>>> exists. Thanks all the same. By the way, my Hadoop version is 2.2.0, so
>>>> the parameter "mapreduce.map.memory.mb=3072" added to mapred-site.xml
>>>> may have no effect. I have looked for this parameter in the Hadoop
>>>> documentation, but did not find it.
>>>>
>>>>
>>>> On Fri, Mar 7, 2014 at 4:57 PM, shashwat shriparv <
>>>> dwivedishashwat@gmail.com> wrote:
>>>>
>>>>> Check this out
>>>>>
>>>>>
>>>>> http://ask.gopivotal.com/hc/en-us/articles/201850408-Namenode-fails-with-java-lang-OutOfMemoryError-GC-overhead-limit-exceeded
>>>>>
>>>>>
>>>>>
>>>>> Warm Regards
>>>>> Shashwat Shriparv
>>>>> http://www.linkedin.com/pub/shashwat-shriparv/19/214/2a9
>>>>> https://twitter.com/shriparv
>>>>> https://www.facebook.com/shriparv
>>>>> http://google.com/+ShashwatShriparv
>>>>> http://www.youtube.com/user/sShriparv/videos
>>>>>
>>>>>
>>>>>
>>>>> On Fri, Mar 7, 2014 at 12:04 PM, haihong lu <un...@gmail.com> wrote:
>>>>>
>>>>>> Hi:
>>>>>>
>>>>>>      I have a problem when running HiBench with hadoop-2.2.0; the
>>>>>> error messages are listed below.
>>>>>>
>>>>>> 14/03/07 13:54:53 INFO mapreduce.Job:  map 19% reduce 0%
>>>>>>  14/03/07 13:54:54 INFO mapreduce.Job:  map 21% reduce 0%
>>>>>> 14/03/07 14:00:26 INFO mapreduce.Job: Task Id :
>>>>>> attempt_1394160253524_0010_m_000020_0, Status : FAILED
>>>>>> Error: GC overhead limit exceeded
>>>>>> 14/03/07 14:00:27 INFO mapreduce.Job:  map 20% reduce 0%
>>>>>> 14/03/07 14:00:40 INFO mapreduce.Job: Task Id :
>>>>>> attempt_1394160253524_0010_m_000008_0, Status : FAILED
>>>>>> Error: GC overhead limit exceeded
>>>>>> 14/03/07 14:00:41 INFO mapreduce.Job:  map 19% reduce 0%
>>>>>> 14/03/07 14:00:59 INFO mapreduce.Job:  map 20% reduce 0%
>>>>>> 14/03/07 14:00:59 INFO mapreduce.Job: Task Id :
>>>>>> attempt_1394160253524_0010_m_000015_0, Status : FAILED
>>>>>> Error: GC overhead limit exceeded
>>>>>> 14/03/07 14:01:00 INFO mapreduce.Job:  map 19% reduce 0%
>>>>>> 14/03/07 14:01:03 INFO mapreduce.Job: Task Id :
>>>>>> attempt_1394160253524_0010_m_000023_0, Status : FAILED
>>>>>> Error: GC overhead limit exceeded
>>>>>> 14/03/07 14:01:11 INFO mapreduce.Job: Task Id :
>>>>>> attempt_1394160253524_0010_m_000026_0, Status : FAILED
>>>>>> Error: GC overhead limit exceeded
>>>>>> 14/03/07 14:01:35 INFO mapreduce.Job:  map 20% reduce 0%
>>>>>> 14/03/07 14:01:35 INFO mapreduce.Job: Task Id :
>>>>>> attempt_1394160253524_0010_m_000019_0, Status : FAILED
>>>>>> Error: GC overhead limit exceeded
>>>>>> 14/03/07 14:01:36 INFO mapreduce.Job:  map 19% reduce 0%
>>>>>> 14/03/07 14:01:43 INFO mapreduce.Job: Task Id :
>>>>>> attempt_1394160253524_0010_m_000007_0, Status : FAILED
>>>>>> Error: GC overhead limit exceeded
>>>>>> 14/03/07 14:02:00 INFO mapreduce.Job: Task Id :
>>>>>> attempt_1394160253524_0010_m_000000_0, Status : FAILED
>>>>>> Error: GC overhead limit exceeded
>>>>>> 14/03/07 14:02:01 INFO mapreduce.Job:  map 18% reduce 0%
>>>>>> 14/03/07 14:02:23 INFO mapreduce.Job: Task Id :
>>>>>> attempt_1394160253524_0010_m_000021_0, Status : FAILED
>>>>>> Error: Java heap space
>>>>>> 14/03/07 14:02:24 INFO mapreduce.Job:  map 17% reduce 0%
>>>>>> 14/03/07 14:02:31 INFO mapreduce.Job:  map 18% reduce 0%
>>>>>> 14/03/07 14:02:33 INFO mapreduce.Job: Task Id :
>>>>>> attempt_1394160253524_0010_m_000029_0, Status : FAILED
>>>>>> Error: GC overhead limit exceeded
>>>>>> 14/03/07 14:02:34 INFO mapreduce.Job:  map 17% reduce 0%
>>>>>> 14/03/07 14:02:38 INFO mapreduce.Job: Task Id :
>>>>>> attempt_1394160253524_0010_m_000010_0, Status : FAILED
>>>>>> Error: GC overhead limit exceeded
>>>>>> 14/03/07 14:02:41 INFO mapreduce.Job: Task Id :
>>>>>> attempt_1394160253524_0010_m_000018_0, Status : FAILED
>>>>>> Error: GC overhead limit exceeded
>>>>>> 14/03/07 14:02:43 INFO mapreduce.Job: Task Id :
>>>>>> attempt_1394160253524_0010_m_000014_0, Status : FAILED
>>>>>> Error: GC overhead limit exceeded
>>>>>> 14/03/07 14:02:47 INFO mapreduce.Job: Task Id :
>>>>>> attempt_1394160253524_0010_m_000028_0, Status : FAILED
>>>>>> Error: Java heap space
>>>>>> 14/03/07 14:02:50 INFO mapreduce.Job: Task Id :
>>>>>> attempt_1394160253524_0010_m_000002_0, Status : FAILED
>>>>>> Error: GC overhead limit exceeded
>>>>>> 14/03/07 14:02:51 INFO mapreduce.Job:  map 16% reduce 0%
>>>>>> 14/03/07 14:02:51 INFO mapreduce.Job: Task Id :
>>>>>> attempt_1394160253524_0010_m_000005_0, Status : FAILED
>>>>>> Error: GC overhead limit exceeded
>>>>>> 14/03/07 14:02:52 INFO mapreduce.Job:  map 15% reduce 0%
>>>>>> 14/03/07 14:02:55 INFO mapreduce.Job: Task Id :
>>>>>> attempt_1394160253524_0010_m_000006_0, Status : FAILED
>>>>>> Error: GC overhead limit exceeded
>>>>>> 14/03/07 14:02:57 INFO mapreduce.Job: Task Id :
>>>>>> attempt_1394160253524_0010_m_000027_0, Status : FAILED
>>>>>> Error: GC overhead limit exceeded
>>>>>> 14/03/07 14:02:58 INFO mapreduce.Job:  map 14% reduce 0%
>>>>>> 14/03/07 14:03:04 INFO mapreduce.Job: Task Id :
>>>>>> attempt_1394160253524_0010_m_000009_0, Status : FAILED
>>>>>> Error: GC overhead limit exceeded
>>>>>> 14/03/07 14:03:05 INFO mapreduce.Job: Task Id :
>>>>>> attempt_1394160253524_0010_m_000017_0, Status : FAILED
>>>>>> Error: GC overhead limit exceeded
>>>>>> 14/03/07 14:03:05 INFO mapreduce.Job: Task Id :
>>>>>> attempt_1394160253524_0010_m_000022_0, Status : FAILED
>>>>>> Error: GC overhead limit exceeded
>>>>>> 14/03/07 14:03:06 INFO mapreduce.Job:  map 12% reduce 0%
>>>>>> 14/03/07 14:03:10 INFO mapreduce.Job: Task Id :
>>>>>> attempt_1394160253524_0010_m_000001_0, Status : FAILED
>>>>>> Error: GC overhead limit exceeded
>>>>>> 14/03/07 14:03:11 INFO mapreduce.Job:  map 13% reduce 0%
>>>>>> 14/03/07 14:03:11 INFO mapreduce.Job: Task Id :
>>>>>> attempt_1394160253524_0010_m_000024_0, Status : FAILED
>>>>>>
>>>>>> I then added the parameter "mapred.child.java.opts" to the file
>>>>>> "mapred-site.xml":
>>>>>>   <property>
>>>>>>         <name>mapred.child.java.opts</name>
>>>>>>         <value>-Xmx1024m</value>
>>>>>>   </property>
>>>>>> Another error then occurred, as shown below:
>>>>>>
>>>>>> 14/03/07 11:21:51 INFO mapreduce.Job:  map 0% reduce 0%
>>>>>> 14/03/07 11:21:59 INFO mapreduce.Job: Task Id :
>>>>>> attempt_1394160253524_0003_m_000002_0, Status : FAILED
>>>>>> Container
>>>>>> [pid=5592,containerID=container_1394160253524_0003_01_000004] is running
>>>>>> beyond virtual memory limits. Current usage: 112.6 MB of 1 GB physical
>>>>>> memory used; 2.7 GB of
>>>>>>
>>>>>> 2.1 GB virtual memory used. Killing container.
>>>>>> Dump of the process-tree for container_1394160253524_0003_01_000004 :
>>>>>> |- PID PPID PGRPID SESSID CMD_NAME USER_MODE_TIME(MILLIS)
>>>>>> SYSTEM_TIME(MILLIS) VMEM_USAGE(BYTES) RSSMEM_USAGE(PAGES) FULL_CMD_LINE
>>>>>>  |- 5598 5592 5592 5592 (java) 563 14 2778632192 28520
>>>>>> /usr/java/jdk1.7.0_45/bin/java -Djava.net.preferIPv4Stack=true
>>>>>> -Dhadoop.metrics.log.level=WARN -Xmx2048m -
>>>>>>
>>>>>> Djava.io.tmpdir=/home/hadoop/tmp/nm-local-dir/usercache/hadoop/appcache/application_1394160253524_0003/container_1394160253524_0003_01_000004/tmp
>>>>>> -
>>>>>>
>>>>>> Dlog4j.configuration=container-log4j.properties
>>>>>> -Dyarn.app.container.log.dir=/var/log/hadoop/yarn/userlogs/application_1394160253524_0003/container_1394160253524_0003_01_000004
>>>>>>
>>>>>> -Dyarn.app.container.log.filesize=0 -Dhadoop.root.logger=INFO,CLA
>>>>>> org.apache.hadoop.mapred.YarnChild 10.239.44.34 46837
>>>>>> attempt_1394160253524_0003_m_000002_0 4
>>>>>> |- 5592 4562 5592 5592 (bash) 0 0 108650496 300 /bin/bash -c
>>>>>> /usr/java/jdk1.7.0_45/bin/java -Djava.net.preferIPv4Stack=true
>>>>>> -Dhadoop.metrics.log.level=WARN  -Xmx2048m -
>>>>>>
>>>>>> Djava.io.tmpdir=/home/hadoop/tmp/nm-local-dir/usercache/hadoop/appcache/application_1394160253524_0003/container_1394160253524_0003_01_000004/tmp
>>>>>> -
>>>>>>
>>>>>> Dlog4j.configuration=container-log4j.properties
>>>>>> -Dyarn.app.container.log.dir=/var/log/hadoop/yarn/userlogs/application_1394160253524_0003/container_1394160253524_0003_01_000004
>>>>>>
>>>>>> -Dyarn.app.container.log.filesize=0 -Dhadoop.root.logger=INFO,CLA
>>>>>> org.apache.hadoop.mapred.YarnChild 10.239.44.34 46837
>>>>>> attempt_1394160253524_0003_m_000002_0 4
>>>>>>
>>>>>>
>>>>>> 1>/var/log/hadoop/yarn/userlogs/application_1394160253524_0003/container_1394160253524_0003_01_000004/stdout
>>>>>>
>>>>>> 2>/var/log/hadoop/yarn/userlogs/application_1394160253524_0003/container_1394160253524_0003_01_000004/stderr
>>>>>>
>>>>>>
>>>>>> Container killed on request. Exit code is 143
>>>>>> 14/03/07 11:22:02 INFO mapreduce.Job: Task Id :
>>>>>> attempt_1394160253524_0003_m_000001_0, Status : FAILED
>>>>>> Container
>>>>>> [pid=5182,containerID=container_1394160253524_0003_01_000003] is running
>>>>>> beyond virtual memory limits. Current usage: 118.5 MB of 1 GB physical
>>>>>> memory used; 2.7 GB of
>>>>>>
>>>>>> 2.1 GB virtual memory used. Killing container.
>>>>>> Dump of the process-tree for container_1394160253524_0003_01_000003 :
>>>>>> |- PID PPID PGRPID SESSID CMD_NAME USER_MODE_TIME(MILLIS)
>>>>>> SYSTEM_TIME(MILLIS) VMEM_USAGE(BYTES) RSSMEM_USAGE(PAGES) FULL_CMD_LINE
>>>>>>  |- 5182 4313 5182 5182 (bash) 0 0 108650496 303 /bin/bash -c
>>>>>> /usr/java/jdk1.7.0_45/bin/java -Djava.net.preferIPv4Stack=true
>>>>>> -Dhadoop.metrics.log.level=WARN  -Xmx2048m -
>>>>>>
>>>>>> Djava.io.tmpdir=/home/hadoop/tmp/nm-local-dir/usercache/hadoop/appcache/application_1394160253524_0003/container_1394160253524_0003_01_000003/tmp
>>>>>> -
>>>>>>
>>>>>> Dlog4j.configuration=container-log4j.properties
>>>>>> -Dyarn.app.container.log.dir=/var/log/hadoop/yarn/userlogs/application_1394160253524_0003/container_1394160253524_0003_01_000003
>>>>>>
>>>>>> -Dyarn.app.container.log.filesize=0 -Dhadoop.root.logger=INFO,CLA
>>>>>> org.apache.hadoop.mapred.YarnChild 10.239.44.34 46837
>>>>>> attempt_1394160253524_0003_m_000001_0 3
>>>>>>
>>>>>>
>>>>>> 1>/var/log/hadoop/yarn/userlogs/application_1394160253524_0003/container_1394160253524_0003_01_000003/stdout
>>>>>>
>>>>>> 2>/var/log/hadoop/yarn/userlogs/application_1394160253524_0003/container_1394160253524_0003_01_000003/stderr
>>>>>>
>>>>>> |- 5187 5182 5182 5182 (java) 616 19 2783928320 30028
>>>>>> /usr/java/jdk1.7.0_45/bin/java -Djava.net.preferIPv4Stack=true
>>>>>> -Dhadoop.metrics.log.level=WARN -Xmx2048m -
>>>>>>
>>>>>> Djava.io.tmpdir=/home/hadoop/tmp/nm-local-dir/usercache/hadoop/appcache/application_1394160253524_0003/container_1394160253524_0003_01_000003/tmp
>>>>>> -
>>>>>>
>>>>>> Dlog4j.configuration=container-log4j.properties
>>>>>> -Dyarn.app.container.log.dir=/var/log/hadoop/yarn/userlogs/application_1394160253524_0003/container_1394160253524_0003_01_000003
>>>>>>
>>>>>> -Dyarn.app.container.log.filesize=0 -Dhadoop.root.logger=INFO,CLA
>>>>>> org.apache.hadoop.mapred.YarnChild 10.239.44.34 46837
>>>>>> attempt_1394160253524_0003_m_000001_0 3
>>>>>>
>>>>>> Container killed on request. Exit code is 143
>>>>>>
>>>>>> In the end, the task failed.
>>>>>> Thanks for any help!
>>>>>>
>>>>>
>>>>>
>>>>
>>>
>>>
>>> --
>>> *Thanks & Regards*
>>>
>>> Unmesha Sreeveni U.B
>>> Junior Developer
>>>
>>> http://www.unmeshasreeveni.blogspot.in/
>>>
>>>
>>>
>>
>

Re: GC overhead limit exceeded

Posted by divye sheth <di...@gmail.com>.
Hi Haihong,

Please check out the link below; I believe it should solve your problem.

http://stackoverflow.com/questions/21005643/container-is-running-beyond-memory-limits

Thanks
Divye Sheth
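For anyone skimming the archive, the arithmetic behind the "running beyond virtual memory limits" kills quoted below: the NodeManager caps a container's virtual memory at its physical allocation times yarn.nodemanager.vmem-pmem-ratio (2.1 by default), so a 1 GB container is capped at about 2.1 GB of vmem, while a child JVM launched with -Xmx2048m maps roughly 2.7 GB. A rough sketch of the check (the helper names are illustrative, not Hadoop API):

```python
def vmem_cap_gb(container_mb: float, vmem_pmem_ratio: float = 2.1) -> float:
    """Virtual-memory cap YARN enforces for a container, in GB."""
    return container_mb * vmem_pmem_ratio / 1024.0

def container_killed(vmem_used_gb: float, container_mb: float,
                     vmem_pmem_ratio: float = 2.1) -> bool:
    """True if the NodeManager's vmem check would kill the container."""
    return vmem_used_gb > vmem_cap_gb(container_mb, vmem_pmem_ratio)

# The failing container from the logs: 1 GB physical allocation,
# roughly 2.7 GB of virtual memory in use.
print(vmem_cap_gb(1024))            # cap is 2.1 GB
print(container_killed(2.7, 1024))  # True -> killed with exit code 143
```

This is why raising the ratio, disabling the vmem check, or shrinking -Xmx relative to the container size all make the error go away.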


On Wed, Mar 12, 2014 at 11:33 AM, haihong lu <un...@gmail.com> wrote:

> Thanks, but even though I added this parameter, it had no effect.
>
>
> On Tue, Mar 11, 2014 at 12:11 PM, unmesha sreeveni <un...@gmail.com>wrote:
>
>> Try increasing the memory for the datanode and see; this requires a
>> Hadoop restart:
>> export HADOOP_DATANODE_OPTS="-Xmx10g"
>> This sets the heap to 10 GB.
>> You can also add this at the start of the hadoop-env.sh file.
>>
>>
>> On Tue, Mar 11, 2014 at 9:02 AM, haihong lu <un...@gmail.com> wrote:
>>
>>> I have tried both of the methods you said, but the problem still exists.
>>> Thanks all the same. By the way, my Hadoop version is 2.2.0, so the
>>> parameter "mapreduce.map.memory.mb=3072" added to mapred-site.xml
>>> may have no effect. I have looked for this parameter in the Hadoop
>>> documentation, but did not find it.
>>>
>>>
>>> On Fri, Mar 7, 2014 at 4:57 PM, shashwat shriparv <
>>> dwivedishashwat@gmail.com> wrote:
>>>
>>>> Check this out
>>>>
>>>>
>>>> http://ask.gopivotal.com/hc/en-us/articles/201850408-Namenode-fails-with-java-lang-OutOfMemoryError-GC-overhead-limit-exceeded
>>>>
>>>>
>>>>
>>>> Warm Regards
>>>> Shashwat Shriparv
>>>> http://www.linkedin.com/pub/shashwat-shriparv/19/214/2a9
>>>> https://twitter.com/shriparv
>>>> https://www.facebook.com/shriparv
>>>> http://google.com/+ShashwatShriparv
>>>> http://www.youtube.com/user/sShriparv/videos
>>>>
>>>>
>>>>
>>>> On Fri, Mar 7, 2014 at 12:04 PM, haihong lu <un...@gmail.com> wrote:
>>>>
>>>>> Hi:
>>>>>
>>>>>      I have a problem when running HiBench with hadoop-2.2.0; the
>>>>> error messages are listed below.
>>>>>
>>>>> 14/03/07 13:54:53 INFO mapreduce.Job:  map 19% reduce 0%
>>>>>  14/03/07 13:54:54 INFO mapreduce.Job:  map 21% reduce 0%
>>>>> 14/03/07 14:00:26 INFO mapreduce.Job: Task Id :
>>>>> attempt_1394160253524_0010_m_000020_0, Status : FAILED
>>>>> Error: GC overhead limit exceeded
>>>>> 14/03/07 14:00:27 INFO mapreduce.Job:  map 20% reduce 0%
>>>>> 14/03/07 14:00:40 INFO mapreduce.Job: Task Id :
>>>>> attempt_1394160253524_0010_m_000008_0, Status : FAILED
>>>>> Error: GC overhead limit exceeded
>>>>> 14/03/07 14:00:41 INFO mapreduce.Job:  map 19% reduce 0%
>>>>> 14/03/07 14:00:59 INFO mapreduce.Job:  map 20% reduce 0%
>>>>> 14/03/07 14:00:59 INFO mapreduce.Job: Task Id :
>>>>> attempt_1394160253524_0010_m_000015_0, Status : FAILED
>>>>> Error: GC overhead limit exceeded
>>>>> 14/03/07 14:01:00 INFO mapreduce.Job:  map 19% reduce 0%
>>>>> 14/03/07 14:01:03 INFO mapreduce.Job: Task Id :
>>>>> attempt_1394160253524_0010_m_000023_0, Status : FAILED
>>>>> Error: GC overhead limit exceeded
>>>>> 14/03/07 14:01:11 INFO mapreduce.Job: Task Id :
>>>>> attempt_1394160253524_0010_m_000026_0, Status : FAILED
>>>>> Error: GC overhead limit exceeded
>>>>> 14/03/07 14:01:35 INFO mapreduce.Job:  map 20% reduce 0%
>>>>> 14/03/07 14:01:35 INFO mapreduce.Job: Task Id :
>>>>> attempt_1394160253524_0010_m_000019_0, Status : FAILED
>>>>> Error: GC overhead limit exceeded
>>>>> 14/03/07 14:01:36 INFO mapreduce.Job:  map 19% reduce 0%
>>>>> 14/03/07 14:01:43 INFO mapreduce.Job: Task Id :
>>>>> attempt_1394160253524_0010_m_000007_0, Status : FAILED
>>>>> Error: GC overhead limit exceeded
>>>>> 14/03/07 14:02:00 INFO mapreduce.Job: Task Id :
>>>>> attempt_1394160253524_0010_m_000000_0, Status : FAILED
>>>>> Error: GC overhead limit exceeded
>>>>> 14/03/07 14:02:01 INFO mapreduce.Job:  map 18% reduce 0%
>>>>> 14/03/07 14:02:23 INFO mapreduce.Job: Task Id :
>>>>> attempt_1394160253524_0010_m_000021_0, Status : FAILED
>>>>> Error: Java heap space
>>>>> 14/03/07 14:02:24 INFO mapreduce.Job:  map 17% reduce 0%
>>>>> 14/03/07 14:02:31 INFO mapreduce.Job:  map 18% reduce 0%
>>>>> 14/03/07 14:02:33 INFO mapreduce.Job: Task Id :
>>>>> attempt_1394160253524_0010_m_000029_0, Status : FAILED
>>>>> Error: GC overhead limit exceeded
>>>>> 14/03/07 14:02:34 INFO mapreduce.Job:  map 17% reduce 0%
>>>>> 14/03/07 14:02:38 INFO mapreduce.Job: Task Id :
>>>>> attempt_1394160253524_0010_m_000010_0, Status : FAILED
>>>>> Error: GC overhead limit exceeded
>>>>> 14/03/07 14:02:41 INFO mapreduce.Job: Task Id :
>>>>> attempt_1394160253524_0010_m_000018_0, Status : FAILED
>>>>> Error: GC overhead limit exceeded
>>>>> 14/03/07 14:02:43 INFO mapreduce.Job: Task Id :
>>>>> attempt_1394160253524_0010_m_000014_0, Status : FAILED
>>>>> Error: GC overhead limit exceeded
>>>>> 14/03/07 14:02:47 INFO mapreduce.Job: Task Id :
>>>>> attempt_1394160253524_0010_m_000028_0, Status : FAILED
>>>>> Error: Java heap space
>>>>> 14/03/07 14:02:50 INFO mapreduce.Job: Task Id :
>>>>> attempt_1394160253524_0010_m_000002_0, Status : FAILED
>>>>> Error: GC overhead limit exceeded
>>>>> 14/03/07 14:02:51 INFO mapreduce.Job:  map 16% reduce 0%
>>>>> 14/03/07 14:02:51 INFO mapreduce.Job: Task Id :
>>>>> attempt_1394160253524_0010_m_000005_0, Status : FAILED
>>>>> Error: GC overhead limit exceeded
>>>>> 14/03/07 14:02:52 INFO mapreduce.Job:  map 15% reduce 0%
>>>>> 14/03/07 14:02:55 INFO mapreduce.Job: Task Id :
>>>>> attempt_1394160253524_0010_m_000006_0, Status : FAILED
>>>>> Error: GC overhead limit exceeded
>>>>> 14/03/07 14:02:57 INFO mapreduce.Job: Task Id :
>>>>> attempt_1394160253524_0010_m_000027_0, Status : FAILED
>>>>> Error: GC overhead limit exceeded
>>>>> 14/03/07 14:02:58 INFO mapreduce.Job:  map 14% reduce 0%
>>>>> 14/03/07 14:03:04 INFO mapreduce.Job: Task Id :
>>>>> attempt_1394160253524_0010_m_000009_0, Status : FAILED
>>>>> Error: GC overhead limit exceeded
>>>>> 14/03/07 14:03:05 INFO mapreduce.Job: Task Id :
>>>>> attempt_1394160253524_0010_m_000017_0, Status : FAILED
>>>>> Error: GC overhead limit exceeded
>>>>> 14/03/07 14:03:05 INFO mapreduce.Job: Task Id :
>>>>> attempt_1394160253524_0010_m_000022_0, Status : FAILED
>>>>> Error: GC overhead limit exceeded
>>>>> 14/03/07 14:03:06 INFO mapreduce.Job:  map 12% reduce 0%
>>>>> 14/03/07 14:03:10 INFO mapreduce.Job: Task Id :
>>>>> attempt_1394160253524_0010_m_000001_0, Status : FAILED
>>>>> Error: GC overhead limit exceeded
>>>>> 14/03/07 14:03:11 INFO mapreduce.Job:  map 13% reduce 0%
>>>>> 14/03/07 14:03:11 INFO mapreduce.Job: Task Id :
>>>>> attempt_1394160253524_0010_m_000024_0, Status : FAILED
>>>>>
>>>>> I then added the "mapred.child.java.opts" parameter to the file
>>>>> "mapred-site.xml":
>>>>>   <property>
>>>>>         <name>mapred.child.java.opts</name>
>>>>>         <value>-Xmx1024m</value>
>>>>>   </property>
>>>>> and then another error occurred, as shown below:
>>>>>
>>>>> 14/03/07 11:21:51 INFO mapreduce.Job:  map 0% reduce 0%
>>>>> 14/03/07 11:21:59 INFO mapreduce.Job: Task Id :
>>>>> attempt_1394160253524_0003_m_000002_0, Status : FAILED
>>>>> Container
>>>>> [pid=5592,containerID=container_1394160253524_0003_01_000004] is running
>>>>> beyond virtual memory limits. Current usage: 112.6 MB of 1 GB physical
>>>>> memory used; 2.7 GB of 2.1 GB virtual memory used. Killing container.
>>>>> Dump of the process-tree for container_1394160253524_0003_01_000004 :
>>>>> |- PID PPID PGRPID SESSID CMD_NAME USER_MODE_TIME(MILLIS)
>>>>> SYSTEM_TIME(MILLIS) VMEM_USAGE(BYTES) RSSMEM_USAGE(PAGES) FULL_CMD_LINE
>>>>>  |- 5598 5592 5592 5592 (java) 563 14 2778632192 28520
>>>>> /usr/java/jdk1.7.0_45/bin/java -Djava.net.preferIPv4Stack=true
>>>>> -Dhadoop.metrics.log.level=WARN -Xmx2048m
>>>>> -Djava.io.tmpdir=/home/hadoop/tmp/nm-local-dir/usercache/hadoop/appcache/application_1394160253524_0003/container_1394160253524_0003_01_000004/tmp
>>>>> -Dlog4j.configuration=container-log4j.properties
>>>>> -Dyarn.app.container.log.dir=/var/log/hadoop/yarn/userlogs/application_1394160253524_0003/container_1394160253524_0003_01_000004
>>>>>
>>>>> -Dyarn.app.container.log.filesize=0 -Dhadoop.root.logger=INFO,CLA
>>>>> org.apache.hadoop.mapred.YarnChild 10.239.44.34 46837
>>>>> attempt_1394160253524_0003_m_000002_0 4
>>>>> |- 5592 4562 5592 5592 (bash) 0 0 108650496 300 /bin/bash -c
>>>>> /usr/java/jdk1.7.0_45/bin/java -Djava.net.preferIPv4Stack=true
>>>>> -Dhadoop.metrics.log.level=WARN -Xmx2048m
>>>>> -Djava.io.tmpdir=/home/hadoop/tmp/nm-local-dir/usercache/hadoop/appcache/application_1394160253524_0003/container_1394160253524_0003_01_000004/tmp
>>>>> -Dlog4j.configuration=container-log4j.properties
>>>>> -Dyarn.app.container.log.dir=/var/log/hadoop/yarn/userlogs/application_1394160253524_0003/container_1394160253524_0003_01_000004
>>>>>
>>>>> -Dyarn.app.container.log.filesize=0 -Dhadoop.root.logger=INFO,CLA
>>>>> org.apache.hadoop.mapred.YarnChild 10.239.44.34 46837
>>>>> attempt_1394160253524_0003_m_000002_0 4
>>>>>
>>>>>
>>>>> 1>/var/log/hadoop/yarn/userlogs/application_1394160253524_0003/container_1394160253524_0003_01_000004/stdout
>>>>>
>>>>> 2>/var/log/hadoop/yarn/userlogs/application_1394160253524_0003/container_1394160253524_0003_01_000004/stderr
>>>>>
>>>>>
>>>>> Container killed on request. Exit code is 143
>>>>> 14/03/07 11:22:02 INFO mapreduce.Job: Task Id :
>>>>> attempt_1394160253524_0003_m_000001_0, Status : FAILED
>>>>> Container
>>>>> [pid=5182,containerID=container_1394160253524_0003_01_000003] is running
>>>>> beyond virtual memory limits. Current usage: 118.5 MB of 1 GB physical
>>>>> memory used; 2.7 GB of 2.1 GB virtual memory used. Killing container.
>>>>> Dump of the process-tree for container_1394160253524_0003_01_000003 :
>>>>> |- PID PPID PGRPID SESSID CMD_NAME USER_MODE_TIME(MILLIS)
>>>>> SYSTEM_TIME(MILLIS) VMEM_USAGE(BYTES) RSSMEM_USAGE(PAGES) FULL_CMD_LINE
>>>>>  |- 5182 4313 5182 5182 (bash) 0 0 108650496 303 /bin/bash -c
>>>>> /usr/java/jdk1.7.0_45/bin/java -Djava.net.preferIPv4Stack=true
>>>>> -Dhadoop.metrics.log.level=WARN -Xmx2048m
>>>>> -Djava.io.tmpdir=/home/hadoop/tmp/nm-local-dir/usercache/hadoop/appcache/application_1394160253524_0003/container_1394160253524_0003_01_000003/tmp
>>>>> -Dlog4j.configuration=container-log4j.properties
>>>>> -Dyarn.app.container.log.dir=/var/log/hadoop/yarn/userlogs/application_1394160253524_0003/container_1394160253524_0003_01_000003
>>>>>
>>>>> -Dyarn.app.container.log.filesize=0 -Dhadoop.root.logger=INFO,CLA
>>>>> org.apache.hadoop.mapred.YarnChild 10.239.44.34 46837
>>>>> attempt_1394160253524_0003_m_000001_0 3
>>>>>
>>>>>
>>>>> 1>/var/log/hadoop/yarn/userlogs/application_1394160253524_0003/container_1394160253524_0003_01_000003/stdout
>>>>>
>>>>> 2>/var/log/hadoop/yarn/userlogs/application_1394160253524_0003/container_1394160253524_0003_01_000003/stderr
>>>>>
>>>>> |- 5187 5182 5182 5182 (java) 616 19 2783928320 30028
>>>>> /usr/java/jdk1.7.0_45/bin/java -Djava.net.preferIPv4Stack=true
>>>>> -Dhadoop.metrics.log.level=WARN -Xmx2048m
>>>>> -Djava.io.tmpdir=/home/hadoop/tmp/nm-local-dir/usercache/hadoop/appcache/application_1394160253524_0003/container_1394160253524_0003_01_000003/tmp
>>>>> -Dlog4j.configuration=container-log4j.properties
>>>>> -Dyarn.app.container.log.dir=/var/log/hadoop/yarn/userlogs/application_1394160253524_0003/container_1394160253524_0003_01_000003
>>>>>
>>>>> -Dyarn.app.container.log.filesize=0 -Dhadoop.root.logger=INFO,CLA
>>>>> org.apache.hadoop.mapred.YarnChild 10.239.44.34 46837
>>>>> attempt_1394160253524_0003_m_000001_0 3
>>>>>
>>>>> Container killed on request. Exit code is 143
>>>>>
>>>>> In the end, the task failed.
>>>>> Thanks for any help!
>>>>>
>>>>
>>>>
>>>
>>
>>
>> --
>> *Thanks & Regards*
>>
>> Unmesha Sreeveni U.B
>> Junior Developer
>>
>> http://www.unmeshasreeveni.blogspot.in/
>>
>>
>>
>

Re: GC overhead limit exceeded

Posted by divye sheth <di...@gmail.com>.
Hi Haihong,

Please check out the link below, I believe it should solve your problem.

http://stackoverflow.com/questions/21005643/container-is-running-beyond-memory-limits
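
For what it's worth, my reading of that answer (property names are from the Hadoop 2.x defaults; the values here are only illustrative): the container was capped at 1 GB physical memory times the default vmem-pmem-ratio of 2.1, i.e. 2.1 GB of virtual memory, while the -Xmx2048m heap pushed the JVM to ~2.7 GB. Something along these lines keeps the heap inside the container:

```xml
<!-- mapred-site.xml: give map tasks a bigger container, and keep the
     heap at roughly 75-80% of it so JVM overhead still fits -->
<property>
  <name>mapreduce.map.memory.mb</name>
  <value>2048</value>
</property>
<property>
  <name>mapreduce.map.java.opts</name>
  <value>-Xmx1536m</value>
</property>

<!-- yarn-site.xml: alternatively, relax the virtual-memory check
     (the default ratio is 2.1) -->
<property>
  <name>yarn.nodemanager.vmem-pmem-ratio</name>
  <value>4</value>
</property>
```

NodeManagers need a restart after changing the yarn-site.xml value; the mapreduce.* properties can also be passed per job with -D.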

Thanks
Divye Sheth


On Wed, Mar 12, 2014 at 11:33 AM, haihong lu <un...@gmail.com> wrote:

> Thanks. I had already added this parameter, but it had no effect.
>
>
> On Tue, Mar 11, 2014 at 12:11 PM, unmesha sreeveni <un...@gmail.com>wrote:
>
>> Try increasing the memory for the datanode and see. This needs a restart
>> of hadoop:
>> export HADOOP_DATANODE_OPTS="-Xmx10g"
>> This will set the heap to 10 GB.
>> You can also add this at the start of the hadoop-env.sh file.
>>
>>
>> On Tue, Mar 11, 2014 at 9:02 AM, haihong lu <un...@gmail.com> wrote:
>>
>>> I have tried both of the methods you said, but the problem still exists.
>>> Thanks all the same. By the way, my Hadoop version is 2.2.0, so the
>>> parameter "mapreduce.map.memory.mb=3072" added to mapred-site.xml
>>> may have no effect. I have looked for this parameter in the Hadoop
>>> documentation, but could not find it.
>>>
>>>
>>> On Fri, Mar 7, 2014 at 4:57 PM, shashwat shriparv <
>>> dwivedishashwat@gmail.com> wrote:
>>>
>>>> Check this out
>>>>
>>>>
>>>> http://ask.gopivotal.com/hc/en-us/articles/201850408-Namenode-fails-with-java-lang-OutOfMemoryError-GC-overhead-limit-exceeded
>>>>
>>>>
>>>>
>>>> * Warm Regards_**∞_*
>>>> * Shashwat Shriparv*
>>>>
>>>>
>>>>
>>>> On Fri, Mar 7, 2014 at 12:04 PM, haihong lu <un...@gmail.com> wrote:
>>>>
>>>>> Hi:
>>>>>
>>>>> I have a problem when running HiBench with hadoop-2.2.0; the error
>>>>> messages are listed below:
>>>>>
>>>>> 14/03/07 13:54:53 INFO mapreduce.Job:  map 19% reduce 0%
>>>>>  14/03/07 13:54:54 INFO mapreduce.Job:  map 21% reduce 0%
>>>>> 14/03/07 14:00:26 INFO mapreduce.Job: Task Id :
>>>>> attempt_1394160253524_0010_m_000020_0, Status : FAILED
>>>>> Error: GC overhead limit exceeded
>>>>> 14/03/07 14:00:27 INFO mapreduce.Job:  map 20% reduce 0%
>>>>> 14/03/07 14:00:40 INFO mapreduce.Job: Task Id :
>>>>> attempt_1394160253524_0010_m_000008_0, Status : FAILED
>>>>> Error: GC overhead limit exceeded
>>>>> 14/03/07 14:00:41 INFO mapreduce.Job:  map 19% reduce 0%
>>>>> 14/03/07 14:00:59 INFO mapreduce.Job:  map 20% reduce 0%
>>>>> 14/03/07 14:00:59 INFO mapreduce.Job: Task Id :
>>>>> attempt_1394160253524_0010_m_000015_0, Status : FAILED
>>>>> Error: GC overhead limit exceeded
>>>>> 14/03/07 14:01:00 INFO mapreduce.Job:  map 19% reduce 0%
>>>>> 14/03/07 14:01:03 INFO mapreduce.Job: Task Id :
>>>>> attempt_1394160253524_0010_m_000023_0, Status : FAILED
>>>>> Error: GC overhead limit exceeded
>>>>> 14/03/07 14:01:11 INFO mapreduce.Job: Task Id :
>>>>> attempt_1394160253524_0010_m_000026_0, Status : FAILED
>>>>> Error: GC overhead limit exceeded
>>>>> 14/03/07 14:01:35 INFO mapreduce.Job:  map 20% reduce 0%
>>>>> 14/03/07 14:01:35 INFO mapreduce.Job: Task Id :
>>>>> attempt_1394160253524_0010_m_000019_0, Status : FAILED
>>>>> Error: GC overhead limit exceeded
>>>>> 14/03/07 14:01:36 INFO mapreduce.Job:  map 19% reduce 0%
>>>>> 14/03/07 14:01:43 INFO mapreduce.Job: Task Id :
>>>>> attempt_1394160253524_0010_m_000007_0, Status : FAILED
>>>>> Error: GC overhead limit exceeded
>>>>> 14/03/07 14:02:00 INFO mapreduce.Job: Task Id :
>>>>> attempt_1394160253524_0010_m_000000_0, Status : FAILED
>>>>> Error: GC overhead limit exceeded
>>>>> 14/03/07 14:02:01 INFO mapreduce.Job:  map 18% reduce 0%
>>>>> 14/03/07 14:02:23 INFO mapreduce.Job: Task Id :
>>>>> attempt_1394160253524_0010_m_000021_0, Status : FAILED
>>>>> Error: Java heap space
>>>>> 14/03/07 14:02:24 INFO mapreduce.Job:  map 17% reduce 0%
>>>>> 14/03/07 14:02:31 INFO mapreduce.Job:  map 18% reduce 0%
>>>>> 14/03/07 14:02:33 INFO mapreduce.Job: Task Id :
>>>>> attempt_1394160253524_0010_m_000029_0, Status : FAILED
>>>>> Error: GC overhead limit exceeded
>>>>> 14/03/07 14:02:34 INFO mapreduce.Job:  map 17% reduce 0%
>>>>> 14/03/07 14:02:38 INFO mapreduce.Job: Task Id :
>>>>> attempt_1394160253524_0010_m_000010_0, Status : FAILED
>>>>> Error: GC overhead limit exceeded
>>>>> 14/03/07 14:02:41 INFO mapreduce.Job: Task Id :
>>>>> attempt_1394160253524_0010_m_000018_0, Status : FAILED
>>>>> Error: GC overhead limit exceeded
>>>>> 14/03/07 14:02:43 INFO mapreduce.Job: Task Id :
>>>>> attempt_1394160253524_0010_m_000014_0, Status : FAILED
>>>>> Error: GC overhead limit exceeded
>>>>> 14/03/07 14:02:47 INFO mapreduce.Job: Task Id :
>>>>> attempt_1394160253524_0010_m_000028_0, Status : FAILED
>>>>> Error: Java heap space
>>>>> 14/03/07 14:02:50 INFO mapreduce.Job: Task Id :
>>>>> attempt_1394160253524_0010_m_000002_0, Status : FAILED
>>>>> Error: GC overhead limit exceeded
>>>>> 14/03/07 14:02:51 INFO mapreduce.Job:  map 16% reduce 0%
>>>>> 14/03/07 14:02:51 INFO mapreduce.Job: Task Id :
>>>>> attempt_1394160253524_0010_m_000005_0, Status : FAILED
>>>>> Error: GC overhead limit exceeded
>>>>> 14/03/07 14:02:52 INFO mapreduce.Job:  map 15% reduce 0%
>>>>> 14/03/07 14:02:55 INFO mapreduce.Job: Task Id :
>>>>> attempt_1394160253524_0010_m_000006_0, Status : FAILED
>>>>> Error: GC overhead limit exceeded
>>>>> 14/03/07 14:02:57 INFO mapreduce.Job: Task Id :
>>>>> attempt_1394160253524_0010_m_000027_0, Status : FAILED
>>>>> Error: GC overhead limit exceeded
>>>>> 14/03/07 14:02:58 INFO mapreduce.Job:  map 14% reduce 0%
>>>>> 14/03/07 14:03:04 INFO mapreduce.Job: Task Id :
>>>>> attempt_1394160253524_0010_m_000009_0, Status : FAILED
>>>>> Error: GC overhead limit exceeded
>>>>> 14/03/07 14:03:05 INFO mapreduce.Job: Task Id :
>>>>> attempt_1394160253524_0010_m_000017_0, Status : FAILED
>>>>> Error: GC overhead limit exceeded
>>>>> 14/03/07 14:03:05 INFO mapreduce.Job: Task Id :
>>>>> attempt_1394160253524_0010_m_000022_0, Status : FAILED
>>>>> Error: GC overhead limit exceeded
>>>>> 14/03/07 14:03:06 INFO mapreduce.Job:  map 12% reduce 0%
>>>>> 14/03/07 14:03:10 INFO mapreduce.Job: Task Id :
>>>>> attempt_1394160253524_0010_m_000001_0, Status : FAILED
>>>>> Error: GC overhead limit exceeded
>>>>> 14/03/07 14:03:11 INFO mapreduce.Job:  map 13% reduce 0%
>>>>> 14/03/07 14:03:11 INFO mapreduce.Job: Task Id :
>>>>> attempt_1394160253524_0010_m_000024_0, Status : FAILED
>>>>>
>>>>> I then added the "mapred.child.java.opts" parameter to the file
>>>>> "mapred-site.xml":
>>>>>   <property>
>>>>>         <name>mapred.child.java.opts</name>
>>>>>         <value>-Xmx1024m</value>
>>>>>   </property>
>>>>> and then another error occurred, as shown below:
>>>>>
>>>>> 14/03/07 11:21:51 INFO mapreduce.Job:  map 0% reduce 0%
>>>>> 14/03/07 11:21:59 INFO mapreduce.Job: Task Id :
>>>>> attempt_1394160253524_0003_m_000002_0, Status : FAILED
>>>>> Container
>>>>> [pid=5592,containerID=container_1394160253524_0003_01_000004] is running
>>>>> beyond virtual memory limits. Current usage: 112.6 MB of 1 GB physical
>>>>> memory used; 2.7 GB of 2.1 GB virtual memory used. Killing container.
>>>>> Dump of the process-tree for container_1394160253524_0003_01_000004 :
>>>>> |- PID PPID PGRPID SESSID CMD_NAME USER_MODE_TIME(MILLIS)
>>>>> SYSTEM_TIME(MILLIS) VMEM_USAGE(BYTES) RSSMEM_USAGE(PAGES) FULL_CMD_LINE
>>>>>  |- 5598 5592 5592 5592 (java) 563 14 2778632192 28520
>>>>> /usr/java/jdk1.7.0_45/bin/java -Djava.net.preferIPv4Stack=true
>>>>> -Dhadoop.metrics.log.level=WARN -Xmx2048m
>>>>> -Djava.io.tmpdir=/home/hadoop/tmp/nm-local-dir/usercache/hadoop/appcache/application_1394160253524_0003/container_1394160253524_0003_01_000004/tmp
>>>>> -Dlog4j.configuration=container-log4j.properties
>>>>> -Dyarn.app.container.log.dir=/var/log/hadoop/yarn/userlogs/application_1394160253524_0003/container_1394160253524_0003_01_000004
>>>>>
>>>>> -Dyarn.app.container.log.filesize=0 -Dhadoop.root.logger=INFO,CLA
>>>>> org.apache.hadoop.mapred.YarnChild 10.239.44.34 46837
>>>>> attempt_1394160253524_0003_m_000002_0 4
>>>>> |- 5592 4562 5592 5592 (bash) 0 0 108650496 300 /bin/bash -c
>>>>> /usr/java/jdk1.7.0_45/bin/java -Djava.net.preferIPv4Stack=true
>>>>> -Dhadoop.metrics.log.level=WARN -Xmx2048m
>>>>> -Djava.io.tmpdir=/home/hadoop/tmp/nm-local-dir/usercache/hadoop/appcache/application_1394160253524_0003/container_1394160253524_0003_01_000004/tmp
>>>>> -Dlog4j.configuration=container-log4j.properties
>>>>> -Dyarn.app.container.log.dir=/var/log/hadoop/yarn/userlogs/application_1394160253524_0003/container_1394160253524_0003_01_000004
>>>>>
>>>>> -Dyarn.app.container.log.filesize=0 -Dhadoop.root.logger=INFO,CLA
>>>>> org.apache.hadoop.mapred.YarnChild 10.239.44.34 46837
>>>>> attempt_1394160253524_0003_m_000002_0 4
>>>>>
>>>>>
>>>>> 1>/var/log/hadoop/yarn/userlogs/application_1394160253524_0003/container_1394160253524_0003_01_000004/stdout
>>>>>
>>>>> 2>/var/log/hadoop/yarn/userlogs/application_1394160253524_0003/container_1394160253524_0003_01_000004/stderr
>>>>>
>>>>>
>>>>> Container killed on request. Exit code is 143
>>>>> 14/03/07 11:22:02 INFO mapreduce.Job: Task Id :
>>>>> attempt_1394160253524_0003_m_000001_0, Status : FAILED
>>>>> Container
>>>>> [pid=5182,containerID=container_1394160253524_0003_01_000003] is running
>>>>> beyond virtual memory limits. Current usage: 118.5 MB of 1 GB physical
>>>>> memory used; 2.7 GB of 2.1 GB virtual memory used. Killing container.
>>>>> Dump of the process-tree for container_1394160253524_0003_01_000003 :
>>>>> |- PID PPID PGRPID SESSID CMD_NAME USER_MODE_TIME(MILLIS)
>>>>> SYSTEM_TIME(MILLIS) VMEM_USAGE(BYTES) RSSMEM_USAGE(PAGES) FULL_CMD_LINE
>>>>>  |- 5182 4313 5182 5182 (bash) 0 0 108650496 303 /bin/bash -c
>>>>> /usr/java/jdk1.7.0_45/bin/java -Djava.net.preferIPv4Stack=true
>>>>> -Dhadoop.metrics.log.level=WARN -Xmx2048m
>>>>> -Djava.io.tmpdir=/home/hadoop/tmp/nm-local-dir/usercache/hadoop/appcache/application_1394160253524_0003/container_1394160253524_0003_01_000003/tmp
>>>>> -Dlog4j.configuration=container-log4j.properties
>>>>> -Dyarn.app.container.log.dir=/var/log/hadoop/yarn/userlogs/application_1394160253524_0003/container_1394160253524_0003_01_000003
>>>>>
>>>>> -Dyarn.app.container.log.filesize=0 -Dhadoop.root.logger=INFO,CLA
>>>>> org.apache.hadoop.mapred.YarnChild 10.239.44.34 46837
>>>>> attempt_1394160253524_0003_m_000001_0 3
>>>>>
>>>>>
>>>>> 1>/var/log/hadoop/yarn/userlogs/application_1394160253524_0003/container_1394160253524_0003_01_000003/stdout
>>>>>
>>>>> 2>/var/log/hadoop/yarn/userlogs/application_1394160253524_0003/container_1394160253524_0003_01_000003/stderr
>>>>>
>>>>> |- 5187 5182 5182 5182 (java) 616 19 2783928320 30028
>>>>> /usr/java/jdk1.7.0_45/bin/java -Djava.net.preferIPv4Stack=true
>>>>> -Dhadoop.metrics.log.level=WARN -Xmx2048m
>>>>> -Djava.io.tmpdir=/home/hadoop/tmp/nm-local-dir/usercache/hadoop/appcache/application_1394160253524_0003/container_1394160253524_0003_01_000003/tmp
>>>>> -Dlog4j.configuration=container-log4j.properties
>>>>> -Dyarn.app.container.log.dir=/var/log/hadoop/yarn/userlogs/application_1394160253524_0003/container_1394160253524_0003_01_000003
>>>>>
>>>>> -Dyarn.app.container.log.filesize=0 -Dhadoop.root.logger=INFO,CLA
>>>>> org.apache.hadoop.mapred.YarnChild 10.239.44.34 46837
>>>>> attempt_1394160253524_0003_m_000001_0 3
>>>>>
>>>>> Container killed on request. Exit code is 143
>>>>>
>>>>> In the end, the task failed.
>>>>> Thanks for any help!
>>>>>
>>>>
>>>>
>>>
>>
>>
>> --
>> *Thanks & Regards*
>>
>> Unmesha Sreeveni U.B
>> Junior Developer
>>
>> http://www.unmeshasreeveni.blogspot.in/
>>
>>
>>
>

Re: GC overhead limit exceeded

Posted by divye sheth <di...@gmail.com>.
Hi Haihong,

Please check out the link below, I believe it should solve your problem.

http://stackoverflow.com/questions/21005643/container-is-running-beyond-memory-limits

Thanks
Divye Sheth


On Wed, Mar 12, 2014 at 11:33 AM, haihong lu <un...@gmail.com> wrote:

> Thanks, even if i had added this parameter, but had no effect.
>
>
> On Tue, Mar 11, 2014 at 12:11 PM, unmesha sreeveni <un...@gmail.com>wrote:
>
>> Try to increase the memory for datanode and see.This need to restart
>> hadoop
>> export HADOOP_DATANODE_OPTS="-Xmx10g"
>> This will set the heap to 10gb
>> You can also add this in start of hadoop-env.sh file
>>
>>
>> On Tue, Mar 11, 2014 at 9:02 AM, haihong lu <un...@gmail.com> wrote:
>>
>>> i have tried both of the methods you side, but the problem still exists.
>>> Thanks all the same. by the way, my hadoop version is 2.2.0, so the
>>> parameter  "mapreduce.map.memory.mb =3072" added to mapred-site.xml
>>> maybe has no effect. I have looked for this parameter in the document of
>>> hadoop, but did not found it.
>>>
>>>
>>> On Fri, Mar 7, 2014 at 4:57 PM, shashwat shriparv <
>>> dwivedishashwat@gmail.com> wrote:
>>>
>>>> Check this out
>>>>
>>>>
>>>> http://ask.gopivotal.com/hc/en-us/articles/201850408-Namenode-fails-with-java-lang-OutOfMemoryError-GC-overhead-limit-exceeded
>>>>
>>>>
>>>>
>>>> * Warm Regards_**∞_*
>>>> * Shashwat Shriparv*
>>>>  [image: http://www.linkedin.com/pub/shashwat-shriparv/19/214/2a9]<http://www.linkedin.com/pub/shashwat-shriparv/19/214/2a9>[image:
>>>> https://twitter.com/shriparv] <https://twitter.com/shriparv>[image:
>>>> https://www.facebook.com/shriparv] <https://www.facebook.com/shriparv>[image:
>>>> http://google.com/+ShashwatShriparv]<http://google.com/+ShashwatShriparv>[image:
>>>> http://www.youtube.com/user/sShriparv/videos]<http://www.youtube.com/user/sShriparv/videos>[image:
>>>> http://profile.yahoo.com/SWXSTW3DVSDTF2HHSRM47AV6DI/]<sh...@yahoo.com>
>>>>
>>>>
>>>>
>>>> On Fri, Mar 7, 2014 at 12:04 PM, haihong lu <un...@gmail.com> wrote:
>>>>
>>>>> Hi:
>>>>>
>>>>>      i have a problem when run Hibench with hadoop-2.2.0, the wrong
>>>>> message list as below
>>>>>
>>>>> 14/03/07 13:54:53 INFO mapreduce.Job:  map 19% reduce 0%
>>>>>  14/03/07 13:54:54 INFO mapreduce.Job:  map 21% reduce 0%
>>>>> 14/03/07 14:00:26 INFO mapreduce.Job: Task Id :
>>>>> attempt_1394160253524_0010_m_000020_0, Status : FAILED
>>>>> Error: GC overhead limit exceeded
>>>>> 14/03/07 14:00:27 INFO mapreduce.Job:  map 20% reduce 0%
>>>>> 14/03/07 14:00:40 INFO mapreduce.Job: Task Id :
>>>>> attempt_1394160253524_0010_m_000008_0, Status : FAILED
>>>>> Error: GC overhead limit exceeded
>>>>> 14/03/07 14:00:41 INFO mapreduce.Job:  map 19% reduce 0%
>>>>> 14/03/07 14:00:59 INFO mapreduce.Job:  map 20% reduce 0%
>>>>> 14/03/07 14:00:59 INFO mapreduce.Job: Task Id :
>>>>> attempt_1394160253524_0010_m_000015_0, Status : FAILED
>>>>> Error: GC overhead limit exceeded
>>>>> 14/03/07 14:01:00 INFO mapreduce.Job:  map 19% reduce 0%
>>>>> 14/03/07 14:01:03 INFO mapreduce.Job: Task Id :
>>>>> attempt_1394160253524_0010_m_000023_0, Status : FAILED
>>>>> Error: GC overhead limit exceeded
>>>>> 14/03/07 14:01:11 INFO mapreduce.Job: Task Id :
>>>>> attempt_1394160253524_0010_m_000026_0, Status : FAILED
>>>>> Error: GC overhead limit exceeded
>>>>> 14/03/07 14:01:35 INFO mapreduce.Job:  map 20% reduce 0%
>>>>> 14/03/07 14:01:35 INFO mapreduce.Job: Task Id :
>>>>> attempt_1394160253524_0010_m_000019_0, Status : FAILED
>>>>> Error: GC overhead limit exceeded
>>>>> 14/03/07 14:01:36 INFO mapreduce.Job:  map 19% reduce 0%
>>>>> 14/03/07 14:01:43 INFO mapreduce.Job: Task Id :
>>>>> attempt_1394160253524_0010_m_000007_0, Status : FAILED
>>>>> Error: GC overhead limit exceeded
>>>>> 14/03/07 14:02:00 INFO mapreduce.Job: Task Id :
>>>>> attempt_1394160253524_0010_m_000000_0, Status : FAILED
>>>>> Error: GC overhead limit exceeded
>>>>> 14/03/07 14:02:01 INFO mapreduce.Job:  map 18% reduce 0%
>>>>> 14/03/07 14:02:23 INFO mapreduce.Job: Task Id :
>>>>> attempt_1394160253524_0010_m_000021_0, Status : FAILED
>>>>> Error: Java heap space
>>>>> 14/03/07 14:02:24 INFO mapreduce.Job:  map 17% reduce 0%
>>>>> 14/03/07 14:02:31 INFO mapreduce.Job:  map 18% reduce 0%
>>>>> 14/03/07 14:02:33 INFO mapreduce.Job: Task Id :
>>>>> attempt_1394160253524_0010_m_000029_0, Status : FAILED
>>>>> Error: GC overhead limit exceeded
>>>>> 14/03/07 14:02:34 INFO mapreduce.Job:  map 17% reduce 0%
>>>>> 14/03/07 14:02:38 INFO mapreduce.Job: Task Id :
>>>>> attempt_1394160253524_0010_m_000010_0, Status : FAILED
>>>>> Error: GC overhead limit exceeded
>>>>> 14/03/07 14:02:41 INFO mapreduce.Job: Task Id :
>>>>> attempt_1394160253524_0010_m_000018_0, Status : FAILED
>>>>> Error: GC overhead limit exceeded
>>>>> 14/03/07 14:02:43 INFO mapreduce.Job: Task Id :
>>>>> attempt_1394160253524_0010_m_000014_0, Status : FAILED
>>>>> Error: GC overhead limit exceeded
>>>>> 14/03/07 14:02:47 INFO mapreduce.Job: Task Id :
>>>>> attempt_1394160253524_0010_m_000028_0, Status : FAILED
>>>>> Error: Java heap space
>>>>> 14/03/07 14:02:50 INFO mapreduce.Job: Task Id :
>>>>> attempt_1394160253524_0010_m_000002_0, Status : FAILED
>>>>> Error: GC overhead limit exceeded
>>>>> 14/03/07 14:02:51 INFO mapreduce.Job:  map 16% reduce 0%
>>>>> 14/03/07 14:02:51 INFO mapreduce.Job: Task Id :
>>>>> attempt_1394160253524_0010_m_000005_0, Status : FAILED
>>>>> Error: GC overhead limit exceeded
>>>>> 14/03/07 14:02:52 INFO mapreduce.Job:  map 15% reduce 0%
>>>>> 14/03/07 14:02:55 INFO mapreduce.Job: Task Id :
>>>>> attempt_1394160253524_0010_m_000006_0, Status : FAILED
>>>>> Error: GC overhead limit exceeded
>>>>> 14/03/07 14:02:57 INFO mapreduce.Job: Task Id :
>>>>> attempt_1394160253524_0010_m_000027_0, Status : FAILED
>>>>> Error: GC overhead limit exceeded
>>>>> 14/03/07 14:02:58 INFO mapreduce.Job:  map 14% reduce 0%
>>>>> 14/03/07 14:03:04 INFO mapreduce.Job: Task Id :
>>>>> attempt_1394160253524_0010_m_000009_0, Status : FAILED
>>>>> Error: GC overhead limit exceeded
>>>>> 14/03/07 14:03:05 INFO mapreduce.Job: Task Id :
>>>>> attempt_1394160253524_0010_m_000017_0, Status : FAILED
>>>>> Error: GC overhead limit exceeded
>>>>> 14/03/07 14:03:05 INFO mapreduce.Job: Task Id :
>>>>> attempt_1394160253524_0010_m_000022_0, Status : FAILED
>>>>> Error: GC overhead limit exceeded
>>>>> 14/03/07 14:03:06 INFO mapreduce.Job:  map 12% reduce 0%
>>>>> 14/03/07 14:03:10 INFO mapreduce.Job: Task Id :
>>>>> attempt_1394160253524_0010_m_000001_0, Status : FAILED
>>>>> Error: GC overhead limit exceeded
>>>>> 14/03/07 14:03:11 INFO mapreduce.Job:  map 13% reduce 0%
>>>>> 14/03/07 14:03:11 INFO mapreduce.Job: Task Id :
>>>>> attempt_1394160253524_0010_m_000024_0, Status : FAILED
>>>>>
>>>>> and then i add a parameter "mapred.child.java.opts" to the file
>>>>> "mapred-site.xml",
>>>>>   <property>
>>>>>         <name>mapred.child.java.opts</name>
>>>>>         <value>-Xmx1024m</value>
>>>>>   </property>
>>>>> then another error occurs as below
>>>>>
>>>>> 14/03/07 11:21:51 INFO mapreduce.Job:  map 0% reduce 0%
>>>>> 14/03/07 11:21:59 INFO mapreduce.Job: Task Id :
>>>>> attempt_1394160253524_0003_m_000002_0, Status : FAILED
>>>>> Container
>>>>> [pid=5592,containerID=container_1394160253524_0003_01_000004] is running
>>>>> beyond virtual memory limits. Current usage: 112.6 MB of 1 GB physical
>>>>> memory used; 2.7 GB of
>>>>>
>>>>> 2.1 GB virtual memory used. Killing container.
>>>>> Dump of the process-tree for container_1394160253524_0003_01_000004 :
>>>>> |- PID PPID PGRPID SESSID CMD_NAME USER_MODE_TIME(MILLIS)
>>>>> SYSTEM_TIME(MILLIS) VMEM_USAGE(BYTES) RSSMEM_USAGE(PAGES) FULL_CMD_LINE
>>>>>  |- 5598 5592 5592 5592 (java) 563 14 2778632192 28520
>>>>> /usr/java/jdk1.7.0_45/bin/java -Djava.net.preferIPv4Stack=true
>>>>> -Dhadoop.metrics.log.level=WARN -Xmx2048m -
>>>>>
>>>>> Djava.io.tmpdir=/home/hadoop/tmp/nm-local-dir/usercache/hadoop/appcache/application_1394160253524_0003/container_1394160253524_0003_01_000004/tmp
>>>>> -
>>>>>
>>>>> Dlog4j.configuration=container-log4j.properties
>>>>> -Dyarn.app.container.log.dir=/var/log/hadoop/yarn/userlogs/application_1394160253524_0003/container_1394160253524_0003_01_000004
>>>>>
>>>>> -Dyarn.app.container.log.filesize=0 -Dhadoop.root.logger=INFO,CLA
>>>>> org.apache.hadoop.mapred.YarnChild 10.239.44.34 46837
>>>>> attempt_1394160253524_0003_m_000002_0 4
>>>>> |- 5592 4562 5592 5592 (bash) 0 0 108650496 300 /bin/bash -c
>>>>> /usr/java/jdk1.7.0_45/bin/java -Djava.net.preferIPv4Stack=true
>>>>> -Dhadoop.metrics.log.level=WARN  -Xmx2048m -
>>>>>
>>>>> Djava.io.tmpdir=/home/hadoop/tmp/nm-local-dir/usercache/hadoop/appcache/application_1394160253524_0003/container_1394160253524_0003_01_000004/tmp
>>>>> -
>>>>>
>>>>> Dlog4j.configuration=container-log4j.properties
>>>>> -Dyarn.app.container.log.dir=/var/log/hadoop/yarn/userlogs/application_1394160253524_0003/container_1394160253524_0003_01_000004
>>>>>
>>>>> -Dyarn.app.container.log.filesize=0 -Dhadoop.root.logger=INFO,CLA
>>>>> org.apache.hadoop.mapred.YarnChild 10.239.44.34 46837
>>>>> attempt_1394160253524_0003_m_000002_0 4
>>>>>
>>>>>
>>>>> 1>/var/log/hadoop/yarn/userlogs/application_1394160253524_0003/container_1394160253524_0003_01_000004/stdout
>>>>>
>>>>> 2>/var/log/hadoop/yarn/userlogs/application_1394160253524_0003/container_1394160253524_0003_01_000004/stderr
>>>>>
>>>>>
>>>>> Container killed on request. Exit code is 143
>>>>> 14/03/07 11:22:02 INFO mapreduce.Job: Task Id :
>>>>> attempt_1394160253524_0003_m_000001_0, Status : FAILED
>>>>> Container
>>>>> [pid=5182,containerID=container_1394160253524_0003_01_000003] is running
>>>>> beyond virtual memory limits. Current usage: 118.5 MB of 1 GB physical
>>>>> memory used; 2.7 GB of
>>>>>
>>>>> 2.1 GB virtual memory used. Killing container.
>>>>> Dump of the process-tree for container_1394160253524_0003_01_000003 :
>>>>> |- PID PPID PGRPID SESSID CMD_NAME USER_MODE_TIME(MILLIS)
>>>>> SYSTEM_TIME(MILLIS) VMEM_USAGE(BYTES) RSSMEM_USAGE(PAGES) FULL_CMD_LINE
>>>>>  |- 5182 4313 5182 5182 (bash) 0 0 108650496 303 /bin/bash -c
>>>>> /usr/java/jdk1.7.0_45/bin/java -Djava.net.preferIPv4Stack=true
>>>>> -Dhadoop.metrics.log.level=WARN  -Xmx2048m -
>>>>>
>>>>> Djava.io.tmpdir=/home/hadoop/tmp/nm-local-dir/usercache/hadoop/appcache/application_1394160253524_0003/container_1394160253524_0003_01_000003/tmp
>>>>> -
>>>>>
>>>>> Dlog4j.configuration=container-log4j.properties
>>>>> -Dyarn.app.container.log.dir=/var/log/hadoop/yarn/userlogs/application_1394160253524_0003/container_1394160253524_0003_01_000003
>>>>>
>>>>> -Dyarn.app.container.log.filesize=0 -Dhadoop.root.logger=INFO,CLA
>>>>> org.apache.hadoop.mapred.YarnChild 10.239.44.34 46837
>>>>> attempt_1394160253524_0003_m_000001_0 3
>>>>>
>>>>>
>>>>> 1>/var/log/hadoop/yarn/userlogs/application_1394160253524_0003/container_1394160253524_0003_01_000003/stdout
>>>>>
>>>>> 2>/var/log/hadoop/yarn/userlogs/application_1394160253524_0003/container_1394160253524_0003_01_000003/stderr
>>>>>
>>>>> |- 5187 5182 5182 5182 (java) 616 19 2783928320 30028
>>>>> /usr/java/jdk1.7.0_45/bin/java -Djava.net.preferIPv4Stack=true
>>>>> -Dhadoop.metrics.log.level=WARN -Xmx2048m -
>>>>>
>>>>> Djava.io.tmpdir=/home/hadoop/tmp/nm-local-dir/usercache/hadoop/appcache/application_1394160253524_0003/container_1394160253524_0003_01_000003/tmp
>>>>> -
>>>>>
>>>>> Dlog4j.configuration=container-log4j.properties
>>>>> -Dyarn.app.container.log.dir=/var/log/hadoop/yarn/userlogs/application_1394160253524_0003/container_1394160253524_0003_01_000003
>>>>>
>>>>> -Dyarn.app.container.log.filesize=0 -Dhadoop.root.logger=INFO,CLA
>>>>> org.apache.hadoop.mapred.YarnChild 10.239.44.34 46837
>>>>> attempt_1394160253524_0003_m_000001_0 3
>>>>>
>>>>> Container killed on request. Exit code is 143
>>>>>
>>>>> at last, the task failed.
>>>>> Thanks for any help!
>>>>>
>>>>
>>>>
>>>
>>
>>
>> --
>> *Thanks & Regards*
>>
>> Unmesha Sreeveni U.B
>> Junior Developer
>>
>> http://www.unmeshasreeveni.blogspot.in/
>>
>>
>>
>

Re: GC overhead limit exceeded

Posted by divye sheth <di...@gmail.com>.
Hi Haihong,

Please check out the link below, I believe it should solve your problem.

http://stackoverflow.com/questions/21005643/container-is-running-beyond-memory-limits

Thanks
Divye Sheth


On Wed, Mar 12, 2014 at 11:33 AM, haihong lu <un...@gmail.com> wrote:

> Thanks, even if i had added this parameter, but had no effect.
>
>
> On Tue, Mar 11, 2014 at 12:11 PM, unmesha sreeveni <un...@gmail.com>wrote:
>
>> Try to increase the memory for datanode and see.This need to restart
>> hadoop
>> export HADOOP_DATANODE_OPTS="-Xmx10g"
>> This will set the heap to 10gb
>> You can also add this in start of hadoop-env.sh file
>>
>>
>> On Tue, Mar 11, 2014 at 9:02 AM, haihong lu <un...@gmail.com> wrote:
>>
>>> I have tried both of the methods you suggested, but the problem still
>>> exists. Thanks all the same. By the way, my Hadoop version is 2.2.0, so the
>>> parameter "mapreduce.map.memory.mb=3072" added to mapred-site.xml
>>> may have no effect. I have looked for this parameter in the Hadoop
>>> documentation but did not find it.
>>>
>>>
>>> On Fri, Mar 7, 2014 at 4:57 PM, shashwat shriparv <
>>> dwivedishashwat@gmail.com> wrote:
>>>
>>>> Check this out
>>>>
>>>>
>>>> http://ask.gopivotal.com/hc/en-us/articles/201850408-Namenode-fails-with-java-lang-OutOfMemoryError-GC-overhead-limit-exceeded
>>>>
>>>>
>>>>
>>>> Warm Regards ∞
>>>> Shashwat Shriparv
>>>>
>>>>
>>>>
>>>> On Fri, Mar 7, 2014 at 12:04 PM, haihong lu <un...@gmail.com> wrote:
>>>
>>
>>
>> --
>> *Thanks & Regards*
>>
>> Unmesha Sreeveni U.B
>> Junior Developer
>>
>> http://www.unmeshasreeveni.blogspot.in/
>>
>>
>>
>

Re: GC overhead limit exceeded

Posted by haihong lu <un...@gmail.com>.
Thanks. I had already added this parameter, but it had no effect.
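
The "running beyond virtual memory limits" kills in the original report (2.7 GB of virtual memory used against a 2.1 GB cap) come from the NodeManager's vmem check: the cap is the container's physical memory (1 GB here) multiplied by yarn.nodemanager.vmem-pmem-ratio, which defaults to 2.1. A commonly suggested adjustment, sketched here with illustrative values rather than as a verified fix for this cluster, is to raise the ratio (or disable the check) in yarn-site.xml and restart the NodeManagers:

```xml
<!-- yarn-site.xml: illustrative values; restart the NodeManagers after changing -->
<property>
  <name>yarn.nodemanager.vmem-pmem-ratio</name>
  <value>4</value>  <!-- default is 2.1 -->
</property>
<!-- or, to skip the virtual-memory check entirely: -->
<property>
  <name>yarn.nodemanager.vmem-check-enabled</name>
  <value>false</value>
</property>
```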


On Tue, Mar 11, 2014 at 12:11 PM, unmesha sreeveni <un...@gmail.com>wrote:

> Try increasing the memory for the datanode and see if it helps. This requires restarting Hadoop:
> export HADOOP_DATANODE_OPTS="-Xmx10g"
> This sets the heap to 10 GB.
> You can also add this at the start of the hadoop-env.sh file.
>
>
> On Tue, Mar 11, 2014 at 9:02 AM, haihong lu <un...@gmail.com> wrote:
>
>> I have tried both of the methods you suggested, but the problem still
>> exists. Thanks all the same. By the way, my Hadoop version is 2.2.0, so the
>> parameter "mapreduce.map.memory.mb=3072" added to mapred-site.xml
>> may have no effect. I have looked for this parameter in the Hadoop
>> documentation but did not find it.
>>
>>
>> On Fri, Mar 7, 2014 at 4:57 PM, shashwat shriparv <
>> dwivedishashwat@gmail.com> wrote:
>>
>>> Check this out
>>>
>>>
>>> http://ask.gopivotal.com/hc/en-us/articles/201850408-Namenode-fails-with-java-lang-OutOfMemoryError-GC-overhead-limit-exceeded
>>>
>>>
>>>
>>> Warm Regards ∞
>>> Shashwat Shriparv
>>>
>>>
>>>
>>> On Fri, Mar 7, 2014 at 12:04 PM, haihong lu <un...@gmail.com> wrote:
>>
>
>
> --
> *Thanks & Regards*
>
> Unmesha Sreeveni U.B
> Junior Developer
>
> http://www.unmeshasreeveni.blogspot.in/
>
>
>

Re: GC overhead limit exceeded

Posted by haihong lu <un...@gmail.com>.
Thanks. I had already added this parameter, but it had no effect.


On Tue, Mar 11, 2014 at 12:11 PM, unmesha sreeveni <un...@gmail.com>wrote:

> Try increasing the memory for the datanode and see if it helps. This requires restarting Hadoop:
> export HADOOP_DATANODE_OPTS="-Xmx10g"
> This sets the heap to 10 GB.
> You can also add this at the start of the hadoop-env.sh file.
>
>
> On Tue, Mar 11, 2014 at 9:02 AM, haihong lu <un...@gmail.com> wrote:
>
>> I have tried both of the methods you suggested, but the problem still
>> exists. Thanks all the same. By the way, my Hadoop version is 2.2.0, so the
>> parameter "mapreduce.map.memory.mb=3072" added to mapred-site.xml
>> may have no effect. I have looked for this parameter in the Hadoop
>> documentation but did not find it.
>>
>>
>> On Fri, Mar 7, 2014 at 4:57 PM, shashwat shriparv <
>> dwivedishashwat@gmail.com> wrote:
>>
>>> Check this out
>>>
>>>
>>> http://ask.gopivotal.com/hc/en-us/articles/201850408-Namenode-fails-with-java-lang-OutOfMemoryError-GC-overhead-limit-exceeded
>>>
>>>
>>>
>>> Warm Regards ∞
>>> Shashwat Shriparv
>>>
>>>
>>>
>>> On Fri, Mar 7, 2014 at 12:04 PM, haihong lu <un...@gmail.com> wrote:
>>>>
>>>> In the end, the task failed.
>>>> Thanks for any help!
>>>>
>>>
>>>
>>
>
>
> --
> *Thanks & Regards*
>
> Unmesha Sreeveni U.B
> Junior Developer
>
> http://www.unmeshasreeveni.blogspot.in/
>
>
>

Re: GC overhead limit exceeded

Posted by haihong lu <un...@gmail.com>.
Thanks, but even after I added this parameter, it had no effect.

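The "running beyond virtual memory limits" kills quoted later in this thread come from the task JVM's -Xmx2048m heap pushing virtual memory past the container's 1 GB x 2.1 = 2.1 GB cap. A common remedy on Hadoop 2.x, sketched below with illustrative sizes (the 2048/1638 values are examples, not prescriptions), is to enlarge the map container and keep the heap below it, and optionally relax the NodeManager's vmem ratio:

```xml
<!-- mapred-site.xml: sketch only; the 2048/1638 sizes are illustrative.
     Keep -Xmx roughly 20% below the container size. -->
<property>
  <name>mapreduce.map.memory.mb</name>
  <value>2048</value>
</property>
<property>
  <name>mapreduce.map.java.opts</name>
  <value>-Xmx1638m</value>
</property>

<!-- yarn-site.xml: optionally allow more virtual memory per MB of
     physical memory (the default ratio is 2.1). -->
<property>
  <name>yarn.nodemanager.vmem-pmem-ratio</name>
  <value>4</value>
</property>
```

Note that mapreduce.map.java.opts is the Hadoop 2.x replacement for the deprecated mapred.child.java.opts used earlier in the thread.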

On Tue, Mar 11, 2014 at 12:11 PM, unmesha sreeveni <un...@gmail.com>wrote:

> Try increasing the memory for the datanode and see. This needs a Hadoop restart:
> export HADOOP_DATANODE_OPTS="-Xmx10g"
> This will set the heap to 10 GB.
> You can also add this at the start of the hadoop-env.sh file.
>
>
> On Tue, Mar 11, 2014 at 9:02 AM, haihong lu <un...@gmail.com> wrote:
>
>> I have tried both of the methods you said, but the problem still exists.
>> Thanks all the same. By the way, my Hadoop version is 2.2.0, so the
>> parameter "mapreduce.map.memory.mb=3072" added to mapred-site.xml
>> may have no effect. I have looked for this parameter in the Hadoop
>> documentation, but did not find it.
>>
>>
>> On Fri, Mar 7, 2014 at 4:57 PM, shashwat shriparv <
>> dwivedishashwat@gmail.com> wrote:
>>
>>> Check this out
>>>
>>>
>>> http://ask.gopivotal.com/hc/en-us/articles/201850408-Namenode-fails-with-java-lang-OutOfMemoryError-GC-overhead-limit-exceeded
>>>
>>>
>>>
>>> Warm Regards ∞
>>> Shashwat Shriparv
>>> http://www.linkedin.com/pub/shashwat-shriparv/19/214/2a9
>>> https://twitter.com/shriparv
>>> https://www.facebook.com/shriparv
>>> http://google.com/+ShashwatShriparv
>>> http://www.youtube.com/user/sShriparv/videos
>>> sh...@yahoo.com
>>>
>>>
>>>
>>> On Fri, Mar 7, 2014 at 12:04 PM, haihong lu <un...@gmail.com> wrote:
>>>
>>>> Hi:
>>>>
>>>>      I have a problem when running HiBench with hadoop-2.2.0; the error
>>>> messages are listed below:
>>>>
>>>> 14/03/07 13:54:53 INFO mapreduce.Job:  map 19% reduce 0%
>>>>  14/03/07 13:54:54 INFO mapreduce.Job:  map 21% reduce 0%
>>>> 14/03/07 14:00:26 INFO mapreduce.Job: Task Id :
>>>> attempt_1394160253524_0010_m_000020_0, Status : FAILED
>>>> Error: GC overhead limit exceeded
>>>> 14/03/07 14:00:27 INFO mapreduce.Job:  map 20% reduce 0%
>>>> 14/03/07 14:00:40 INFO mapreduce.Job: Task Id :
>>>> attempt_1394160253524_0010_m_000008_0, Status : FAILED
>>>> Error: GC overhead limit exceeded
>>>> 14/03/07 14:00:41 INFO mapreduce.Job:  map 19% reduce 0%
>>>> 14/03/07 14:00:59 INFO mapreduce.Job:  map 20% reduce 0%
>>>> 14/03/07 14:00:59 INFO mapreduce.Job: Task Id :
>>>> attempt_1394160253524_0010_m_000015_0, Status : FAILED
>>>> Error: GC overhead limit exceeded
>>>> 14/03/07 14:01:00 INFO mapreduce.Job:  map 19% reduce 0%
>>>> 14/03/07 14:01:03 INFO mapreduce.Job: Task Id :
>>>> attempt_1394160253524_0010_m_000023_0, Status : FAILED
>>>> Error: GC overhead limit exceeded
>>>> 14/03/07 14:01:11 INFO mapreduce.Job: Task Id :
>>>> attempt_1394160253524_0010_m_000026_0, Status : FAILED
>>>> Error: GC overhead limit exceeded
>>>> 14/03/07 14:01:35 INFO mapreduce.Job:  map 20% reduce 0%
>>>> 14/03/07 14:01:35 INFO mapreduce.Job: Task Id :
>>>> attempt_1394160253524_0010_m_000019_0, Status : FAILED
>>>> Error: GC overhead limit exceeded
>>>> 14/03/07 14:01:36 INFO mapreduce.Job:  map 19% reduce 0%
>>>> 14/03/07 14:01:43 INFO mapreduce.Job: Task Id :
>>>> attempt_1394160253524_0010_m_000007_0, Status : FAILED
>>>> Error: GC overhead limit exceeded
>>>> 14/03/07 14:02:00 INFO mapreduce.Job: Task Id :
>>>> attempt_1394160253524_0010_m_000000_0, Status : FAILED
>>>> Error: GC overhead limit exceeded
>>>> 14/03/07 14:02:01 INFO mapreduce.Job:  map 18% reduce 0%
>>>> 14/03/07 14:02:23 INFO mapreduce.Job: Task Id :
>>>> attempt_1394160253524_0010_m_000021_0, Status : FAILED
>>>> Error: Java heap space
>>>> 14/03/07 14:02:24 INFO mapreduce.Job:  map 17% reduce 0%
>>>> 14/03/07 14:02:31 INFO mapreduce.Job:  map 18% reduce 0%
>>>> 14/03/07 14:02:33 INFO mapreduce.Job: Task Id :
>>>> attempt_1394160253524_0010_m_000029_0, Status : FAILED
>>>> Error: GC overhead limit exceeded
>>>> 14/03/07 14:02:34 INFO mapreduce.Job:  map 17% reduce 0%
>>>> 14/03/07 14:02:38 INFO mapreduce.Job: Task Id :
>>>> attempt_1394160253524_0010_m_000010_0, Status : FAILED
>>>> Error: GC overhead limit exceeded
>>>> 14/03/07 14:02:41 INFO mapreduce.Job: Task Id :
>>>> attempt_1394160253524_0010_m_000018_0, Status : FAILED
>>>> Error: GC overhead limit exceeded
>>>> 14/03/07 14:02:43 INFO mapreduce.Job: Task Id :
>>>> attempt_1394160253524_0010_m_000014_0, Status : FAILED
>>>> Error: GC overhead limit exceeded
>>>> 14/03/07 14:02:47 INFO mapreduce.Job: Task Id :
>>>> attempt_1394160253524_0010_m_000028_0, Status : FAILED
>>>> Error: Java heap space
>>>> 14/03/07 14:02:50 INFO mapreduce.Job: Task Id :
>>>> attempt_1394160253524_0010_m_000002_0, Status : FAILED
>>>> Error: GC overhead limit exceeded
>>>> 14/03/07 14:02:51 INFO mapreduce.Job:  map 16% reduce 0%
>>>> 14/03/07 14:02:51 INFO mapreduce.Job: Task Id :
>>>> attempt_1394160253524_0010_m_000005_0, Status : FAILED
>>>> Error: GC overhead limit exceeded
>>>> 14/03/07 14:02:52 INFO mapreduce.Job:  map 15% reduce 0%
>>>> 14/03/07 14:02:55 INFO mapreduce.Job: Task Id :
>>>> attempt_1394160253524_0010_m_000006_0, Status : FAILED
>>>> Error: GC overhead limit exceeded
>>>> 14/03/07 14:02:57 INFO mapreduce.Job: Task Id :
>>>> attempt_1394160253524_0010_m_000027_0, Status : FAILED
>>>> Error: GC overhead limit exceeded
>>>> 14/03/07 14:02:58 INFO mapreduce.Job:  map 14% reduce 0%
>>>> 14/03/07 14:03:04 INFO mapreduce.Job: Task Id :
>>>> attempt_1394160253524_0010_m_000009_0, Status : FAILED
>>>> Error: GC overhead limit exceeded
>>>> 14/03/07 14:03:05 INFO mapreduce.Job: Task Id :
>>>> attempt_1394160253524_0010_m_000017_0, Status : FAILED
>>>> Error: GC overhead limit exceeded
>>>> 14/03/07 14:03:05 INFO mapreduce.Job: Task Id :
>>>> attempt_1394160253524_0010_m_000022_0, Status : FAILED
>>>> Error: GC overhead limit exceeded
>>>> 14/03/07 14:03:06 INFO mapreduce.Job:  map 12% reduce 0%
>>>> 14/03/07 14:03:10 INFO mapreduce.Job: Task Id :
>>>> attempt_1394160253524_0010_m_000001_0, Status : FAILED
>>>> Error: GC overhead limit exceeded
>>>> 14/03/07 14:03:11 INFO mapreduce.Job:  map 13% reduce 0%
>>>> 14/03/07 14:03:11 INFO mapreduce.Job: Task Id :
>>>> attempt_1394160253524_0010_m_000024_0, Status : FAILED
>>>>
>>>> and then I added the parameter "mapred.child.java.opts" to the file
>>>> "mapred-site.xml":
>>>>   <property>
>>>>         <name>mapred.child.java.opts</name>
>>>>         <value>-Xmx1024m</value>
>>>>   </property>
>>>> then another error occurred, as below:
>>>>
>>>> 14/03/07 11:21:51 INFO mapreduce.Job:  map 0% reduce 0%
>>>> 14/03/07 11:21:59 INFO mapreduce.Job: Task Id :
>>>> attempt_1394160253524_0003_m_000002_0, Status : FAILED
>>>> Container [pid=5592,containerID=container_1394160253524_0003_01_000004]
>>>> is running beyond virtual memory limits. Current usage: 112.6 MB of 1 GB
>>>> physical memory used; 2.7 GB of
>>>>
>>>> 2.1 GB virtual memory used. Killing container.
>>>> Dump of the process-tree for container_1394160253524_0003_01_000004 :
>>>> |- PID PPID PGRPID SESSID CMD_NAME USER_MODE_TIME(MILLIS)
>>>> SYSTEM_TIME(MILLIS) VMEM_USAGE(BYTES) RSSMEM_USAGE(PAGES) FULL_CMD_LINE
>>>>  |- 5598 5592 5592 5592 (java) 563 14 2778632192 28520
>>>> /usr/java/jdk1.7.0_45/bin/java -Djava.net.preferIPv4Stack=true
>>>> -Dhadoop.metrics.log.level=WARN -Xmx2048m -
>>>>
>>>> Djava.io.tmpdir=/home/hadoop/tmp/nm-local-dir/usercache/hadoop/appcache/application_1394160253524_0003/container_1394160253524_0003_01_000004/tmp
>>>> -
>>>>
>>>> Dlog4j.configuration=container-log4j.properties
>>>> -Dyarn.app.container.log.dir=/var/log/hadoop/yarn/userlogs/application_1394160253524_0003/container_1394160253524_0003_01_000004
>>>>
>>>> -Dyarn.app.container.log.filesize=0 -Dhadoop.root.logger=INFO,CLA
>>>> org.apache.hadoop.mapred.YarnChild 10.239.44.34 46837
>>>> attempt_1394160253524_0003_m_000002_0 4
>>>> |- 5592 4562 5592 5592 (bash) 0 0 108650496 300 /bin/bash -c
>>>> /usr/java/jdk1.7.0_45/bin/java -Djava.net.preferIPv4Stack=true
>>>> -Dhadoop.metrics.log.level=WARN  -Xmx2048m -
>>>>
>>>> Djava.io.tmpdir=/home/hadoop/tmp/nm-local-dir/usercache/hadoop/appcache/application_1394160253524_0003/container_1394160253524_0003_01_000004/tmp
>>>> -
>>>>
>>>> Dlog4j.configuration=container-log4j.properties
>>>> -Dyarn.app.container.log.dir=/var/log/hadoop/yarn/userlogs/application_1394160253524_0003/container_1394160253524_0003_01_000004
>>>>
>>>> -Dyarn.app.container.log.filesize=0 -Dhadoop.root.logger=INFO,CLA
>>>> org.apache.hadoop.mapred.YarnChild 10.239.44.34 46837
>>>> attempt_1394160253524_0003_m_000002_0 4
>>>>
>>>>
>>>> 1>/var/log/hadoop/yarn/userlogs/application_1394160253524_0003/container_1394160253524_0003_01_000004/stdout
>>>>
>>>> 2>/var/log/hadoop/yarn/userlogs/application_1394160253524_0003/container_1394160253524_0003_01_000004/stderr
>>>>
>>>>
>>>> Container killed on request. Exit code is 143
>>>> 14/03/07 11:22:02 INFO mapreduce.Job: Task Id :
>>>> attempt_1394160253524_0003_m_000001_0, Status : FAILED
>>>> Container [pid=5182,containerID=container_1394160253524_0003_01_000003]
>>>> is running beyond virtual memory limits. Current usage: 118.5 MB of 1 GB
>>>> physical memory used; 2.7 GB of
>>>>
>>>> 2.1 GB virtual memory used. Killing container.
>>>> Dump of the process-tree for container_1394160253524_0003_01_000003 :
>>>> |- PID PPID PGRPID SESSID CMD_NAME USER_MODE_TIME(MILLIS)
>>>> SYSTEM_TIME(MILLIS) VMEM_USAGE(BYTES) RSSMEM_USAGE(PAGES) FULL_CMD_LINE
>>>>  |- 5182 4313 5182 5182 (bash) 0 0 108650496 303 /bin/bash -c
>>>> /usr/java/jdk1.7.0_45/bin/java -Djava.net.preferIPv4Stack=true
>>>> -Dhadoop.metrics.log.level=WARN  -Xmx2048m -
>>>>
>>>> Djava.io.tmpdir=/home/hadoop/tmp/nm-local-dir/usercache/hadoop/appcache/application_1394160253524_0003/container_1394160253524_0003_01_000003/tmp
>>>> -
>>>>
>>>> Dlog4j.configuration=container-log4j.properties
>>>> -Dyarn.app.container.log.dir=/var/log/hadoop/yarn/userlogs/application_1394160253524_0003/container_1394160253524_0003_01_000003
>>>>
>>>> -Dyarn.app.container.log.filesize=0 -Dhadoop.root.logger=INFO,CLA
>>>> org.apache.hadoop.mapred.YarnChild 10.239.44.34 46837
>>>> attempt_1394160253524_0003_m_000001_0 3
>>>>
>>>>
>>>> 1>/var/log/hadoop/yarn/userlogs/application_1394160253524_0003/container_1394160253524_0003_01_000003/stdout
>>>>
>>>> 2>/var/log/hadoop/yarn/userlogs/application_1394160253524_0003/container_1394160253524_0003_01_000003/stderr
>>>>
>>>> |- 5187 5182 5182 5182 (java) 616 19 2783928320 30028
>>>> /usr/java/jdk1.7.0_45/bin/java -Djava.net.preferIPv4Stack=true
>>>> -Dhadoop.metrics.log.level=WARN -Xmx2048m -
>>>>
>>>> Djava.io.tmpdir=/home/hadoop/tmp/nm-local-dir/usercache/hadoop/appcache/application_1394160253524_0003/container_1394160253524_0003_01_000003/tmp
>>>> -
>>>>
>>>> Dlog4j.configuration=container-log4j.properties
>>>> -Dyarn.app.container.log.dir=/var/log/hadoop/yarn/userlogs/application_1394160253524_0003/container_1394160253524_0003_01_000003
>>>>
>>>> -Dyarn.app.container.log.filesize=0 -Dhadoop.root.logger=INFO,CLA
>>>> org.apache.hadoop.mapred.YarnChild 10.239.44.34 46837
>>>> attempt_1394160253524_0003_m_000001_0 3
>>>>
>>>> Container killed on request. Exit code is 143
>>>>
>>>> In the end, the task failed.
>>>> Thanks for any help!
>>>>
>>>
>>>
>>
>
>
> --
> *Thanks & Regards*
>
> Unmesha Sreeveni U.B
> Junior Developer
>
> http://www.unmeshasreeveni.blogspot.in/
>
>
>

Re: GC overhead limit exceeded

Posted by unmesha sreeveni <un...@gmail.com>.
Try increasing the memory for the datanode and see. This needs a Hadoop restart:
export HADOOP_DATANODE_OPTS="-Xmx10g"
This will set the heap to 10 GB.
You can also add this at the start of the hadoop-env.sh file.

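As a sketch of where that export goes (the 10 GB heap is only an example; size it to the host's free RAM), the line can be placed near the top of hadoop-env.sh:

```shell
# hadoop-env.sh -- example heap size; adjust to the machine's RAM
# and restart the datanode for it to take effect.
export HADOOP_DATANODE_OPTS="-Xmx10g $HADOOP_DATANODE_OPTS"
```

Appending the previous value of the variable preserves any options already set elsewhere in the file.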

On Tue, Mar 11, 2014 at 9:02 AM, haihong lu <un...@gmail.com> wrote:

> I have tried both of the methods you said, but the problem still exists.
> Thanks all the same. By the way, my Hadoop version is 2.2.0, so the
> parameter "mapreduce.map.memory.mb=3072" added to mapred-site.xml may
> have no effect. I have looked for this parameter in the Hadoop
> documentation, but did not find it.
>
>
> On Fri, Mar 7, 2014 at 4:57 PM, shashwat shriparv <
> dwivedishashwat@gmail.com> wrote:
>
>> Check this out
>>
>>
>> http://ask.gopivotal.com/hc/en-us/articles/201850408-Namenode-fails-with-java-lang-OutOfMemoryError-GC-overhead-limit-exceeded
>>
>>
>>
>> Warm Regards ∞
>> Shashwat Shriparv
>> http://www.linkedin.com/pub/shashwat-shriparv/19/214/2a9
>> https://twitter.com/shriparv
>> https://www.facebook.com/shriparv
>> http://google.com/+ShashwatShriparv
>> http://www.youtube.com/user/sShriparv/videos
>> sh...@yahoo.com
>>
>>
>>
>> On Fri, Mar 7, 2014 at 12:04 PM, haihong lu <un...@gmail.com> wrote:
>>
>>> Hi:
>>>
>>>      I have a problem when running HiBench with hadoop-2.2.0; the error
>>> messages are listed below:
>>>
>>> 14/03/07 13:54:53 INFO mapreduce.Job:  map 19% reduce 0%
>>>  14/03/07 13:54:54 INFO mapreduce.Job:  map 21% reduce 0%
>>> 14/03/07 14:00:26 INFO mapreduce.Job: Task Id :
>>> attempt_1394160253524_0010_m_000020_0, Status : FAILED
>>> Error: GC overhead limit exceeded
>>> 14/03/07 14:00:27 INFO mapreduce.Job:  map 20% reduce 0%
>>> 14/03/07 14:00:40 INFO mapreduce.Job: Task Id :
>>> attempt_1394160253524_0010_m_000008_0, Status : FAILED
>>> Error: GC overhead limit exceeded
>>> 14/03/07 14:00:41 INFO mapreduce.Job:  map 19% reduce 0%
>>> 14/03/07 14:00:59 INFO mapreduce.Job:  map 20% reduce 0%
>>> 14/03/07 14:00:59 INFO mapreduce.Job: Task Id :
>>> attempt_1394160253524_0010_m_000015_0, Status : FAILED
>>> Error: GC overhead limit exceeded
>>> 14/03/07 14:01:00 INFO mapreduce.Job:  map 19% reduce 0%
>>> 14/03/07 14:01:03 INFO mapreduce.Job: Task Id :
>>> attempt_1394160253524_0010_m_000023_0, Status : FAILED
>>> Error: GC overhead limit exceeded
>>> 14/03/07 14:01:11 INFO mapreduce.Job: Task Id :
>>> attempt_1394160253524_0010_m_000026_0, Status : FAILED
>>> Error: GC overhead limit exceeded
>>> 14/03/07 14:01:35 INFO mapreduce.Job:  map 20% reduce 0%
>>> 14/03/07 14:01:35 INFO mapreduce.Job: Task Id :
>>> attempt_1394160253524_0010_m_000019_0, Status : FAILED
>>> Error: GC overhead limit exceeded
>>> 14/03/07 14:01:36 INFO mapreduce.Job:  map 19% reduce 0%
>>> 14/03/07 14:01:43 INFO mapreduce.Job: Task Id :
>>> attempt_1394160253524_0010_m_000007_0, Status : FAILED
>>> Error: GC overhead limit exceeded
>>> 14/03/07 14:02:00 INFO mapreduce.Job: Task Id :
>>> attempt_1394160253524_0010_m_000000_0, Status : FAILED
>>> Error: GC overhead limit exceeded
>>> 14/03/07 14:02:01 INFO mapreduce.Job:  map 18% reduce 0%
>>> 14/03/07 14:02:23 INFO mapreduce.Job: Task Id :
>>> attempt_1394160253524_0010_m_000021_0, Status : FAILED
>>> Error: Java heap space
>>> 14/03/07 14:02:24 INFO mapreduce.Job:  map 17% reduce 0%
>>> 14/03/07 14:02:31 INFO mapreduce.Job:  map 18% reduce 0%
>>> 14/03/07 14:02:33 INFO mapreduce.Job: Task Id :
>>> attempt_1394160253524_0010_m_000029_0, Status : FAILED
>>> Error: GC overhead limit exceeded
>>> 14/03/07 14:02:34 INFO mapreduce.Job:  map 17% reduce 0%
>>> 14/03/07 14:02:38 INFO mapreduce.Job: Task Id :
>>> attempt_1394160253524_0010_m_000010_0, Status : FAILED
>>> Error: GC overhead limit exceeded
>>> 14/03/07 14:02:41 INFO mapreduce.Job: Task Id :
>>> attempt_1394160253524_0010_m_000018_0, Status : FAILED
>>> Error: GC overhead limit exceeded
>>> 14/03/07 14:02:43 INFO mapreduce.Job: Task Id :
>>> attempt_1394160253524_0010_m_000014_0, Status : FAILED
>>> Error: GC overhead limit exceeded
>>> 14/03/07 14:02:47 INFO mapreduce.Job: Task Id :
>>> attempt_1394160253524_0010_m_000028_0, Status : FAILED
>>> Error: Java heap space
>>> 14/03/07 14:02:50 INFO mapreduce.Job: Task Id :
>>> attempt_1394160253524_0010_m_000002_0, Status : FAILED
>>> Error: GC overhead limit exceeded
>>> 14/03/07 14:02:51 INFO mapreduce.Job:  map 16% reduce 0%
>>> 14/03/07 14:02:51 INFO mapreduce.Job: Task Id :
>>> attempt_1394160253524_0010_m_000005_0, Status : FAILED
>>> Error: GC overhead limit exceeded
>>> 14/03/07 14:02:52 INFO mapreduce.Job:  map 15% reduce 0%
>>> 14/03/07 14:02:55 INFO mapreduce.Job: Task Id :
>>> attempt_1394160253524_0010_m_000006_0, Status : FAILED
>>> Error: GC overhead limit exceeded
>>> 14/03/07 14:02:57 INFO mapreduce.Job: Task Id :
>>> attempt_1394160253524_0010_m_000027_0, Status : FAILED
>>> Error: GC overhead limit exceeded
>>> 14/03/07 14:02:58 INFO mapreduce.Job:  map 14% reduce 0%
>>> 14/03/07 14:03:04 INFO mapreduce.Job: Task Id :
>>> attempt_1394160253524_0010_m_000009_0, Status : FAILED
>>> Error: GC overhead limit exceeded
>>> 14/03/07 14:03:05 INFO mapreduce.Job: Task Id :
>>> attempt_1394160253524_0010_m_000017_0, Status : FAILED
>>> Error: GC overhead limit exceeded
>>> 14/03/07 14:03:05 INFO mapreduce.Job: Task Id :
>>> attempt_1394160253524_0010_m_000022_0, Status : FAILED
>>> Error: GC overhead limit exceeded
>>> 14/03/07 14:03:06 INFO mapreduce.Job:  map 12% reduce 0%
>>> 14/03/07 14:03:10 INFO mapreduce.Job: Task Id :
>>> attempt_1394160253524_0010_m_000001_0, Status : FAILED
>>> Error: GC overhead limit exceeded
>>> 14/03/07 14:03:11 INFO mapreduce.Job:  map 13% reduce 0%
>>> 14/03/07 14:03:11 INFO mapreduce.Job: Task Id :
>>> attempt_1394160253524_0010_m_000024_0, Status : FAILED
>>>
>>> and then I added the parameter "mapred.child.java.opts" to the file
>>> "mapred-site.xml":
>>>   <property>
>>>         <name>mapred.child.java.opts</name>
>>>         <value>-Xmx1024m</value>
>>>   </property>
>>> then another error occurred, as below:
>>>
>>> 14/03/07 11:21:51 INFO mapreduce.Job:  map 0% reduce 0%
>>> 14/03/07 11:21:59 INFO mapreduce.Job: Task Id :
>>> attempt_1394160253524_0003_m_000002_0, Status : FAILED
>>> Container [pid=5592,containerID=container_1394160253524_0003_01_000004]
>>> is running beyond virtual memory limits. Current usage: 112.6 MB of 1 GB
>>> physical memory used; 2.7 GB of
>>>
>>> 2.1 GB virtual memory used. Killing container.
>>> Dump of the process-tree for container_1394160253524_0003_01_000004 :
>>> |- PID PPID PGRPID SESSID CMD_NAME USER_MODE_TIME(MILLIS)
>>> SYSTEM_TIME(MILLIS) VMEM_USAGE(BYTES) RSSMEM_USAGE(PAGES) FULL_CMD_LINE
>>>  |- 5598 5592 5592 5592 (java) 563 14 2778632192 28520
>>> /usr/java/jdk1.7.0_45/bin/java -Djava.net.preferIPv4Stack=true
>>> -Dhadoop.metrics.log.level=WARN -Xmx2048m -
>>>
>>> Djava.io.tmpdir=/home/hadoop/tmp/nm-local-dir/usercache/hadoop/appcache/application_1394160253524_0003/container_1394160253524_0003_01_000004/tmp
>>> -
>>>
>>> Dlog4j.configuration=container-log4j.properties
>>> -Dyarn.app.container.log.dir=/var/log/hadoop/yarn/userlogs/application_1394160253524_0003/container_1394160253524_0003_01_000004
>>>
>>> -Dyarn.app.container.log.filesize=0 -Dhadoop.root.logger=INFO,CLA
>>> org.apache.hadoop.mapred.YarnChild 10.239.44.34 46837
>>> attempt_1394160253524_0003_m_000002_0 4
>>> |- 5592 4562 5592 5592 (bash) 0 0 108650496 300 /bin/bash -c
>>> /usr/java/jdk1.7.0_45/bin/java -Djava.net.preferIPv4Stack=true
>>> -Dhadoop.metrics.log.level=WARN  -Xmx2048m -
>>>
>>> Djava.io.tmpdir=/home/hadoop/tmp/nm-local-dir/usercache/hadoop/appcache/application_1394160253524_0003/container_1394160253524_0003_01_000004/tmp
>>> -
>>>
>>> Dlog4j.configuration=container-log4j.properties
>>> -Dyarn.app.container.log.dir=/var/log/hadoop/yarn/userlogs/application_1394160253524_0003/container_1394160253524_0003_01_000004
>>>
>>> -Dyarn.app.container.log.filesize=0 -Dhadoop.root.logger=INFO,CLA
>>> org.apache.hadoop.mapred.YarnChild 10.239.44.34 46837
>>> attempt_1394160253524_0003_m_000002_0 4
>>>
>>>
>>> 1>/var/log/hadoop/yarn/userlogs/application_1394160253524_0003/container_1394160253524_0003_01_000004/stdout
>>>
>>> 2>/var/log/hadoop/yarn/userlogs/application_1394160253524_0003/container_1394160253524_0003_01_000004/stderr
>>>
>>>
>>> Container killed on request. Exit code is 143
>>> 14/03/07 11:22:02 INFO mapreduce.Job: Task Id :
>>> attempt_1394160253524_0003_m_000001_0, Status : FAILED
>>> Container [pid=5182,containerID=container_1394160253524_0003_01_000003]
>>> is running beyond virtual memory limits. Current usage: 118.5 MB of 1 GB
>>> physical memory used; 2.7 GB of
>>>
>>> 2.1 GB virtual memory used. Killing container.
>>> Dump of the process-tree for container_1394160253524_0003_01_000003 :
>>> |- PID PPID PGRPID SESSID CMD_NAME USER_MODE_TIME(MILLIS)
>>> SYSTEM_TIME(MILLIS) VMEM_USAGE(BYTES) RSSMEM_USAGE(PAGES) FULL_CMD_LINE
>>>  |- 5182 4313 5182 5182 (bash) 0 0 108650496 303 /bin/bash -c
>>> /usr/java/jdk1.7.0_45/bin/java -Djava.net.preferIPv4Stack=true
>>> -Dhadoop.metrics.log.level=WARN  -Xmx2048m -
>>>
>>> Djava.io.tmpdir=/home/hadoop/tmp/nm-local-dir/usercache/hadoop/appcache/application_1394160253524_0003/container_1394160253524_0003_01_000003/tmp
>>> -
>>>
>>> Dlog4j.configuration=container-log4j.properties
>>> -Dyarn.app.container.log.dir=/var/log/hadoop/yarn/userlogs/application_1394160253524_0003/container_1394160253524_0003_01_000003
>>>
>>> -Dyarn.app.container.log.filesize=0 -Dhadoop.root.logger=INFO,CLA
>>> org.apache.hadoop.mapred.YarnChild 10.239.44.34 46837
>>> attempt_1394160253524_0003_m_000001_0 3
>>>
>>>
>>> 1>/var/log/hadoop/yarn/userlogs/application_1394160253524_0003/container_1394160253524_0003_01_000003/stdout
>>>
>>> 2>/var/log/hadoop/yarn/userlogs/application_1394160253524_0003/container_1394160253524_0003_01_000003/stderr
>>>
>>> |- 5187 5182 5182 5182 (java) 616 19 2783928320 30028
>>> /usr/java/jdk1.7.0_45/bin/java -Djava.net.preferIPv4Stack=true
>>> -Dhadoop.metrics.log.level=WARN -Xmx2048m -
>>>
>>> Djava.io.tmpdir=/home/hadoop/tmp/nm-local-dir/usercache/hadoop/appcache/application_1394160253524_0003/container_1394160253524_0003_01_000003/tmp
>>> -
>>>
>>> Dlog4j.configuration=container-log4j.properties
>>> -Dyarn.app.container.log.dir=/var/log/hadoop/yarn/userlogs/application_1394160253524_0003/container_1394160253524_0003_01_000003
>>>
>>> -Dyarn.app.container.log.filesize=0 -Dhadoop.root.logger=INFO,CLA
>>> org.apache.hadoop.mapred.YarnChild 10.239.44.34 46837
>>> attempt_1394160253524_0003_m_000001_0 3
>>>
>>> Container killed on request. Exit code is 143
>>>
>>> In the end, the task failed.
>>> Thanks for any help!
>>>
>>
>>
>


-- 
*Thanks & Regards*

Unmesha Sreeveni U.B
Junior Developer

http://www.unmeshasreeveni.blogspot.in/

Re: GC overhead limit exceeded

Posted by unmesha sreeveni <un...@gmail.com>.
Try increasing the memory for the DataNode and see. This needs a Hadoop restart:
export HADOOP_DATANODE_OPTS="-Xmx10g"
This sets the heap to 10 GB.
You can also add this at the start of the hadoop-env.sh file.
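Since the quoted failures are in map tasks rather than the DataNode, the per-task container and heap settings are usually the knobs that matter here. A sketch of a coordinated mapred-site.xml fragment follows; the values are illustrative and must be sized to your nodes, and the point is that each -Xmx should stay comfortably below the matching container size so YARN's memory checks pass:

```xml
<!-- mapred-site.xml: illustrative values only; size them to your cluster.
     Keep each -Xmx below the matching *.memory.mb container size. -->
<property>
  <name>mapreduce.map.memory.mb</name>
  <value>2048</value>   <!-- container size YARN reserves per map task -->
</property>
<property>
  <name>mapreduce.map.java.opts</name>
  <value>-Xmx1536m</value>   <!-- map JVM heap, ~75% of the container -->
</property>
<property>
  <name>mapreduce.reduce.memory.mb</name>
  <value>2048</value>
</property>
<property>
  <name>mapreduce.reduce.java.opts</name>
  <value>-Xmx1536m</value>
</property>
```

In Hadoop 2.x, mapred.child.java.opts is superseded by the split mapreduce.map.java.opts / mapreduce.reduce.java.opts properties, which is why setting only the old name alongside a larger heap can leave the container size at its 1 GB default.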


On Tue, Mar 11, 2014 at 9:02 AM, haihong lu <un...@gmail.com> wrote:

> I have tried both of the methods you said, but the problem still exists.
> Thanks all the same. By the way, my Hadoop version is 2.2.0, so the
> parameter "mapreduce.map.memory.mb=3072" added to mapred-site.xml may
> have no effect. I have looked for this parameter in the Hadoop
> documentation but did not find it.
>
>
> On Fri, Mar 7, 2014 at 4:57 PM, shashwat shriparv <
> dwivedishashwat@gmail.com> wrote:
>
>> Check this out
>>
>>
>> http://ask.gopivotal.com/hc/en-us/articles/201850408-Namenode-fails-with-java-lang-OutOfMemoryError-GC-overhead-limit-exceeded
>>
>>
>>
>> *Warm Regards*
>> *Shashwat Shriparv*
>> <http://www.linkedin.com/pub/shashwat-shriparv/19/214/2a9>
>> <https://twitter.com/shriparv>
>> <https://www.facebook.com/shriparv>
>> <http://google.com/+ShashwatShriparv>
>> <http://www.youtube.com/user/sShriparv/videos>
>> <sh...@yahoo.com>
>>
>>
>>
>> On Fri, Mar 7, 2014 at 12:04 PM, haihong lu <un...@gmail.com> wrote:
>>
>>> Hi:
>>>
>>>      I have a problem when running HiBench with hadoop-2.2.0; the error
>>> messages are listed below.
>>>
>>> 14/03/07 13:54:53 INFO mapreduce.Job:  map 19% reduce 0%
>>>  14/03/07 13:54:54 INFO mapreduce.Job:  map 21% reduce 0%
>>> 14/03/07 14:00:26 INFO mapreduce.Job: Task Id :
>>> attempt_1394160253524_0010_m_000020_0, Status : FAILED
>>> Error: GC overhead limit exceeded
>>> 14/03/07 14:00:27 INFO mapreduce.Job:  map 20% reduce 0%
>>> 14/03/07 14:00:40 INFO mapreduce.Job: Task Id :
>>> attempt_1394160253524_0010_m_000008_0, Status : FAILED
>>> Error: GC overhead limit exceeded
>>> 14/03/07 14:00:41 INFO mapreduce.Job:  map 19% reduce 0%
>>> 14/03/07 14:00:59 INFO mapreduce.Job:  map 20% reduce 0%
>>> 14/03/07 14:00:59 INFO mapreduce.Job: Task Id :
>>> attempt_1394160253524_0010_m_000015_0, Status : FAILED
>>> Error: GC overhead limit exceeded
>>> 14/03/07 14:01:00 INFO mapreduce.Job:  map 19% reduce 0%
>>> 14/03/07 14:01:03 INFO mapreduce.Job: Task Id :
>>> attempt_1394160253524_0010_m_000023_0, Status : FAILED
>>> Error: GC overhead limit exceeded
>>> 14/03/07 14:01:11 INFO mapreduce.Job: Task Id :
>>> attempt_1394160253524_0010_m_000026_0, Status : FAILED
>>> Error: GC overhead limit exceeded
>>> 14/03/07 14:01:35 INFO mapreduce.Job:  map 20% reduce 0%
>>> 14/03/07 14:01:35 INFO mapreduce.Job: Task Id :
>>> attempt_1394160253524_0010_m_000019_0, Status : FAILED
>>> Error: GC overhead limit exceeded
>>> 14/03/07 14:01:36 INFO mapreduce.Job:  map 19% reduce 0%
>>> 14/03/07 14:01:43 INFO mapreduce.Job: Task Id :
>>> attempt_1394160253524_0010_m_000007_0, Status : FAILED
>>> Error: GC overhead limit exceeded
>>> 14/03/07 14:02:00 INFO mapreduce.Job: Task Id :
>>> attempt_1394160253524_0010_m_000000_0, Status : FAILED
>>> Error: GC overhead limit exceeded
>>> 14/03/07 14:02:01 INFO mapreduce.Job:  map 18% reduce 0%
>>> 14/03/07 14:02:23 INFO mapreduce.Job: Task Id :
>>> attempt_1394160253524_0010_m_000021_0, Status : FAILED
>>> Error: Java heap space
>>> 14/03/07 14:02:24 INFO mapreduce.Job:  map 17% reduce 0%
>>> 14/03/07 14:02:31 INFO mapreduce.Job:  map 18% reduce 0%
>>> 14/03/07 14:02:33 INFO mapreduce.Job: Task Id :
>>> attempt_1394160253524_0010_m_000029_0, Status : FAILED
>>> Error: GC overhead limit exceeded
>>> 14/03/07 14:02:34 INFO mapreduce.Job:  map 17% reduce 0%
>>> 14/03/07 14:02:38 INFO mapreduce.Job: Task Id :
>>> attempt_1394160253524_0010_m_000010_0, Status : FAILED
>>> Error: GC overhead limit exceeded
>>> 14/03/07 14:02:41 INFO mapreduce.Job: Task Id :
>>> attempt_1394160253524_0010_m_000018_0, Status : FAILED
>>> Error: GC overhead limit exceeded
>>> 14/03/07 14:02:43 INFO mapreduce.Job: Task Id :
>>> attempt_1394160253524_0010_m_000014_0, Status : FAILED
>>> Error: GC overhead limit exceeded
>>> 14/03/07 14:02:47 INFO mapreduce.Job: Task Id :
>>> attempt_1394160253524_0010_m_000028_0, Status : FAILED
>>> Error: Java heap space
>>> 14/03/07 14:02:50 INFO mapreduce.Job: Task Id :
>>> attempt_1394160253524_0010_m_000002_0, Status : FAILED
>>> Error: GC overhead limit exceeded
>>> 14/03/07 14:02:51 INFO mapreduce.Job:  map 16% reduce 0%
>>> 14/03/07 14:02:51 INFO mapreduce.Job: Task Id :
>>> attempt_1394160253524_0010_m_000005_0, Status : FAILED
>>> Error: GC overhead limit exceeded
>>> 14/03/07 14:02:52 INFO mapreduce.Job:  map 15% reduce 0%
>>> 14/03/07 14:02:55 INFO mapreduce.Job: Task Id :
>>> attempt_1394160253524_0010_m_000006_0, Status : FAILED
>>> Error: GC overhead limit exceeded
>>> 14/03/07 14:02:57 INFO mapreduce.Job: Task Id :
>>> attempt_1394160253524_0010_m_000027_0, Status : FAILED
>>> Error: GC overhead limit exceeded
>>> 14/03/07 14:02:58 INFO mapreduce.Job:  map 14% reduce 0%
>>> 14/03/07 14:03:04 INFO mapreduce.Job: Task Id :
>>> attempt_1394160253524_0010_m_000009_0, Status : FAILED
>>> Error: GC overhead limit exceeded
>>> 14/03/07 14:03:05 INFO mapreduce.Job: Task Id :
>>> attempt_1394160253524_0010_m_000017_0, Status : FAILED
>>> Error: GC overhead limit exceeded
>>> 14/03/07 14:03:05 INFO mapreduce.Job: Task Id :
>>> attempt_1394160253524_0010_m_000022_0, Status : FAILED
>>> Error: GC overhead limit exceeded
>>> 14/03/07 14:03:06 INFO mapreduce.Job:  map 12% reduce 0%
>>> 14/03/07 14:03:10 INFO mapreduce.Job: Task Id :
>>> attempt_1394160253524_0010_m_000001_0, Status : FAILED
>>> Error: GC overhead limit exceeded
>>> 14/03/07 14:03:11 INFO mapreduce.Job:  map 13% reduce 0%
>>> 14/03/07 14:03:11 INFO mapreduce.Job: Task Id :
>>> attempt_1394160253524_0010_m_000024_0, Status : FAILED
>>>
>>> Then I added the parameter "mapred.child.java.opts" to the file
>>> "mapred-site.xml":
>>>   <property>
>>>         <name>mapred.child.java.opts</name>
>>>         <value>-Xmx1024m</value>
>>>   </property>
>>> and another error occurred, as below:
>>>
>>> 14/03/07 11:21:51 INFO mapreduce.Job:  map 0% reduce 0%
>>> 14/03/07 11:21:59 INFO mapreduce.Job: Task Id :
>>> attempt_1394160253524_0003_m_000002_0, Status : FAILED
>>> Container [pid=5592,containerID=container_1394160253524_0003_01_000004]
>>> is running beyond virtual memory limits. Current usage: 112.6 MB of 1 GB
>>> physical memory used; 2.7 GB of
>>>
>>> 2.1 GB virtual memory used. Killing container.
>>> Dump of the process-tree for container_1394160253524_0003_01_000004 :
>>> |- PID PPID PGRPID SESSID CMD_NAME USER_MODE_TIME(MILLIS)
>>> SYSTEM_TIME(MILLIS) VMEM_USAGE(BYTES) RSSMEM_USAGE(PAGES) FULL_CMD_LINE
>>>  |- 5598 5592 5592 5592 (java) 563 14 2778632192 28520
>>> /usr/java/jdk1.7.0_45/bin/java -Djava.net.preferIPv4Stack=true
>>> -Dhadoop.metrics.log.level=WARN -Xmx2048m -
>>>
>>> Djava.io.tmpdir=/home/hadoop/tmp/nm-local-dir/usercache/hadoop/appcache/application_1394160253524_0003/container_1394160253524_0003_01_000004/tmp
>>> -
>>>
>>> Dlog4j.configuration=container-log4j.properties
>>> -Dyarn.app.container.log.dir=/var/log/hadoop/yarn/userlogs/application_1394160253524_0003/container_1394160253524_0003_01_000004
>>>
>>> -Dyarn.app.container.log.filesize=0 -Dhadoop.root.logger=INFO,CLA
>>> org.apache.hadoop.mapred.YarnChild 10.239.44.34 46837
>>> attempt_1394160253524_0003_m_000002_0 4
>>> |- 5592 4562 5592 5592 (bash) 0 0 108650496 300 /bin/bash -c
>>> /usr/java/jdk1.7.0_45/bin/java -Djava.net.preferIPv4Stack=true
>>> -Dhadoop.metrics.log.level=WARN  -Xmx2048m -
>>>
>>> Djava.io.tmpdir=/home/hadoop/tmp/nm-local-dir/usercache/hadoop/appcache/application_1394160253524_0003/container_1394160253524_0003_01_000004/tmp
>>> -
>>>
>>> Dlog4j.configuration=container-log4j.properties
>>> -Dyarn.app.container.log.dir=/var/log/hadoop/yarn/userlogs/application_1394160253524_0003/container_1394160253524_0003_01_000004
>>>
>>> -Dyarn.app.container.log.filesize=0 -Dhadoop.root.logger=INFO,CLA
>>> org.apache.hadoop.mapred.YarnChild 10.239.44.34 46837
>>> attempt_1394160253524_0003_m_000002_0 4
>>>
>>>
>>> 1>/var/log/hadoop/yarn/userlogs/application_1394160253524_0003/container_1394160253524_0003_01_000004/stdout
>>>
>>> 2>/var/log/hadoop/yarn/userlogs/application_1394160253524_0003/container_1394160253524_0003_01_000004/stderr
>>>
>>>
>>> Container killed on request. Exit code is 143
>>> 14/03/07 11:22:02 INFO mapreduce.Job: Task Id :
>>> attempt_1394160253524_0003_m_000001_0, Status : FAILED
>>> Container [pid=5182,containerID=container_1394160253524_0003_01_000003]
>>> is running beyond virtual memory limits. Current usage: 118.5 MB of 1 GB
>>> physical memory used; 2.7 GB of
>>>
>>> 2.1 GB virtual memory used. Killing container.
>>> Dump of the process-tree for container_1394160253524_0003_01_000003 :
>>> |- PID PPID PGRPID SESSID CMD_NAME USER_MODE_TIME(MILLIS)
>>> SYSTEM_TIME(MILLIS) VMEM_USAGE(BYTES) RSSMEM_USAGE(PAGES) FULL_CMD_LINE
>>>  |- 5182 4313 5182 5182 (bash) 0 0 108650496 303 /bin/bash -c
>>> /usr/java/jdk1.7.0_45/bin/java -Djava.net.preferIPv4Stack=true
>>> -Dhadoop.metrics.log.level=WARN  -Xmx2048m -
>>>
>>> Djava.io.tmpdir=/home/hadoop/tmp/nm-local-dir/usercache/hadoop/appcache/application_1394160253524_0003/container_1394160253524_0003_01_000003/tmp
>>> -
>>>
>>> Dlog4j.configuration=container-log4j.properties
>>> -Dyarn.app.container.log.dir=/var/log/hadoop/yarn/userlogs/application_1394160253524_0003/container_1394160253524_0003_01_000003
>>>
>>> -Dyarn.app.container.log.filesize=0 -Dhadoop.root.logger=INFO,CLA
>>> org.apache.hadoop.mapred.YarnChild 10.239.44.34 46837
>>> attempt_1394160253524_0003_m_000001_0 3
>>>
>>>
>>> 1>/var/log/hadoop/yarn/userlogs/application_1394160253524_0003/container_1394160253524_0003_01_000003/stdout
>>>
>>> 2>/var/log/hadoop/yarn/userlogs/application_1394160253524_0003/container_1394160253524_0003_01_000003/stderr
>>>
>>> |- 5187 5182 5182 5182 (java) 616 19 2783928320 30028
>>> /usr/java/jdk1.7.0_45/bin/java -Djava.net.preferIPv4Stack=true
>>> -Dhadoop.metrics.log.level=WARN -Xmx2048m -
>>>
>>> Djava.io.tmpdir=/home/hadoop/tmp/nm-local-dir/usercache/hadoop/appcache/application_1394160253524_0003/container_1394160253524_0003_01_000003/tmp
>>> -
>>>
>>> Dlog4j.configuration=container-log4j.properties
>>> -Dyarn.app.container.log.dir=/var/log/hadoop/yarn/userlogs/application_1394160253524_0003/container_1394160253524_0003_01_000003
>>>
>>> -Dyarn.app.container.log.filesize=0 -Dhadoop.root.logger=INFO,CLA
>>> org.apache.hadoop.mapred.YarnChild 10.239.44.34 46837
>>> attempt_1394160253524_0003_m_000001_0 3
>>>
>>> Container killed on request. Exit code is 143
>>>
>>> In the end, the task failed.
>>> Thanks for any help!
>>>
>>
>>
>
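Reading the quoted kill message: the container was requested at the default 1 GB (no mapreduce.map.memory.mb set), and YARN caps a container's virtual memory at container size times yarn.nodemanager.vmem-pmem-ratio (default 2.1), i.e. 2.1 GB, while the -Xmx2048m JVM mapped about 2.7 GB of virtual memory, hence the exit-143 kill. One way out, sketched as an illustrative yarn-site.xml fragment (values are examples, not a recommendation), is to raise the ratio or, as a last resort, disable the vmem check:

```xml
<!-- yarn-site.xml: illustrative values. Raising the ratio (or disabling
     the vmem check entirely) stops the "running beyond virtual memory
     limits" kill; default ratio 2.1 caps a 1 GB container at 2.1 GB vmem. -->
<property>
  <name>yarn.nodemanager.vmem-pmem-ratio</name>
  <value>4</value>
</property>
<!-- or, if the ratio alone is not enough: -->
<property>
  <name>yarn.nodemanager.vmem-check-enabled</name>
  <value>false</value>
</property>
```

NodeManagers must be restarted for these settings to take effect.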


-- 
*Thanks & Regards*

Unmesha Sreeveni U.B
Junior Developer

http://www.unmeshasreeveni.blogspot.in/

Re: GC overhead limit exceeded

Posted by unmesha sreeveni <un...@gmail.com>.
Try to increase the memory for datanode and see.This need to restart hadoop
export HADOOP_DATANODE_OPTS="-Xmx10g"
This will set the heap to 10gb
You can also add this in start of hadoop-env.sh file


On Tue, Mar 11, 2014 at 9:02 AM, haihong lu <un...@gmail.com> wrote:

> i have tried both of the methods you side, but the problem still exists.
> Thanks all the same. by the way, my hadoop version is 2.2.0, so the
> parameter  "mapreduce.map.memory.mb =3072" added to mapred-site.xml maybe
> has no effect. I have looked for this parameter in the document of hadoop,
> but did not found it.
>
>
> On Fri, Mar 7, 2014 at 4:57 PM, shashwat shriparv <
> dwivedishashwat@gmail.com> wrote:
>
>> Check this out
>>
>>
>> http://ask.gopivotal.com/hc/en-us/articles/201850408-Namenode-fails-with-java-lang-OutOfMemoryError-GC-overhead-limit-exceeded
>>
>>
>>
>> * Warm Regards_**∞_*
>> * Shashwat Shriparv*
>>  [image: http://www.linkedin.com/pub/shashwat-shriparv/19/214/2a9]<http://www.linkedin.com/pub/shashwat-shriparv/19/214/2a9>[image:
>> https://twitter.com/shriparv] <https://twitter.com/shriparv>[image:
>> https://www.facebook.com/shriparv] <https://www.facebook.com/shriparv>[image:
>> http://google.com/+ShashwatShriparv]<http://google.com/+ShashwatShriparv>[image:
>> http://www.youtube.com/user/sShriparv/videos]<http://www.youtube.com/user/sShriparv/videos>[image:
>> http://profile.yahoo.com/SWXSTW3DVSDTF2HHSRM47AV6DI/]<sh...@yahoo.com>
>>
>>
>>
>> On Fri, Mar 7, 2014 at 12:04 PM, haihong lu <un...@gmail.com> wrote:
>>
>>> Hi:
>>>
>>>      i have a problem when run Hibench with hadoop-2.2.0, the wrong
>>> message list as below
>>>
>>> 14/03/07 13:54:53 INFO mapreduce.Job:  map 19% reduce 0%
>>>  14/03/07 13:54:54 INFO mapreduce.Job:  map 21% reduce 0%
>>> 14/03/07 14:00:26 INFO mapreduce.Job: Task Id :
>>> attempt_1394160253524_0010_m_000020_0, Status : FAILED
>>> Error: GC overhead limit exceeded
>>> 14/03/07 14:00:27 INFO mapreduce.Job:  map 20% reduce 0%
>>> 14/03/07 14:00:40 INFO mapreduce.Job: Task Id :
>>> attempt_1394160253524_0010_m_000008_0, Status : FAILED
>>> Error: GC overhead limit exceeded
>>> 14/03/07 14:00:41 INFO mapreduce.Job:  map 19% reduce 0%
>>> 14/03/07 14:00:59 INFO mapreduce.Job:  map 20% reduce 0%
>>> 14/03/07 14:00:59 INFO mapreduce.Job: Task Id :
>>> attempt_1394160253524_0010_m_000015_0, Status : FAILED
>>> Error: GC overhead limit exceeded
>>> 14/03/07 14:01:00 INFO mapreduce.Job:  map 19% reduce 0%
>>> 14/03/07 14:01:03 INFO mapreduce.Job: Task Id :
>>> attempt_1394160253524_0010_m_000023_0, Status : FAILED
>>> Error: GC overhead limit exceeded
>>> 14/03/07 14:01:11 INFO mapreduce.Job: Task Id :
>>> attempt_1394160253524_0010_m_000026_0, Status : FAILED
>>> Error: GC overhead limit exceeded
>>> 14/03/07 14:01:35 INFO mapreduce.Job:  map 20% reduce 0%
>>> 14/03/07 14:01:35 INFO mapreduce.Job: Task Id :
>>> attempt_1394160253524_0010_m_000019_0, Status : FAILED
>>> Error: GC overhead limit exceeded
>>> 14/03/07 14:01:36 INFO mapreduce.Job:  map 19% reduce 0%
>>> 14/03/07 14:01:43 INFO mapreduce.Job: Task Id :
>>> attempt_1394160253524_0010_m_000007_0, Status : FAILED
>>> Error: GC overhead limit exceeded
>>> 14/03/07 14:02:00 INFO mapreduce.Job: Task Id :
>>> attempt_1394160253524_0010_m_000000_0, Status : FAILED
>>> Error: GC overhead limit exceeded
>>> 14/03/07 14:02:01 INFO mapreduce.Job:  map 18% reduce 0%
>>> 14/03/07 14:02:23 INFO mapreduce.Job: Task Id :
>>> attempt_1394160253524_0010_m_000021_0, Status : FAILED
>>> Error: Java heap space
>>> 14/03/07 14:02:24 INFO mapreduce.Job:  map 17% reduce 0%
>>> 14/03/07 14:02:31 INFO mapreduce.Job:  map 18% reduce 0%
>>> 14/03/07 14:02:33 INFO mapreduce.Job: Task Id :
>>> attempt_1394160253524_0010_m_000029_0, Status : FAILED
>>> Error: GC overhead limit exceeded
>>> 14/03/07 14:02:34 INFO mapreduce.Job:  map 17% reduce 0%
>>> 14/03/07 14:02:38 INFO mapreduce.Job: Task Id :
>>> attempt_1394160253524_0010_m_000010_0, Status : FAILED
>>> Error: GC overhead limit exceeded
>>> 14/03/07 14:02:41 INFO mapreduce.Job: Task Id :
>>> attempt_1394160253524_0010_m_000018_0, Status : FAILED
>>> Error: GC overhead limit exceeded
>>> 14/03/07 14:02:43 INFO mapreduce.Job: Task Id :
>>> attempt_1394160253524_0010_m_000014_0, Status : FAILED
>>> Error: GC overhead limit exceeded
>>> 14/03/07 14:02:47 INFO mapreduce.Job: Task Id :
>>> attempt_1394160253524_0010_m_000028_0, Status : FAILED
>>> Error: Java heap space
>>> 14/03/07 14:02:50 INFO mapreduce.Job: Task Id :
>>> attempt_1394160253524_0010_m_000002_0, Status : FAILED
>>> Error: GC overhead limit exceeded
>>> 14/03/07 14:02:51 INFO mapreduce.Job:  map 16% reduce 0%
>>> 14/03/07 14:02:51 INFO mapreduce.Job: Task Id :
>>> attempt_1394160253524_0010_m_000005_0, Status : FAILED
>>> Error: GC overhead limit exceeded
>>> 14/03/07 14:02:52 INFO mapreduce.Job:  map 15% reduce 0%
>>> 14/03/07 14:02:55 INFO mapreduce.Job: Task Id :
>>> attempt_1394160253524_0010_m_000006_0, Status : FAILED
>>> Error: GC overhead limit exceeded
>>> 14/03/07 14:02:57 INFO mapreduce.Job: Task Id :
>>> attempt_1394160253524_0010_m_000027_0, Status : FAILED
>>> Error: GC overhead limit exceeded
>>> 14/03/07 14:02:58 INFO mapreduce.Job:  map 14% reduce 0%
>>> 14/03/07 14:03:04 INFO mapreduce.Job: Task Id :
>>> attempt_1394160253524_0010_m_000009_0, Status : FAILED
>>> Error: GC overhead limit exceeded
>>> 14/03/07 14:03:05 INFO mapreduce.Job: Task Id :
>>> attempt_1394160253524_0010_m_000017_0, Status : FAILED
>>> Error: GC overhead limit exceeded
>>> 14/03/07 14:03:05 INFO mapreduce.Job: Task Id :
>>> attempt_1394160253524_0010_m_000022_0, Status : FAILED
>>> Error: GC overhead limit exceeded
>>> 14/03/07 14:03:06 INFO mapreduce.Job:  map 12% reduce 0%
>>> 14/03/07 14:03:10 INFO mapreduce.Job: Task Id :
>>> attempt_1394160253524_0010_m_000001_0, Status : FAILED
>>> Error: GC overhead limit exceeded
>>> 14/03/07 14:03:11 INFO mapreduce.Job:  map 13% reduce 0%
>>> 14/03/07 14:03:11 INFO mapreduce.Job: Task Id :
>>> attempt_1394160253524_0010_m_000024_0, Status : FAILED
>>>
>>> and then i add a parameter "mapred.child.java.opts" to the file
>>> "mapred-site.xml",
>>>   <property>
>>>         <name>mapred.child.java.opts</name>
>>>         <value>-Xmx1024m</value>
>>>   </property>
>>> then another error occurs as below
>>>
>>> 14/03/07 11:21:51 INFO mapreduce.Job:  map 0% reduce 0%
>>> 14/03/07 11:21:59 INFO mapreduce.Job: Task Id :
>>> attempt_1394160253524_0003_m_000002_0, Status : FAILED
>>> Container [pid=5592,containerID=container_1394160253524_0003_01_000004]
>>> is running beyond virtual memory limits. Current usage: 112.6 MB of 1 GB
>>> physical memory used; 2.7 GB of
>>>
>>> 2.1 GB virtual memory used. Killing container.
>>> Dump of the process-tree for container_1394160253524_0003_01_000004 :
>>> |- PID PPID PGRPID SESSID CMD_NAME USER_MODE_TIME(MILLIS)
>>> SYSTEM_TIME(MILLIS) VMEM_USAGE(BYTES) RSSMEM_USAGE(PAGES) FULL_CMD_LINE
>>>  |- 5598 5592 5592 5592 (java) 563 14 2778632192 28520
>>> /usr/java/jdk1.7.0_45/bin/java -Djava.net.preferIPv4Stack=true
>>> -Dhadoop.metrics.log.level=WARN -Xmx2048m -
>>>
>>> Djava.io.tmpdir=/home/hadoop/tmp/nm-local-dir/usercache/hadoop/appcache/application_1394160253524_0003/container_1394160253524_0003_01_000004/tmp
>>> -
>>>
>>> Dlog4j.configuration=container-log4j.properties
>>> -Dyarn.app.container.log.dir=/var/log/hadoop/yarn/userlogs/application_1394160253524_0003/container_1394160253524_0003_01_000004
>>>
>>> -Dyarn.app.container.log.filesize=0 -Dhadoop.root.logger=INFO,CLA
>>> org.apache.hadoop.mapred.YarnChild 10.239.44.34 46837
>>> attempt_1394160253524_0003_m_000002_0 4
>>> |- 5592 4562 5592 5592 (bash) 0 0 108650496 300 /bin/bash -c
>>> /usr/java/jdk1.7.0_45/bin/java -Djava.net.preferIPv4Stack=true
>>> -Dhadoop.metrics.log.level=WARN  -Xmx2048m -
>>>
>>> Djava.io.tmpdir=/home/hadoop/tmp/nm-local-dir/usercache/hadoop/appcache/application_1394160253524_0003/container_1394160253524_0003_01_000004/tmp
>>> -
>>>
>>> Dlog4j.configuration=container-log4j.properties
>>> -Dyarn.app.container.log.dir=/var/log/hadoop/yarn/userlogs/application_1394160253524_0003/container_1394160253524_0003_01_000004
>>>
>>> -Dyarn.app.container.log.filesize=0 -Dhadoop.root.logger=INFO,CLA
>>> org.apache.hadoop.mapred.YarnChild 10.239.44.34 46837
>>> attempt_1394160253524_0003_m_000002_0 4
>>>
>>>
>>> 1>/var/log/hadoop/yarn/userlogs/application_1394160253524_0003/container_1394160253524_0003_01_000004/stdout
>>>
>>> 2>/var/log/hadoop/yarn/userlogs/application_1394160253524_0003/container_1394160253524_0003_01_000004/stderr
>>>
>>>
>>> Container killed on request. Exit code is 143
>>> 14/03/07 11:22:02 INFO mapreduce.Job: Task Id :
>>> attempt_1394160253524_0003_m_000001_0, Status : FAILED
>>> Container [pid=5182,containerID=container_1394160253524_0003_01_000003]
>>> is running beyond virtual memory limits. Current usage: 118.5 MB of 1 GB
>>> physical memory used; 2.7 GB of
>>>
>>> 2.1 GB virtual memory used. Killing container.
>>> Dump of the process-tree for container_1394160253524_0003_01_000003 :
>>> |- PID PPID PGRPID SESSID CMD_NAME USER_MODE_TIME(MILLIS)
>>> SYSTEM_TIME(MILLIS) VMEM_USAGE(BYTES) RSSMEM_USAGE(PAGES) FULL_CMD_LINE
>>>  |- 5182 4313 5182 5182 (bash) 0 0 108650496 303 /bin/bash -c
>>> /usr/java/jdk1.7.0_45/bin/java -Djava.net.preferIPv4Stack=true
>>> -Dhadoop.metrics.log.level=WARN  -Xmx2048m -
>>>
>>> Djava.io.tmpdir=/home/hadoop/tmp/nm-local-dir/usercache/hadoop/appcache/application_1394160253524_0003/container_1394160253524_0003_01_000003/tmp
>>> -
>>>
>>> Dlog4j.configuration=container-log4j.properties
>>> -Dyarn.app.container.log.dir=/var/log/hadoop/yarn/userlogs/application_1394160253524_0003/container_1394160253524_0003_01_000003
>>>
>>> -Dyarn.app.container.log.filesize=0 -Dhadoop.root.logger=INFO,CLA
>>> org.apache.hadoop.mapred.YarnChild 10.239.44.34 46837
>>> attempt_1394160253524_0003_m_000001_0 3
>>>
>>>
>>> 1>/var/log/hadoop/yarn/userlogs/application_1394160253524_0003/container_1394160253524_0003_01_000003/stdout
>>>
>>> 2>/var/log/hadoop/yarn/userlogs/application_1394160253524_0003/container_1394160253524_0003_01_000003/stderr
>>>
>>> |- 5187 5182 5182 5182 (java) 616 19 2783928320 30028
>>> /usr/java/jdk1.7.0_45/bin/java -Djava.net.preferIPv4Stack=true
>>> -Dhadoop.metrics.log.level=WARN -Xmx2048m -
>>>
>>> Djava.io.tmpdir=/home/hadoop/tmp/nm-local-dir/usercache/hadoop/appcache/application_1394160253524_0003/container_1394160253524_0003_01_000003/tmp
>>> -
>>>
>>> Dlog4j.configuration=container-log4j.properties
>>> -Dyarn.app.container.log.dir=/var/log/hadoop/yarn/userlogs/application_1394160253524_0003/container_1394160253524_0003_01_000003
>>>
>>> -Dyarn.app.container.log.filesize=0 -Dhadoop.root.logger=INFO,CLA
>>> org.apache.hadoop.mapred.YarnChild 10.239.44.34 46837
>>> attempt_1394160253524_0003_m_000001_0 3
>>>
>>> Container killed on request. Exit code is 143
>>>
>>> at last, the task failed.
>>> Thanks for any help!
>>>
>>
>>
>


-- 
*Thanks & Regards*

Unmesha Sreeveni U.B
Junior Developer

http://www.unmeshasreeveni.blogspot.in/

Re: GC overhead limit exceeded

Posted by unmesha sreeveni <un...@gmail.com>.
Try to increase the memory for datanode and see.This need to restart hadoop
export HADOOP_DATANODE_OPTS="-Xmx10g"
This will set the heap to 10gb
You can also add this in start of hadoop-env.sh file


On Tue, Mar 11, 2014 at 9:02 AM, haihong lu <un...@gmail.com> wrote:

> i have tried both of the methods you side, but the problem still exists.
> Thanks all the same. by the way, my hadoop version is 2.2.0, so the
> parameter  "mapreduce.map.memory.mb =3072" added to mapred-site.xml maybe
> has no effect. I have looked for this parameter in the document of hadoop,
> but did not found it.
>
>
> On Fri, Mar 7, 2014 at 4:57 PM, shashwat shriparv <
> dwivedishashwat@gmail.com> wrote:
>
>> Check this out
>>
>>
>> http://ask.gopivotal.com/hc/en-us/articles/201850408-Namenode-fails-with-java-lang-OutOfMemoryError-GC-overhead-limit-exceeded
>>
>>
>>
>> * Warm Regards_**∞_*
>> * Shashwat Shriparv*
>>  [image: http://www.linkedin.com/pub/shashwat-shriparv/19/214/2a9]<http://www.linkedin.com/pub/shashwat-shriparv/19/214/2a9>[image:
>> https://twitter.com/shriparv] <https://twitter.com/shriparv>[image:
>> https://www.facebook.com/shriparv] <https://www.facebook.com/shriparv>[image:
>> http://google.com/+ShashwatShriparv]<http://google.com/+ShashwatShriparv>[image:
>> http://www.youtube.com/user/sShriparv/videos]<http://www.youtube.com/user/sShriparv/videos>[image:
>> http://profile.yahoo.com/SWXSTW3DVSDTF2HHSRM47AV6DI/]<sh...@yahoo.com>
>>
>>
>>
>> On Fri, Mar 7, 2014 at 12:04 PM, haihong lu <un...@gmail.com> wrote:
>>
>>> Hi:
>>>
>>>      i have a problem when run Hibench with hadoop-2.2.0, the wrong
>>> message list as below
>>>
>>> 14/03/07 13:54:53 INFO mapreduce.Job:  map 19% reduce 0%
>>>  14/03/07 13:54:54 INFO mapreduce.Job:  map 21% reduce 0%
>>> 14/03/07 14:00:26 INFO mapreduce.Job: Task Id :
>>> attempt_1394160253524_0010_m_000020_0, Status : FAILED
>>> Error: GC overhead limit exceeded
>>> 14/03/07 14:00:27 INFO mapreduce.Job:  map 20% reduce 0%
>>> 14/03/07 14:00:40 INFO mapreduce.Job: Task Id :
>>> attempt_1394160253524_0010_m_000008_0, Status : FAILED
>>> Error: GC overhead limit exceeded
>>> 14/03/07 14:00:41 INFO mapreduce.Job:  map 19% reduce 0%
>>> 14/03/07 14:00:59 INFO mapreduce.Job:  map 20% reduce 0%
>>> 14/03/07 14:00:59 INFO mapreduce.Job: Task Id :
>>> attempt_1394160253524_0010_m_000015_0, Status : FAILED
>>> Error: GC overhead limit exceeded
>>> 14/03/07 14:01:00 INFO mapreduce.Job:  map 19% reduce 0%
>>> 14/03/07 14:01:03 INFO mapreduce.Job: Task Id :
>>> attempt_1394160253524_0010_m_000023_0, Status : FAILED
>>> Error: GC overhead limit exceeded
>>> 14/03/07 14:01:11 INFO mapreduce.Job: Task Id :
>>> attempt_1394160253524_0010_m_000026_0, Status : FAILED
>>> Error: GC overhead limit exceeded
>>> 14/03/07 14:01:35 INFO mapreduce.Job:  map 20% reduce 0%
>>> 14/03/07 14:01:35 INFO mapreduce.Job: Task Id :
>>> attempt_1394160253524_0010_m_000019_0, Status : FAILED
>>> Error: GC overhead limit exceeded
>>> 14/03/07 14:01:36 INFO mapreduce.Job:  map 19% reduce 0%
>>> 14/03/07 14:01:43 INFO mapreduce.Job: Task Id :
>>> attempt_1394160253524_0010_m_000007_0, Status : FAILED
>>> Error: GC overhead limit exceeded
>>> 14/03/07 14:02:00 INFO mapreduce.Job: Task Id :
>>> attempt_1394160253524_0010_m_000000_0, Status : FAILED
>>> Error: GC overhead limit exceeded
>>> 14/03/07 14:02:01 INFO mapreduce.Job:  map 18% reduce 0%
>>> 14/03/07 14:02:23 INFO mapreduce.Job: Task Id :
>>> attempt_1394160253524_0010_m_000021_0, Status : FAILED
>>> Error: Java heap space
>>> 14/03/07 14:02:24 INFO mapreduce.Job:  map 17% reduce 0%
>>> 14/03/07 14:02:31 INFO mapreduce.Job:  map 18% reduce 0%
>>> 14/03/07 14:02:33 INFO mapreduce.Job: Task Id :
>>> attempt_1394160253524_0010_m_000029_0, Status : FAILED
>>> Error: GC overhead limit exceeded
>>> 14/03/07 14:02:34 INFO mapreduce.Job:  map 17% reduce 0%
>>> 14/03/07 14:02:38 INFO mapreduce.Job: Task Id :
>>> attempt_1394160253524_0010_m_000010_0, Status : FAILED
>>> Error: GC overhead limit exceeded
>>> 14/03/07 14:02:41 INFO mapreduce.Job: Task Id :
>>> attempt_1394160253524_0010_m_000018_0, Status : FAILED
>>> Error: GC overhead limit exceeded
>>> 14/03/07 14:02:43 INFO mapreduce.Job: Task Id :
>>> attempt_1394160253524_0010_m_000014_0, Status : FAILED
>>> Error: GC overhead limit exceeded
>>> 14/03/07 14:02:47 INFO mapreduce.Job: Task Id :
>>> attempt_1394160253524_0010_m_000028_0, Status : FAILED
>>> Error: Java heap space
>>> 14/03/07 14:02:50 INFO mapreduce.Job: Task Id :
>>> attempt_1394160253524_0010_m_000002_0, Status : FAILED
>>> Error: GC overhead limit exceeded
>>> 14/03/07 14:02:51 INFO mapreduce.Job:  map 16% reduce 0%
>>> 14/03/07 14:02:51 INFO mapreduce.Job: Task Id :
>>> attempt_1394160253524_0010_m_000005_0, Status : FAILED
>>> Error: GC overhead limit exceeded
>>> 14/03/07 14:02:52 INFO mapreduce.Job:  map 15% reduce 0%
>>> 14/03/07 14:02:55 INFO mapreduce.Job: Task Id :
>>> attempt_1394160253524_0010_m_000006_0, Status : FAILED
>>> Error: GC overhead limit exceeded
>>> 14/03/07 14:02:57 INFO mapreduce.Job: Task Id :
>>> attempt_1394160253524_0010_m_000027_0, Status : FAILED
>>> Error: GC overhead limit exceeded
>>> 14/03/07 14:02:58 INFO mapreduce.Job:  map 14% reduce 0%
>>> 14/03/07 14:03:04 INFO mapreduce.Job: Task Id :
>>> attempt_1394160253524_0010_m_000009_0, Status : FAILED
>>> Error: GC overhead limit exceeded
>>> 14/03/07 14:03:05 INFO mapreduce.Job: Task Id :
>>> attempt_1394160253524_0010_m_000017_0, Status : FAILED
>>> Error: GC overhead limit exceeded
>>> 14/03/07 14:03:05 INFO mapreduce.Job: Task Id :
>>> attempt_1394160253524_0010_m_000022_0, Status : FAILED
>>> Error: GC overhead limit exceeded
>>> 14/03/07 14:03:06 INFO mapreduce.Job:  map 12% reduce 0%
>>> 14/03/07 14:03:10 INFO mapreduce.Job: Task Id :
>>> attempt_1394160253524_0010_m_000001_0, Status : FAILED
>>> Error: GC overhead limit exceeded
>>> 14/03/07 14:03:11 INFO mapreduce.Job:  map 13% reduce 0%
>>> 14/03/07 14:03:11 INFO mapreduce.Job: Task Id :
>>> attempt_1394160253524_0010_m_000024_0, Status : FAILED
>>>
>>> I then added the parameter "mapred.child.java.opts" to the file
>>> "mapred-site.xml":
>>>   <property>
>>>         <name>mapred.child.java.opts</name>
>>>         <value>-Xmx1024m</value>
>>>   </property>
>>> then another error occurred, as shown below:
>>>
>>> 14/03/07 11:21:51 INFO mapreduce.Job:  map 0% reduce 0%
>>> 14/03/07 11:21:59 INFO mapreduce.Job: Task Id :
>>> attempt_1394160253524_0003_m_000002_0, Status : FAILED
>>> Container [pid=5592,containerID=container_1394160253524_0003_01_000004]
>>> is running beyond virtual memory limits. Current usage: 112.6 MB of 1 GB
>>> physical memory used; 2.7 GB of
>>>
>>> 2.1 GB virtual memory used. Killing container.
>>> Dump of the process-tree for container_1394160253524_0003_01_000004 :
>>> |- PID PPID PGRPID SESSID CMD_NAME USER_MODE_TIME(MILLIS)
>>> SYSTEM_TIME(MILLIS) VMEM_USAGE(BYTES) RSSMEM_USAGE(PAGES) FULL_CMD_LINE
>>>  |- 5598 5592 5592 5592 (java) 563 14 2778632192 28520
>>> /usr/java/jdk1.7.0_45/bin/java -Djava.net.preferIPv4Stack=true
>>> -Dhadoop.metrics.log.level=WARN -Xmx2048m -
>>>
>>> Djava.io.tmpdir=/home/hadoop/tmp/nm-local-dir/usercache/hadoop/appcache/application_1394160253524_0003/container_1394160253524_0003_01_000004/tmp
>>> -
>>>
>>> Dlog4j.configuration=container-log4j.properties
>>> -Dyarn.app.container.log.dir=/var/log/hadoop/yarn/userlogs/application_1394160253524_0003/container_1394160253524_0003_01_000004
>>>
>>> -Dyarn.app.container.log.filesize=0 -Dhadoop.root.logger=INFO,CLA
>>> org.apache.hadoop.mapred.YarnChild 10.239.44.34 46837
>>> attempt_1394160253524_0003_m_000002_0 4
>>> |- 5592 4562 5592 5592 (bash) 0 0 108650496 300 /bin/bash -c
>>> /usr/java/jdk1.7.0_45/bin/java -Djava.net.preferIPv4Stack=true
>>> -Dhadoop.metrics.log.level=WARN  -Xmx2048m -
>>>
>>> Djava.io.tmpdir=/home/hadoop/tmp/nm-local-dir/usercache/hadoop/appcache/application_1394160253524_0003/container_1394160253524_0003_01_000004/tmp
>>> -
>>>
>>> Dlog4j.configuration=container-log4j.properties
>>> -Dyarn.app.container.log.dir=/var/log/hadoop/yarn/userlogs/application_1394160253524_0003/container_1394160253524_0003_01_000004
>>>
>>> -Dyarn.app.container.log.filesize=0 -Dhadoop.root.logger=INFO,CLA
>>> org.apache.hadoop.mapred.YarnChild 10.239.44.34 46837
>>> attempt_1394160253524_0003_m_000002_0 4
>>>
>>>
>>> 1>/var/log/hadoop/yarn/userlogs/application_1394160253524_0003/container_1394160253524_0003_01_000004/stdout
>>>
>>> 2>/var/log/hadoop/yarn/userlogs/application_1394160253524_0003/container_1394160253524_0003_01_000004/stderr
>>>
>>>
>>> Container killed on request. Exit code is 143
>>> 14/03/07 11:22:02 INFO mapreduce.Job: Task Id :
>>> attempt_1394160253524_0003_m_000001_0, Status : FAILED
>>> Container [pid=5182,containerID=container_1394160253524_0003_01_000003]
>>> is running beyond virtual memory limits. Current usage: 118.5 MB of 1 GB
>>> physical memory used; 2.7 GB of
>>>
>>> 2.1 GB virtual memory used. Killing container.
>>> Dump of the process-tree for container_1394160253524_0003_01_000003 :
>>> |- PID PPID PGRPID SESSID CMD_NAME USER_MODE_TIME(MILLIS)
>>> SYSTEM_TIME(MILLIS) VMEM_USAGE(BYTES) RSSMEM_USAGE(PAGES) FULL_CMD_LINE
>>>  |- 5182 4313 5182 5182 (bash) 0 0 108650496 303 /bin/bash -c
>>> /usr/java/jdk1.7.0_45/bin/java -Djava.net.preferIPv4Stack=true
>>> -Dhadoop.metrics.log.level=WARN  -Xmx2048m -
>>>
>>> Djava.io.tmpdir=/home/hadoop/tmp/nm-local-dir/usercache/hadoop/appcache/application_1394160253524_0003/container_1394160253524_0003_01_000003/tmp
>>> -
>>>
>>> Dlog4j.configuration=container-log4j.properties
>>> -Dyarn.app.container.log.dir=/var/log/hadoop/yarn/userlogs/application_1394160253524_0003/container_1394160253524_0003_01_000003
>>>
>>> -Dyarn.app.container.log.filesize=0 -Dhadoop.root.logger=INFO,CLA
>>> org.apache.hadoop.mapred.YarnChild 10.239.44.34 46837
>>> attempt_1394160253524_0003_m_000001_0 3
>>>
>>>
>>> 1>/var/log/hadoop/yarn/userlogs/application_1394160253524_0003/container_1394160253524_0003_01_000003/stdout
>>>
>>> 2>/var/log/hadoop/yarn/userlogs/application_1394160253524_0003/container_1394160253524_0003_01_000003/stderr
>>>
>>> |- 5187 5182 5182 5182 (java) 616 19 2783928320 30028
>>> /usr/java/jdk1.7.0_45/bin/java -Djava.net.preferIPv4Stack=true
>>> -Dhadoop.metrics.log.level=WARN -Xmx2048m -
>>>
>>> Djava.io.tmpdir=/home/hadoop/tmp/nm-local-dir/usercache/hadoop/appcache/application_1394160253524_0003/container_1394160253524_0003_01_000003/tmp
>>> -
>>>
>>> Dlog4j.configuration=container-log4j.properties
>>> -Dyarn.app.container.log.dir=/var/log/hadoop/yarn/userlogs/application_1394160253524_0003/container_1394160253524_0003_01_000003
>>>
>>> -Dyarn.app.container.log.filesize=0 -Dhadoop.root.logger=INFO,CLA
>>> org.apache.hadoop.mapred.YarnChild 10.239.44.34 46837
>>> attempt_1394160253524_0003_m_000001_0 3
>>>
>>> Container killed on request. Exit code is 143
>>>
>>> In the end, the task failed.
>>> Thanks for any help!
>>>
>>
>>
>


-- 
*Thanks & Regards*

Unmesha Sreeveni U.B
Junior Developer

http://www.unmeshasreeveni.blogspot.in/

Re: GC overhead limit exceeded

Posted by haihong lu <un...@gmail.com>.
I have tried both of the methods you suggested, but the problem still exists.
Thanks all the same. By the way, my Hadoop version is 2.2.0, so the
parameter "mapreduce.map.memory.mb=3072" added to mapred-site.xml may
have no effect. I have looked for this parameter in the Hadoop
documentation, but did not find it.
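
For reference, the arithmetic behind the "running beyond virtual memory
limits" kill in the log: the container was sized at 1 GB, YARN multiplies
that by yarn.nodemanager.vmem-pmem-ratio (2.1 by default) to get the
2.1 GB virtual-memory ceiling, and the child JVM was launched with
-Xmx2048m, so its address space overshot that ceiling. A sketch of
mapred-site.xml settings that keep the container size and the JVM heap
consistent (property names are from the Hadoop 2.x mapred-default.xml;
the values below are illustrative, not tuned):

  <property>
        <name>mapreduce.map.memory.mb</name>
        <value>3072</value>
  </property>
  <property>
        <name>mapreduce.map.java.opts</name>
        <value>-Xmx2560m</value>
  </property>
  <property>
        <name>mapreduce.reduce.memory.mb</name>
        <value>3072</value>
  </property>
  <property>
        <name>mapreduce.reduce.java.opts</name>
        <value>-Xmx2560m</value>
  </property>

If the virtual-memory check itself is too strict for the JVM in use, the
ratio can be raised in yarn-site.xml instead:

  <property>
        <name>yarn.nodemanager.vmem-pmem-ratio</name>
        <value>4</value>
  </property>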


On Fri, Mar 7, 2014 at 4:57 PM, shashwat shriparv <dwivedishashwat@gmail.com
> wrote:

> Check this out
>
>
> http://ask.gopivotal.com/hc/en-us/articles/201850408-Namenode-fails-with-java-lang-OutOfMemoryError-GC-overhead-limit-exceeded
>
>
>
> * Warm Regards_**∞_*
> * Shashwat Shriparv*
>
>
>
> On Fri, Mar 7, 2014 at 12:04 PM, haihong lu <un...@gmail.com> wrote:
>
>> Hi:
>>
>>      I have a problem when running HiBench with hadoop-2.2.0; the error
>> messages are listed below
>>
>> 14/03/07 13:54:53 INFO mapreduce.Job:  map 19% reduce 0%
>>  14/03/07 13:54:54 INFO mapreduce.Job:  map 21% reduce 0%
>> 14/03/07 14:00:26 INFO mapreduce.Job: Task Id :
>> attempt_1394160253524_0010_m_000020_0, Status : FAILED
>> Error: GC overhead limit exceeded
>> 14/03/07 14:00:27 INFO mapreduce.Job:  map 20% reduce 0%
>> 14/03/07 14:00:40 INFO mapreduce.Job: Task Id :
>> attempt_1394160253524_0010_m_000008_0, Status : FAILED
>> Error: GC overhead limit exceeded
>> 14/03/07 14:00:41 INFO mapreduce.Job:  map 19% reduce 0%
>> 14/03/07 14:00:59 INFO mapreduce.Job:  map 20% reduce 0%
>> 14/03/07 14:00:59 INFO mapreduce.Job: Task Id :
>> attempt_1394160253524_0010_m_000015_0, Status : FAILED
>> Error: GC overhead limit exceeded
>> 14/03/07 14:01:00 INFO mapreduce.Job:  map 19% reduce 0%
>> 14/03/07 14:01:03 INFO mapreduce.Job: Task Id :
>> attempt_1394160253524_0010_m_000023_0, Status : FAILED
>> Error: GC overhead limit exceeded
>> 14/03/07 14:01:11 INFO mapreduce.Job: Task Id :
>> attempt_1394160253524_0010_m_000026_0, Status : FAILED
>> Error: GC overhead limit exceeded
>> 14/03/07 14:01:35 INFO mapreduce.Job:  map 20% reduce 0%
>> 14/03/07 14:01:35 INFO mapreduce.Job: Task Id :
>> attempt_1394160253524_0010_m_000019_0, Status : FAILED
>> Error: GC overhead limit exceeded
>> 14/03/07 14:01:36 INFO mapreduce.Job:  map 19% reduce 0%
>> 14/03/07 14:01:43 INFO mapreduce.Job: Task Id :
>> attempt_1394160253524_0010_m_000007_0, Status : FAILED
>> Error: GC overhead limit exceeded
>> 14/03/07 14:02:00 INFO mapreduce.Job: Task Id :
>> attempt_1394160253524_0010_m_000000_0, Status : FAILED
>> Error: GC overhead limit exceeded
>> 14/03/07 14:02:01 INFO mapreduce.Job:  map 18% reduce 0%
>> 14/03/07 14:02:23 INFO mapreduce.Job: Task Id :
>> attempt_1394160253524_0010_m_000021_0, Status : FAILED
>> Error: Java heap space
>> 14/03/07 14:02:24 INFO mapreduce.Job:  map 17% reduce 0%
>> 14/03/07 14:02:31 INFO mapreduce.Job:  map 18% reduce 0%
>> 14/03/07 14:02:33 INFO mapreduce.Job: Task Id :
>> attempt_1394160253524_0010_m_000029_0, Status : FAILED
>> Error: GC overhead limit exceeded
>> 14/03/07 14:02:34 INFO mapreduce.Job:  map 17% reduce 0%
>> 14/03/07 14:02:38 INFO mapreduce.Job: Task Id :
>> attempt_1394160253524_0010_m_000010_0, Status : FAILED
>> Error: GC overhead limit exceeded
>> 14/03/07 14:02:41 INFO mapreduce.Job: Task Id :
>> attempt_1394160253524_0010_m_000018_0, Status : FAILED
>> Error: GC overhead limit exceeded
>> 14/03/07 14:02:43 INFO mapreduce.Job: Task Id :
>> attempt_1394160253524_0010_m_000014_0, Status : FAILED
>> Error: GC overhead limit exceeded
>> 14/03/07 14:02:47 INFO mapreduce.Job: Task Id :
>> attempt_1394160253524_0010_m_000028_0, Status : FAILED
>> Error: Java heap space
>> 14/03/07 14:02:50 INFO mapreduce.Job: Task Id :
>> attempt_1394160253524_0010_m_000002_0, Status : FAILED
>> Error: GC overhead limit exceeded
>> 14/03/07 14:02:51 INFO mapreduce.Job:  map 16% reduce 0%
>> 14/03/07 14:02:51 INFO mapreduce.Job: Task Id :
>> attempt_1394160253524_0010_m_000005_0, Status : FAILED
>> Error: GC overhead limit exceeded
>> 14/03/07 14:02:52 INFO mapreduce.Job:  map 15% reduce 0%
>> 14/03/07 14:02:55 INFO mapreduce.Job: Task Id :
>> attempt_1394160253524_0010_m_000006_0, Status : FAILED
>> Error: GC overhead limit exceeded
>> 14/03/07 14:02:57 INFO mapreduce.Job: Task Id :
>> attempt_1394160253524_0010_m_000027_0, Status : FAILED
>> Error: GC overhead limit exceeded
>> 14/03/07 14:02:58 INFO mapreduce.Job:  map 14% reduce 0%
>> 14/03/07 14:03:04 INFO mapreduce.Job: Task Id :
>> attempt_1394160253524_0010_m_000009_0, Status : FAILED
>> Error: GC overhead limit exceeded
>> 14/03/07 14:03:05 INFO mapreduce.Job: Task Id :
>> attempt_1394160253524_0010_m_000017_0, Status : FAILED
>> Error: GC overhead limit exceeded
>> 14/03/07 14:03:05 INFO mapreduce.Job: Task Id :
>> attempt_1394160253524_0010_m_000022_0, Status : FAILED
>> Error: GC overhead limit exceeded
>> 14/03/07 14:03:06 INFO mapreduce.Job:  map 12% reduce 0%
>> 14/03/07 14:03:10 INFO mapreduce.Job: Task Id :
>> attempt_1394160253524_0010_m_000001_0, Status : FAILED
>> Error: GC overhead limit exceeded
>> 14/03/07 14:03:11 INFO mapreduce.Job:  map 13% reduce 0%
>> 14/03/07 14:03:11 INFO mapreduce.Job: Task Id :
>> attempt_1394160253524_0010_m_000024_0, Status : FAILED
>>
>> I then added the parameter "mapred.child.java.opts" to the file
>> "mapred-site.xml":
>>   <property>
>>         <name>mapred.child.java.opts</name>
>>         <value>-Xmx1024m</value>
>>   </property>
>> then another error occurred, as shown below:
>>
>> 14/03/07 11:21:51 INFO mapreduce.Job:  map 0% reduce 0%
>> 14/03/07 11:21:59 INFO mapreduce.Job: Task Id :
>> attempt_1394160253524_0003_m_000002_0, Status : FAILED
>> Container [pid=5592,containerID=container_1394160253524_0003_01_000004]
>> is running beyond virtual memory limits. Current usage: 112.6 MB of 1 GB
>> physical memory used; 2.7 GB of
>>
>> 2.1 GB virtual memory used. Killing container.
>> Dump of the process-tree for container_1394160253524_0003_01_000004 :
>> |- PID PPID PGRPID SESSID CMD_NAME USER_MODE_TIME(MILLIS)
>> SYSTEM_TIME(MILLIS) VMEM_USAGE(BYTES) RSSMEM_USAGE(PAGES) FULL_CMD_LINE
>>  |- 5598 5592 5592 5592 (java) 563 14 2778632192 28520
>> /usr/java/jdk1.7.0_45/bin/java -Djava.net.preferIPv4Stack=true
>> -Dhadoop.metrics.log.level=WARN -Xmx2048m -
>>
>> Djava.io.tmpdir=/home/hadoop/tmp/nm-local-dir/usercache/hadoop/appcache/application_1394160253524_0003/container_1394160253524_0003_01_000004/tmp
>> -
>>
>> Dlog4j.configuration=container-log4j.properties
>> -Dyarn.app.container.log.dir=/var/log/hadoop/yarn/userlogs/application_1394160253524_0003/container_1394160253524_0003_01_000004
>>
>> -Dyarn.app.container.log.filesize=0 -Dhadoop.root.logger=INFO,CLA
>> org.apache.hadoop.mapred.YarnChild 10.239.44.34 46837
>> attempt_1394160253524_0003_m_000002_0 4
>> |- 5592 4562 5592 5592 (bash) 0 0 108650496 300 /bin/bash -c
>> /usr/java/jdk1.7.0_45/bin/java -Djava.net.preferIPv4Stack=true
>> -Dhadoop.metrics.log.level=WARN  -Xmx2048m -
>>
>> Djava.io.tmpdir=/home/hadoop/tmp/nm-local-dir/usercache/hadoop/appcache/application_1394160253524_0003/container_1394160253524_0003_01_000004/tmp
>> -
>>
>> Dlog4j.configuration=container-log4j.properties
>> -Dyarn.app.container.log.dir=/var/log/hadoop/yarn/userlogs/application_1394160253524_0003/container_1394160253524_0003_01_000004
>>
>> -Dyarn.app.container.log.filesize=0 -Dhadoop.root.logger=INFO,CLA
>> org.apache.hadoop.mapred.YarnChild 10.239.44.34 46837
>> attempt_1394160253524_0003_m_000002_0 4
>>
>>
>> 1>/var/log/hadoop/yarn/userlogs/application_1394160253524_0003/container_1394160253524_0003_01_000004/stdout
>>
>> 2>/var/log/hadoop/yarn/userlogs/application_1394160253524_0003/container_1394160253524_0003_01_000004/stderr
>>
>>
>> Container killed on request. Exit code is 143
>> 14/03/07 11:22:02 INFO mapreduce.Job: Task Id :
>> attempt_1394160253524_0003_m_000001_0, Status : FAILED
>> Container [pid=5182,containerID=container_1394160253524_0003_01_000003]
>> is running beyond virtual memory limits. Current usage: 118.5 MB of 1 GB
>> physical memory used; 2.7 GB of
>>
>> 2.1 GB virtual memory used. Killing container.
>> Dump of the process-tree for container_1394160253524_0003_01_000003 :
>> |- PID PPID PGRPID SESSID CMD_NAME USER_MODE_TIME(MILLIS)
>> SYSTEM_TIME(MILLIS) VMEM_USAGE(BYTES) RSSMEM_USAGE(PAGES) FULL_CMD_LINE
>>  |- 5182 4313 5182 5182 (bash) 0 0 108650496 303 /bin/bash -c
>> /usr/java/jdk1.7.0_45/bin/java -Djava.net.preferIPv4Stack=true
>> -Dhadoop.metrics.log.level=WARN  -Xmx2048m -
>>
>> Djava.io.tmpdir=/home/hadoop/tmp/nm-local-dir/usercache/hadoop/appcache/application_1394160253524_0003/container_1394160253524_0003_01_000003/tmp
>> -
>>
>> Dlog4j.configuration=container-log4j.properties
>> -Dyarn.app.container.log.dir=/var/log/hadoop/yarn/userlogs/application_1394160253524_0003/container_1394160253524_0003_01_000003
>>
>> -Dyarn.app.container.log.filesize=0 -Dhadoop.root.logger=INFO,CLA
>> org.apache.hadoop.mapred.YarnChild 10.239.44.34 46837
>> attempt_1394160253524_0003_m_000001_0 3
>>
>>
>> 1>/var/log/hadoop/yarn/userlogs/application_1394160253524_0003/container_1394160253524_0003_01_000003/stdout
>>
>> 2>/var/log/hadoop/yarn/userlogs/application_1394160253524_0003/container_1394160253524_0003_01_000003/stderr
>>
>> |- 5187 5182 5182 5182 (java) 616 19 2783928320 30028
>> /usr/java/jdk1.7.0_45/bin/java -Djava.net.preferIPv4Stack=true
>> -Dhadoop.metrics.log.level=WARN -Xmx2048m -
>>
>> Djava.io.tmpdir=/home/hadoop/tmp/nm-local-dir/usercache/hadoop/appcache/application_1394160253524_0003/container_1394160253524_0003_01_000003/tmp
>> -
>>
>> Dlog4j.configuration=container-log4j.properties
>> -Dyarn.app.container.log.dir=/var/log/hadoop/yarn/userlogs/application_1394160253524_0003/container_1394160253524_0003_01_000003
>>
>> -Dyarn.app.container.log.filesize=0 -Dhadoop.root.logger=INFO,CLA
>> org.apache.hadoop.mapred.YarnChild 10.239.44.34 46837
>> attempt_1394160253524_0003_m_000001_0 3
>>
>> Container killed on request. Exit code is 143
>>
>> In the end, the task failed.
>> Thanks for any help!
>>
>
>

Re: GC overhead limit exceeded

Posted by haihong lu <un...@gmail.com>.
i have tried both of the methods you side, but the problem still exists.
Thanks all the same. by the way, my hadoop version is 2.2.0, so the
parameter  "mapreduce.map.memory.mb =3072" added to mapred-site.xml maybe
has no effect. I have looked for this parameter in the document of hadoop,
but did not found it.


On Fri, Mar 7, 2014 at 4:57 PM, shashwat shriparv <dwivedishashwat@gmail.com
> wrote:

> Check this out
>
>
> http://ask.gopivotal.com/hc/en-us/articles/201850408-Namenode-fails-with-java-lang-OutOfMemoryError-GC-overhead-limit-exceeded
>
>
>
> * Warm Regards_**∞_*
> * Shashwat Shriparv*
>  [image: http://www.linkedin.com/pub/shashwat-shriparv/19/214/2a9]<http://www.linkedin.com/pub/shashwat-shriparv/19/214/2a9>[image:
> https://twitter.com/shriparv] <https://twitter.com/shriparv>[image:
> https://www.facebook.com/shriparv] <https://www.facebook.com/shriparv>[image:
> http://google.com/+ShashwatShriparv] <http://google.com/+ShashwatShriparv>[image:
> http://www.youtube.com/user/sShriparv/videos]<http://www.youtube.com/user/sShriparv/videos>[image:
> http://profile.yahoo.com/SWXSTW3DVSDTF2HHSRM47AV6DI/] <sh...@yahoo.com>
>
>
>
> On Fri, Mar 7, 2014 at 12:04 PM, haihong lu <un...@gmail.com> wrote:
>
>> Hi:
>>
>>      i have a problem when run Hibench with hadoop-2.2.0, the wrong
>> message list as below
>>
>> 14/03/07 13:54:53 INFO mapreduce.Job:  map 19% reduce 0%
>>  14/03/07 13:54:54 INFO mapreduce.Job:  map 21% reduce 0%
>> 14/03/07 14:00:26 INFO mapreduce.Job: Task Id :
>> attempt_1394160253524_0010_m_000020_0, Status : FAILED
>> Error: GC overhead limit exceeded
>> 14/03/07 14:00:27 INFO mapreduce.Job:  map 20% reduce 0%
>> 14/03/07 14:00:40 INFO mapreduce.Job: Task Id :
>> attempt_1394160253524_0010_m_000008_0, Status : FAILED
>> Error: GC overhead limit exceeded
>> 14/03/07 14:00:41 INFO mapreduce.Job:  map 19% reduce 0%
>> 14/03/07 14:00:59 INFO mapreduce.Job:  map 20% reduce 0%
>> 14/03/07 14:00:59 INFO mapreduce.Job: Task Id :
>> attempt_1394160253524_0010_m_000015_0, Status : FAILED
>> Error: GC overhead limit exceeded
>> 14/03/07 14:01:00 INFO mapreduce.Job:  map 19% reduce 0%
>> 14/03/07 14:01:03 INFO mapreduce.Job: Task Id :
>> attempt_1394160253524_0010_m_000023_0, Status : FAILED
>> Error: GC overhead limit exceeded
>> 14/03/07 14:01:11 INFO mapreduce.Job: Task Id :
>> attempt_1394160253524_0010_m_000026_0, Status : FAILED
>> Error: GC overhead limit exceeded
>> 14/03/07 14:01:35 INFO mapreduce.Job:  map 20% reduce 0%
>> 14/03/07 14:01:35 INFO mapreduce.Job: Task Id :
>> attempt_1394160253524_0010_m_000019_0, Status : FAILED
>> Error: GC overhead limit exceeded
>> 14/03/07 14:01:36 INFO mapreduce.Job:  map 19% reduce 0%
>> 14/03/07 14:01:43 INFO mapreduce.Job: Task Id :
>> attempt_1394160253524_0010_m_000007_0, Status : FAILED
>> Error: GC overhead limit exceeded
>> 14/03/07 14:02:00 INFO mapreduce.Job: Task Id :
>> attempt_1394160253524_0010_m_000000_0, Status : FAILED
>> Error: GC overhead limit exceeded
>> 14/03/07 14:02:01 INFO mapreduce.Job:  map 18% reduce 0%
>> 14/03/07 14:02:23 INFO mapreduce.Job: Task Id :
>> attempt_1394160253524_0010_m_000021_0, Status : FAILED
>> Error: Java heap space
>> 14/03/07 14:02:24 INFO mapreduce.Job:  map 17% reduce 0%
>> 14/03/07 14:02:31 INFO mapreduce.Job:  map 18% reduce 0%
>> 14/03/07 14:02:33 INFO mapreduce.Job: Task Id :
>> attempt_1394160253524_0010_m_000029_0, Status : FAILED
>> Error: GC overhead limit exceeded
>> 14/03/07 14:02:34 INFO mapreduce.Job:  map 17% reduce 0%
>> 14/03/07 14:02:38 INFO mapreduce.Job: Task Id :
>> attempt_1394160253524_0010_m_000010_0, Status : FAILED
>> Error: GC overhead limit exceeded
>> 14/03/07 14:02:41 INFO mapreduce.Job: Task Id :
>> attempt_1394160253524_0010_m_000018_0, Status : FAILED
>> Error: GC overhead limit exceeded
>> 14/03/07 14:02:43 INFO mapreduce.Job: Task Id :
>> attempt_1394160253524_0010_m_000014_0, Status : FAILED
>> Error: GC overhead limit exceeded
>> 14/03/07 14:02:47 INFO mapreduce.Job: Task Id :
>> attempt_1394160253524_0010_m_000028_0, Status : FAILED
>> Error: Java heap space
>> 14/03/07 14:02:50 INFO mapreduce.Job: Task Id :
>> attempt_1394160253524_0010_m_000002_0, Status : FAILED
>> Error: GC overhead limit exceeded
>> 14/03/07 14:02:51 INFO mapreduce.Job:  map 16% reduce 0%
>> 14/03/07 14:02:51 INFO mapreduce.Job: Task Id :
>> attempt_1394160253524_0010_m_000005_0, Status : FAILED
>> Error: GC overhead limit exceeded
>> 14/03/07 14:02:52 INFO mapreduce.Job:  map 15% reduce 0%
>> 14/03/07 14:02:55 INFO mapreduce.Job: Task Id :
>> attempt_1394160253524_0010_m_000006_0, Status : FAILED
>> Error: GC overhead limit exceeded
>> 14/03/07 14:02:57 INFO mapreduce.Job: Task Id :
>> attempt_1394160253524_0010_m_000027_0, Status : FAILED
>> Error: GC overhead limit exceeded
>> 14/03/07 14:02:58 INFO mapreduce.Job:  map 14% reduce 0%
>> 14/03/07 14:03:04 INFO mapreduce.Job: Task Id :
>> attempt_1394160253524_0010_m_000009_0, Status : FAILED
>> Error: GC overhead limit exceeded
>> 14/03/07 14:03:05 INFO mapreduce.Job: Task Id :
>> attempt_1394160253524_0010_m_000017_0, Status : FAILED
>> Error: GC overhead limit exceeded
>> 14/03/07 14:03:05 INFO mapreduce.Job: Task Id :
>> attempt_1394160253524_0010_m_000022_0, Status : FAILED
>> Error: GC overhead limit exceeded
>> 14/03/07 14:03:06 INFO mapreduce.Job:  map 12% reduce 0%
>> 14/03/07 14:03:10 INFO mapreduce.Job: Task Id :
>> attempt_1394160253524_0010_m_000001_0, Status : FAILED
>> Error: GC overhead limit exceeded
>> 14/03/07 14:03:11 INFO mapreduce.Job:  map 13% reduce 0%
>> 14/03/07 14:03:11 INFO mapreduce.Job: Task Id :
>> attempt_1394160253524_0010_m_000024_0, Status : FAILED
>>
>> and then i add a parameter "mapred.child.java.opts" to the file
>> "mapred-site.xml",
>>   <property>
>>         <name>mapred.child.java.opts</name>
>>         <value>-Xmx1024m</value>
>>   </property>
>> then another error occurs as below
>>
>> 14/03/07 11:21:51 INFO mapreduce.Job:  map 0% reduce 0%
>> 14/03/07 11:21:59 INFO mapreduce.Job: Task Id :
>> attempt_1394160253524_0003_m_000002_0, Status : FAILED
>> Container [pid=5592,containerID=container_1394160253524_0003_01_000004]
>> is running beyond virtual memory limits. Current usage: 112.6 MB of 1 GB
>> physical memory used; 2.7 GB of
>>
>> 2.1 GB virtual memory used. Killing container.
>> Dump of the process-tree for container_1394160253524_0003_01_000004 :
>> |- PID PPID PGRPID SESSID CMD_NAME USER_MODE_TIME(MILLIS)
>> SYSTEM_TIME(MILLIS) VMEM_USAGE(BYTES) RSSMEM_USAGE(PAGES) FULL_CMD_LINE
>>  |- 5598 5592 5592 5592 (java) 563 14 2778632192 28520
>> /usr/java/jdk1.7.0_45/bin/java -Djava.net.preferIPv4Stack=true
>> -Dhadoop.metrics.log.level=WARN -Xmx2048m -
>>
>> Djava.io.tmpdir=/home/hadoop/tmp/nm-local-dir/usercache/hadoop/appcache/application_1394160253524_0003/container_1394160253524_0003_01_000004/tmp
>> -
>>
>> Dlog4j.configuration=container-log4j.properties
>> -Dyarn.app.container.log.dir=/var/log/hadoop/yarn/userlogs/application_1394160253524_0003/container_1394160253524_0003_01_000004
>>
>> -Dyarn.app.container.log.filesize=0 -Dhadoop.root.logger=INFO,CLA
>> org.apache.hadoop.mapred.YarnChild 10.239.44.34 46837
>> attempt_1394160253524_0003_m_000002_0 4
>> |- 5592 4562 5592 5592 (bash) 0 0 108650496 300 /bin/bash -c
>> /usr/java/jdk1.7.0_45/bin/java -Djava.net.preferIPv4Stack=true
>> -Dhadoop.metrics.log.level=WARN  -Xmx2048m -
>>
>> Djava.io.tmpdir=/home/hadoop/tmp/nm-local-dir/usercache/hadoop/appcache/application_1394160253524_0003/container_1394160253524_0003_01_000004/tmp
>> -
>>
>> Dlog4j.configuration=container-log4j.properties
>> -Dyarn.app.container.log.dir=/var/log/hadoop/yarn/userlogs/application_1394160253524_0003/container_1394160253524_0003_01_000004
>>
>> -Dyarn.app.container.log.filesize=0 -Dhadoop.root.logger=INFO,CLA
>> org.apache.hadoop.mapred.YarnChild 10.239.44.34 46837
>> attempt_1394160253524_0003_m_000002_0 4
>>
>>
>> 1>/var/log/hadoop/yarn/userlogs/application_1394160253524_0003/container_1394160253524_0003_01_000004/stdout
>>
>> 2>/var/log/hadoop/yarn/userlogs/application_1394160253524_0003/container_1394160253524_0003_01_000004/stderr
>>
>>
>> Container killed on request. Exit code is 143
>> 14/03/07 11:22:02 INFO mapreduce.Job: Task Id :
>> attempt_1394160253524_0003_m_000001_0, Status : FAILED
>> Container [pid=5182,containerID=container_1394160253524_0003_01_000003]
>> is running beyond virtual memory limits. Current usage: 118.5 MB of 1 GB
>> physical memory used; 2.7 GB of
>>
>> 2.1 GB virtual memory used. Killing container.
>> Dump of the process-tree for container_1394160253524_0003_01_000003 :
>> |- PID PPID PGRPID SESSID CMD_NAME USER_MODE_TIME(MILLIS)
>> SYSTEM_TIME(MILLIS) VMEM_USAGE(BYTES) RSSMEM_USAGE(PAGES) FULL_CMD_LINE
>>  |- 5182 4313 5182 5182 (bash) 0 0 108650496 303 /bin/bash -c
>> /usr/java/jdk1.7.0_45/bin/java -Djava.net.preferIPv4Stack=true
>> -Dhadoop.metrics.log.level=WARN  -Xmx2048m -
>>
>> Djava.io.tmpdir=/home/hadoop/tmp/nm-local-dir/usercache/hadoop/appcache/application_1394160253524_0003/container_1394160253524_0003_01_000003/tmp
>> -
>>
>> Dlog4j.configuration=container-log4j.properties
>> -Dyarn.app.container.log.dir=/var/log/hadoop/yarn/userlogs/application_1394160253524_0003/container_1394160253524_0003_01_000003
>>
>> -Dyarn.app.container.log.filesize=0 -Dhadoop.root.logger=INFO,CLA
>> org.apache.hadoop.mapred.YarnChild 10.239.44.34 46837
>> attempt_1394160253524_0003_m_000001_0 3
>>
>>
>> 1>/var/log/hadoop/yarn/userlogs/application_1394160253524_0003/container_1394160253524_0003_01_000003/stdout
>>
>> 2>/var/log/hadoop/yarn/userlogs/application_1394160253524_0003/container_1394160253524_0003_01_000003/stderr
>>
>> |- 5187 5182 5182 5182 (java) 616 19 2783928320 30028
>> /usr/java/jdk1.7.0_45/bin/java -Djava.net.preferIPv4Stack=true
>> -Dhadoop.metrics.log.level=WARN -Xmx2048m -
>>
>> Djava.io.tmpdir=/home/hadoop/tmp/nm-local-dir/usercache/hadoop/appcache/application_1394160253524_0003/container_1394160253524_0003_01_000003/tmp
>> -
>>
>> Dlog4j.configuration=container-log4j.properties
>> -Dyarn.app.container.log.dir=/var/log/hadoop/yarn/userlogs/application_1394160253524_0003/container_1394160253524_0003_01_000003
>>
>> -Dyarn.app.container.log.filesize=0 -Dhadoop.root.logger=INFO,CLA
>> org.apache.hadoop.mapred.YarnChild 10.239.44.34 46837
>> attempt_1394160253524_0003_m_000001_0 3
>>
>> Container killed on request. Exit code is 143
>>
>> at last, the task failed.
>> Thanks for any help!
>>
>
>

Re: GC overhead limit exceeded

Posted by haihong lu <un...@gmail.com>.
i have tried both of the methods you side, but the problem still exists.
Thanks all the same. by the way, my hadoop version is 2.2.0, so the
parameter  "mapreduce.map.memory.mb =3072" added to mapred-site.xml maybe
has no effect. I have looked for this parameter in the document of hadoop,
but did not found it.


On Fri, Mar 7, 2014 at 4:57 PM, shashwat shriparv <dwivedishashwat@gmail.com
> wrote:

> Check this out
>
>
> http://ask.gopivotal.com/hc/en-us/articles/201850408-Namenode-fails-with-java-lang-OutOfMemoryError-GC-overhead-limit-exceeded
>
>
>
> * Warm Regards_**∞_*
> * Shashwat Shriparv*
>  [image: http://www.linkedin.com/pub/shashwat-shriparv/19/214/2a9]<http://www.linkedin.com/pub/shashwat-shriparv/19/214/2a9>[image:
> https://twitter.com/shriparv] <https://twitter.com/shriparv>[image:
> https://www.facebook.com/shriparv] <https://www.facebook.com/shriparv>[image:
> http://google.com/+ShashwatShriparv] <http://google.com/+ShashwatShriparv>[image:
> http://www.youtube.com/user/sShriparv/videos]<http://www.youtube.com/user/sShriparv/videos>[image:
> http://profile.yahoo.com/SWXSTW3DVSDTF2HHSRM47AV6DI/] <sh...@yahoo.com>
>
>
>
> On Fri, Mar 7, 2014 at 12:04 PM, haihong lu <un...@gmail.com> wrote:
>
>> Hi:
>>
>>      i have a problem when run Hibench with hadoop-2.2.0, the wrong
>> message list as below
>>
>> 14/03/07 13:54:53 INFO mapreduce.Job:  map 19% reduce 0%
>>  14/03/07 13:54:54 INFO mapreduce.Job:  map 21% reduce 0%
>> 14/03/07 14:00:26 INFO mapreduce.Job: Task Id :
>> attempt_1394160253524_0010_m_000020_0, Status : FAILED
>> Error: GC overhead limit exceeded
>> 14/03/07 14:00:27 INFO mapreduce.Job:  map 20% reduce 0%
>> 14/03/07 14:00:40 INFO mapreduce.Job: Task Id :
>> attempt_1394160253524_0010_m_000008_0, Status : FAILED
>> Error: GC overhead limit exceeded
>> 14/03/07 14:00:41 INFO mapreduce.Job:  map 19% reduce 0%
>> 14/03/07 14:00:59 INFO mapreduce.Job:  map 20% reduce 0%
>> 14/03/07 14:00:59 INFO mapreduce.Job: Task Id :
>> attempt_1394160253524_0010_m_000015_0, Status : FAILED
>> Error: GC overhead limit exceeded
>> 14/03/07 14:01:00 INFO mapreduce.Job:  map 19% reduce 0%
>> 14/03/07 14:01:03 INFO mapreduce.Job: Task Id :
>> attempt_1394160253524_0010_m_000023_0, Status : FAILED
>> Error: GC overhead limit exceeded
>> 14/03/07 14:01:11 INFO mapreduce.Job: Task Id :
>> attempt_1394160253524_0010_m_000026_0, Status : FAILED
>> Error: GC overhead limit exceeded
>> 14/03/07 14:01:35 INFO mapreduce.Job:  map 20% reduce 0%
>> 14/03/07 14:01:35 INFO mapreduce.Job: Task Id :
>> attempt_1394160253524_0010_m_000019_0, Status : FAILED
>> Error: GC overhead limit exceeded
>> 14/03/07 14:01:36 INFO mapreduce.Job:  map 19% reduce 0%
>> 14/03/07 14:01:43 INFO mapreduce.Job: Task Id :
>> attempt_1394160253524_0010_m_000007_0, Status : FAILED
>> Error: GC overhead limit exceeded
>> 14/03/07 14:02:00 INFO mapreduce.Job: Task Id :
>> attempt_1394160253524_0010_m_000000_0, Status : FAILED
>> Error: GC overhead limit exceeded
>> 14/03/07 14:02:01 INFO mapreduce.Job:  map 18% reduce 0%
>> 14/03/07 14:02:23 INFO mapreduce.Job: Task Id :
>> attempt_1394160253524_0010_m_000021_0, Status : FAILED
>> Error: Java heap space
>> 14/03/07 14:02:24 INFO mapreduce.Job:  map 17% reduce 0%
>> 14/03/07 14:02:31 INFO mapreduce.Job:  map 18% reduce 0%
>> 14/03/07 14:02:33 INFO mapreduce.Job: Task Id :
>> attempt_1394160253524_0010_m_000029_0, Status : FAILED
>> Error: GC overhead limit exceeded
>> 14/03/07 14:02:34 INFO mapreduce.Job:  map 17% reduce 0%
>> 14/03/07 14:02:38 INFO mapreduce.Job: Task Id :
>> attempt_1394160253524_0010_m_000010_0, Status : FAILED
>> Error: GC overhead limit exceeded
>> 14/03/07 14:02:41 INFO mapreduce.Job: Task Id :
>> attempt_1394160253524_0010_m_000018_0, Status : FAILED
>> Error: GC overhead limit exceeded
>> 14/03/07 14:02:43 INFO mapreduce.Job: Task Id :
>> attempt_1394160253524_0010_m_000014_0, Status : FAILED
>> Error: GC overhead limit exceeded
>> 14/03/07 14:02:47 INFO mapreduce.Job: Task Id :
>> attempt_1394160253524_0010_m_000028_0, Status : FAILED
>> Error: Java heap space
>> 14/03/07 14:02:50 INFO mapreduce.Job: Task Id :
>> attempt_1394160253524_0010_m_000002_0, Status : FAILED
>> Error: GC overhead limit exceeded
>> 14/03/07 14:02:51 INFO mapreduce.Job:  map 16% reduce 0%
>> 14/03/07 14:02:51 INFO mapreduce.Job: Task Id :
>> attempt_1394160253524_0010_m_000005_0, Status : FAILED
>> Error: GC overhead limit exceeded
>> 14/03/07 14:02:52 INFO mapreduce.Job:  map 15% reduce 0%
>> 14/03/07 14:02:55 INFO mapreduce.Job: Task Id :
>> attempt_1394160253524_0010_m_000006_0, Status : FAILED
>> Error: GC overhead limit exceeded
>> 14/03/07 14:02:57 INFO mapreduce.Job: Task Id :
>> attempt_1394160253524_0010_m_000027_0, Status : FAILED
>> Error: GC overhead limit exceeded
>> 14/03/07 14:02:58 INFO mapreduce.Job:  map 14% reduce 0%
>> 14/03/07 14:03:04 INFO mapreduce.Job: Task Id :
>> attempt_1394160253524_0010_m_000009_0, Status : FAILED
>> Error: GC overhead limit exceeded
>> 14/03/07 14:03:05 INFO mapreduce.Job: Task Id :
>> attempt_1394160253524_0010_m_000017_0, Status : FAILED
>> Error: GC overhead limit exceeded
>> 14/03/07 14:03:05 INFO mapreduce.Job: Task Id :
>> attempt_1394160253524_0010_m_000022_0, Status : FAILED
>> Error: GC overhead limit exceeded
>> 14/03/07 14:03:06 INFO mapreduce.Job:  map 12% reduce 0%
>> 14/03/07 14:03:10 INFO mapreduce.Job: Task Id :
>> attempt_1394160253524_0010_m_000001_0, Status : FAILED
>> Error: GC overhead limit exceeded
>> 14/03/07 14:03:11 INFO mapreduce.Job:  map 13% reduce 0%
>> 14/03/07 14:03:11 INFO mapreduce.Job: Task Id :
>> attempt_1394160253524_0010_m_000024_0, Status : FAILED
>>
>> and then I added the parameter "mapred.child.java.opts" to the file
>> "mapred-site.xml":
>>   <property>
>>         <name>mapred.child.java.opts</name>
>>         <value>-Xmx1024m</value>
>>   </property>
>> then another error occurred, as shown below:
>>
>> 14/03/07 11:21:51 INFO mapreduce.Job:  map 0% reduce 0%
>> 14/03/07 11:21:59 INFO mapreduce.Job: Task Id :
>> attempt_1394160253524_0003_m_000002_0, Status : FAILED
>> Container [pid=5592,containerID=container_1394160253524_0003_01_000004]
>> is running beyond virtual memory limits. Current usage: 112.6 MB of 1 GB
>> physical memory used; 2.7 GB of
>>
>> 2.1 GB virtual memory used. Killing container.
>> Dump of the process-tree for container_1394160253524_0003_01_000004 :
>> |- PID PPID PGRPID SESSID CMD_NAME USER_MODE_TIME(MILLIS)
>> SYSTEM_TIME(MILLIS) VMEM_USAGE(BYTES) RSSMEM_USAGE(PAGES) FULL_CMD_LINE
>>  |- 5598 5592 5592 5592 (java) 563 14 2778632192 28520
>> /usr/java/jdk1.7.0_45/bin/java -Djava.net.preferIPv4Stack=true
>> -Dhadoop.metrics.log.level=WARN -Xmx2048m -
>>
>> Djava.io.tmpdir=/home/hadoop/tmp/nm-local-dir/usercache/hadoop/appcache/application_1394160253524_0003/container_1394160253524_0003_01_000004/tmp
>> -
>>
>> Dlog4j.configuration=container-log4j.properties
>> -Dyarn.app.container.log.dir=/var/log/hadoop/yarn/userlogs/application_1394160253524_0003/container_1394160253524_0003_01_000004
>>
>> -Dyarn.app.container.log.filesize=0 -Dhadoop.root.logger=INFO,CLA
>> org.apache.hadoop.mapred.YarnChild 10.239.44.34 46837
>> attempt_1394160253524_0003_m_000002_0 4
>> |- 5592 4562 5592 5592 (bash) 0 0 108650496 300 /bin/bash -c
>> /usr/java/jdk1.7.0_45/bin/java -Djava.net.preferIPv4Stack=true
>> -Dhadoop.metrics.log.level=WARN  -Xmx2048m -
>>
>> Djava.io.tmpdir=/home/hadoop/tmp/nm-local-dir/usercache/hadoop/appcache/application_1394160253524_0003/container_1394160253524_0003_01_000004/tmp
>> -
>>
>> Dlog4j.configuration=container-log4j.properties
>> -Dyarn.app.container.log.dir=/var/log/hadoop/yarn/userlogs/application_1394160253524_0003/container_1394160253524_0003_01_000004
>>
>> -Dyarn.app.container.log.filesize=0 -Dhadoop.root.logger=INFO,CLA
>> org.apache.hadoop.mapred.YarnChild 10.239.44.34 46837
>> attempt_1394160253524_0003_m_000002_0 4
>>
>>
>> 1>/var/log/hadoop/yarn/userlogs/application_1394160253524_0003/container_1394160253524_0003_01_000004/stdout
>>
>> 2>/var/log/hadoop/yarn/userlogs/application_1394160253524_0003/container_1394160253524_0003_01_000004/stderr
>>
>>
>> Container killed on request. Exit code is 143
>> 14/03/07 11:22:02 INFO mapreduce.Job: Task Id :
>> attempt_1394160253524_0003_m_000001_0, Status : FAILED
>> Container [pid=5182,containerID=container_1394160253524_0003_01_000003]
>> is running beyond virtual memory limits. Current usage: 118.5 MB of 1 GB
>> physical memory used; 2.7 GB of
>>
>> 2.1 GB virtual memory used. Killing container.
>> Dump of the process-tree for container_1394160253524_0003_01_000003 :
>> |- PID PPID PGRPID SESSID CMD_NAME USER_MODE_TIME(MILLIS)
>> SYSTEM_TIME(MILLIS) VMEM_USAGE(BYTES) RSSMEM_USAGE(PAGES) FULL_CMD_LINE
>>  |- 5182 4313 5182 5182 (bash) 0 0 108650496 303 /bin/bash -c
>> /usr/java/jdk1.7.0_45/bin/java -Djava.net.preferIPv4Stack=true
>> -Dhadoop.metrics.log.level=WARN  -Xmx2048m -
>>
>> Djava.io.tmpdir=/home/hadoop/tmp/nm-local-dir/usercache/hadoop/appcache/application_1394160253524_0003/container_1394160253524_0003_01_000003/tmp
>> -
>>
>> Dlog4j.configuration=container-log4j.properties
>> -Dyarn.app.container.log.dir=/var/log/hadoop/yarn/userlogs/application_1394160253524_0003/container_1394160253524_0003_01_000003
>>
>> -Dyarn.app.container.log.filesize=0 -Dhadoop.root.logger=INFO,CLA
>> org.apache.hadoop.mapred.YarnChild 10.239.44.34 46837
>> attempt_1394160253524_0003_m_000001_0 3
>>
>>
>> 1>/var/log/hadoop/yarn/userlogs/application_1394160253524_0003/container_1394160253524_0003_01_000003/stdout
>>
>> 2>/var/log/hadoop/yarn/userlogs/application_1394160253524_0003/container_1394160253524_0003_01_000003/stderr
>>
>> |- 5187 5182 5182 5182 (java) 616 19 2783928320 30028
>> /usr/java/jdk1.7.0_45/bin/java -Djava.net.preferIPv4Stack=true
>> -Dhadoop.metrics.log.level=WARN -Xmx2048m -
>>
>> Djava.io.tmpdir=/home/hadoop/tmp/nm-local-dir/usercache/hadoop/appcache/application_1394160253524_0003/container_1394160253524_0003_01_000003/tmp
>> -
>>
>> Dlog4j.configuration=container-log4j.properties
>> -Dyarn.app.container.log.dir=/var/log/hadoop/yarn/userlogs/application_1394160253524_0003/container_1394160253524_0003_01_000003
>>
>> -Dyarn.app.container.log.filesize=0 -Dhadoop.root.logger=INFO,CLA
>> org.apache.hadoop.mapred.YarnChild 10.239.44.34 46837
>> attempt_1394160253524_0003_m_000001_0 3
>>
>> Container killed on request. Exit code is 143
>>
>> In the end, the task failed.
>> Thanks for any help!
>>
>
>

Re: GC overhead limit exceeded

Posted by haihong lu <un...@gmail.com>.
I have tried both of the methods you suggested, but the problem still exists.
Thanks all the same. By the way, my Hadoop version is 2.2.0, so the
parameter "mapreduce.map.memory.mb=3072" added to mapred-site.xml may have
no effect. I looked for this parameter in the Hadoop documentation, but did
not find it.
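
For reference, the second failure above is consistent with YARN's default
yarn.nodemanager.vmem-pmem-ratio of 2.1: 1 GB of physical container memory
times 2.1 gives exactly the "2.1 GB virtual memory" cap in the log, which a
task JVM launched with -Xmx2048m easily exceeds. A configuration along the
following lines is commonly suggested for this pair of errors on Hadoop 2.x;
the values are illustrative assumptions, not verified settings, and should
be tuned to the node's RAM:

```xml
<!-- mapred-site.xml: size the task container and keep the JVM heap
     comfortably inside it (values below are illustrative). -->
<property>
  <name>mapreduce.map.memory.mb</name>
  <value>3072</value>
</property>
<property>
  <name>mapreduce.map.java.opts</name>
  <value>-Xmx2560m</value>
</property>

<!-- yarn-site.xml: alternatively, relax the virtual-memory check
     that killed the containers in the log above. -->
<property>
  <name>yarn.nodemanager.vmem-pmem-ratio</name>
  <value>4</value>
</property>
```

Once the heap fits inside the container budget, the virtual-memory kills
should stop; any remaining GC errors then indicate the heap itself is still
too small for the task's working set.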


On Fri, Mar 7, 2014 at 4:57 PM, shashwat shriparv <dwivedishashwat@gmail.com> wrote:

> Check this out
>
>
> http://ask.gopivotal.com/hc/en-us/articles/201850408-Namenode-fails-with-java-lang-OutOfMemoryError-GC-overhead-limit-exceeded
> On Fri, Mar 7, 2014 at 12:04 PM, haihong lu <un...@gmail.com> wrote:

Re: GC overhead limit exceeded

Posted by shashwat shriparv <dw...@gmail.com>.
Check this out

http://ask.gopivotal.com/hc/en-us/articles/201850408-Namenode-fails-with-java-lang-OutOfMemoryError-GC-overhead-limit-exceeded
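
Some context on the error itself: the HotSpot JVM raises "GC overhead limit
exceeded" when roughly 98% of time is spent in garbage collection while less
than 2% of the heap is recovered, i.e., the heap is effectively too small for
the task's working set. A minimal sketch of the usual responses, assuming
Hadoop 2.x property names (the -Xmx value is an illustrative assumption):

```xml
<!-- mapred-site.xml: a larger heap is the usual fix. The
     -XX:-UseGCOverheadLimit flag merely suppresses this specific error:
     the task then fails with a plain OutOfMemoryError instead of
     failing early. Values are illustrative, not a recommendation. -->
<property>
  <name>mapreduce.map.java.opts</name>
  <value>-Xmx2048m -XX:-UseGCOverheadLimit</value>
</property>
```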



Warm Regards ∞
Shashwat Shriparv
http://www.linkedin.com/pub/shashwat-shriparv/19/214/2a9
https://twitter.com/shriparv
https://www.facebook.com/shriparv
http://google.com/+ShashwatShriparv
http://www.youtube.com/user/sShriparv/videos
http://profile.yahoo.com/SWXSTW3DVSDTF2HHSRM47AV6DI/ <sh...@yahoo.com>



On Fri, Mar 7, 2014 at 12:04 PM, haihong lu <un...@gmail.com> wrote:


Re: GC overhead limit exceeded

Posted by shashwat shriparv <dw...@gmail.com>.
Check this out

http://ask.gopivotal.com/hc/en-us/articles/201850408-Namenode-fails-with-java-lang-OutOfMemoryError-GC-overhead-limit-exceeded



* Warm Regards_**∞_*
* Shashwat Shriparv*
 [image: http://www.linkedin.com/pub/shashwat-shriparv/19/214/2a9]<http://www.linkedin.com/pub/shashwat-shriparv/19/214/2a9>[image:
https://twitter.com/shriparv] <https://twitter.com/shriparv>[image:
https://www.facebook.com/shriparv] <https://www.facebook.com/shriparv>[image:
http://google.com/+ShashwatShriparv]
<http://google.com/+ShashwatShriparv>[image:
http://www.youtube.com/user/sShriparv/videos]<http://www.youtube.com/user/sShriparv/videos>[image:
http://profile.yahoo.com/SWXSTW3DVSDTF2HHSRM47AV6DI/] <sh...@yahoo.com>



On Fri, Mar 7, 2014 at 12:04 PM, haihong lu <un...@gmail.com> wrote:

> Hi:
>
>      i have a problem when run Hibench with hadoop-2.2.0, the wrong
> message list as below
>
> 14/03/07 13:54:53 INFO mapreduce.Job:  map 19% reduce 0%
> 14/03/07 13:54:54 INFO mapreduce.Job:  map 21% reduce 0%
> 14/03/07 14:00:26 INFO mapreduce.Job: Task Id :
> attempt_1394160253524_0010_m_000020_0, Status : FAILED
> Error: GC overhead limit exceeded
> 14/03/07 14:00:27 INFO mapreduce.Job:  map 20% reduce 0%
> 14/03/07 14:00:40 INFO mapreduce.Job: Task Id :
> attempt_1394160253524_0010_m_000008_0, Status : FAILED
> Error: GC overhead limit exceeded
> 14/03/07 14:00:41 INFO mapreduce.Job:  map 19% reduce 0%
> 14/03/07 14:00:59 INFO mapreduce.Job:  map 20% reduce 0%
> 14/03/07 14:00:59 INFO mapreduce.Job: Task Id :
> attempt_1394160253524_0010_m_000015_0, Status : FAILED
> Error: GC overhead limit exceeded
> 14/03/07 14:01:00 INFO mapreduce.Job:  map 19% reduce 0%
> 14/03/07 14:01:03 INFO mapreduce.Job: Task Id :
> attempt_1394160253524_0010_m_000023_0, Status : FAILED
> Error: GC overhead limit exceeded
> 14/03/07 14:01:11 INFO mapreduce.Job: Task Id :
> attempt_1394160253524_0010_m_000026_0, Status : FAILED
> Error: GC overhead limit exceeded
> 14/03/07 14:01:35 INFO mapreduce.Job:  map 20% reduce 0%
> 14/03/07 14:01:35 INFO mapreduce.Job: Task Id :
> attempt_1394160253524_0010_m_000019_0, Status : FAILED
> Error: GC overhead limit exceeded
> 14/03/07 14:01:36 INFO mapreduce.Job:  map 19% reduce 0%
> 14/03/07 14:01:43 INFO mapreduce.Job: Task Id :
> attempt_1394160253524_0010_m_000007_0, Status : FAILED
> Error: GC overhead limit exceeded
> 14/03/07 14:02:00 INFO mapreduce.Job: Task Id :
> attempt_1394160253524_0010_m_000000_0, Status : FAILED
> Error: GC overhead limit exceeded
> 14/03/07 14:02:01 INFO mapreduce.Job:  map 18% reduce 0%
> 14/03/07 14:02:23 INFO mapreduce.Job: Task Id :
> attempt_1394160253524_0010_m_000021_0, Status : FAILED
> Error: Java heap space
> 14/03/07 14:02:24 INFO mapreduce.Job:  map 17% reduce 0%
> 14/03/07 14:02:31 INFO mapreduce.Job:  map 18% reduce 0%
> 14/03/07 14:02:33 INFO mapreduce.Job: Task Id :
> attempt_1394160253524_0010_m_000029_0, Status : FAILED
> Error: GC overhead limit exceeded
> 14/03/07 14:02:34 INFO mapreduce.Job:  map 17% reduce 0%
> 14/03/07 14:02:38 INFO mapreduce.Job: Task Id :
> attempt_1394160253524_0010_m_000010_0, Status : FAILED
> Error: GC overhead limit exceeded
> 14/03/07 14:02:41 INFO mapreduce.Job: Task Id :
> attempt_1394160253524_0010_m_000018_0, Status : FAILED
> Error: GC overhead limit exceeded
> 14/03/07 14:02:43 INFO mapreduce.Job: Task Id :
> attempt_1394160253524_0010_m_000014_0, Status : FAILED
> Error: GC overhead limit exceeded
> 14/03/07 14:02:47 INFO mapreduce.Job: Task Id :
> attempt_1394160253524_0010_m_000028_0, Status : FAILED
> Error: Java heap space
> 14/03/07 14:02:50 INFO mapreduce.Job: Task Id :
> attempt_1394160253524_0010_m_000002_0, Status : FAILED
> Error: GC overhead limit exceeded
> 14/03/07 14:02:51 INFO mapreduce.Job:  map 16% reduce 0%
> 14/03/07 14:02:51 INFO mapreduce.Job: Task Id :
> attempt_1394160253524_0010_m_000005_0, Status : FAILED
> Error: GC overhead limit exceeded
> 14/03/07 14:02:52 INFO mapreduce.Job:  map 15% reduce 0%
> 14/03/07 14:02:55 INFO mapreduce.Job: Task Id :
> attempt_1394160253524_0010_m_000006_0, Status : FAILED
> Error: GC overhead limit exceeded
> 14/03/07 14:02:57 INFO mapreduce.Job: Task Id :
> attempt_1394160253524_0010_m_000027_0, Status : FAILED
> Error: GC overhead limit exceeded
> 14/03/07 14:02:58 INFO mapreduce.Job:  map 14% reduce 0%
> 14/03/07 14:03:04 INFO mapreduce.Job: Task Id :
> attempt_1394160253524_0010_m_000009_0, Status : FAILED
> Error: GC overhead limit exceeded
> 14/03/07 14:03:05 INFO mapreduce.Job: Task Id :
> attempt_1394160253524_0010_m_000017_0, Status : FAILED
> Error: GC overhead limit exceeded
> 14/03/07 14:03:05 INFO mapreduce.Job: Task Id :
> attempt_1394160253524_0010_m_000022_0, Status : FAILED
> Error: GC overhead limit exceeded
> 14/03/07 14:03:06 INFO mapreduce.Job:  map 12% reduce 0%
> 14/03/07 14:03:10 INFO mapreduce.Job: Task Id :
> attempt_1394160253524_0010_m_000001_0, Status : FAILED
> Error: GC overhead limit exceeded
> 14/03/07 14:03:11 INFO mapreduce.Job:  map 13% reduce 0%
> 14/03/07 14:03:11 INFO mapreduce.Job: Task Id :
> attempt_1394160253524_0010_m_000024_0, Status : FAILED
>
> and then i add a parameter "mapred.child.java.opts" to the file
> "mapred-site.xml",
>   <property>
>         <name>mapred.child.java.opts</name>
>         <value>-Xmx1024m</value>
>   </property>
> then another error occurs as below
>
> 14/03/07 11:21:51 INFO mapreduce.Job:  map 0% reduce 0%
> 14/03/07 11:21:59 INFO mapreduce.Job: Task Id :
> attempt_1394160253524_0003_m_000002_0, Status : FAILED
> Container [pid=5592,containerID=container_1394160253524_0003_01_000004] is
> running beyond virtual memory limits. Current usage: 112.6 MB of 1 GB
> physical memory used; 2.7 GB of
>
> 2.1 GB virtual memory used. Killing container.
> Dump of the process-tree for container_1394160253524_0003_01_000004 :
> |- PID PPID PGRPID SESSID CMD_NAME USER_MODE_TIME(MILLIS)
> SYSTEM_TIME(MILLIS) VMEM_USAGE(BYTES) RSSMEM_USAGE(PAGES) FULL_CMD_LINE
>  |- 5598 5592 5592 5592 (java) 563 14 2778632192 28520
> /usr/java/jdk1.7.0_45/bin/java -Djava.net.preferIPv4Stack=true
> -Dhadoop.metrics.log.level=WARN -Xmx2048m
> -Djava.io.tmpdir=/home/hadoop/tmp/nm-local-dir/usercache/hadoop/appcache/application_1394160253524_0003/container_1394160253524_0003_01_000004/tmp
> -Dlog4j.configuration=container-log4j.properties
> -Dyarn.app.container.log.dir=/var/log/hadoop/yarn/userlogs/application_1394160253524_0003/container_1394160253524_0003_01_000004
>
> -Dyarn.app.container.log.filesize=0 -Dhadoop.root.logger=INFO,CLA
> org.apache.hadoop.mapred.YarnChild 10.239.44.34 46837
> attempt_1394160253524_0003_m_000002_0 4
> |- 5592 4562 5592 5592 (bash) 0 0 108650496 300 /bin/bash -c
> /usr/java/jdk1.7.0_45/bin/java -Djava.net.preferIPv4Stack=true
> -Dhadoop.metrics.log.level=WARN -Xmx2048m
> -Djava.io.tmpdir=/home/hadoop/tmp/nm-local-dir/usercache/hadoop/appcache/application_1394160253524_0003/container_1394160253524_0003_01_000004/tmp
> -Dlog4j.configuration=container-log4j.properties
> -Dyarn.app.container.log.dir=/var/log/hadoop/yarn/userlogs/application_1394160253524_0003/container_1394160253524_0003_01_000004
>
> -Dyarn.app.container.log.filesize=0 -Dhadoop.root.logger=INFO,CLA
> org.apache.hadoop.mapred.YarnChild 10.239.44.34 46837
> attempt_1394160253524_0003_m_000002_0 4
>
>
> 1>/var/log/hadoop/yarn/userlogs/application_1394160253524_0003/container_1394160253524_0003_01_000004/stdout
>
> 2>/var/log/hadoop/yarn/userlogs/application_1394160253524_0003/container_1394160253524_0003_01_000004/stderr
>
>
> Container killed on request. Exit code is 143
> 14/03/07 11:22:02 INFO mapreduce.Job: Task Id :
> attempt_1394160253524_0003_m_000001_0, Status : FAILED
> Container [pid=5182,containerID=container_1394160253524_0003_01_000003] is
> running beyond virtual memory limits. Current usage: 118.5 MB of 1 GB
> physical memory used; 2.7 GB of 2.1 GB virtual memory used. Killing container.
> Dump of the process-tree for container_1394160253524_0003_01_000003 :
> |- PID PPID PGRPID SESSID CMD_NAME USER_MODE_TIME(MILLIS)
> SYSTEM_TIME(MILLIS) VMEM_USAGE(BYTES) RSSMEM_USAGE(PAGES) FULL_CMD_LINE
>  |- 5182 4313 5182 5182 (bash) 0 0 108650496 303 /bin/bash -c
> /usr/java/jdk1.7.0_45/bin/java -Djava.net.preferIPv4Stack=true
> -Dhadoop.metrics.log.level=WARN -Xmx2048m
> -Djava.io.tmpdir=/home/hadoop/tmp/nm-local-dir/usercache/hadoop/appcache/application_1394160253524_0003/container_1394160253524_0003_01_000003/tmp
> -Dlog4j.configuration=container-log4j.properties
> -Dyarn.app.container.log.dir=/var/log/hadoop/yarn/userlogs/application_1394160253524_0003/container_1394160253524_0003_01_000003
>
> -Dyarn.app.container.log.filesize=0 -Dhadoop.root.logger=INFO,CLA
> org.apache.hadoop.mapred.YarnChild 10.239.44.34 46837
> attempt_1394160253524_0003_m_000001_0 3
>
>
> 1>/var/log/hadoop/yarn/userlogs/application_1394160253524_0003/container_1394160253524_0003_01_000003/stdout
>
> 2>/var/log/hadoop/yarn/userlogs/application_1394160253524_0003/container_1394160253524_0003_01_000003/stderr
>
> |- 5187 5182 5182 5182 (java) 616 19 2783928320 30028
> /usr/java/jdk1.7.0_45/bin/java -Djava.net.preferIPv4Stack=true
> -Dhadoop.metrics.log.level=WARN -Xmx2048m
> -Djava.io.tmpdir=/home/hadoop/tmp/nm-local-dir/usercache/hadoop/appcache/application_1394160253524_0003/container_1394160253524_0003_01_000003/tmp
> -Dlog4j.configuration=container-log4j.properties
> -Dyarn.app.container.log.dir=/var/log/hadoop/yarn/userlogs/application_1394160253524_0003/container_1394160253524_0003_01_000003
>
> -Dyarn.app.container.log.filesize=0 -Dhadoop.root.logger=INFO,CLA
> org.apache.hadoop.mapred.YarnChild 10.239.44.34 46837
> attempt_1394160253524_0003_m_000001_0 3
>
> Container killed on request. Exit code is 143
>
> In the end, the task failed.
> Thanks for any help!
>
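For context on the quoted kill message: in Hadoop 2.x the NodeManager enforces a virtual-memory cap of yarn.nodemanager.vmem-pmem-ratio (default 2.1) times the container's physical allocation, which is where the "2.1 GB virtual memory" limit above comes from. A minimal yarn-site.xml sketch of the two knobs involved follows; the values are illustrative assumptions, not settings taken from the poster's cluster.

```xml
<!-- yarn-site.xml sketch: the virtual-memory cap is ratio * container size. -->
<!-- Raising the ratio (or disabling the check entirely) avoids this kill,   -->
<!-- at the cost of weaker enforcement. Values here are illustrative only.   -->
<property>
  <name>yarn.nodemanager.vmem-pmem-ratio</name>
  <value>4.1</value>
</property>
<property>
  <name>yarn.nodemanager.vmem-check-enabled</name>
  <value>false</value>
</property>
```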

Re: GC overhead limit exceeded

Posted by shashwat shriparv <dw...@gmail.com>.
Check this out

http://ask.gopivotal.com/hc/en-us/articles/201850408-Namenode-fails-with-java-lang-OutOfMemoryError-GC-overhead-limit-exceeded



Warm Regards,
Shashwat Shriparv



On Fri, Mar 7, 2014 at 12:04 PM, haihong lu <un...@gmail.com> wrote:


Re: GC overhead limit exceeded

Posted by 梁李印 <li...@aliyun-inc.com>.
By default, if your mapred.child.java.opts is -Xmx1024m, the memory limit for
your task container is 2 GB. If your map task uses more than 2 GB, the map
container will be killed by the NodeManager.

You can add the parameter mapreduce.map.memory.mb=3072 (3 GB) to try to fix
this problem.
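A minimal mapred-site.xml sketch of this suggestion: the 3072 MB container limit is the value proposed above, while the -Xmx setting is an illustrative choice kept below the container limit (not a tested value).

```xml
<!-- mapred-site.xml sketch: raise the map container limit to 3 GB and keep -->
<!-- the JVM heap below it, as suggested above. Exact values are            -->
<!-- illustrative assumptions, not tested on the poster's cluster.          -->
<property>
  <name>mapreduce.map.memory.mb</name>
  <value>3072</value>
</property>
<property>
  <name>mapreduce.map.java.opts</name>
  <value>-Xmx2560m</value>
</property>
```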

 

Liyin Liang

From: haihong lu [mailto:ung3210@gmail.com]
Sent: March 7, 2014 14:34
To: user@hadoop.apache.org
Subject: GC overhead limit exceeded

 



Re: GC overhead limit exceeded

Posted by shashwat shriparv <dw...@gmail.com>.
Check this out

http://ask.gopivotal.com/hc/en-us/articles/201850408-Namenode-fails-with-java-lang-OutOfMemoryError-GC-overhead-limit-exceeded



* Warm Regards_**∞_*
* Shashwat Shriparv*
 [image: http://www.linkedin.com/pub/shashwat-shriparv/19/214/2a9]<http://www.linkedin.com/pub/shashwat-shriparv/19/214/2a9>[image:
https://twitter.com/shriparv] <https://twitter.com/shriparv>[image:
https://www.facebook.com/shriparv] <https://www.facebook.com/shriparv>[image:
http://google.com/+ShashwatShriparv]
<http://google.com/+ShashwatShriparv>[image:
http://www.youtube.com/user/sShriparv/videos]<http://www.youtube.com/user/sShriparv/videos>[image:
http://profile.yahoo.com/SWXSTW3DVSDTF2HHSRM47AV6DI/] <sh...@yahoo.com>



On Fri, Mar 7, 2014 at 12:04 PM, haihong lu <un...@gmail.com> wrote:

> Hi:
>
>      i have a problem when run Hibench with hadoop-2.2.0, the wrong
> message list as below
>
> 14/03/07 13:54:53 INFO mapreduce.Job:  map 19% reduce 0%
> 14/03/07 13:54:54 INFO mapreduce.Job:  map 21% reduce 0%
> 14/03/07 14:00:26 INFO mapreduce.Job: Task Id :
> attempt_1394160253524_0010_m_000020_0, Status : FAILED
> Error: GC overhead limit exceeded
> 14/03/07 14:00:27 INFO mapreduce.Job:  map 20% reduce 0%
> 14/03/07 14:00:40 INFO mapreduce.Job: Task Id :
> attempt_1394160253524_0010_m_000008_0, Status : FAILED
> Error: GC overhead limit exceeded
> 14/03/07 14:00:41 INFO mapreduce.Job:  map 19% reduce 0%
> 14/03/07 14:00:59 INFO mapreduce.Job:  map 20% reduce 0%
> 14/03/07 14:00:59 INFO mapreduce.Job: Task Id :
> attempt_1394160253524_0010_m_000015_0, Status : FAILED
> Error: GC overhead limit exceeded
> 14/03/07 14:01:00 INFO mapreduce.Job:  map 19% reduce 0%
> 14/03/07 14:01:03 INFO mapreduce.Job: Task Id :
> attempt_1394160253524_0010_m_000023_0, Status : FAILED
> Error: GC overhead limit exceeded
> 14/03/07 14:01:11 INFO mapreduce.Job: Task Id :
> attempt_1394160253524_0010_m_000026_0, Status : FAILED
> Error: GC overhead limit exceeded
> 14/03/07 14:01:35 INFO mapreduce.Job:  map 20% reduce 0%
> 14/03/07 14:01:35 INFO mapreduce.Job: Task Id :
> attempt_1394160253524_0010_m_000019_0, Status : FAILED
> Error: GC overhead limit exceeded
> 14/03/07 14:01:36 INFO mapreduce.Job:  map 19% reduce 0%
> 14/03/07 14:01:43 INFO mapreduce.Job: Task Id :
> attempt_1394160253524_0010_m_000007_0, Status : FAILED
> Error: GC overhead limit exceeded
> 14/03/07 14:02:00 INFO mapreduce.Job: Task Id :
> attempt_1394160253524_0010_m_000000_0, Status : FAILED
> Error: GC overhead limit exceeded
> 14/03/07 14:02:01 INFO mapreduce.Job:  map 18% reduce 0%
> 14/03/07 14:02:23 INFO mapreduce.Job: Task Id :
> attempt_1394160253524_0010_m_000021_0, Status : FAILED
> Error: Java heap space
> 14/03/07 14:02:24 INFO mapreduce.Job:  map 17% reduce 0%
> 14/03/07 14:02:31 INFO mapreduce.Job:  map 18% reduce 0%
> 14/03/07 14:02:33 INFO mapreduce.Job: Task Id :
> attempt_1394160253524_0010_m_000029_0, Status : FAILED
> Error: GC overhead limit exceeded
> 14/03/07 14:02:34 INFO mapreduce.Job:  map 17% reduce 0%
> 14/03/07 14:02:38 INFO mapreduce.Job: Task Id :
> attempt_1394160253524_0010_m_000010_0, Status : FAILED
> Error: GC overhead limit exceeded
> 14/03/07 14:02:41 INFO mapreduce.Job: Task Id :
> attempt_1394160253524_0010_m_000018_0, Status : FAILED
> Error: GC overhead limit exceeded
> 14/03/07 14:02:43 INFO mapreduce.Job: Task Id :
> attempt_1394160253524_0010_m_000014_0, Status : FAILED
> Error: GC overhead limit exceeded
> 14/03/07 14:02:47 INFO mapreduce.Job: Task Id :
> attempt_1394160253524_0010_m_000028_0, Status : FAILED
> Error: Java heap space
> 14/03/07 14:02:50 INFO mapreduce.Job: Task Id :
> attempt_1394160253524_0010_m_000002_0, Status : FAILED
> Error: GC overhead limit exceeded
> 14/03/07 14:02:51 INFO mapreduce.Job:  map 16% reduce 0%
> 14/03/07 14:02:51 INFO mapreduce.Job: Task Id :
> attempt_1394160253524_0010_m_000005_0, Status : FAILED
> Error: GC overhead limit exceeded
> 14/03/07 14:02:52 INFO mapreduce.Job:  map 15% reduce 0%
> 14/03/07 14:02:55 INFO mapreduce.Job: Task Id :
> attempt_1394160253524_0010_m_000006_0, Status : FAILED
> Error: GC overhead limit exceeded
> 14/03/07 14:02:57 INFO mapreduce.Job: Task Id :
> attempt_1394160253524_0010_m_000027_0, Status : FAILED
> Error: GC overhead limit exceeded
> 14/03/07 14:02:58 INFO mapreduce.Job:  map 14% reduce 0%
> 14/03/07 14:03:04 INFO mapreduce.Job: Task Id :
> attempt_1394160253524_0010_m_000009_0, Status : FAILED
> Error: GC overhead limit exceeded
> 14/03/07 14:03:05 INFO mapreduce.Job: Task Id :
> attempt_1394160253524_0010_m_000017_0, Status : FAILED
> Error: GC overhead limit exceeded
> 14/03/07 14:03:05 INFO mapreduce.Job: Task Id :
> attempt_1394160253524_0010_m_000022_0, Status : FAILED
> Error: GC overhead limit exceeded
> 14/03/07 14:03:06 INFO mapreduce.Job:  map 12% reduce 0%
> 14/03/07 14:03:10 INFO mapreduce.Job: Task Id :
> attempt_1394160253524_0010_m_000001_0, Status : FAILED
> Error: GC overhead limit exceeded
> 14/03/07 14:03:11 INFO mapreduce.Job:  map 13% reduce 0%
> 14/03/07 14:03:11 INFO mapreduce.Job: Task Id :
> attempt_1394160253524_0010_m_000024_0, Status : FAILED
>
> and then i add a parameter "mapred.child.java.opts" to the file
> "mapred-site.xml",
>   <property>
>         <name>mapred.child.java.opts</name>
>         <value>-Xmx1024m</value>
>   </property>
> then another error occurs as below
>
> 14/03/07 11:21:51 INFO mapreduce.Job:  map 0% reduce 0%
> 14/03/07 11:21:59 INFO mapreduce.Job: Task Id :
> attempt_1394160253524_0003_m_000002_0, Status : FAILED
> Container [pid=5592,containerID=container_1394160253524_0003_01_000004] is
> running beyond virtual memory limits. Current usage: 112.6 MB of 1 GB
> physical memory used; 2.7 GB of
>
> 2.1 GB virtual memory used. Killing container.
> Dump of the process-tree for container_1394160253524_0003_01_000004 :
> |- PID PPID PGRPID SESSID CMD_NAME USER_MODE_TIME(MILLIS)
> SYSTEM_TIME(MILLIS) VMEM_USAGE(BYTES) RSSMEM_USAGE(PAGES) FULL_CMD_LINE
>  |- 5598 5592 5592 5592 (java) 563 14 2778632192 28520
> /usr/java/jdk1.7.0_45/bin/java -Djava.net.preferIPv4Stack=true
> -Dhadoop.metrics.log.level=WARN -Xmx2048m -
>
> Djava.io.tmpdir=/home/hadoop/tmp/nm-local-dir/usercache/hadoop/appcache/application_1394160253524_0003/container_1394160253524_0003_01_000004/tmp
> -
>
> Dlog4j.configuration=container-log4j.properties
> -Dyarn.app.container.log.dir=/var/log/hadoop/yarn/userlogs/application_1394160253524_0003/container_1394160253524_0003_01_000004
>
> -Dyarn.app.container.log.filesize=0 -Dhadoop.root.logger=INFO,CLA
> org.apache.hadoop.mapred.YarnChild 10.239.44.34 46837
> attempt_1394160253524_0003_m_000002_0 4
> |- 5592 4562 5592 5592 (bash) 0 0 108650496 300 /bin/bash -c
> /usr/java/jdk1.7.0_45/bin/java -Djava.net.preferIPv4Stack=true
> -Dhadoop.metrics.log.level=WARN  -Xmx2048m -
>
> Djava.io.tmpdir=/home/hadoop/tmp/nm-local-dir/usercache/hadoop/appcache/application_1394160253524_0003/container_1394160253524_0003_01_000004/tmp
> -
>
> Dlog4j.configuration=container-log4j.properties
> -Dyarn.app.container.log.dir=/var/log/hadoop/yarn/userlogs/application_1394160253524_0003/container_1394160253524_0003_01_000004
>
> -Dyarn.app.container.log.filesize=0 -Dhadoop.root.logger=INFO,CLA
> org.apache.hadoop.mapred.YarnChild 10.239.44.34 46837
> attempt_1394160253524_0003_m_000002_0 4
>
>
> 1>/var/log/hadoop/yarn/userlogs/application_1394160253524_0003/container_1394160253524_0003_01_000004/stdout
>
> 2>/var/log/hadoop/yarn/userlogs/application_1394160253524_0003/container_1394160253524_0003_01_000004/stderr
>
>
> Container killed on request. Exit code is 143
> 14/03/07 11:22:02 INFO mapreduce.Job: Task Id :
> attempt_1394160253524_0003_m_000001_0, Status : FAILED
> Container [pid=5182,containerID=container_1394160253524_0003_01_000003] is
> running beyond virtual memory limits. Current usage: 118.5 MB of 1 GB
> physical memory used; 2.7 GB of
>
> 2.1 GB virtual memory used. Killing container.
> Dump of the process-tree for container_1394160253524_0003_01_000003 :
> |- PID PPID PGRPID SESSID CMD_NAME USER_MODE_TIME(MILLIS)
> SYSTEM_TIME(MILLIS) VMEM_USAGE(BYTES) RSSMEM_USAGE(PAGES) FULL_CMD_LINE
>  |- 5182 4313 5182 5182 (bash) 0 0 108650496 303 /bin/bash -c
> /usr/java/jdk1.7.0_45/bin/java -Djava.net.preferIPv4Stack=true
> -Dhadoop.metrics.log.level=WARN -Xmx2048m
> -Djava.io.tmpdir=/home/hadoop/tmp/nm-local-dir/usercache/hadoop/appcache/application_1394160253524_0003/container_1394160253524_0003_01_000003/tmp
> -Dlog4j.configuration=container-log4j.properties
> -Dyarn.app.container.log.dir=/var/log/hadoop/yarn/userlogs/application_1394160253524_0003/container_1394160253524_0003_01_000003
> -Dyarn.app.container.log.filesize=0 -Dhadoop.root.logger=INFO,CLA
> org.apache.hadoop.mapred.YarnChild 10.239.44.34 46837
> attempt_1394160253524_0003_m_000001_0 3
> 1>/var/log/hadoop/yarn/userlogs/application_1394160253524_0003/container_1394160253524_0003_01_000003/stdout
> 2>/var/log/hadoop/yarn/userlogs/application_1394160253524_0003/container_1394160253524_0003_01_000003/stderr
> |- 5187 5182 5182 5182 (java) 616 19 2783928320 30028
> /usr/java/jdk1.7.0_45/bin/java -Djava.net.preferIPv4Stack=true
> -Dhadoop.metrics.log.level=WARN -Xmx2048m
> -Djava.io.tmpdir=/home/hadoop/tmp/nm-local-dir/usercache/hadoop/appcache/application_1394160253524_0003/container_1394160253524_0003_01_000003/tmp
> -Dlog4j.configuration=container-log4j.properties
> -Dyarn.app.container.log.dir=/var/log/hadoop/yarn/userlogs/application_1394160253524_0003/container_1394160253524_0003_01_000003
> -Dyarn.app.container.log.filesize=0 -Dhadoop.root.logger=INFO,CLA
> org.apache.hadoop.mapred.YarnChild 10.239.44.34 46837
> attempt_1394160253524_0003_m_000001_0 3
>
> Container killed on request. Exit code is 143
>
> at last, the task failed.
> Thanks for any help!
>

Re: GC overhead limit exceeded

Posted by 梁李印 <li...@aliyun-inc.com>.
By default, if mapred.child.java.opts=-Xmx1024m, the memory limit for
your task container is 2 GB. If your map task uses more than 2 GB, the
map container will be killed by the NodeManager.

You can add the parameter mapreduce.map.memory.mb=3072 (3 GB) to try to
fix this problem.
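As a sketch of how that advice might look in mapred-site.xml (the 3072 MB container size and the 2560 MB heap below are illustrative values, not taken from this thread; the heap is kept below the container limit so the JVM's total footprint fits):

```xml
<!-- Illustrative mapred-site.xml fragment; values are examples only. -->
<property>
  <name>mapreduce.map.memory.mb</name>
  <!-- Size of the YARN container granted to each map task, in MB. -->
  <value>3072</value>
</property>
<property>
  <name>mapreduce.map.java.opts</name>
  <!-- JVM heap for the map task, kept below the container limit
       to leave headroom for non-heap JVM memory. -->
  <value>-Xmx2560m</value>
</property>
```

In Hadoop 2.x, mapreduce.map.java.opts supersedes the older mapred.child.java.opts for map tasks; keeping -Xmx a few hundred MB under mapreduce.map.memory.mb leaves room for the JVM's stacks, metaspace, and native allocations.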

 

Liyin Liang

From: haihong lu [mailto:ung3210@gmail.com]
Sent: March 7, 2014 14:34
To: user@hadoop.apache.org
Subject: GC overhead limit exceeded

 

Hi,

     I have a problem when running HiBench with hadoop-2.2.0; the error
messages are listed below.

 

14/03/07 13:54:53 INFO mapreduce.Job:  map 19% reduce 0%

14/03/07 13:54:54 INFO mapreduce.Job:  map 21% reduce 0%

14/03/07 14:00:26 INFO mapreduce.Job: Task Id :
attempt_1394160253524_0010_m_000020_0, Status : FAILED

Error: GC overhead limit exceeded

14/03/07 14:00:27 INFO mapreduce.Job:  map 20% reduce 0%

14/03/07 14:00:40 INFO mapreduce.Job: Task Id :
attempt_1394160253524_0010_m_000008_0, Status : FAILED

Error: GC overhead limit exceeded

14/03/07 14:00:41 INFO mapreduce.Job:  map 19% reduce 0%

14/03/07 14:00:59 INFO mapreduce.Job:  map 20% reduce 0%

14/03/07 14:00:59 INFO mapreduce.Job: Task Id :
attempt_1394160253524_0010_m_000015_0, Status : FAILED

Error: GC overhead limit exceeded

14/03/07 14:01:00 INFO mapreduce.Job:  map 19% reduce 0%

14/03/07 14:01:03 INFO mapreduce.Job: Task Id :
attempt_1394160253524_0010_m_000023_0, Status : FAILED

Error: GC overhead limit exceeded

14/03/07 14:01:11 INFO mapreduce.Job: Task Id :
attempt_1394160253524_0010_m_000026_0, Status : FAILED

Error: GC overhead limit exceeded

14/03/07 14:01:35 INFO mapreduce.Job:  map 20% reduce 0%

14/03/07 14:01:35 INFO mapreduce.Job: Task Id :
attempt_1394160253524_0010_m_000019_0, Status : FAILED

Error: GC overhead limit exceeded

14/03/07 14:01:36 INFO mapreduce.Job:  map 19% reduce 0%

14/03/07 14:01:43 INFO mapreduce.Job: Task Id :
attempt_1394160253524_0010_m_000007_0, Status : FAILED

Error: GC overhead limit exceeded

14/03/07 14:02:00 INFO mapreduce.Job: Task Id :
attempt_1394160253524_0010_m_000000_0, Status : FAILED

Error: GC overhead limit exceeded

14/03/07 14:02:01 INFO mapreduce.Job:  map 18% reduce 0%

14/03/07 14:02:23 INFO mapreduce.Job: Task Id :
attempt_1394160253524_0010_m_000021_0, Status : FAILED

Error: Java heap space

14/03/07 14:02:24 INFO mapreduce.Job:  map 17% reduce 0%

14/03/07 14:02:31 INFO mapreduce.Job:  map 18% reduce 0%

14/03/07 14:02:33 INFO mapreduce.Job: Task Id :
attempt_1394160253524_0010_m_000029_0, Status : FAILED

Error: GC overhead limit exceeded

14/03/07 14:02:34 INFO mapreduce.Job:  map 17% reduce 0%

14/03/07 14:02:38 INFO mapreduce.Job: Task Id :
attempt_1394160253524_0010_m_000010_0, Status : FAILED

Error: GC overhead limit exceeded

14/03/07 14:02:41 INFO mapreduce.Job: Task Id :
attempt_1394160253524_0010_m_000018_0, Status : FAILED

Error: GC overhead limit exceeded

14/03/07 14:02:43 INFO mapreduce.Job: Task Id :
attempt_1394160253524_0010_m_000014_0, Status : FAILED

Error: GC overhead limit exceeded

14/03/07 14:02:47 INFO mapreduce.Job: Task Id :
attempt_1394160253524_0010_m_000028_0, Status : FAILED

Error: Java heap space

14/03/07 14:02:50 INFO mapreduce.Job: Task Id :
attempt_1394160253524_0010_m_000002_0, Status : FAILED

Error: GC overhead limit exceeded

14/03/07 14:02:51 INFO mapreduce.Job:  map 16% reduce 0%

14/03/07 14:02:51 INFO mapreduce.Job: Task Id :
attempt_1394160253524_0010_m_000005_0, Status : FAILED

Error: GC overhead limit exceeded

14/03/07 14:02:52 INFO mapreduce.Job:  map 15% reduce 0%

14/03/07 14:02:55 INFO mapreduce.Job: Task Id :
attempt_1394160253524_0010_m_000006_0, Status : FAILED

Error: GC overhead limit exceeded

14/03/07 14:02:57 INFO mapreduce.Job: Task Id :
attempt_1394160253524_0010_m_000027_0, Status : FAILED

Error: GC overhead limit exceeded

14/03/07 14:02:58 INFO mapreduce.Job:  map 14% reduce 0%

14/03/07 14:03:04 INFO mapreduce.Job: Task Id :
attempt_1394160253524_0010_m_000009_0, Status : FAILED

Error: GC overhead limit exceeded

14/03/07 14:03:05 INFO mapreduce.Job: Task Id :
attempt_1394160253524_0010_m_000017_0, Status : FAILED

Error: GC overhead limit exceeded

14/03/07 14:03:05 INFO mapreduce.Job: Task Id :
attempt_1394160253524_0010_m_000022_0, Status : FAILED

Error: GC overhead limit exceeded

14/03/07 14:03:06 INFO mapreduce.Job:  map 12% reduce 0%

14/03/07 14:03:10 INFO mapreduce.Job: Task Id :
attempt_1394160253524_0010_m_000001_0, Status : FAILED

Error: GC overhead limit exceeded

14/03/07 14:03:11 INFO mapreduce.Job:  map 13% reduce 0%

14/03/07 14:03:11 INFO mapreduce.Job: Task Id :
attempt_1394160253524_0010_m_000024_0, Status : FAILED

 

I then added the parameter "mapred.child.java.opts" to the file
"mapred-site.xml":

  <property>

        <name>mapred.child.java.opts</name>

        <value>-Xmx1024m</value>

  </property>

Another error then occurred, shown below:

 

14/03/07 11:21:51 INFO mapreduce.Job:  map 0% reduce 0%

14/03/07 11:21:59 INFO mapreduce.Job: Task Id :
attempt_1394160253524_0003_m_000002_0, Status : FAILED

Container [pid=5592,containerID=container_1394160253524_0003_01_000004] is
running beyond virtual memory limits. Current usage: 112.6 MB of 1 GB
physical memory used; 2.7 GB of 2.1 GB virtual memory used. Killing container.

Dump of the process-tree for container_1394160253524_0003_01_000004 :

       |- PID PPID PGRPID SESSID CMD_NAME USER_MODE_TIME(MILLIS)
SYSTEM_TIME(MILLIS) VMEM_USAGE(BYTES) RSSMEM_USAGE(PAGES) FULL_CMD_LINE

       |- 5598 5592 5592 5592 (java) 563 14 2778632192 28520
/usr/java/jdk1.7.0_45/bin/java -Djava.net.preferIPv4Stack=true
-Dhadoop.metrics.log.level=WARN -Xmx2048m
-Djava.io.tmpdir=/home/hadoop/tmp/nm-local-dir/usercache/hadoop/appcache/application_1394160253524_0003/container_1394160253524_0003_01_000004/tmp
-Dlog4j.configuration=container-log4j.properties
-Dyarn.app.container.log.dir=/var/log/hadoop/yarn/userlogs/application_1394160253524_0003/container_1394160253524_0003_01_000004
-Dyarn.app.container.log.filesize=0 -Dhadoop.root.logger=INFO,CLA
org.apache.hadoop.mapred.YarnChild 10.239.44.34 46837
attempt_1394160253524_0003_m_000002_0 4

       |- 5592 4562 5592 5592 (bash) 0 0 108650496 300 /bin/bash -c
/usr/java/jdk1.7.0_45/bin/java -Djava.net.preferIPv4Stack=true
-Dhadoop.metrics.log.level=WARN -Xmx2048m
-Djava.io.tmpdir=/home/hadoop/tmp/nm-local-dir/usercache/hadoop/appcache/application_1394160253524_0003/container_1394160253524_0003_01_000004/tmp
-Dlog4j.configuration=container-log4j.properties
-Dyarn.app.container.log.dir=/var/log/hadoop/yarn/userlogs/application_1394160253524_0003/container_1394160253524_0003_01_000004
-Dyarn.app.container.log.filesize=0 -Dhadoop.root.logger=INFO,CLA
org.apache.hadoop.mapred.YarnChild 10.239.44.34 46837
attempt_1394160253524_0003_m_000002_0 4
1>/var/log/hadoop/yarn/userlogs/application_1394160253524_0003/container_1394160253524_0003_01_000004/stdout
2>/var/log/hadoop/yarn/userlogs/application_1394160253524_0003/container_1394160253524_0003_01_000004/stderr

Container killed on request. Exit code is 143

14/03/07 11:22:02 INFO mapreduce.Job: Task Id :
attempt_1394160253524_0003_m_000001_0, Status : FAILED

Container [pid=5182,containerID=container_1394160253524_0003_01_000003] is
running beyond virtual memory limits. Current usage: 118.5 MB of 1 GB
physical memory used; 2.7 GB of 2.1 GB virtual memory used. Killing container.

Dump of the process-tree for container_1394160253524_0003_01_000003 :

       |- PID PPID PGRPID SESSID CMD_NAME USER_MODE_TIME(MILLIS)
SYSTEM_TIME(MILLIS) VMEM_USAGE(BYTES) RSSMEM_USAGE(PAGES) FULL_CMD_LINE

       |- 5182 4313 5182 5182 (bash) 0 0 108650496 303 /bin/bash -c
/usr/java/jdk1.7.0_45/bin/java -Djava.net.preferIPv4Stack=true
-Dhadoop.metrics.log.level=WARN -Xmx2048m
-Djava.io.tmpdir=/home/hadoop/tmp/nm-local-dir/usercache/hadoop/appcache/application_1394160253524_0003/container_1394160253524_0003_01_000003/tmp
-Dlog4j.configuration=container-log4j.properties
-Dyarn.app.container.log.dir=/var/log/hadoop/yarn/userlogs/application_1394160253524_0003/container_1394160253524_0003_01_000003
-Dyarn.app.container.log.filesize=0 -Dhadoop.root.logger=INFO,CLA
org.apache.hadoop.mapred.YarnChild 10.239.44.34 46837
attempt_1394160253524_0003_m_000001_0 3
1>/var/log/hadoop/yarn/userlogs/application_1394160253524_0003/container_1394160253524_0003_01_000003/stdout
2>/var/log/hadoop/yarn/userlogs/application_1394160253524_0003/container_1394160253524_0003_01_000003/stderr

       |- 5187 5182 5182 5182 (java) 616 19 2783928320 30028
/usr/java/jdk1.7.0_45/bin/java -Djava.net.preferIPv4Stack=true
-Dhadoop.metrics.log.level=WARN -Xmx2048m
-Djava.io.tmpdir=/home/hadoop/tmp/nm-local-dir/usercache/hadoop/appcache/application_1394160253524_0003/container_1394160253524_0003_01_000003/tmp
-Dlog4j.configuration=container-log4j.properties
-Dyarn.app.container.log.dir=/var/log/hadoop/yarn/userlogs/application_1394160253524_0003/container_1394160253524_0003_01_000003
-Dyarn.app.container.log.filesize=0 -Dhadoop.root.logger=INFO,CLA
org.apache.hadoop.mapred.YarnChild 10.239.44.34 46837
attempt_1394160253524_0003_m_000001_0 3

Container killed on request. Exit code is 143

 

In the end, the task failed.

Thanks for any help!

