Posted to common-user@hadoop.apache.org by JR Wang <je...@163.com> on 2016/04/29 04:09:50 UTC

Yarn container use huge virtual memory with JVM option -XX:+PrintGCDetails added.

Hi All,

I’m currently running Hadoop 2.7.2 on my three-node cluster; each node is equipped with a 32-core CPU and 64 GB of memory, running Ubuntu 14.04.3 LTS. YARN’s configuration is kept at the defaults, except that yarn.nodemanager.resource.memory-mb is set to 16384 (16 GB).

Everything worked fine, but when I try to run a Map/Reduce task with the JVM option mapreduce.map.java.opts=-XX:+PrintGCDetails, the task fails with an error message saying the container is running beyond virtual memory limits: "Current usage: 212.0 MB of 1 GB physical memory used; 17.9 GB of 2.1 GB virtual memory used. Killing container."
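As far as I understand, the 2.1 GB limit in that message comes from the container’s 1 GB physical allocation multiplied by yarn.nodemanager.vmem-pmem-ratio, whose default is 2.1. If I wanted to keep the check enabled, I suppose the ratio could be raised in yarn-site.xml with something like this (10 is only an illustrative value):

    <property>
        <name>yarn.nodemanager.vmem-pmem-ratio</name>
        <!-- allow each container up to 10x its physical allocation as virtual memory; illustrative value only -->
        <value>10</value>
    </property>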

Actually, I was just running the Map/Reduce example pi with a tiny calculation: hadoop jar hadoop-mapreduce-examples-2.7.2.jar pi -D mapreduce.map.java.opts=-XX:+PrintGCDetails 5 100.
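In case it matters, I believe more than one JVM option can be passed by quoting the whole -D value, so an explicit heap size could be kept alongside the GC flag, e.g. (the -Xmx200m value here is only an illustration):

    hadoop jar hadoop-mapreduce-examples-2.7.2.jar pi \
        -D "mapreduce.map.java.opts=-Xmx200m -XX:+PrintGCDetails" 5 100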

I’ve tried disabling yarn.nodemanager.vmem-check-enabled, but as the calculation gets larger, the error happens again.

I am really confused about why so much virtual memory is allocated when the JVM option -XX:+PrintGCDetails is added. Please help!

Thank You,
JR

Error Message

16/04/29 09:43:47 INFO mapreduce.Job: Task Id : attempt_1461846504106_0012_m_000004_1, Status : FAILED
Container [pid=16629,containerID=container_1461846504106_0012_01_000010] is running beyond virtual memory limits. Current usage: 212.0 MB of 1 GB physical memory used; 17.9 GB of 2.1 GB virtual memory used. Killing container.
Dump of the process-tree for container_1461846504106_0012_01_000010 :
  |- PID PPID PGRPID SESSID CMD_NAME USER_MODE_TIME(MILLIS) SYSTEM_TIME(MILLIS) VMEM_USAGE(BYTES) RSSMEM_USAGE(PAGES) FULL_CMD_LINE
  |- 16629 16627 16629 16629 (bash) 0 0 17051648 667 /bin/bash -c /usr/lib/jvm/jdk8/bin/java -Djava.net.preferIPv4Stack=true -Dhadoop.metrics.log.level=WARN  -XX:+PrintGCDetails -Djava.io.tmpdir=/opt/hadoop-2.7.2/tmp/nodemanager/local/usercache/hadoop/appcache/application_1461846504106_0012/container_1461846504106_0012_01_000010/tmp -Dlog4j.configuration=container-log4j.properties -Dyarn.app.container.log.dir=/opt/hadoop-2.7.2/tmp/nodemanager/logs/application_1461846504106_0012/container_1461846504106_0012_01_000010 -Dyarn.app.container.log.filesize=0 -Dhadoop.root.logger=INFO,CLA -Dhadoop.root.logfile=syslog org.apache.hadoop.mapred.YarnChild 10.13.71.82 56755 attempt_1461846504106_0012_m_000004_1 10 1>/opt/hadoop-2.7.2/tmp/nodemanager/logs/application_1461846504106_0012/container_1461846504106_0012_01_000010/stdout 2>/opt/hadoop-2.7.2/tmp/nodemanager/logs/application_1461846504106_0012/container_1461846504106_0012_01_000010/stderr  
  |- 16633 16629 16629 16629 (java) 399 25 19239059456 53616 /usr/lib/jvm/jdk8/bin/java -Djava.net.preferIPv4Stack=true -Dhadoop.metrics.log.level=WARN -XX:+PrintGCDetails -Djava.io.tmpdir=/opt/hadoop-2.7.2/tmp/nodemanager/local/usercache/hadoop/appcache/application_1461846504106_0012/container_1461846504106_0012_01_000010/tmp -Dlog4j.configuration=container-log4j.properties -Dyarn.app.container.log.dir=/opt/hadoop-2.7.2/tmp/nodemanager/logs/application_1461846504106_0012/container_1461846504106_0012_01_000010 -Dyarn.app.container.log.filesize=0 -Dhadoop.root.logger=INFO,CLA -Dhadoop.root.logfile=syslog org.apache.hadoop.mapred.YarnChild 10.13.71.82 56755 attempt_1461846504106_0012_m_000004_1 10

yarn-site.xml

<configuration><!-- Site specific YARN configuration properties -->
    <property>
        <name>yarn.nodemanager.aux-services</name>
        <value>mapreduce_shuffle</value>
    </property>
    <property>
        <name>yarn.nodemanager.aux-services.mapreduce.shuffle.class</name>
        <value>org.apache.hadoop.mapred.ShuffleHandler</value>
    </property>
    <property>
        <name>yarn.resourcemanager.address</name>
        <value>namenode:8032</value>
    </property>
    <property>
        <name>yarn.resourcemanager.scheduler.address</name>
        <value>namenode:8030</value>
    </property>
    <property>
        <name>yarn.resourcemanager.resource-tracker.address</name>
        <value>namenode:8031</value>
    </property>
    <property>
        <name>yarn.resourcemanager.admin.address</name>
        <value>namenode:8033</value>
    </property>
    <property>
        <name>yarn.resourcemanager.webapp.address</name>
        <value>namenode:8088</value>
    </property>
    <property>
        <name>yarn.nodemanager.local-dirs</name>
        <value>${hadoop.tmp.dir}/nodemanager/local</value>
    </property>
    <property>
        <name>yarn.nodemanager.log-dirs</name>
        <value>${hadoop.tmp.dir}/nodemanager/logs</value>
    </property>  
    <property>
        <name>yarn.nodemanager.log.retain-seconds</name>
        <value>10800</value>
    </property>
    <property>
        <!-- Where to aggregate logs to. -->
        <name>yarn.nodemanager.remote-app-log-dir</name>
        <value>${hadoop.tmp.dir}/nodemanager/remote</value>
    </property>
    <property>
        <name>yarn.nodemanager.remote-app-log-dir-suffix</name>
        <value>logs</value>
    </property>
    <property>
        <!-- enable log aggregation -->
        <name>yarn.log-aggregation-enable</name>
        <value>true</value>
    </property>
    <property>
        <!-- How long to keep aggregation logs before deleting them. -1 disables. -->
        <name>yarn.log-aggregation.retain-seconds</name>
        <!-- 30 days -->
        <value>2592000</value>
    </property>
    <property>
        <!-- How long to wait between aggregated log retention checks. If set to 0 or a negative value then the value is computed as one-tenth of the aggregated log retention time.  -->
        <name>yarn.log-aggregation.retain-check-interval-seconds</name>
        <value>-1</value>
    </property>
    <property>
        <name>yarn.nodemanager.resource.memory-mb</name>
        <value>16384</value>
    </property>
    <property>
        <name>yarn.nodemanager.vmem-check-enabled</name>
        <value>true</value>
    </property></configuration>

Re:Re: Yarn container use huge virtual memory with JVM option -XX:+PrintGCDetails added.

Posted by JR Wang <je...@163.com>.
Hi,


I did enable the vmem check. But what I really want to know is why so much virtual memory (17.9 GB) is used when the JVM option -XX:+PrintGCDetails is added.


The application I ran was hadoop jar hadoop-mapreduce-examples-2.7.2.jar pi -D mapreduce.map.java.opts=-XX:+PrintGCDetails 5 100.


If I remove the JVM option -XX:+PrintGCDetails, the application completes normally.
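My guess, which I have not confirmed, is that overriding mapreduce.map.java.opts on the command line drops the heap size that would otherwise be set, so the JVM falls back to its own default maximum heap (roughly a quarter of the 64 GB of RAM on these nodes) and reserves that much virtual address space up front. If that sounds plausible, I was planning to check the running YarnChild process on the node with something like:

    pmap -x <pid of the YarnChild java process> | sort -n -k2 | tail

to see which mappings account for the ~18 GB of virtual memory.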


Thanks
JR




On 2016-05-01 at 16:19, "Varun Vasudev" <vv...@apache.org> wrote:


Hello!


From the attached yarn-site.xml it looks like the vmem check is enabled. The value should be set to false and the nodemanager restarted.


    <property>
        <name>yarn.nodemanager.vmem-check-enabled</name>
        <value>true</value>
    </property>


-Varun



Re: Yarn container use huge virtual memory with JVM option -XX:+PrintGCDetails added.

Posted by Varun Vasudev <vv...@apache.org>.
Hello!

From the attached yarn-site.xml it looks like the vmem check is enabled. The value should be set to false and the nodemanager restarted.

    <property>
        <name>yarn.nodemanager.vmem-check-enabled</name>
        <value>true</value>
    </property>
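For reference, the changed form would be the following, followed by a NodeManager restart (the sbin/yarn-daemon.sh invocation below assumes a standard Hadoop 2.7 layout):

    <property>
        <name>yarn.nodemanager.vmem-check-enabled</name>
        <!-- turn off the NodeManager's virtual memory check -->
        <value>false</value>
    </property>

    $ sbin/yarn-daemon.sh stop nodemanager
    $ sbin/yarn-daemon.sh start nodemanager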

-Varun
