Posted to mapreduce-user@hadoop.apache.org by "S.L" <si...@gmail.com> on 2014/01/02 04:50:52 UTC

Unable to change the virtual memory to be more than the default 2.1 GB

Hello Folks,

I am running hadoop 2.2 in a pseudo-distributed mode on a laptop with 8GB
RAM.

Whenever I submit a job, I get an error saying that the virtual memory usage
was exceeded, as shown below.

I have changed the ratio yarn.nodemanager.vmem-pmem-ratio in yarn-site.xml
to 10; however, the virtual memory limit is not increasing beyond 2.1 GB, as
can be seen in the error message below, and the container is being killed.

Can someone please let me know if there is any other setting that needs to
be changed? Thanks in advance!
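For reference, the two yarn-site.xml properties discussed in this thread,
shown together as a sketch (the values are the ones tried or suggested in the
thread; disabling the check removes the virtual-memory safety net entirely):

```xml
<!-- Sketch of the relevant yarn-site.xml entries from this thread. -->
<property>
  <name>yarn.nodemanager.vmem-pmem-ratio</name>
  <value>10</value>
</property>
<property>
  <name>yarn.nodemanager.vmem-check-enabled</name>
  <value>false</value>
</property>
```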

*Error Message :*

INFO mapreduce.Job: Task Id : attempt_1388632710048_0009_m_000000_2, Status
: FAILED
Container [pid=12013,containerID=container_1388632710048_0009_01_000004] is
running beyond virtual memory limits. Current usage: 544.9 MB of 1 GB
physical memory used; 14.5 GB of 2.1 GB virtual memory used. Killing
container.
Dump of the process-tree for container_1388632710048_0009_01_000004 :
    |- PID PPID PGRPID SESSID CMD_NAME USER_MODE_TIME(MILLIS)
SYSTEM_TIME(MILLIS) VMEM_USAGE(BYTES) RSSMEM_USAGE(PAGES) FULL_CMD_LINE
    |- 12077 12018 12013 12013 (phantomjs) 16 2 1641000960 6728
/usr/local/bin/phantomjs --webdriver=15358
--webdriver-logfile=/tmp/hadoop-general/nm-local-dir/usercache/general/appcache/application_1388632710048_0009/container_1388632710048_0009_01_000004/phantomjsdriver.log

    |- 12013 882 12013 12013 (bash) 1 0 108650496 305 /bin/bash -c
/usr/java/jdk1.7.0_25/bin/java -Djava.net.preferIPv4Stack=true
-Dhadoop.metrics.log.level=WARN  -Xmx200m
-Djava.io.tmpdir=/tmp/hadoop-general/nm-local-dir/usercache/general/appcache/application_1388632710048_0009/container_1388632710048_0009_01_000004/tmp
-Dlog4j.configuration=container-log4j.properties
-Dyarn.app.container.log.dir=/home/general/hadoop-2.2.0/logs/userlogs/application_1388632710048_0009/container_1388632710048_0009_01_000004
-Dyarn.app.container.log.filesize=0 -Dhadoop.root.logger=INFO,CLA
org.apache.hadoop.mapred.YarnChild 127.0.0.1 56498
attempt_1388632710048_0009_m_000000_2 4
1>/home/general/hadoop-2.2.0/logs/userlogs/application_1388632710048_0009/container_1388632710048_0009_01_000004/stdout
2>/home/general/hadoop-2.2.0/logs/userlogs/application_1388632710048_0009/container_1388632710048_0009_01_000004/stderr

    |- 12075 12018 12013 12013 (phantomjs) 17 1 1615687680 6539
/usr/local/bin/phantomjs --webdriver=29062
--webdriver-logfile=/tmp/hadoop-general/nm-local-dir/usercache/general/appcache/application_1388632710048_0009/container_1388632710048_0009_01_000004/phantomjsdriver.log

    |- 12074 12018 12013 12013 (phantomjs) 16 2 1641000960 6727
/usr/local/bin/phantomjs --webdriver=5958
--webdriver-logfile=/tmp/hadoop-general/nm-local-dir/usercache/general/appcache/application_1388632710048_0009/container_1388632710048_0009_01_000004/phantomjsdriver.log

    |- 12073 12018 12013 12013 (phantomjs) 17 2 1641000960 6732
/usr/local/bin/phantomjs --webdriver=31836
--webdriver-logfile=/tmp/hadoop-general/nm-local-dir/usercache/general/appcache/application_1388632710048_0009/container_1388632710048_0009_01_000004/phantomjsdriver.log

    |- 12090 12018 12013 12013 (phantomjs) 16 2 1615687680 6538
/usr/local/bin/phantomjs --webdriver=24519
--webdriver-logfile=/tmp/hadoop-general/nm-local-dir/usercache/general/appcache/application_1388632710048_0009/container_1388632710048_0009_01_000004/phantomjsdriver.log

    |- 12072 12018 12013 12013 (phantomjs) 16 1 1641000960 6216
/usr/local/bin/phantomjs --webdriver=10175
--webdriver-logfile=/tmp/hadoop-general/nm-local-dir/usercache/general/appcache/application_1388632710048_0009/container_1388632710048_0009_01_000004/phantomjsdriver.log

    |- 12091 12018 12013 12013 (phantomjs) 17 1 1615687680 6036
/usr/local/bin/phantomjs --webdriver=5043
--webdriver-logfile=/tmp/hadoop-general/nm-local-dir/usercache/general/appcache/application_1388632710048_0009/container_1388632710048_0009_01_000004/phantomjsdriver.log

    |- 12018 12013 12013 12013 (java) 996 41 820924416 79595
/usr/java/jdk1.7.0_25/bin/java -Djava.net.preferIPv4Stack=true
-Dhadoop.metrics.log.level=WARN -Xmx200m
-Djava.io.tmpdir=/tmp/hadoop-general/nm-local-dir/usercache/general/appcache/application_1388632710048_0009/container_1388632710048_0009_01_000004/tmp
-Dlog4j.configuration=container-log4j.properties
-Dyarn.app.container.log.dir=/home/general/hadoop-2.2.0/logs/userlogs/application_1388632710048_0009/container_1388632710048_0009_01_000004
-Dyarn.app.container.log.filesize=0 -Dhadoop.root.logger=INFO,CLA
org.apache.hadoop.mapred.YarnChild 127.0.0.1 56498
attempt_1388632710048_0009_m_000000_2 4
    |- 12078 12018 12013 12013 (phantomjs) 16 3 1615687680 6545
/usr/local/bin/phantomjs --webdriver=12650
--webdriver-logfile=/tmp/hadoop-general/nm-local-dir/usercache/general/appcache/application_1388632710048_0009/container_1388632710048_0009_01_000004/phantomjsdriver.log

    |- 12079 12018 12013 12013 (phantomjs) 17 2 1642020864 7542
/usr/local/bin/phantomjs --webdriver=18444
--webdriver-logfile=/tmp/hadoop-general/nm-local-dir/usercache/general/appcache/application_1388632710048_0009/container_1388632710048_0009_01_000004/phantomjsdriver.log


Container killed on request. Exit code is 143
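Exit code 143 follows the usual 128 + signal-number convention, indicating
the container was terminated with SIGTERM (signal 15) by the NodeManager
rather than the JVM crashing on its own:

```python
import signal

# Shells and YARN report a process killed by signal N with exit code 128 + N.
# SIGTERM is signal 15 on Linux, so "Exit code is 143" means the container
# received SIGTERM (143 = 128 + 15).
print(128 + int(signal.SIGTERM))  # 143
```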

Re: Unable to change the virtual memory to be more than the default 2.1 GB

Posted by "S.L" <si...@gmail.com>.
Hi German, Thanks for your reply!

a) Yes, setting the property yarn.nodemanager.vmem-check-enabled to false
seems to have avoided the problem.

b) I would want to set the pmem/vmem ratio to a higher value and keep the
virtual memory within certain limits, but changing this value is not having
any effect on Hadoop 2.2 YARN.

c) Why would virtual memory increase while the physical memory stays the
same? What might cause this to happen in YARN?
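On question (c): each phantomjs process in the dump reserves roughly 1.6 GB
of virtual address space (VMEM_USAGE) while holding only about 6,500 resident
pages (~25 MB), because mmap'd-but-untouched regions, thread stacks, and JIT
arenas grow virtual size without growing resident memory. A Linux-only sketch
(hypothetical helper) of how to observe the gap via /proc:

```python
# Compare a process's virtual and resident memory from /proc/<pid>/status:
# VmSize is the total virtual address space, VmRSS the pages actually in RAM.
def vm_stats(pid="self"):
    stats = {}
    with open(f"/proc/{pid}/status") as f:
        for line in f:
            if line.startswith(("VmSize:", "VmRSS:")):
                key, value = line.split(":")
                stats[key] = int(value.split()[0])  # values are in kB
    return stats

print(vm_stats())  # VmSize is typically several times VmRSS
```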

Thanks.


On Thu, Jan 2, 2014 at 11:18 AM, German Florez-Larrahondo <
german.fl@samsung.com> wrote:

> A few things you can try
>
>
>
> a)      If you don't care about virtual memory controls at all, you can
> bypass them by making the following change in the XML and restarting YARN.
> Only you know if this is OK for the application you are running (IMO the
> virtual memory being used is huge!)
>
>     <property>
>
>         <name>yarn.nodemanager.vmem-check-enabled</name>
>
>         <value>false</value>
>
>     </property>
>
> b)      If you still want to control the pmem/vmem ratio, did you restart
> YARN after making the change in the XML file?
>
>
>
>
>
> Regards./g
>
>
>
> *From:* S.L [mailto:simpleliving016@gmail.com]
> *Sent:* Wednesday, January 01, 2014 9:51 PM
> *To:* user@hadoop.apache.org
> *Subject:* Unable to change the virtual memory to be more than the
> default 2.1 GB
>
>
>
> Hello Folks,
>
> I am running hadoop 2.2 in a pseudo-distributed mode on a laptop with 8GB
> RAM.
>
> Whenever I submit a job, I get an error saying that the virtual memory
> usage was exceeded, as shown below.
>
> I have changed the ratio yarn.nodemanager.vmem-pmem-ratio in yarn-site.xml
> to 10; however, the virtual memory limit is not increasing beyond 2.1 GB,
> as can be seen in the error message below, and the container is being
> killed.
>
> Can someone please let me know if there is any other setting that needs
> to be changed? Thanks in advance!
>
> *Error Message :*
>
> INFO mapreduce.Job: Task Id : attempt_1388632710048_0009_m_000000_2,
> Status : FAILED
> Container [pid=12013,containerID=container_1388632710048_0009_01_000004]
> is running beyond virtual memory limits. Current usage: 544.9 MB of 1 GB
> physical memory used; 14.5 GB of 2.1 GB virtual memory used. Killing
> container.
> Dump of the process-tree for container_1388632710048_0009_01_000004 :
>     |- PID PPID PGRPID SESSID CMD_NAME USER_MODE_TIME(MILLIS)
> SYSTEM_TIME(MILLIS) VMEM_USAGE(BYTES) RSSMEM_USAGE(PAGES) FULL_CMD_LINE
>     |- 12077 12018 12013 12013 (phantomjs) 16 2 1641000960 6728
> /usr/local/bin/phantomjs --webdriver=15358
> --webdriver-logfile=/tmp/hadoop-general/nm-local-dir/usercache/general/appcache/application_1388632710048_0009/container_1388632710048_0009_01_000004/phantomjsdriver.log
>
>     |- 12013 882 12013 12013 (bash) 1 0 108650496 305 /bin/bash -c
> /usr/java/jdk1.7.0_25/bin/java -Djava.net.preferIPv4Stack=true
> -Dhadoop.metrics.log.level=WARN  -Xmx200m
> -Djava.io.tmpdir=/tmp/hadoop-general/nm-local-dir/usercache/general/appcache/application_1388632710048_0009/container_1388632710048_0009_01_000004/tmp
> -Dlog4j.configuration=container-log4j.properties
> -Dyarn.app.container.log.dir=/home/general/hadoop-2.2.0/logs/userlogs/application_1388632710048_0009/container_1388632710048_0009_01_000004
> -Dyarn.app.container.log.filesize=0 -Dhadoop.root.logger=INFO,CLA
> org.apache.hadoop.mapred.YarnChild 127.0.0.1 56498
> attempt_1388632710048_0009_m_000000_2 4
> 1>/home/general/hadoop-2.2.0/logs/userlogs/application_1388632710048_0009/container_1388632710048_0009_01_000004/stdout
> 2>/home/general/hadoop-2.2.0/logs/userlogs/application_1388632710048_0009/container_1388632710048_0009_01_000004/stderr
>
>     |- 12075 12018 12013 12013 (phantomjs) 17 1 1615687680 6539
> /usr/local/bin/phantomjs --webdriver=29062
> --webdriver-logfile=/tmp/hadoop-general/nm-local-dir/usercache/general/appcache/application_1388632710048_0009/container_1388632710048_0009_01_000004/phantomjsdriver.log
>
>     |- 12074 12018 12013 12013 (phantomjs) 16 2 1641000960 6727
> /usr/local/bin/phantomjs --webdriver=5958
> --webdriver-logfile=/tmp/hadoop-general/nm-local-dir/usercache/general/appcache/application_1388632710048_0009/container_1388632710048_0009_01_000004/phantomjsdriver.log
>
>     |- 12073 12018 12013 12013 (phantomjs) 17 2 1641000960 6732
> /usr/local/bin/phantomjs --webdriver=31836
> --webdriver-logfile=/tmp/hadoop-general/nm-local-dir/usercache/general/appcache/application_1388632710048_0009/container_1388632710048_0009_01_000004/phantomjsdriver.log
>
>     |- 12090 12018 12013 12013 (phantomjs) 16 2 1615687680 6538
> /usr/local/bin/phantomjs --webdriver=24519
> --webdriver-logfile=/tmp/hadoop-general/nm-local-dir/usercache/general/appcache/application_1388632710048_0009/container_1388632710048_0009_01_000004/phantomjsdriver.log
>
>     |- 12072 12018 12013 12013 (phantomjs) 16 1 1641000960 6216
> /usr/local/bin/phantomjs --webdriver=10175
> --webdriver-logfile=/tmp/hadoop-general/nm-local-dir/usercache/general/appcache/application_1388632710048_0009/container_1388632710048_0009_01_000004/phantomjsdriver.log
>
>     |- 12091 12018 12013 12013 (phantomjs) 17 1 1615687680 6036
> /usr/local/bin/phantomjs --webdriver=5043
> --webdriver-logfile=/tmp/hadoop-general/nm-local-dir/usercache/general/appcache/application_1388632710048_0009/container_1388632710048_0009_01_000004/phantomjsdriver.log
>
>     |- 12018 12013 12013 12013 (java) 996 41 820924416 79595
> /usr/java/jdk1.7.0_25/bin/java -Djava.net.preferIPv4Stack=true
> -Dhadoop.metrics.log.level=WARN -Xmx200m
> -Djava.io.tmpdir=/tmp/hadoop-general/nm-local-dir/usercache/general/appcache/application_1388632710048_0009/container_1388632710048_0009_01_000004/tmp
> -Dlog4j.configuration=container-log4j.properties
> -Dyarn.app.container.log.dir=/home/general/hadoop-2.2.0/logs/userlogs/application_1388632710048_0009/container_1388632710048_0009_01_000004
> -Dyarn.app.container.log.filesize=0 -Dhadoop.root.logger=INFO,CLA
> org.apache.hadoop.mapred.YarnChild 127.0.0.1 56498
> attempt_1388632710048_0009_m_000000_2 4
>     |- 12078 12018 12013 12013 (phantomjs) 16 3 1615687680 6545
> /usr/local/bin/phantomjs --webdriver=12650
> --webdriver-logfile=/tmp/hadoop-general/nm-local-dir/usercache/general/appcache/application_1388632710048_0009/container_1388632710048_0009_01_000004/phantomjsdriver.log
>
>     |- 12079 12018 12013 12013 (phantomjs) 17 2 1642020864 7542
> /usr/local/bin/phantomjs --webdriver=18444
> --webdriver-logfile=/tmp/hadoop-general/nm-local-dir/usercache/general/appcache/application_1388632710048_0009/container_1388632710048_0009_01_000004/phantomjsdriver.log
>
>
> Container killed on request. Exit code is 143
>

RE: Unable to change the virtual memory to be more than the default 2.1 GB

Posted by German Florez-Larrahondo <ge...@samsung.com>.
A few things you can try

 

a)      If you don't care about virtual memory controls at all, you can
bypass them by making the following change in the XML and restarting YARN.
Only you know if this is OK for the application you are running (IMO the
virtual memory being used is huge!)

    <property>

        <name>yarn.nodemanager.vmem-check-enabled</name>

        <value>false</value>

    </property>

b)      If you still want to control the pmem/vmem ratio, did you restart
YARN after making the change in the XML file?

 

 

Regards./g

 

From: S.L [mailto:simpleliving016@gmail.com] 
Sent: Wednesday, January 01, 2014 9:51 PM
To: user@hadoop.apache.org
Subject: Unable to change the virtual memory to be more than the default 2.1
GB

 

Hello Folks,

I am running Hadoop 2.2 in pseudo-distributed mode on a laptop with 8 GB of
RAM.

Whenever I submit a job, I get an error saying that the virtual memory
usage was exceeded, like the one below.

I have changed the ratio yarn.nodemanager.vmem-pmem-ratio in yarn-site.xml
to 10; however, the virtual memory limit is not being raised beyond 2.1 GB,
as can be seen in the error message below, and the container is being
killed.

Can someone please let me know if there is any other setting that needs to
be changed? Thanks in advance!

Error Message :

INFO mapreduce.Job: Task Id : attempt_1388632710048_0009_m_000000_2, Status
: FAILED
Container [pid=12013,containerID=container_1388632710048_0009_01_000004] is
running beyond virtual memory limits. Current usage: 544.9 MB of 1 GB
physical memory used; 14.5 GB of 2.1 GB virtual memory used. Killing
container.
Dump of the process-tree for container_1388632710048_0009_01_000004 :
    |- PID PPID PGRPID SESSID CMD_NAME USER_MODE_TIME(MILLIS)
SYSTEM_TIME(MILLIS) VMEM_USAGE(BYTES) RSSMEM_USAGE(PAGES) FULL_CMD_LINE
    |- 12077 12018 12013 12013 (phantomjs) 16 2 1641000960 6728
/usr/local/bin/phantomjs --webdriver=15358
--webdriver-logfile=/tmp/hadoop-general/nm-local-dir/usercache/general/appca
che/application_1388632710048_0009/container_1388632710048_0009_01_000004/ph
antomjsdriver.log 
    |- 12013 882 12013 12013 (bash) 1 0 108650496 305 /bin/bash -c
/usr/java/jdk1.7.0_25/bin/java -Djava.net.preferIPv4Stack=true
-Dhadoop.metrics.log.level=WARN  -Xmx200m
-Djava.io.tmpdir=/tmp/hadoop-general/nm-local-dir/usercache/general/appcache
/application_1388632710048_0009/container_1388632710048_0009_01_000004/tmp
-Dlog4j.configuration=container-log4j.properties
-Dyarn.app.container.log.dir=/home/general/hadoop-2.2.0/logs/userlogs/applic
ation_1388632710048_0009/container_1388632710048_0009_01_000004
-Dyarn.app.container.log.filesize=0 -Dhadoop.root.logger=INFO,CLA
org.apache.hadoop.mapred.YarnChild 127.0.0.1 56498
attempt_1388632710048_0009_m_000000_2 4
1>/home/general/hadoop-2.2.0/logs/userlogs/application_1388632710048_0009/co
ntainer_1388632710048_0009_01_000004/stdout
2>/home/general/hadoop-2.2.0/logs/userlogs/application_1388632710048_0009/co
ntainer_1388632710048_0009_01_000004/stderr  
    |- 12075 12018 12013 12013 (phantomjs) 17 1 1615687680 6539
/usr/local/bin/phantomjs --webdriver=29062
--webdriver-logfile=/tmp/hadoop-general/nm-local-dir/usercache/general/appca
che/application_1388632710048_0009/container_1388632710048_0009_01_000004/ph
antomjsdriver.log 
    |- 12074 12018 12013 12013 (phantomjs) 16 2 1641000960 6727
/usr/local/bin/phantomjs --webdriver=5958
--webdriver-logfile=/tmp/hadoop-general/nm-local-dir/usercache/general/appca
che/application_1388632710048_0009/container_1388632710048_0009_01_000004/ph
antomjsdriver.log 
    |- 12073 12018 12013 12013 (phantomjs) 17 2 1641000960 6732
/usr/local/bin/phantomjs --webdriver=31836
--webdriver-logfile=/tmp/hadoop-general/nm-local-dir/usercache/general/appca
che/application_1388632710048_0009/container_1388632710048_0009_01_000004/ph
antomjsdriver.log 
    |- 12090 12018 12013 12013 (phantomjs) 16 2 1615687680 6538
/usr/local/bin/phantomjs --webdriver=24519
--webdriver-logfile=/tmp/hadoop-general/nm-local-dir/usercache/general/appca
che/application_1388632710048_0009/container_1388632710048_0009_01_000004/ph
antomjsdriver.log 
    |- 12072 12018 12013 12013 (phantomjs) 16 1 1641000960 6216
/usr/local/bin/phantomjs --webdriver=10175
--webdriver-logfile=/tmp/hadoop-general/nm-local-dir/usercache/general/appca
che/application_1388632710048_0009/container_1388632710048_0009_01_000004/ph
antomjsdriver.log 
    |- 12091 12018 12013 12013 (phantomjs) 17 1 1615687680 6036
/usr/local/bin/phantomjs --webdriver=5043
--webdriver-logfile=/tmp/hadoop-general/nm-local-dir/usercache/general/appca
che/application_1388632710048_0009/container_1388632710048_0009_01_000004/ph
antomjsdriver.log 
    |- 12018 12013 12013 12013 (java) 996 41 820924416 79595
/usr/java/jdk1.7.0_25/bin/java -Djava.net.preferIPv4Stack=true
-Dhadoop.metrics.log.level=WARN -Xmx200m
-Djava.io.tmpdir=/tmp/hadoop-general/nm-local-dir/usercache/general/appcache
/application_1388632710048_0009/container_1388632710048_0009_01_000004/tmp
-Dlog4j.configuration=container-log4j.properties
-Dyarn.app.container.log.dir=/home/general/hadoop-2.2.0/logs/userlogs/applic
ation_1388632710048_0009/container_1388632710048_0009_01_000004
-Dyarn.app.container.log.filesize=0 -Dhadoop.root.logger=INFO,CLA
org.apache.hadoop.mapred.YarnChild 127.0.0.1 56498
attempt_1388632710048_0009_m_000000_2 4 
    |- 12078 12018 12013 12013 (phantomjs) 16 3 1615687680 6545
/usr/local/bin/phantomjs --webdriver=12650
--webdriver-logfile=/tmp/hadoop-general/nm-local-dir/usercache/general/appca
che/application_1388632710048_0009/container_1388632710048_0009_01_000004/ph
antomjsdriver.log 
    |- 12079 12018 12013 12013 (phantomjs) 17 2 1642020864 7542
/usr/local/bin/phantomjs --webdriver=18444
--webdriver-logfile=/tmp/hadoop-general/nm-local-dir/usercache/general/appca
che/application_1388632710048_0009/container_1388632710048_0009_01_000004/ph
antomjsdriver.log 

Container killed on request. Exit code is 143


Re: Unable to change the virtual memory to be more than the default 2.1 GB

Posted by "S.L" <si...@gmail.com>.
Vinod,

Thanks for your reply.

1. If I understand you correctly, you are asking me to change the memory
allocation for each map and reduce task; isn't that related to the physical
memory, which is within limits and not an issue in my application? The
problem I am facing is with the virtual memory.
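For reference, Vinod's per-job overrides quoted below go in mapred-site.xml
or on the command line (the 2048 MB values here are illustrative, not from
the thread). Note that YARN computes the virtual-memory cap per container as
memory.mb times vmem-pmem-ratio, so a 2048 MB map task with a ratio of 10
gets a 20 GB virtual-memory ceiling, comfortably above the 14.5 GB observed:

```xml
<!-- mapred-site.xml: per-job container sizes (illustrative values).
     Equivalently on the command line:
       hadoop jar job.jar MainClass -Dmapreduce.map.memory.mb=2048 \
           -Dmapreduce.reduce.memory.mb=2048 ... -->
<property>
    <name>mapreduce.map.memory.mb</name>
    <value>2048</value>
</property>
<property>
    <name>mapreduce.reduce.memory.mb</name>
    <value>2048</value>
</property>
```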

2. You are right that I am spawning shells, but I close them immediately
after each request in the map task. Why would the virtual memory increase
while the physical memory stays the same, and what might cause this in
YARN? How can I keep it within manageable limits?
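As a quick sanity check on where the 14.5 GB comes from, the VMEM_USAGE(BYTES)
column of the process-tree dump can simply be summed (a sketch; figures copied
from the dump quoted above). The nine phantomjs processes account for almost
all of it: each reserves roughly 1.6 GB of address space even though its
resident memory is small.

```shell
# Sum the VMEM_USAGE(BYTES) column from the container's process-tree dump.
phantomjs=$(( 4 * 1641000960 + 4 * 1615687680 + 1642020864 ))  # nine phantomjs processes
bash_vmem=108650496   # the container launcher shell
java_vmem=820924416   # the YarnChild JVM
total=$(( phantomjs + bash_vmem + java_vmem ))
echo "phantomjs: $phantomjs bytes; total: $total bytes"
# total is 15598350336 bytes, i.e. about 14.5 GB -- matching the NodeManager error.
```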

If I run the same task as a standalone program it works fine, evidently
because this is not a memory-leak scenario affecting the physical memory.



On Thu, Jan 2, 2014 at 1:14 PM, Vinod Kumar Vavilapalli <
vinodkv@hortonworks.com> wrote:

> You need to change the application configuration itself to tell YARN that
> each task needs more than the default. I see that this is a mapreduce app,
> so you have to change the per-application configuration:
> mapreduce.map.memory.mb and mapreduce.reduce.memory.mb in either
> mapred-site.xml or via the command line.
>
> Side notes: It seems you are spawning lots of shells under your mapper,
> and YARN's NodeManager is detecting that the total virtual memory usage is
> 14.5 GB. You may want to reduce the number of shells, lest the OS itself
> kill your tasks, depending on the system configuration.
>
> Thanks,
> +Vinod
>
> On Jan 1, 2014, at 7:50 PM, S.L <si...@gmail.com> wrote:
>
> Hello Folks,
>
> I am running hadoop 2.2 in a pseudo-distributed mode on a laptop with 8GB
> RAM.
>
> Whenever I submit a job, I get an error saying that the virtual memory
> usage was exceeded, like the one below.
>
> I have changed the ratio yarn.nodemanager.vmem-pmem-ratio in yarn-site.xml
> to 10; however, the virtual memory limit is not being raised beyond 2.1
> GB, as can be seen in the error message below, and the container is being
> killed.
>
> Can someone please let me know if there is any other setting that needs
> to be changed? Thanks in advance!
>
> *Error Message :*
>
> INFO mapreduce.Job: Task Id : attempt_1388632710048_0009_m_000000_2,
> Status : FAILED
> Container [pid=12013,containerID=container_1388632710048_0009_01_000004]
> is running beyond virtual memory limits. Current usage: 544.9 MB of 1 GB
> physical memory used; 14.5 GB of 2.1 GB virtual memory used. Killing
> container.
> Dump of the process-tree for container_1388632710048_0009_01_000004 :
>     |- PID PPID PGRPID SESSID CMD_NAME USER_MODE_TIME(MILLIS)
> SYSTEM_TIME(MILLIS) VMEM_USAGE(BYTES) RSSMEM_USAGE(PAGES) FULL_CMD_LINE
>     |- 12077 12018 12013 12013 (phantomjs) 16 2 1641000960 6728
> /usr/local/bin/phantomjs --webdriver=15358
> --webdriver-logfile=/tmp/hadoop-general/nm-local-dir/usercache/general/appcache/application_1388632710048_0009/container_1388632710048_0009_01_000004/phantomjsdriver.log
>
>     |- 12013 882 12013 12013 (bash) 1 0 108650496 305 /bin/bash -c
> /usr/java/jdk1.7.0_25/bin/java -Djava.net.preferIPv4Stack=true
> -Dhadoop.metrics.log.level=WARN  -Xmx200m
> -Djava.io.tmpdir=/tmp/hadoop-general/nm-local-dir/usercache/general/appcache/application_1388632710048_0009/container_1388632710048_0009_01_000004/tmp
> -Dlog4j.configuration=container-log4j.properties
> -Dyarn.app.container.log.dir=/home/general/hadoop-2.2.0/logs/userlogs/application_1388632710048_0009/container_1388632710048_0009_01_000004
> -Dyarn.app.container.log.filesize=0 -Dhadoop.root.logger=INFO,CLA
> org.apache.hadoop.mapred.YarnChild 127.0.0.1 56498
> attempt_1388632710048_0009_m_000000_2 4
> 1>/home/general/hadoop-2.2.0/logs/userlogs/application_1388632710048_0009/container_1388632710048_0009_01_000004/stdout
> 2>/home/general/hadoop-2.2.0/logs/userlogs/application_1388632710048_0009/container_1388632710048_0009_01_000004/stderr
>
>     |- 12075 12018 12013 12013 (phantomjs) 17 1 1615687680 6539
> /usr/local/bin/phantomjs --webdriver=29062
> --webdriver-logfile=/tmp/hadoop-general/nm-local-dir/usercache/general/appcache/application_1388632710048_0009/container_1388632710048_0009_01_000004/phantomjsdriver.log
>
>     |- 12074 12018 12013 12013 (phantomjs) 16 2 1641000960 6727
> /usr/local/bin/phantomjs --webdriver=5958
> --webdriver-logfile=/tmp/hadoop-general/nm-local-dir/usercache/general/appcache/application_1388632710048_0009/container_1388632710048_0009_01_000004/phantomjsdriver.log
>
>     |- 12073 12018 12013 12013 (phantomjs) 17 2 1641000960 6732
> /usr/local/bin/phantomjs --webdriver=31836
> --webdriver-logfile=/tmp/hadoop-general/nm-local-dir/usercache/general/appcache/application_1388632710048_0009/container_1388632710048_0009_01_000004/phantomjsdriver.log
>
>     |- 12090 12018 12013 12013 (phantomjs) 16 2 1615687680 6538
> /usr/local/bin/phantomjs --webdriver=24519
> --webdriver-logfile=/tmp/hadoop-general/nm-local-dir/usercache/general/appcache/application_1388632710048_0009/container_1388632710048_0009_01_000004/phantomjsdriver.log
>
>     |- 12072 12018 12013 12013 (phantomjs) 16 1 1641000960 6216
> /usr/local/bin/phantomjs --webdriver=10175
> --webdriver-logfile=/tmp/hadoop-general/nm-local-dir/usercache/general/appcache/application_1388632710048_0009/container_1388632710048_0009_01_000004/phantomjsdriver.log
>
>     |- 12091 12018 12013 12013 (phantomjs) 17 1 1615687680 6036
> /usr/local/bin/phantomjs --webdriver=5043
> --webdriver-logfile=/tmp/hadoop-general/nm-local-dir/usercache/general/appcache/application_1388632710048_0009/container_1388632710048_0009_01_000004/phantomjsdriver.log
>
>     |- 12018 12013 12013 12013 (java) 996 41 820924416 79595
> /usr/java/jdk1.7.0_25/bin/java -Djava.net.preferIPv4Stack=true
> -Dhadoop.metrics.log.level=WARN -Xmx200m
> -Djava.io.tmpdir=/tmp/hadoop-general/nm-local-dir/usercache/general/appcache/application_1388632710048_0009/container_1388632710048_0009_01_000004/tmp
> -Dlog4j.configuration=container-log4j.properties
> -Dyarn.app.container.log.dir=/home/general/hadoop-2.2.0/logs/userlogs/application_1388632710048_0009/container_1388632710048_0009_01_000004
> -Dyarn.app.container.log.filesize=0 -Dhadoop.root.logger=INFO,CLA
> org.apache.hadoop.mapred.YarnChild 127.0.0.1 56498
> attempt_1388632710048_0009_m_000000_2 4
>     |- 12078 12018 12013 12013 (phantomjs) 16 3 1615687680 6545
> /usr/local/bin/phantomjs --webdriver=12650
> --webdriver-logfile=/tmp/hadoop-general/nm-local-dir/usercache/general/appcache/application_1388632710048_0009/container_1388632710048_0009_01_000004/phantomjsdriver.log
>
>     |- 12079 12018 12013 12013 (phantomjs) 17 2 1642020864 7542
> /usr/local/bin/phantomjs --webdriver=18444
> --webdriver-logfile=/tmp/hadoop-general/nm-local-dir/usercache/general/appcache/application_1388632710048_0009/container_1388632710048_0009_01_000004/phantomjsdriver.log
>
>
> Container killed on request. Exit code is 143
>
>
>
> CONFIDENTIALITY NOTICE
> NOTICE: This message is intended for the use of the individual or entity
> to which it is addressed and may contain information that is confidential,
> privileged and exempt from disclosure under applicable law. If the reader
> of this message is not the intended recipient, you are hereby notified that
> any printing, copying, dissemination, distribution, disclosure or
> forwarding of this communication is strictly prohibited. If you have
> received this communication in error, please contact the sender immediately
> and delete it from your system. Thank You.

Re: Unable to change the virtual memory to be more than the default 2.1 GB

Posted by "S.L" <si...@gmail.com>.
Vinod,

Thanks for your reply.

1. If I understand you correct you are asking me to change the memory
allocation for each map and reduce tasks , isnt this related to the
physical memory which is not an issue(with in limits) in my application ?
The problem I am facing is with the virtual memory.

2. You are right I am spawning shells , but immediately closing them after
each request in the map task.  Why would virtual memory increase and the
physical memory stay the same , what might be the causes that would make
this happen in YARN ? How can I keep it with in manageable limits ?

If I run the same task as a stand alone program it works fine , obviously
because its not a memory leak kind of a scenario affecting the physical
memory.



On Thu, Jan 2, 2014 at 1:14 PM, Vinod Kumar Vavilapalli <
vinodkv@hortonworks.com> wrote:

> You need to change the application configuration itself to tell YARN that
> each task needs more than the default. I see that this is a mapreduce app,
> so you have to change the per-application configuration:
> mapreduce.map.memory.mb and mapreduce.reduce.memory.mb in either
> mapred-site.xml or via the command line.
>
> Side notes: Seems like you are spawning lots of shells under your mapper
> and YARN's NodeManager is detecting that the total virtual memory usage is
> 14.5GB. You may want to reduce that number of shells, lest the OS itself
> might kill your tasks depend on the system configuration.
>
> Thanks,
> +Vinod
>
> On Jan 1, 2014, at 7:50 PM, S.L <si...@gmail.com> wrote:
>
> Hello Folks,
>
> I am running hadoop 2.2 in a pseudo-distributed mode on a laptop with 8GB
> RAM.
>
> Whenever I submit a job I get an error that says that the that the virtual
> memory usage exceeded , like below.
>
> I have changed the ratio yarn.nodemanager.vmem-pmem-ratio in yarn-site.xml
> to 10 , however the virtual memory is not getting increased more than 2.1
> GB , as can been seen in the error message below and the container is being
> killed.
>
> Can some one please let me know if there is any other setting that needs
> to be changed ? Thanks in advance!
>
> *Error Message :*
>
> INFO mapreduce.Job: Task Id : attempt_1388632710048_0009_m_000000_2,
> Status : FAILED
> Container [pid=12013,containerID=container_1388632710048_0009_01_000004]
> is running beyond virtual memory limits. Current usage: 544.9 MB of 1 GB
> physical memory used; 14.5 GB of 2.1 GB virtual memory used. Killing
> container.
> Dump of the process-tree for container_1388632710048_0009_01_000004 :
>     |- PID PPID PGRPID SESSID CMD_NAME USER_MODE_TIME(MILLIS)
> SYSTEM_TIME(MILLIS) VMEM_USAGE(BYTES) RSSMEM_USAGE(PAGES) FULL_CMD_LINE
>     |- 12077 12018 12013 12013 (phantomjs) 16 2 1641000960 6728
> /usr/local/bin/phantomjs --webdriver=15358
> --webdriver-logfile=/tmp/hadoop-general/nm-local-dir/usercache/general/appcache/application_1388632710048_0009/container_1388632710048_0009_01_000004/phantomjsdriver.log
>
>     |- 12013 882 12013 12013 (bash) 1 0 108650496 305 /bin/bash -c
> /usr/java/jdk1.7.0_25/bin/java -Djava.net.preferIPv4Stack=true
> -Dhadoop.metrics.log.level=WARN  -Xmx200m
> -Djava.io.tmpdir=/tmp/hadoop-general/nm-local-dir/usercache/general/appcache/application_1388632710048_0009/container_1388632710048_0009_01_000004/tmp
> -Dlog4j.configuration=container-log4j.properties
> -Dyarn.app.container.log.dir=/home/general/hadoop-2.2.0/logs/userlogs/application_1388632710048_0009/container_1388632710048_0009_01_000004
> -Dyarn.app.container.log.filesize=0 -Dhadoop.root.logger=INFO,CLA
> org.apache.hadoop.mapred.YarnChild 127.0.0.1 56498
> attempt_1388632710048_0009_m_000000_2 4
> 1>/home/general/hadoop-2.2.0/logs/userlogs/application_1388632710048_0009/container_1388632710048_0009_01_000004/stdout
> 2>/home/general/hadoop-2.2.0/logs/userlogs/application_1388632710048_0009/container_1388632710048_0009_01_000004/stderr
>
>     |- 12075 12018 12013 12013 (phantomjs) 17 1 1615687680 6539
> /usr/local/bin/phantomjs --webdriver=29062
> --webdriver-logfile=/tmp/hadoop-general/nm-local-dir/usercache/general/appcache/application_1388632710048_0009/container_1388632710048_0009_01_000004/phantomjsdriver.log
>
>     |- 12074 12018 12013 12013 (phantomjs) 16 2 1641000960 6727
> /usr/local/bin/phantomjs --webdriver=5958
> --webdriver-logfile=/tmp/hadoop-general/nm-local-dir/usercache/general/appcache/application_1388632710048_0009/container_1388632710048_0009_01_000004/phantomjsdriver.log
>
>     |- 12073 12018 12013 12013 (phantomjs) 17 2 1641000960 6732
> /usr/local/bin/phantomjs --webdriver=31836
> --webdriver-logfile=/tmp/hadoop-general/nm-local-dir/usercache/general/appcache/application_1388632710048_0009/container_1388632710048_0009_01_000004/phantomjsdriver.log
>
>     |- 12090 12018 12013 12013 (phantomjs) 16 2 1615687680 6538
> /usr/local/bin/phantomjs --webdriver=24519
> --webdriver-logfile=/tmp/hadoop-general/nm-local-dir/usercache/general/appcache/application_1388632710048_0009/container_1388632710048_0009_01_000004/phantomjsdriver.log
>
>     |- 12072 12018 12013 12013 (phantomjs) 16 1 1641000960 6216
> /usr/local/bin/phantomjs --webdriver=10175
> --webdriver-logfile=/tmp/hadoop-general/nm-local-dir/usercache/general/appcache/application_1388632710048_0009/container_1388632710048_0009_01_000004/phantomjsdriver.log
>
>     |- 12091 12018 12013 12013 (phantomjs) 17 1 1615687680 6036
> /usr/local/bin/phantomjs --webdriver=5043
> --webdriver-logfile=/tmp/hadoop-general/nm-local-dir/usercache/general/appcache/application_1388632710048_0009/container_1388632710048_0009_01_000004/phantomjsdriver.log
>
>     |- 12018 12013 12013 12013 (java) 996 41 820924416 79595
> /usr/java/jdk1.7.0_25/bin/java -Djava.net.preferIPv4Stack=true
> -Dhadoop.metrics.log.level=WARN -Xmx200m
> -Djava.io.tmpdir=/tmp/hadoop-general/nm-local-dir/usercache/general/appcache/application_1388632710048_0009/container_1388632710048_0009_01_000004/tmp
> -Dlog4j.configuration=container-log4j.properties
> -Dyarn.app.container.log.dir=/home/general/hadoop-2.2.0/logs/userlogs/application_1388632710048_0009/container_1388632710048_0009_01_000004
> -Dyarn.app.container.log.filesize=0 -Dhadoop.root.logger=INFO,CLA
> org.apache.hadoop.mapred.YarnChild 127.0.0.1 56498
> attempt_1388632710048_0009_m_000000_2 4
>     |- 12078 12018 12013 12013 (phantomjs) 16 3 1615687680 6545
> /usr/local/bin/phantomjs --webdriver=12650
> --webdriver-logfile=/tmp/hadoop-general/nm-local-dir/usercache/general/appcache/application_1388632710048_0009/container_1388632710048_0009_01_000004/phantomjsdriver.log
>
>     |- 12079 12018 12013 12013 (phantomjs) 17 2 1642020864 7542
> /usr/local/bin/phantomjs --webdriver=18444
> --webdriver-logfile=/tmp/hadoop-general/nm-local-dir/usercache/general/appcache/application_1388632710048_0009/container_1388632710048_0009_01_000004/phantomjsdriver.log
>
>
> Container killed on request. Exit code is 143
>
>
>
> CONFIDENTIALITY NOTICE
> NOTICE: This message is intended for the use of the individual or entity
> to which it is addressed and may contain information that is confidential,
> privileged and exempt from disclosure under applicable law. If the reader
> of this message is not the intended recipient, you are hereby notified that
> any printing, copying, dissemination, distribution, disclosure or
> forwarding of this communication is strictly prohibited. If you have
> received this communication in error, please contact the sender immediately
> and delete it from your system. Thank You.

Re: Unable to change the virtual memory to be more than the default 2.1 GB

Posted by Vinod Kumar Vavilapalli <vi...@hortonworks.com>.
You need to change the application configuration itself to tell YARN that each task needs more than the default. I see that this is a mapreduce app, so you have to change the per-application configuration: mapreduce.map.memory.mb and mapreduce.reduce.memory.mb in either mapred-site.xml or via the command line.
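
A sketch of what that change might look like in mapred-site.xml (the 2048 values below are illustrative; size them to your workload):

```xml
<!-- mapred-site.xml: per-task container sizes (example values) -->
<property>
    <name>mapreduce.map.memory.mb</name>
    <value>2048</value>
</property>
<property>
    <name>mapreduce.reduce.memory.mb</name>
    <value>2048</value>
</property>
```

Or per job on the command line, e.g. `hadoop jar yourjob.jar YourMain -D mapreduce.map.memory.mb=2048 ...` (the jar and class names here are placeholders).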

Side notes: It seems you are spawning lots of shells under your mapper, and YARN's NodeManager is detecting that the total virtual memory usage is 14.5GB. You may want to reduce the number of shells, lest the OS itself kill your tasks, depending on the system configuration.

Thanks,
+Vinod

On Jan 1, 2014, at 7:50 PM, S.L <si...@gmail.com> wrote:

> Hello Folks,
> 
> I am running hadoop 2.2 in a pseudo-distributed mode on a laptop with 8GB RAM. 
> 
> Whenever I submit a job, I get an error saying the virtual memory limit was exceeded, like the one below.
> 
> I have changed the ratio yarn.nodemanager.vmem-pmem-ratio in yarn-site.xml to 10, but the virtual memory limit does not increase beyond 2.1 GB, as can be seen in the error message below, and the container is being killed.
> 
> Can someone please let me know if there is any other setting that needs to be changed? Thanks in advance!
> 
> Error Message :
> 
> INFO mapreduce.Job: Task Id : attempt_1388632710048_0009_m_000000_2, Status : FAILED
> Container [pid=12013,containerID=container_1388632710048_0009_01_000004] is running beyond virtual memory limits. Current usage: 544.9 MB of 1 GB physical memory used; 14.5 GB of 2.1 GB virtual memory used. Killing container.
> Dump of the process-tree for container_1388632710048_0009_01_000004 :
>     |- PID PPID PGRPID SESSID CMD_NAME USER_MODE_TIME(MILLIS) SYSTEM_TIME(MILLIS) VMEM_USAGE(BYTES) RSSMEM_USAGE(PAGES) FULL_CMD_LINE
>     |- 12077 12018 12013 12013 (phantomjs) 16 2 1641000960 6728 /usr/local/bin/phantomjs --webdriver=15358 --webdriver-logfile=/tmp/hadoop-general/nm-local-dir/usercache/general/appcache/application_1388632710048_0009/container_1388632710048_0009_01_000004/phantomjsdriver.log 
>     |- 12013 882 12013 12013 (bash) 1 0 108650496 305 /bin/bash -c /usr/java/jdk1.7.0_25/bin/java -Djava.net.preferIPv4Stack=true -Dhadoop.metrics.log.level=WARN  -Xmx200m -Djava.io.tmpdir=/tmp/hadoop-general/nm-local-dir/usercache/general/appcache/application_1388632710048_0009/container_1388632710048_0009_01_000004/tmp -Dlog4j.configuration=container-log4j.properties -Dyarn.app.container.log.dir=/home/general/hadoop-2.2.0/logs/userlogs/application_1388632710048_0009/container_1388632710048_0009_01_000004 -Dyarn.app.container.log.filesize=0 -Dhadoop.root.logger=INFO,CLA org.apache.hadoop.mapred.YarnChild 127.0.0.1 56498 attempt_1388632710048_0009_m_000000_2 4 1>/home/general/hadoop-2.2.0/logs/userlogs/application_1388632710048_0009/container_1388632710048_0009_01_000004/stdout 2>/home/general/hadoop-2.2.0/logs/userlogs/application_1388632710048_0009/container_1388632710048_0009_01_000004/stderr  
>     |- 12075 12018 12013 12013 (phantomjs) 17 1 1615687680 6539 /usr/local/bin/phantomjs --webdriver=29062 --webdriver-logfile=/tmp/hadoop-general/nm-local-dir/usercache/general/appcache/application_1388632710048_0009/container_1388632710048_0009_01_000004/phantomjsdriver.log 
>     |- 12074 12018 12013 12013 (phantomjs) 16 2 1641000960 6727 /usr/local/bin/phantomjs --webdriver=5958 --webdriver-logfile=/tmp/hadoop-general/nm-local-dir/usercache/general/appcache/application_1388632710048_0009/container_1388632710048_0009_01_000004/phantomjsdriver.log 
>     |- 12073 12018 12013 12013 (phantomjs) 17 2 1641000960 6732 /usr/local/bin/phantomjs --webdriver=31836 --webdriver-logfile=/tmp/hadoop-general/nm-local-dir/usercache/general/appcache/application_1388632710048_0009/container_1388632710048_0009_01_000004/phantomjsdriver.log 
>     |- 12090 12018 12013 12013 (phantomjs) 16 2 1615687680 6538 /usr/local/bin/phantomjs --webdriver=24519 --webdriver-logfile=/tmp/hadoop-general/nm-local-dir/usercache/general/appcache/application_1388632710048_0009/container_1388632710048_0009_01_000004/phantomjsdriver.log 
>     |- 12072 12018 12013 12013 (phantomjs) 16 1 1641000960 6216 /usr/local/bin/phantomjs --webdriver=10175 --webdriver-logfile=/tmp/hadoop-general/nm-local-dir/usercache/general/appcache/application_1388632710048_0009/container_1388632710048_0009_01_000004/phantomjsdriver.log 
>     |- 12091 12018 12013 12013 (phantomjs) 17 1 1615687680 6036 /usr/local/bin/phantomjs --webdriver=5043 --webdriver-logfile=/tmp/hadoop-general/nm-local-dir/usercache/general/appcache/application_1388632710048_0009/container_1388632710048_0009_01_000004/phantomjsdriver.log 
>     |- 12018 12013 12013 12013 (java) 996 41 820924416 79595 /usr/java/jdk1.7.0_25/bin/java -Djava.net.preferIPv4Stack=true -Dhadoop.metrics.log.level=WARN -Xmx200m -Djava.io.tmpdir=/tmp/hadoop-general/nm-local-dir/usercache/general/appcache/application_1388632710048_0009/container_1388632710048_0009_01_000004/tmp -Dlog4j.configuration=container-log4j.properties -Dyarn.app.container.log.dir=/home/general/hadoop-2.2.0/logs/userlogs/application_1388632710048_0009/container_1388632710048_0009_01_000004 -Dyarn.app.container.log.filesize=0 -Dhadoop.root.logger=INFO,CLA org.apache.hadoop.mapred.YarnChild 127.0.0.1 56498 attempt_1388632710048_0009_m_000000_2 4 
>     |- 12078 12018 12013 12013 (phantomjs) 16 3 1615687680 6545 /usr/local/bin/phantomjs --webdriver=12650 --webdriver-logfile=/tmp/hadoop-general/nm-local-dir/usercache/general/appcache/application_1388632710048_0009/container_1388632710048_0009_01_000004/phantomjsdriver.log 
>     |- 12079 12018 12013 12013 (phantomjs) 17 2 1642020864 7542 /usr/local/bin/phantomjs --webdriver=18444 --webdriver-logfile=/tmp/hadoop-general/nm-local-dir/usercache/general/appcache/application_1388632710048_0009/container_1388632710048_0009_01_000004/phantomjsdriver.log 
> 
> Container killed on request. Exit code is 143
> 



RE: Unable to change the virtual memory to be more than the default 2.1 GB

Posted by German Florez-Larrahondo <ge...@samsung.com>.
A few things you can try:

 

a)      If you don't care about virtual memory controls at all, you can
bypass them with the following change in the XML and a restart of YARN.
Only you know whether this is OK for the application you are running (IMO
the virtual memory being used is huge!)

    <property>

        <name>yarn.nodemanager.vmem-check-enabled</name>

        <value>false</value>

    </property>

b)      If you still want to control the pmem/vmem ratio, did you restart
YARN after making the change in the XML file?
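
On (b): the NodeManager only reads yarn-site.xml at startup, so either change above takes effect only after a restart. With the stock 2.2 scripts that is roughly as follows (assuming HADOOP_HOME points at the hadoop-2.2.0 install):

```shell
# Restart the NodeManager so it re-reads yarn-site.xml (paths assumed).
$HADOOP_HOME/sbin/yarn-daemon.sh stop nodemanager
$HADOOP_HOME/sbin/yarn-daemon.sh start nodemanager
```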

 

 

Regards./g

 

From: S.L [mailto:simpleliving016@gmail.com] 
Sent: Wednesday, January 01, 2014 9:51 PM
To: user@hadoop.apache.org
Subject: Unable to change the virtual memory to be more than the default 2.1
GB

 

Hello Folks,

I am running hadoop 2.2 in a pseudo-distributed mode on a laptop with 8GB
RAM. 

Whenever I submit a job, I get an error saying the virtual memory limit was
exceeded, like the one below.

I have changed the ratio yarn.nodemanager.vmem-pmem-ratio in yarn-site.xml
to 10, but the virtual memory limit does not increase beyond 2.1 GB, as can
be seen in the error message below, and the container is being killed.

Can someone please let me know if there is any other setting that needs to
be changed? Thanks in advance!

Error Message :

INFO mapreduce.Job: Task Id : attempt_1388632710048_0009_m_000000_2, Status
: FAILED
Container [pid=12013,containerID=container_1388632710048_0009_01_000004] is
running beyond virtual memory limits. Current usage: 544.9 MB of 1 GB
physical memory used; 14.5 GB of 2.1 GB virtual memory used. Killing
container.
Dump of the process-tree for container_1388632710048_0009_01_000004 :
    |- PID PPID PGRPID SESSID CMD_NAME USER_MODE_TIME(MILLIS)
SYSTEM_TIME(MILLIS) VMEM_USAGE(BYTES) RSSMEM_USAGE(PAGES) FULL_CMD_LINE
    |- 12077 12018 12013 12013 (phantomjs) 16 2 1641000960 6728
/usr/local/bin/phantomjs --webdriver=15358
--webdriver-logfile=/tmp/hadoop-general/nm-local-dir/usercache/general/appca
che/application_1388632710048_0009/container_1388632710048_0009_01_000004/ph
antomjsdriver.log 
    |- 12013 882 12013 12013 (bash) 1 0 108650496 305 /bin/bash -c
/usr/java/jdk1.7.0_25/bin/java -Djava.net.preferIPv4Stack=true
-Dhadoop.metrics.log.level=WARN  -Xmx200m
-Djava.io.tmpdir=/tmp/hadoop-general/nm-local-dir/usercache/general/appcache
/application_1388632710048_0009/container_1388632710048_0009_01_000004/tmp
-Dlog4j.configuration=container-log4j.properties
-Dyarn.app.container.log.dir=/home/general/hadoop-2.2.0/logs/userlogs/applic
ation_1388632710048_0009/container_1388632710048_0009_01_000004
-Dyarn.app.container.log.filesize=0 -Dhadoop.root.logger=INFO,CLA
org.apache.hadoop.mapred.YarnChild 127.0.0.1 56498
attempt_1388632710048_0009_m_000000_2 4
1>/home/general/hadoop-2.2.0/logs/userlogs/application_1388632710048_0009/co
ntainer_1388632710048_0009_01_000004/stdout
2>/home/general/hadoop-2.2.0/logs/userlogs/application_1388632710048_0009/co
ntainer_1388632710048_0009_01_000004/stderr  
    |- 12075 12018 12013 12013 (phantomjs) 17 1 1615687680 6539
/usr/local/bin/phantomjs --webdriver=29062
--webdriver-logfile=/tmp/hadoop-general/nm-local-dir/usercache/general/appca
che/application_1388632710048_0009/container_1388632710048_0009_01_000004/ph
antomjsdriver.log 
    |- 12074 12018 12013 12013 (phantomjs) 16 2 1641000960 6727
/usr/local/bin/phantomjs --webdriver=5958
--webdriver-logfile=/tmp/hadoop-general/nm-local-dir/usercache/general/appca
che/application_1388632710048_0009/container_1388632710048_0009_01_000004/ph
antomjsdriver.log 
    |- 12073 12018 12013 12013 (phantomjs) 17 2 1641000960 6732
/usr/local/bin/phantomjs --webdriver=31836
--webdriver-logfile=/tmp/hadoop-general/nm-local-dir/usercache/general/appca
che/application_1388632710048_0009/container_1388632710048_0009_01_000004/ph
antomjsdriver.log 
    |- 12090 12018 12013 12013 (phantomjs) 16 2 1615687680 6538
/usr/local/bin/phantomjs --webdriver=24519
--webdriver-logfile=/tmp/hadoop-general/nm-local-dir/usercache/general/appca
che/application_1388632710048_0009/container_1388632710048_0009_01_000004/ph
antomjsdriver.log 
    |- 12072 12018 12013 12013 (phantomjs) 16 1 1641000960 6216
/usr/local/bin/phantomjs --webdriver=10175
--webdriver-logfile=/tmp/hadoop-general/nm-local-dir/usercache/general/appca
che/application_1388632710048_0009/container_1388632710048_0009_01_000004/ph
antomjsdriver.log 
    |- 12091 12018 12013 12013 (phantomjs) 17 1 1615687680 6036
/usr/local/bin/phantomjs --webdriver=5043
--webdriver-logfile=/tmp/hadoop-general/nm-local-dir/usercache/general/appca
che/application_1388632710048_0009/container_1388632710048_0009_01_000004/ph
antomjsdriver.log 
    |- 12018 12013 12013 12013 (java) 996 41 820924416 79595
/usr/java/jdk1.7.0_25/bin/java -Djava.net.preferIPv4Stack=true
-Dhadoop.metrics.log.level=WARN -Xmx200m
-Djava.io.tmpdir=/tmp/hadoop-general/nm-local-dir/usercache/general/appcache
/application_1388632710048_0009/container_1388632710048_0009_01_000004/tmp
-Dlog4j.configuration=container-log4j.properties
-Dyarn.app.container.log.dir=/home/general/hadoop-2.2.0/logs/userlogs/applic
ation_1388632710048_0009/container_1388632710048_0009_01_000004
-Dyarn.app.container.log.filesize=0 -Dhadoop.root.logger=INFO,CLA
org.apache.hadoop.mapred.YarnChild 127.0.0.1 56498
attempt_1388632710048_0009_m_000000_2 4 
    |- 12078 12018 12013 12013 (phantomjs) 16 3 1615687680 6545
/usr/local/bin/phantomjs --webdriver=12650
--webdriver-logfile=/tmp/hadoop-general/nm-local-dir/usercache/general/appca
che/application_1388632710048_0009/container_1388632710048_0009_01_000004/ph
antomjsdriver.log 
    |- 12079 12018 12013 12013 (phantomjs) 17 2 1642020864 7542
/usr/local/bin/phantomjs --webdriver=18444
--webdriver-logfile=/tmp/hadoop-general/nm-local-dir/usercache/general/appca
che/application_1388632710048_0009/container_1388632710048_0009_01_000004/ph
antomjsdriver.log 

Container killed on request. Exit code is 143


>     |- 12072 12018 12013 12013 (phantomjs) 16 1 1641000960 6216 /usr/local/bin/phantomjs --webdriver=10175 --webdriver-logfile=/tmp/hadoop-general/nm-local-dir/usercache/general/appcache/application_1388632710048_0009/container_1388632710048_0009_01_000004/phantomjsdriver.log 
>     |- 12091 12018 12013 12013 (phantomjs) 17 1 1615687680 6036 /usr/local/bin/phantomjs --webdriver=5043 --webdriver-logfile=/tmp/hadoop-general/nm-local-dir/usercache/general/appcache/application_1388632710048_0009/container_1388632710048_0009_01_000004/phantomjsdriver.log 
>     |- 12018 12013 12013 12013 (java) 996 41 820924416 79595 /usr/java/jdk1.7.0_25/bin/java -Djava.net.preferIPv4Stack=true -Dhadoop.metrics.log.level=WARN -Xmx200m -Djava.io.tmpdir=/tmp/hadoop-general/nm-local-dir/usercache/general/appcache/application_1388632710048_0009/container_1388632710048_0009_01_000004/tmp -Dlog4j.configuration=container-log4j.properties -Dyarn.app.container.log.dir=/home/general/hadoop-2.2.0/logs/userlogs/application_1388632710048_0009/container_1388632710048_0009_01_000004 -Dyarn.app.container.log.filesize=0 -Dhadoop.root.logger=INFO,CLA org.apache.hadoop.mapred.YarnChild 127.0.0.1 56498 attempt_1388632710048_0009_m_000000_2 4 
>     |- 12078 12018 12013 12013 (phantomjs) 16 3 1615687680 6545 /usr/local/bin/phantomjs --webdriver=12650 --webdriver-logfile=/tmp/hadoop-general/nm-local-dir/usercache/general/appcache/application_1388632710048_0009/container_1388632710048_0009_01_000004/phantomjsdriver.log 
>     |- 12079 12018 12013 12013 (phantomjs) 17 2 1642020864 7542 /usr/local/bin/phantomjs --webdriver=18444 --webdriver-logfile=/tmp/hadoop-general/nm-local-dir/usercache/general/appcache/application_1388632710048_0009/container_1388632710048_0009_01_000004/phantomjsdriver.log 
> 
> Container killed on request. Exit code is 143
> 


-- 
CONFIDENTIALITY NOTICE
NOTICE: This message is intended for the use of the individual or entity to 
which it is addressed and may contain information that is confidential, 
privileged and exempt from disclosure under applicable law. If the reader 
of this message is not the intended recipient, you are hereby notified that 
any printing, copying, dissemination, distribution, disclosure or 
forwarding of this communication is strictly prohibited. If you have 
received this communication in error, please contact the sender immediately 
and delete it from your system. Thank You.
