Posted to common-user@hadoop.apache.org by panfei <cn...@gmail.com> on 2013/12/04 14:16:46 UTC

Fwd: Container [pid=22885,containerID=container_1386156666044_0001_01_000013] is running beyond physical memory limits. Current usage: 1.0 GB of 1 GB physical memory used; 332.5 GB of 8 GB virtual memory used. Killing container.

---------- Forwarded message ----------
From: panfei <cn...@gmail.com>
Date: 2013/12/4
Subject: Container
[pid=22885,containerID=container_1386156666044_0001_01_000013] is running
beyond physical memory limits. Current usage: 1.0 GB of 1 GB physical
memory used; 332.5 GB of 8 GB virtual memory used. Killing container.
To: CDH Users <cd...@cloudera.org>


Hi All:

We are using CDH 4.5 Hadoop in production. When we submit some (but not all)
jobs from Hive, we get the error below; it seems that neither the physical
memory nor the virtual memory is enough for the job to run:


Task with the most failures(4):
-----
Task ID:
  task_1386156666044_0001_m_000000

URL:

http://namenode-1:8088/taskdetails.jsp?jobid=job_1386156666044_0001&tipid=task_1386156666044_0001_m_000000
-----
Diagnostic Messages for this Task:
Container [pid=22885,containerID=container_1386156666044_0001_01_000013] is
running beyond physical memory limits. Current usage: 1.0 GB of 1 GB
physical memory used; 332.5 GB of 8 GB virtual memory used. Killing
container.
Dump of the process-tree for container_1386156666044_0001_01_000013 :
        |- PID PPID PGRPID SESSID CMD_NAME USER_MODE_TIME(MILLIS)
SYSTEM_TIME(MILLIS) VMEM_USAGE(BYTES) RSSMEM_USAGE(PAGES) FULL_CMD_LINE
        |- 22885 22036 22885 22885 (java) 5414 108 356993519616 271953
/usr/java/default/bin/java -Djava.net.preferIPv4Stack=true
-Dhadoop.metrics.log.level=WARN -Xmx200m
-Djava.io.tmpdir=/data/yarn/local/usercache/hive/appcache/application_1386156666044_0001/container_1386156666044_0001_01_000013/tmp
-Dlog4j.configuration=container-log4j.properties
-Dyarn.app.mapreduce.container.log.dir=/var/log/hadoop-yarn/containers/application_1386156666044_0001/container_1386156666044_0001_01_000013
-Dyarn.app.mapreduce.container.log.filesize=0 -Dhadoop.root.logger=INFO,CLA
org.apache.hadoop.mapred.YarnChild 192.168.101.55 60841
attempt_1386156666044_0001_m_000000_3 13

The following is some of our configuration:

  <property>
    <name>yarn.nodemanager.resource.memory-mb</name>
    <value>12288</value>
  </property>

  <property>
    <name>yarn.nodemanager.vmem-pmem-ratio</name>
    <value>8</value>
  </property>

  <property>
    <name>yarn.nodemanager.vmem-check-enabled</name>
    <value>false</value>
  </property>

  <property>
    <name>yarn.nodemanager.resource.cpu-vcores</name>
    <value>6</value>
  </property>
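
(For reference, working through the numbers in the diagnostic under this
configuration, and assuming the usual 4 KB Linux page size:)

  pmem limit:    1 GB, the container size (presumably the 1024 MB default of
                 mapreduce.map.memory.mb)
  vmem limit:    1 GB * 8 (yarn.nodemanager.vmem-pmem-ratio) = 8 GB
  observed vmem: 356993519616 bytes / 2^30 = ~332.5 GB (far beyond 8 GB)
  observed rss:  271953 pages * 4096 bytes = ~1.04 GB (just over the 1 GB limit)

Because yarn.nodemanager.vmem-check-enabled is false here, it is the
physical-memory check that kills the container; the 332.5 GB of virtual
memory is a symptom of something mapping an enormous amount of address space.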

Can you give me some advice? Thanks a lot.
-- 
不学习,不知道 (If you don't learn, you don't know.)

Re: Container [pid=22885,containerID=container_1386156666044_0001_01_000013] is running beyond physical memory limits. Current usage: 1.0 GB of 1 GB physical memory used; 332.5 GB of 8 GB virtual memory used. Killing container.

Posted by panfei <cn...@gmail.com>.
Hi All, thanks for your replies. I have found the root cause: there was a
memory leak in our Hive UDF (it opened a file for each record). After fixing
it, everything works well now.
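
For anyone who hits the same symptom, here is a minimal sketch of the broken
pattern and the fix. The class name, file path and lookup logic are made up
for illustration; they are not our actual UDF:

import org.apache.hadoop.hive.ql.exec.UDF;

import java.io.BufferedReader;
import java.io.FileReader;
import java.io.IOException;
import java.util.HashMap;
import java.util.Map;

// Hypothetical example. The broken version did the equivalent of
//   new FileReader("/tmp/lookup.dat")
// inside evaluate(), which Hive calls once per record, and never closed the
// stream, so descriptors and buffers accumulated until YARN killed the
// container.
public final class LookupUDF extends UDF {

    private Map<String, String> table;  // loaded once, reused for every record

    public String evaluate(String key) {
        if (table == null) {            // lazy one-time load, not a per-record open
            table = loadTable();
        }
        return key == null ? null : table.get(key);
    }

    private static Map<String, String> loadTable() {
        Map<String, String> m = new HashMap<String, String>();
        BufferedReader in = null;
        try {
            in = new BufferedReader(new FileReader("/tmp/lookup.dat"));
            String line;
            while ((line = in.readLine()) != null) {
                String[] kv = line.split("\t", 2);
                if (kv.length == 2) {
                    m.put(kv[0], kv[1]);
                }
            }
        } catch (IOException e) {
            throw new RuntimeException("failed to load lookup file", e);
        } finally {
            if (in != null) {
                try { in.close(); } catch (IOException ignored) { }
            }
        }
        return m;
    }
}

The point is simply that evaluate() runs once per record, so anything opened
there must either be closed in the same call or be opened once and cached.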


2013/12/6 Vinod Kumar Vavilapalli <vi...@hortonworks.com>

> Something looks really bad on your cluster. The JVM's heap size is 200MB
> but its virtual memory has ballooned to a monstrous 332GB. Does that ring
> any bell? Can you run regular java applications on this node? This doesn't
> seem related to YARN per-se.
>
> +Vinod
> Hortonworks Inc.
> http://hortonworks.com/
>
>
> On Wed, Dec 4, 2013 at 5:16 AM, panfei <cn...@gmail.com> wrote:
>
>>
>>
>> ---------- Forwarded message ----------
>> From: panfei <cn...@gmail.com>
>> Date: 2013/12/4
>> Subject: Container
>> [pid=22885,containerID=container_1386156666044_0001_01_000013] is running
>> beyond physical memory limits. Current usage: 1.0 GB of 1 GB physical
>> memory used; 332.5 GB of 8 GB virtual memory used. Killing container.
>> To: CDH Users <cd...@cloudera.org>
>>
>>
>> Hi All:
>>
>> We are using CDH4.5 Hadoop for our production, when submit some (not all)
>> jobs from hive, we get the following exception info , seems the physical
>> memory and virtual memory both not enough for the job to run:
>>
>>
>> Task with the most failures(4):
>> -----
>> Task ID:
>>   task_1386156666044_0001_m_000000
>>
>> URL:
>>
>> http://namenode-1:8088/taskdetails.jsp?jobid=job_1386156666044_0001&tipid=task_1386156666044_0001_m_000000
>> -----
>> Diagnostic Messages for this Task:
>> Container [pid=22885,containerID=container_1386156666044_0001_01_000013]
>> is running beyond physical memory limits. Current usage: 1.0 GB of 1 GB
>> physical memory used; 332.5 GB of 8 GB virtual memory used. Killing
>> container.
>> Dump of the process-tree for container_1386156666044_0001_01_000013 :
>>         |- PID PPID PGRPID SESSID CMD_NAME USER_MODE_TIME(MILLIS)
>> SYSTEM_TIME(MILLIS) VMEM_USAGE(BYTES) RSSMEM_USAGE(PAGES) FULL_CMD_LINE
>>         |- 22885 22036 22885 22885 (java) 5414 108 356993519616 271953
>> /usr/java/default/bin/java -Djava.net.preferIPv4Stack=true
>> -Dhadoop.metrics.log.level=WARN -Xmx200m
>> -Djava.io.tmpdir=/data/yarn/local/usercache/hive/appcache/application_1386156666044_0001/container_1386156666044_0001_01_000013/tmp
>> -Dlog4j.configuration=container-log4j.properties
>> -Dyarn.app.mapreduce.container.log.dir=/var/log/hadoop-yarn/containers/application_1386156666044_0001/container_1386156666044_0001_01_000013
>> -Dyarn.app.mapreduce.container.log.filesize=0 -Dhadoop.root.logger=INFO,CLA
>> org.apache.hadoop.mapred.YarnChild 192.168.101.55 60841
>> attempt_1386156666044_0001_m_000000_3 13
>>
>> following is some of our configuration:
>>
>>   <property>
>>     <name>yarn.nodemanager.resource.memory-mb</name>
>>     <value>12288</value>
>>   </property>
>>
>>   <property>
>>     <name>yarn.nodemanager.vmem-pmem-ratio</name>
>>     <value>8</value>
>>   </property>
>>
>>   <property>
>>     <name>yarn.nodemanager.vmem-check-enabled</name>
>>     <value>false</value>
>>   </property>
>>
>>   <property>
>>     <name>yarn.nodemanager.resource.cpu-vcores</name>
>>     <value>6</value>
>>   </property>
>>
>> can you give me some advice? thanks a lot.
>> --
>> 不学习,不知道
>>
>>
>>
>> --
>> 不学习,不知道
>>




-- 
不学习,不知道 (If you don't learn, you don't know.)


Re: Container [pid=22885,containerID=container_1386156666044_0001_01_000013] is running beyond physical memory limits. Current usage: 1.0 GB of 1 GB physical memory used; 332.5 GB of 8 GB virtual memory used. Killing container.

Posted by Vinod Kumar Vavilapalli <vi...@hortonworks.com>.
Something looks really bad on your cluster. The JVM's heap size is 200 MB,
but its virtual memory has ballooned to a monstrous 332 GB. Does that ring
a bell? Can you run regular Java applications on this node? This doesn't
seem related to YARN per se.
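
(If it helps to see where that address space is going, something like the
following on the affected node while the task is still running would show
the largest mappings; this assumes standard Linux procps tools, and 22885
is the pid from the dump above:)

pmap -x 22885 | sort -n -k2 | tail          # largest mappings by virtual size (Kbytes column)
grep -E 'VmSize|VmRSS' /proc/22885/status   # process totals, comparable to the dump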

+Vinod
Hortonworks Inc.
http://hortonworks.com/


On Wed, Dec 4, 2013 at 5:16 AM, panfei <cn...@gmail.com> wrote:

>
>
> ---------- Forwarded message ----------
> From: panfei <cn...@gmail.com>
> Date: 2013/12/4
> Subject: Container
> [pid=22885,containerID=container_1386156666044_0001_01_000013] is running
> beyond physical memory limits. Current usage: 1.0 GB of 1 GB physical
> memory used; 332.5 GB of 8 GB virtual memory used. Killing container.
> To: CDH Users <cd...@cloudera.org>
>
>
> Hi All:
>
> We are using CDH4.5 Hadoop for our production, when submit some (not all)
> jobs from hive, we get the following exception info , seems the physical
> memory and virtual memory both not enough for the job to run:
>
>
> Task with the most failures(4):
> -----
> Task ID:
>   task_1386156666044_0001_m_000000
>
> URL:
>
> http://namenode-1:8088/taskdetails.jsp?jobid=job_1386156666044_0001&tipid=task_1386156666044_0001_m_000000
> -----
> Diagnostic Messages for this Task:
> Container [pid=22885,containerID=container_1386156666044_0001_01_000013]
> is running beyond physical memory limits. Current usage: 1.0 GB of 1 GB
> physical memory used; 332.5 GB of 8 GB virtual memory used. Killing
> container.
> Dump of the process-tree for container_1386156666044_0001_01_000013 :
>         |- PID PPID PGRPID SESSID CMD_NAME USER_MODE_TIME(MILLIS)
> SYSTEM_TIME(MILLIS) VMEM_USAGE(BYTES) RSSMEM_USAGE(PAGES) FULL_CMD_LINE
>         |- 22885 22036 22885 22885 (java) 5414 108 356993519616 271953
> /usr/java/default/bin/java -Djava.net.preferIPv4Stack=true
> -Dhadoop.metrics.log.level=WARN -Xmx200m
> -Djava.io.tmpdir=/data/yarn/local/usercache/hive/appcache/application_1386156666044_0001/container_1386156666044_0001_01_000013/tmp
> -Dlog4j.configuration=container-log4j.properties
> -Dyarn.app.mapreduce.container.log.dir=/var/log/hadoop-yarn/containers/application_1386156666044_0001/container_1386156666044_0001_01_000013
> -Dyarn.app.mapreduce.container.log.filesize=0 -Dhadoop.root.logger=INFO,CLA
> org.apache.hadoop.mapred.YarnChild 192.168.101.55 60841
> attempt_1386156666044_0001_m_000000_3 13
>
> following is some of our configuration:
>
>   <property>
>     <name>yarn.nodemanager.resource.memory-mb</name>
>     <value>12288</value>
>   </property>
>
>   <property>
>     <name>yarn.nodemanager.vmem-pmem-ratio</name>
>     <value>8</value>
>   </property>
>
>   <property>
>     <name>yarn.nodemanager.vmem-check-enabled</name>
>     <value>false</value>
>   </property>
>
>   <property>
>     <name>yarn.nodemanager.resource.cpu-vcores</name>
>     <value>6</value>
>   </property>
>
> can you give me some advice? thanks a lot.
> --
> 不学习,不知道
>
>
>
> --
> 不学习,不知道
>


Re: Container [pid=22885,containerID=container_1386156666044_0001_01_000013] is running beyond physical memory limits. Current usage: 1.0 GB of 1 GB physical memory used; 332.5 GB of 8 GB virtual memory used. Killing container.

Posted by YouPeng Yang <yy...@gmail.com>.
Hi,

  Have you spread your config across the whole cluster?

  And have you checked whether the failing containers are concentrated on
any particular nodes?
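
(For example, assuming passwordless ssh to the workers and the standard CDH
config path, with hostnames as placeholders, comparing checksums would show
any config drift between nodes:)

for h in worker-1 worker-2 worker-3; do
  ssh $h md5sum /etc/hadoop/conf/yarn-site.xml /etc/hadoop/conf/mapred-site.xml
done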


regards


2013/12/5 panfei <cn...@gmail.com>

> Hi YouPeng, thanks for your advice. I have read the docs and configure the
> parameters as follows:
>
> Physical Server: 8 cores CPU, 16GB memory.
>
> For YARN:
>
> yarn.nodemanager.resource.memory-mb set to 12GB and keep 4GB for the OS.
>
> yarn.scheduler.minimum-allocation-mb set to 2048M  as the minimum
> allocation unit for the container.
>
> yarn.nodemanager.vmem-pmem-ratio is the default value 2.1
>
>
> FOR MAPREDUCE:
>
> mapreduce.map.memory.mb set to 2048 for map task containers.
>
> mapreduce.reduce.memory.mb set to 4096 for reduce task containers.
>
> mapreduce.map.java.opts set to -Xmx1536m
>
> mapreduce.reduce.java.opts set to -Xmx3072m
>
>
>
> after setting theses parameters, the problem still there, I think it's
> time to get back to HADOOP 1.0 infrastructure.
>
> thanks for your advice again.
>
>
>
> 2013/12/5 YouPeng Yang <yy...@gmail.com>
>
>> Hi
>>
>>  please reference to
>> http://hortonworks.com/blog/how-to-plan-and-configure-yarn-in-hdp-2-0/
>>
>>
>>
>> 2013/12/5 panfei <cn...@gmail.com>
>>
>>> we have already tried several values of these two parameters, but it
>>> seems no use.
>>>
>>>
>>> 2013/12/5 Tsuyoshi OZAWA <oz...@gmail.com>
>>>
>>>> Hi,
>>>>
>>>> Please check the properties like mapreduce.reduce.memory.mb and
>>>> mapreduce.map.memory.mb in mapred-site.xml. These properties decide
>>>> resource limits for mappers/reducers.
>>>>
>>>> On Wed, Dec 4, 2013 at 10:16 PM, panfei <cn...@gmail.com> wrote:
>>>> >
>>>> >
>>>> > ---------- Forwarded message ----------
>>>> > From: panfei <cn...@gmail.com>
>>>> > Date: 2013/12/4
>>>> > Subject: Container
>>>> > [pid=22885,containerID=container_1386156666044_0001_01_000013] is
>>>> running
>>>> > beyond physical memory limits. Current usage: 1.0 GB of 1 GB physical
>>>> memory
>>>> > used; 332.5 GB of 8 GB virtual memory used. Killing container.
>>>> > To: CDH Users <cd...@cloudera.org>
>>>> >
>>>> >
>>>> > Hi All:
>>>> >
>>>> > We are using CDH4.5 Hadoop for our production, when submit some (not
>>>> all)
>>>> > jobs from hive, we get the following exception info , seems the
>>>> physical
>>>> > memory and virtual memory both not enough for the job to run:
>>>> >
>>>> >
>>>> > Task with the most failures(4):
>>>> > -----
>>>> > Task ID:
>>>> >   task_1386156666044_0001_m_000000
>>>> >
>>>> > URL:
>>>> >
>>>> >
>>>> http://namenode-1:8088/taskdetails.jsp?jobid=job_1386156666044_0001&tipid=task_1386156666044_0001_m_000000
>>>> > -----
>>>> > Diagnostic Messages for this Task:
>>>> > Container
>>>> [pid=22885,containerID=container_1386156666044_0001_01_000013] is
>>>> > running beyond physical memory limits. Current usage: 1.0 GB of 1 GB
>>>> > physical memory used; 332.5 GB of 8 GB virtual memory used. Killing
>>>> > container.
>>>> > Dump of the process-tree for container_1386156666044_0001_01_000013 :
>>>> >         |- PID PPID PGRPID SESSID CMD_NAME USER_MODE_TIME(MILLIS)
>>>> > SYSTEM_TIME(MILLIS) VMEM_USAGE(BYTES) RSSMEM_USAGE(PAGES)
>>>> FULL_CMD_LINE
>>>> >         |- 22885 22036 22885 22885 (java) 5414 108 356993519616 271953
>>>> > /usr/java/default/bin/java -Djava.net.preferIPv4Stack=true
>>>> > -Dhadoop.metrics.log.level=WARN -Xmx200m
>>>> >
>>>> -Djava.io.tmpdir=/data/yarn/local/usercache/hive/appcache/application_1386156666044_0001/container_1386156666044_0001_01_000013/tmp
>>>> > -Dlog4j.configuration=container-log4j.properties
>>>> >
>>>> -Dyarn.app.mapreduce.container.log.dir=/var/log/hadoop-yarn/containers/application_1386156666044_0001/container_1386156666044_0001_01_000013
>>>> > -Dyarn.app.mapreduce.container.log.filesize=0
>>>> -Dhadoop.root.logger=INFO,CLA
>>>> > org.apache.hadoop.mapred.YarnChild 192.168.101.55 60841
>>>> > attempt_1386156666044_0001_m_000000_3 13
>>>> >
>>>> > following is some of our configuration:
>>>> >
>>>> >   <property>
>>>> >     <name>yarn.nodemanager.resource.memory-mb</name>
>>>> >     <value>12288</value>
>>>> >   </property>
>>>> >
>>>> >   <property>
>>>> >     <name>yarn.nodemanager.vmem-pmem-ratio</name>
>>>> >     <value>8</value>
>>>> >   </property>
>>>> >
>>>> >   <property>
>>>> >     <name>yarn.nodemanager.vmem-check-enabled</name>
>>>> >     <value>false</value>
>>>> >   </property>
>>>> >
>>>> >   <property>
>>>> >     <name>yarn.nodemanager.resource.cpu-vcores</name>
>>>> >     <value>6</value>
>>>> >   </property>
>>>> >
>>>> > can you give me some advice? thanks a lot.
>>>> > --
>>>> > 不学习,不知道
>>>> >
>>>> >
>>>> >
>>>> > --
>>>> > 不学习,不知道
>>>>
>>>>
>>>>
>>>> --
>>>> - Tsuyoshi
>>>>
>>>
>>>
>>>
>>> --
>>> 不学习,不知道
>>>
>>
>>
>
>
> --
> 不学习,不知道
>


Re: Container [pid=22885,containerID=container_1386156666044_0001_01_000013] is running beyond physical memory limits. Current usage: 1.0 GB of 1 GB physical memory used; 332.5 GB of 8 GB virtual memory used. Killing container.

Posted by panfei <cn...@gmail.com>.
Hi YouPeng, thanks for your advice. I have read the docs and configured the
parameters as follows:

Physical server: 8-core CPU, 16 GB memory.

For YARN:

yarn.nodemanager.resource.memory-mb set to 12288 (12 GB), keeping 4 GB for the OS.

yarn.scheduler.minimum-allocation-mb set to 2048, the minimum allocation
unit for a container.

yarn.nodemanager.vmem-pmem-ratio is left at its default value of 2.1.


For MapReduce:

mapreduce.map.memory.mb set to 2048 for map task containers.

mapreduce.reduce.memory.mb set to 4096 for reduce task containers.

mapreduce.map.java.opts set to -Xmx1536m

mapreduce.reduce.java.opts set to -Xmx3072m
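
(In mapred-site.xml form, the four task settings above are just:)

  <property>
    <name>mapreduce.map.memory.mb</name>
    <value>2048</value>
  </property>

  <property>
    <name>mapreduce.map.java.opts</name>
    <value>-Xmx1536m</value>
  </property>

  <property>
    <name>mapreduce.reduce.memory.mb</name>
    <value>4096</value>
  </property>

  <property>
    <name>mapreduce.reduce.java.opts</name>
    <value>-Xmx3072m</value>
  </property>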



After setting these parameters the problem is still there; I think it's time
to go back to the Hadoop 1.0 infrastructure.

Thanks for your advice again.



2013/12/5 YouPeng Yang <yy...@gmail.com>

> Hi
>
>  please reference to
> http://hortonworks.com/blog/how-to-plan-and-configure-yarn-in-hdp-2-0/
>
>
>
> 2013/12/5 panfei <cn...@gmail.com>
>
>> we have already tried several values of these two parameters, but it
>> seems no use.
>>
>>
>> 2013/12/5 Tsuyoshi OZAWA <oz...@gmail.com>
>>
>>> Hi,
>>>
>>> Please check the properties like mapreduce.reduce.memory.mb and
>>> mapreduce.map.memory.mb in mapred-site.xml. These properties decide
>>> resource limits for mappers/reducers.
>>>
>>> On Wed, Dec 4, 2013 at 10:16 PM, panfei <cn...@gmail.com> wrote:
>>> >
>>> >
>>> > ---------- Forwarded message ----------
>>> > From: panfei <cn...@gmail.com>
>>> > Date: 2013/12/4
>>> > Subject: Container
>>> > [pid=22885,containerID=container_1386156666044_0001_01_000013] is
>>> running
>>> > beyond physical memory limits. Current usage: 1.0 GB of 1 GB physical
>>> memory
>>> > used; 332.5 GB of 8 GB virtual memory used. Killing container.
>>> > To: CDH Users <cd...@cloudera.org>
>>> >
>>> >
>>> > Hi All:
>>> >
>>> > We are using CDH4.5 Hadoop for our production, when submit some (not
>>> all)
>>> > jobs from hive, we get the following exception info , seems the
>>> physical
>>> > memory and virtual memory both not enough for the job to run:
>>> >
>>> >
>>> > Task with the most failures(4):
>>> > -----
>>> > Task ID:
>>> >   task_1386156666044_0001_m_000000
>>> >
>>> > URL:
>>> >
>>> >
>>> http://namenode-1:8088/taskdetails.jsp?jobid=job_1386156666044_0001&tipid=task_1386156666044_0001_m_000000
>>> > -----
>>> > Diagnostic Messages for this Task:
>>> > Container
>>> [pid=22885,containerID=container_1386156666044_0001_01_000013] is
>>> > running beyond physical memory limits. Current usage: 1.0 GB of 1 GB
>>> > physical memory used; 332.5 GB of 8 GB virtual memory used. Killing
>>> > container.
>>> > Dump of the process-tree for container_1386156666044_0001_01_000013 :
>>> >         |- PID PPID PGRPID SESSID CMD_NAME USER_MODE_TIME(MILLIS)
>>> > SYSTEM_TIME(MILLIS) VMEM_USAGE(BYTES) RSSMEM_USAGE(PAGES) FULL_CMD_LINE
>>> >         |- 22885 22036 22885 22885 (java) 5414 108 356993519616 271953
>>> > /usr/java/default/bin/java -Djava.net.preferIPv4Stack=true
>>> > -Dhadoop.metrics.log.level=WARN -Xmx200m
>>> >
>>> -Djava.io.tmpdir=/data/yarn/local/usercache/hive/appcache/application_1386156666044_0001/container_1386156666044_0001_01_000013/tmp
>>> > -Dlog4j.configuration=container-log4j.properties
>>> >
>>> -Dyarn.app.mapreduce.container.log.dir=/var/log/hadoop-yarn/containers/application_1386156666044_0001/container_1386156666044_0001_01_000013
>>> > -Dyarn.app.mapreduce.container.log.filesize=0
>>> -Dhadoop.root.logger=INFO,CLA
>>> > org.apache.hadoop.mapred.YarnChild 192.168.101.55 60841
>>> > attempt_1386156666044_0001_m_000000_3 13
>>> >
>>> > following is some of our configuration:
>>> >
>>> >   <property>
>>> >     <name>yarn.nodemanager.resource.memory-mb</name>
>>> >     <value>12288</value>
>>> >   </property>
>>> >
>>> >   <property>
>>> >     <name>yarn.nodemanager.vmem-pmem-ratio</name>
>>> >     <value>8</value>
>>> >   </property>
>>> >
>>> >   <property>
>>> >     <name>yarn.nodemanager.vmem-check-enabled</name>
>>> >     <value>false</value>
>>> >   </property>
>>> >
>>> >   <property>
>>> >     <name>yarn.nodemanager.resource.cpu-vcores</name>
>>> >     <value>6</value>
>>> >   </property>
>>> >
>>> > can you give me some advice? thanks a lot.
>>> > --
>>> > 不学习,不知道
>>> >
>>> >
>>> >
>>> > --
>>> > 不学习,不知道
>>>
>>>
>>>
>>> --
>>> - Tsuyoshi
>>>
>>
>>
>>
>> --
>> 不学习,不知道
>>
>
>


-- 
不学习,不知道 (If you don't learn, you don't know.)

Re: Container [pid=22885,containerID=container_1386156666044_0001_01_000013] is running beyond physical memory limits. Current usage: 1.0 GB of 1 GB physical memory used; 332.5 GB of 8 GB virtual memory used. Killing container.

Posted by panfei <cn...@gmail.com>.
Hi YouPeng, thanks for your advice. I have read the docs and configure the
parameters as follows:

Physical Server: 8 cores CPU, 16GB memory.

For YARN:

yarn.nodemanager.resource.memory-mb set to 12GB and keep 4GB for the OS.

yarn.scheduler.minimum-allocation-mb set to 2048M  as the minimum
allocation unit for the container.

yarn.nodemanager.vmem-pmem-ratio is the default value 2.1


FOR MAPREDUCE:

mapreduce.map.memory.mb set to 2048 for map task containers.

mapreduce.reduce.memory.mb set to 4096 for reduce task containers.

mapreduce.map.java.opts set to -Xmx1536m

mapreduce.reduce.java.opts set to -Xmx3072m



after setting theses parameters, the problem still there, I think it's time
to get back to HADOOP 1.0 infrastructure.

thanks for your advice again.



2013/12/5 YouPeng Yang <yy...@gmail.com>

> Hi
>
>  please reference to
> http://hortonworks.com/blog/how-to-plan-and-configure-yarn-in-hdp-2-0/
>
>
>
> 2013/12/5 panfei <cn...@gmail.com>
>
>> we have already tried several values of these two parameters, but it
>> seems no use.
>>
>>
>> 2013/12/5 Tsuyoshi OZAWA <oz...@gmail.com>
>>
>>> Hi,
>>>
>>> Please check the properties like mapreduce.reduce.memory.mb and
>>> mapredce.map.memory.mb in mapred-site.xml. These properties decide
>>> resource limits for mappers/reducers.
>>>
>>> On Wed, Dec 4, 2013 at 10:16 PM, panfei <cn...@gmail.com> wrote:
>>> >
>>> >
>>> > ---------- Forwarded message ----------
>>> > From: panfei <cn...@gmail.com>
>>> > Date: 2013/12/4
>>> > Subject: Container
>>> > [pid=22885,containerID=container_1386156666044_0001_01_000013] is
>>> running
>>> > beyond physical memory limits. Current usage: 1.0 GB of 1 GB physical
>>> memory
>>> > used; 332.5 GB of 8 GB virtual memory used. Killing container.
>>> > To: CDH Users <cd...@cloudera.org>
>>> >
>>> >
>>> > Hi All:
>>> >
>>> > We are using CDH4.5 Hadoop for our production, when submit some (not
>>> all)
>>> > jobs from hive, we get the following exception info , seems the
>>> physical
>>> > memory and virtual memory both not enough for the job to run:
>>> >
>>> >
>>> > Task with the most failures(4):
>>> > -----
>>> > Task ID:
>>> >   task_1386156666044_0001_m_000000
>>> >
>>> > URL:
>>> >
>>> >
>>> http://namenode-1:8088/taskdetails.jsp?jobid=job_1386156666044_0001&tipid=task_1386156666044_0001_m_000000
>>> > -----
>>> > Diagnostic Messages for this Task:
>>> > Container
>>> [pid=22885,containerID=container_1386156666044_0001_01_000013] is
>>> > running beyond physical memory limits. Current usage: 1.0 GB of 1 GB
>>> > physical memory used; 332.5 GB of 8 GB virtual memory used. Killing
>>> > container.
>>> > Dump of the process-tree for container_1386156666044_0001_01_000013 :
>>> >         |- PID PPID PGRPID SESSID CMD_NAME USER_MODE_TIME(MILLIS)
>>> > SYSTEM_TIME(MILLIS) VMEM_USAGE(BYTES) RSSMEM_USAGE(PAGES) FULL_CMD_LINE
>>> >         |- 22885 22036 22885 22885 (java) 5414 108 356993519616 271953
>>> > /usr/java/default/bin/java -Djava.net.preferIPv4Stack=true
>>> > -Dhadoop.metrics.log.level=WARN -Xmx200m
>>> >
>>> -Djava.io.tmpdir=/data/yarn/local/usercache/hive/appcache/application_1386156666044_0001/container_1386156666044_0001_01_000013/tmp
>>> > -Dlog4j.configuration=container-log4j.properties
>>> >
>>> -Dyarn.app.mapreduce.container.log.dir=/var/log/hadoop-yarn/containers/application_1386156666044_0001/container_1386156666044_0001_01_000013
>>> > -Dyarn.app.mapreduce.container.log.filesize=0
>>> -Dhadoop.root.logger=INFO,CLA
>>> > org.apache.hadoop.mapred.YarnChild 192.168.101.55 60841
>>> > attempt_1386156666044_0001_m_000000_3 13
>>> >
>>> > following is some of our configuration:
>>> >
>>> >   <property>
>>> >     <name>yarn.nodemanager.resource.memory-mb</name>
>>> >     <value>12288</value>
>>> >   </property>
>>> >
>>> >   <property>
>>> >     <name>yarn.nodemanager.vmem-pmem-ratio</name>
>>> >     <value>8</value>
>>> >   </property>
>>> >
>>> >   <property>
>>> >     <name>yarn.nodemanager.vmem-check-enabled</name>
>>> >     <value>false</value>
>>> >   </property>
>>> >
>>> >   <property>
>>> >     <name>yarn.nodemanager.resource.cpu-vcores</name>
>>> >     <value>6</value>
>>> >   </property>
>>> >
>>> > can you give me some advice? thanks a lot.
>>> > --
>>> > 不学习,不知道
>>> >
>>> >
>>> >
>>> > --
>>> > 不学习,不知道
>>>
>>>
>>>
>>> --
>>> - Tsuyoshi
>>>
>>
>>
>>
>> --
>> 不学习,不知道
>>
>
>


-- 
不学习,不知道

Re: Container [pid=22885,containerID=container_1386156666044_0001_01_000013] is running beyond physical memory limits. Current usage: 1.0 GB of 1 GB physical memory used; 332.5 GB of 8 GB virtual memory used. Killing container.

Posted by panfei <cn...@gmail.com>.
Hi YouPeng, thanks for your advice. I have read the docs and configure the
parameters as follows:

Physical Server: 8 cores CPU, 16GB memory.

For YARN:

yarn.nodemanager.resource.memory-mb set to 12GB and keep 4GB for the OS.

yarn.scheduler.minimum-allocation-mb set to 2048M  as the minimum
allocation unit for the container.

yarn.nodemanager.vmem-pmem-ratio is the default value 2.1


FOR MAPREDUCE:

mapreduce.map.memory.mb set to 2048 for map task containers.

mapreduce.reduce.memory.mb set to 4096 for reduce task containers.

mapreduce.map.java.opts set to -Xmx1536m

mapreduce.reduce.java.opts set to -Xmx3072m



after setting theses parameters, the problem still there, I think it's time
to get back to HADOOP 1.0 infrastructure.

thanks for your advice again.



2013/12/5 YouPeng Yang <yy...@gmail.com>

> Hi
>
>  please reference to
> http://hortonworks.com/blog/how-to-plan-and-configure-yarn-in-hdp-2-0/
>
>
>
> 2013/12/5 panfei <cn...@gmail.com>
>
>> we have already tried several values of these two parameters, but it
>> seems no use.
>>
>>
>> 2013/12/5 Tsuyoshi OZAWA <oz...@gmail.com>
>>
>>> Hi,
>>>
>>> Please check the properties like mapreduce.reduce.memory.mb and
>>> mapredce.map.memory.mb in mapred-site.xml. These properties decide
>>> resource limits for mappers/reducers.
>>>
>>> On Wed, Dec 4, 2013 at 10:16 PM, panfei <cn...@gmail.com> wrote:
>>> >
>>> >
>>> > ---------- Forwarded message ----------
>>> > From: panfei <cn...@gmail.com>
>>> > Date: 2013/12/4
>>> > Subject: Container
>>> > [pid=22885,containerID=container_1386156666044_0001_01_000013] is
>>> running
>>> > beyond physical memory limits. Current usage: 1.0 GB of 1 GB physical
>>> memory
>>> > used; 332.5 GB of 8 GB virtual memory used. Killing container.
>>> > To: CDH Users <cd...@cloudera.org>
>>> >
>>> >
>>> > Hi All:
>>> >
>>> > We are using CDH4.5 Hadoop for our production, when submit some (not
>>> all)
>>> > jobs from hive, we get the following exception info , seems the
>>> physical
>>> > memory and virtual memory both not enough for the job to run:
>>> >
>>> >
>>> > Task with the most failures(4):
>>> > -----
>>> > Task ID:
>>> >   task_1386156666044_0001_m_000000
>>> >
>>> > URL:
>>> >
>>> >
>>> http://namenode-1:8088/taskdetails.jsp?jobid=job_1386156666044_0001&tipid=task_1386156666044_0001_m_000000
>>> > -----
>>> > Diagnostic Messages for this Task:
>>> > Container
>>> [pid=22885,containerID=container_1386156666044_0001_01_000013] is
>>> > running beyond physical memory limits. Current usage: 1.0 GB of 1 GB
>>> > physical memory used; 332.5 GB of 8 GB virtual memory used. Killing
>>> > container.
>>> > Dump of the process-tree for container_1386156666044_0001_01_000013 :
>>> >         |- PID PPID PGRPID SESSID CMD_NAME USER_MODE_TIME(MILLIS)
>>> > SYSTEM_TIME(MILLIS) VMEM_USAGE(BYTES) RSSMEM_USAGE(PAGES) FULL_CMD_LINE
>>> >         |- 22885 22036 22885 22885 (java) 5414 108 356993519616 271953
>>> > /usr/java/default/bin/java -Djava.net.preferIPv4Stack=true
>>> > -Dhadoop.metrics.log.level=WARN -Xmx200m
>>> >
>>> -Djava.io.tmpdir=/data/yarn/local/usercache/hive/appcache/application_1386156666044_0001/container_1386156666044_0001_01_000013/tmp
>>> > -Dlog4j.configuration=container-log4j.properties
>>> >
>>> -Dyarn.app.mapreduce.container.log.dir=/var/log/hadoop-yarn/containers/application_1386156666044_0001/container_1386156666044_0001_01_000013
>>> > -Dyarn.app.mapreduce.container.log.filesize=0
>>> -Dhadoop.root.logger=INFO,CLA
>>> > org.apache.hadoop.mapred.YarnChild 192.168.101.55 60841
>>> > attempt_1386156666044_0001_m_000000_3 13
>>> >
>>> > following is some of our configuration:
>>> >
>>> >   <property>
>>> >     <name>yarn.nodemanager.resource.memory-mb</name>
>>> >     <value>12288</value>
>>> >   </property>
>>> >
>>> >   <property>
>>> >     <name>yarn.nodemanager.vmem-pmem-ratio</name>
>>> >     <value>8</value>
>>> >   </property>
>>> >
>>> >   <property>
>>> >     <name>yarn.nodemanager.vmem-check-enabled</name>
>>> >     <value>false</value>
>>> >   </property>
>>> >
>>> >   <property>
>>> >     <name>yarn.nodemanager.resource.cpu-vcores</name>
>>> >     <value>6</value>
>>> >   </property>
>>> >
>>> > can you give me some advice? thanks a lot.
>>> > --
>>> > 不学习,不知道
>>> >
>>> >
>>> >
>>> > --
>>> > 不学习,不知道
>>>
>>>
>>>
>>> --
>>> - Tsuyoshi
>>>
>>
>>
>>
>> --
>> 不学习,不知道
>>
>
>


-- 
不学习,不知道

Re: Container [pid=22885,containerID=container_1386156666044_0001_01_000013] is running beyond physical memory limits. Current usage: 1.0 GB of 1 GB physical memory used; 332.5 GB of 8 GB virtual memory used. Killing container.

Posted by YouPeng Yang <yy...@gmail.com>.
Hi

 please refer to
http://hortonworks.com/blog/how-to-plan-and-configure-yarn-in-hdp-2-0/



2013/12/5 panfei <cn...@gmail.com>

> we have already tried several values of these two parameters, but it seems
> no use.
>
>
> 2013/12/5 Tsuyoshi OZAWA <oz...@gmail.com>
>
>> Hi,
>>
>> Please check the properties like mapreduce.reduce.memory.mb and
>> mapreduce.map.memory.mb in mapred-site.xml. These properties decide
>> resource limits for mappers/reducers.
>>
>> On Wed, Dec 4, 2013 at 10:16 PM, panfei <cn...@gmail.com> wrote:
>> >
>> >
>> > ---------- Forwarded message ----------
>> > From: panfei <cn...@gmail.com>
>> > Date: 2013/12/4
>> > Subject: Container
>> > [pid=22885,containerID=container_1386156666044_0001_01_000013] is
>> running
>> > beyond physical memory limits. Current usage: 1.0 GB of 1 GB physical
>> memory
>> > used; 332.5 GB of 8 GB virtual memory used. Killing container.
>> > To: CDH Users <cd...@cloudera.org>
>> >
>> >
>> > Hi All:
>> >
>> > We are using CDH4.5 Hadoop for our production, when submit some (not
>> all)
>> > jobs from hive, we get the following exception info , seems the physical
>> > memory and virtual memory both not enough for the job to run:
>> >
>> >
>> > Task with the most failures(4):
>> > -----
>> > Task ID:
>> >   task_1386156666044_0001_m_000000
>> >
>> > URL:
>> >
>> >
>> http://namenode-1:8088/taskdetails.jsp?jobid=job_1386156666044_0001&tipid=task_1386156666044_0001_m_000000
>> > -----
>> > Diagnostic Messages for this Task:
>> > Container
>> [pid=22885,containerID=container_1386156666044_0001_01_000013] is
>> > running beyond physical memory limits. Current usage: 1.0 GB of 1 GB
>> > physical memory used; 332.5 GB of 8 GB virtual memory used. Killing
>> > container.
>> > Dump of the process-tree for container_1386156666044_0001_01_000013 :
>> >         |- PID PPID PGRPID SESSID CMD_NAME USER_MODE_TIME(MILLIS)
>> > SYSTEM_TIME(MILLIS) VMEM_USAGE(BYTES) RSSMEM_USAGE(PAGES) FULL_CMD_LINE
>> >         |- 22885 22036 22885 22885 (java) 5414 108 356993519616 271953
>> > /usr/java/default/bin/java -Djava.net.preferIPv4Stack=true
>> > -Dhadoop.metrics.log.level=WARN -Xmx200m
>> >
>> -Djava.io.tmpdir=/data/yarn/local/usercache/hive/appcache/application_1386156666044_0001/container_1386156666044_0001_01_000013/tmp
>> > -Dlog4j.configuration=container-log4j.properties
>> >
>> -Dyarn.app.mapreduce.container.log.dir=/var/log/hadoop-yarn/containers/application_1386156666044_0001/container_1386156666044_0001_01_000013
>> > -Dyarn.app.mapreduce.container.log.filesize=0
>> -Dhadoop.root.logger=INFO,CLA
>> > org.apache.hadoop.mapred.YarnChild 192.168.101.55 60841
>> > attempt_1386156666044_0001_m_000000_3 13
>> >
>> > following is some of our configuration:
>> >
>> >   <property>
>> >     <name>yarn.nodemanager.resource.memory-mb</name>
>> >     <value>12288</value>
>> >   </property>
>> >
>> >   <property>
>> >     <name>yarn.nodemanager.vmem-pmem-ratio</name>
>> >     <value>8</value>
>> >   </property>
>> >
>> >   <property>
>> >     <name>yarn.nodemanager.vmem-check-enabled</name>
>> >     <value>false</value>
>> >   </property>
>> >
>> >   <property>
>> >     <name>yarn.nodemanager.resource.cpu-vcores</name>
>> >     <value>6</value>
>> >   </property>
>> >
>> > can you give me some advice? thanks a lot.
>> > --
>> > 不学习,不知道
>> >
>> >
>> >
>> > --
>> > 不学习,不知道
>>
>>
>>
>> --
>> - Tsuyoshi
>>
>
>
>
> --
> 不学习,不知道
>

Re: Container [pid=22885,containerID=container_1386156666044_0001_01_000013] is running beyond physical memory limits. Current usage: 1.0 GB of 1 GB physical memory used; 332.5 GB of 8 GB virtual memory used. Killing container.

Posted by panfei <cn...@gmail.com>.
we have already tried several values of these two parameters, but it seems
to be of no use.


2013/12/5 Tsuyoshi OZAWA <oz...@gmail.com>

> Hi,
>
> Please check the properties like mapreduce.reduce.memory.mb and
> mapreduce.map.memory.mb in mapred-site.xml. These properties decide
> resource limits for mappers/reducers.
>
> On Wed, Dec 4, 2013 at 10:16 PM, panfei <cn...@gmail.com> wrote:
> >
> >
> > ---------- Forwarded message ----------
> > From: panfei <cn...@gmail.com>
> > Date: 2013/12/4
> > Subject: Container
> > [pid=22885,containerID=container_1386156666044_0001_01_000013] is running
> > beyond physical memory limits. Current usage: 1.0 GB of 1 GB physical
> memory
> > used; 332.5 GB of 8 GB virtual memory used. Killing container.
> > To: CDH Users <cd...@cloudera.org>
> >
> >
> > Hi All:
> >
> > We are using CDH4.5 Hadoop for our production, when submit some (not all)
> > jobs from hive, we get the following exception info , seems the physical
> > memory and virtual memory both not enough for the job to run:
> >
> >
> > Task with the most failures(4):
> > -----
> > Task ID:
> >   task_1386156666044_0001_m_000000
> >
> > URL:
> >
> >
> http://namenode-1:8088/taskdetails.jsp?jobid=job_1386156666044_0001&tipid=task_1386156666044_0001_m_000000
> > -----
> > Diagnostic Messages for this Task:
> > Container [pid=22885,containerID=container_1386156666044_0001_01_000013]
> is
> > running beyond physical memory limits. Current usage: 1.0 GB of 1 GB
> > physical memory used; 332.5 GB of 8 GB virtual memory used. Killing
> > container.
> > Dump of the process-tree for container_1386156666044_0001_01_000013 :
> >         |- PID PPID PGRPID SESSID CMD_NAME USER_MODE_TIME(MILLIS)
> > SYSTEM_TIME(MILLIS) VMEM_USAGE(BYTES) RSSMEM_USAGE(PAGES) FULL_CMD_LINE
> >         |- 22885 22036 22885 22885 (java) 5414 108 356993519616 271953
> > /usr/java/default/bin/java -Djava.net.preferIPv4Stack=true
> > -Dhadoop.metrics.log.level=WARN -Xmx200m
> >
> -Djava.io.tmpdir=/data/yarn/local/usercache/hive/appcache/application_1386156666044_0001/container_1386156666044_0001_01_000013/tmp
> > -Dlog4j.configuration=container-log4j.properties
> >
> -Dyarn.app.mapreduce.container.log.dir=/var/log/hadoop-yarn/containers/application_1386156666044_0001/container_1386156666044_0001_01_000013
> > -Dyarn.app.mapreduce.container.log.filesize=0
> -Dhadoop.root.logger=INFO,CLA
> > org.apache.hadoop.mapred.YarnChild 192.168.101.55 60841
> > attempt_1386156666044_0001_m_000000_3 13
> >
> > following is some of our configuration:
> >
> >   <property>
> >     <name>yarn.nodemanager.resource.memory-mb</name>
> >     <value>12288</value>
> >   </property>
> >
> >   <property>
> >     <name>yarn.nodemanager.vmem-pmem-ratio</name>
> >     <value>8</value>
> >   </property>
> >
> >   <property>
> >     <name>yarn.nodemanager.vmem-check-enabled</name>
> >     <value>false</value>
> >   </property>
> >
> >   <property>
> >     <name>yarn.nodemanager.resource.cpu-vcores</name>
> >     <value>6</value>
> >   </property>
> >
> > can you give me some advice? thanks a lot.
> > --
> > 不学习,不知道
> >
> >
> >
> > --
> > 不学习,不知道
>
>
>
> --
> - Tsuyoshi
>



-- 
不学习,不知道

Re: Container [pid=22885,containerID=container_1386156666044_0001_01_000013] is running beyond physical memory limits. Current usage: 1.0 GB of 1 GB physical memory used; 332.5 GB of 8 GB virtual memory used. Killing container.

Posted by Tsuyoshi OZAWA <oz...@gmail.com>.
Hi,

Please check the properties like mapreduce.reduce.memory.mb and
mapreduce.map.memory.mb in mapred-site.xml. These properties determine the
resource limits for mappers/reducers.
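
As a sanity check against the log above: the container died at "1.0 GB of
1 GB physical memory used", which matches the stock MRv2 default of 1024 MB
for map containers, suggesting the tuned values were not reaching the job.
A minimal sketch of raising the map-side limit in mapred-site.xml (the 2048
value is only illustrative):

  <property>
    <!-- per-task container size enforced by YARN; the MRv2 default is 1024 -->
    <name>mapreduce.map.memory.mb</name>
    <value>2048</value>
  </property>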

On Wed, Dec 4, 2013 at 10:16 PM, panfei <cn...@gmail.com> wrote:
>
>
> ---------- Forwarded message ----------
> From: panfei <cn...@gmail.com>
> Date: 2013/12/4
> Subject: Container
> [pid=22885,containerID=container_1386156666044_0001_01_000013] is running
> beyond physical memory limits. Current usage: 1.0 GB of 1 GB physical memory
> used; 332.5 GB of 8 GB virtual memory used. Killing container.
> To: CDH Users <cd...@cloudera.org>
>
>
> Hi All:
>
> We are using CDH4.5 Hadoop for our production, when submit some (not all)
> jobs from hive, we get the following exception info , seems the physical
> memory and virtual memory both not enough for the job to run:
>
>
> Task with the most failures(4):
> -----
> Task ID:
>   task_1386156666044_0001_m_000000
>
> URL:
>
> http://namenode-1:8088/taskdetails.jsp?jobid=job_1386156666044_0001&tipid=task_1386156666044_0001_m_000000
> -----
> Diagnostic Messages for this Task:
> Container [pid=22885,containerID=container_1386156666044_0001_01_000013] is
> running beyond physical memory limits. Current usage: 1.0 GB of 1 GB
> physical memory used; 332.5 GB of 8 GB virtual memory used. Killing
> container.
> Dump of the process-tree for container_1386156666044_0001_01_000013 :
>         |- PID PPID PGRPID SESSID CMD_NAME USER_MODE_TIME(MILLIS)
> SYSTEM_TIME(MILLIS) VMEM_USAGE(BYTES) RSSMEM_USAGE(PAGES) FULL_CMD_LINE
>         |- 22885 22036 22885 22885 (java) 5414 108 356993519616 271953
> /usr/java/default/bin/java -Djava.net.preferIPv4Stack=true
> -Dhadoop.metrics.log.level=WARN -Xmx200m
> -Djava.io.tmpdir=/data/yarn/local/usercache/hive/appcache/application_1386156666044_0001/container_1386156666044_0001_01_000013/tmp
> -Dlog4j.configuration=container-log4j.properties
> -Dyarn.app.mapreduce.container.log.dir=/var/log/hadoop-yarn/containers/application_1386156666044_0001/container_1386156666044_0001_01_000013
> -Dyarn.app.mapreduce.container.log.filesize=0 -Dhadoop.root.logger=INFO,CLA
> org.apache.hadoop.mapred.YarnChild 192.168.101.55 60841
> attempt_1386156666044_0001_m_000000_3 13
>
> following is some of our configuration:
>
>   <property>
>     <name>yarn.nodemanager.resource.memory-mb</name>
>     <value>12288</value>
>   </property>
>
>   <property>
>     <name>yarn.nodemanager.vmem-pmem-ratio</name>
>     <value>8</value>
>   </property>
>
>   <property>
>     <name>yarn.nodemanager.vmem-check-enabled</name>
>     <value>false</value>
>   </property>
>
>   <property>
>     <name>yarn.nodemanager.resource.cpu-vcores</name>
>     <value>6</value>
>   </property>
>
> can you give me some advice? thanks a lot.
> --
> 不学习,不知道
>
>
>
> --
> 不学习,不知道



-- 
- Tsuyoshi

Re: Container [pid=22885,containerID=container_1386156666044_0001_01_000013] is running beyond physical memory limits. Current usage: 1.0 GB of 1 GB physical memory used; 332.5 GB of 8 GB virtual memory used. Killing container.

Posted by Vinod Kumar Vavilapalli <vi...@hortonworks.com>.
Something looks really bad on your cluster. The JVM's heap size is 200MB
but its virtual memory has ballooned to a monstrous 332GB. Does that ring
any bell? Can you run regular Java applications on this node? This doesn't
seem related to YARN per se.
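
Putting numbers on the kill message (a back-of-the-envelope check, assuming
the 4 KB pages of the process-tree dump): virtual memory is 356993519616
bytes, i.e. the reported 332.5 GB, against a vmem limit of 8 GB, which is
the vmem-pmem-ratio of 8 times the 1 GB container size; RSS is 271953 pages
x 4096 bytes, roughly 1.04 GB, just over the 1 GB physical limit that
actually triggered the kill.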

+Vinod
Hortonworks Inc.
http://hortonworks.com/


On Wed, Dec 4, 2013 at 5:16 AM, panfei <cn...@gmail.com> wrote:

>
>
> ---------- Forwarded message ----------
> From: panfei <cn...@gmail.com>
> Date: 2013/12/4
> Subject: Container
> [pid=22885,containerID=container_1386156666044_0001_01_000013] is running
> beyond physical memory limits. Current usage: 1.0 GB of 1 GB physical
> memory used; 332.5 GB of 8 GB virtual memory used. Killing container.
> To: CDH Users <cd...@cloudera.org>
>
>
> Hi All:
>
> We are using CDH4.5 Hadoop for our production, when submit some (not all)
> jobs from hive, we get the following exception info , seems the physical
> memory and virtual memory both not enough for the job to run:
>
>
> Task with the most failures(4):
> -----
> Task ID:
>   task_1386156666044_0001_m_000000
>
> URL:
>
> http://namenode-1:8088/taskdetails.jsp?jobid=job_1386156666044_0001&tipid=task_1386156666044_0001_m_000000
> -----
> Diagnostic Messages for this Task:
> Container [pid=22885,containerID=container_1386156666044_0001_01_000013]
> is running beyond physical memory limits. Current usage: 1.0 GB of 1 GB
> physical memory used; 332.5 GB of 8 GB virtual memory used. Killing
> container.
> Dump of the process-tree for container_1386156666044_0001_01_000013 :
>         |- PID PPID PGRPID SESSID CMD_NAME USER_MODE_TIME(MILLIS)
> SYSTEM_TIME(MILLIS) VMEM_USAGE(BYTES) RSSMEM_USAGE(PAGES) FULL_CMD_LINE
>         |- 22885 22036 22885 22885 (java) 5414 108 356993519616 271953
> /usr/java/default/bin/java -Djava.net.preferIPv4Stack=true
> -Dhadoop.metrics.log.level=WARN -Xmx200m
> -Djava.io.tmpdir=/data/yarn/local/usercache/hive/appcache/application_1386156666044_0001/container_1386156666044_0001_01_000013/tmp
> -Dlog4j.configuration=container-log4j.properties
> -Dyarn.app.mapreduce.container.log.dir=/var/log/hadoop-yarn/containers/application_1386156666044_0001/container_1386156666044_0001_01_000013
> -Dyarn.app.mapreduce.container.log.filesize=0 -Dhadoop.root.logger=INFO,CLA
> org.apache.hadoop.mapred.YarnChild 192.168.101.55 60841
> attempt_1386156666044_0001_m_000000_3 13
>
> following is some of our configuration:
>
>   <property>
>     <name>yarn.nodemanager.resource.memory-mb</name>
>     <value>12288</value>
>   </property>
>
>   <property>
>     <name>yarn.nodemanager.vmem-pmem-ratio</name>
>     <value>8</value>
>   </property>
>
>   <property>
>     <name>yarn.nodemanager.vmem-check-enabled</name>
>     <value>false</value>
>   </property>
>
>   <property>
>     <name>yarn.nodemanager.resource.cpu-vcores</name>
>     <value>6</value>
>   </property>
>
> can you give me some advice? thanks a lot.
> --
> 不学习,不知道
>
>
>
> --
> 不学习,不知道
>

