Posted to mapreduce-user@hadoop.apache.org by adu <du...@hzduozhun.com> on 2014/10/08 06:19:24 UTC

hadoop distcp container killed

Hi all,

I'm using distcp to copy a large file (20 GB+) between two clusters, and I get the following error:

Container [pid=12876,containerID=container_1411625661257_0156_01_000002]
is running beyond virtual memory limits. Current usage: 157.6 MB of 1 GB
physical memory used; 12.1 GB of 10 GB virtual memory used. Killing
container.
Dump of the process-tree for container_1411625661257_0156_01_000002 :
|- PID PPID PGRPID SESSID CMD_NAME USER_MODE_TIME(MILLIS)
SYSTEM_TIME(MILLIS) VMEM_USAGE(BYTES) RSSMEM_USAGE(PAGES) FULL_CMD_LINE
|- 12881 12876 12876 12876 (java) 891 65 12923498496 40050
/usr/java/latest/bin/java -Djava.net.preferIPv4Stack=true
-Dhadoop.metrics.log.level=WARN -Xmx2048m
-Djava.io.tmpdir=/alidata1/data/hdfs/node_manager/usercache/root/appcache/application_1411625661257_0156/container_1411625661257_0156_01_000002/tmp
-Dlog4j.configuration=container-log4j.properties
-Dyarn.app.container.log.dir=/alidata1/data/hdfs/nodemanager_log/application_1411625661257_0156/container_1411625661257_0156_01_000002
-Dyarn.app.container.log.filesize=0 -Dhadoop.root.logger=INFO,CLA
org.apache.hadoop.mapred.YarnChild 10.162.39.154 49876
attempt_1411625661257_0156_m_000000_0 2
|- 12876 15020 12876 12876 (bash) 0 0 108646400 299 /bin/bash -c
/usr/java/latest/bin/java -Djava.net.preferIPv4Stack=true
-Dhadoop.metrics.log.level=WARN -Xmx2048m
-Djava.io.tmpdir=/alidata1/data/hdfs/node_manager/usercache/root/appcache/application_1411625661257_0156/container_1411625661257_0156_01_000002/tmp
-Dlog4j.configuration=container-log4j.properties
-Dyarn.app.container.log.dir=/alidata1/data/hdfs/nodemanager_log/application_1411625661257_0156/container_1411625661257_0156_01_000002
-Dyarn.app.container.log.filesize=0 -Dhadoop.root.logger=INFO,CLA
org.apache.hadoop.mapred.YarnChild 10.162.39.154 49876
attempt_1411625661257_0156_m_000000_0 2
1>/alidata1/data/hdfs/nodemanager_log/application_1411625661257_0156/container_1411625661257_0156_01_000002/stdout
2>/alidata1/data/hdfs/nodemanager_log/application_1411625661257_0156/container_1411625661257_0156_01_000002/stderr

Container killed on request. Exit code is 143
Container exited with a non-zero exit code 143
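As I understand it, the virtual-memory ceiling in that message is derived from the container's physical-memory limit multiplied by yarn.nodemanager.vmem-pmem-ratio. Here is a toy sketch of that check using the numbers from the log (the function names are mine, not YARN's actual code):

```python
# Toy reproduction of YARN's virtual-memory check. Assumed semantics:
# vmem limit = container physical-memory limit * yarn.nodemanager.vmem-pmem-ratio.
# Values are taken from the error message above; names are illustrative.

def vmem_limit_gb(pmem_limit_gb: float, vmem_pmem_ratio: float) -> float:
    """Virtual-memory ceiling enforced for a container."""
    return pmem_limit_gb * vmem_pmem_ratio

def container_killed(vmem_used_gb: float, pmem_limit_gb: float,
                     vmem_pmem_ratio: float) -> bool:
    """True when the NodeManager would kill the container for vmem overuse."""
    return vmem_used_gb > vmem_limit_gb(pmem_limit_gb, vmem_pmem_ratio)

# The log shows a 1 GB physical limit with a 10 GB vmem ceiling (ratio 10),
# and 12.1 GB of virtual memory in use -- so the container is killed.
print(container_killed(vmem_used_gb=12.1, pmem_limit_gb=1.0, vmem_pmem_ratio=10.0))
```

Note the JVM is launched with -Xmx2048m inside a container whose physical limit is only 1 GB, which also looks inconsistent to me.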


The source Hadoop version is 2.2.0 and the destination is 2.4.1. I
tried -m with 50, 100, and 100 to increase the number of map tasks,
but it doesn't help. It seems the file isn't split into smaller parts.
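My understanding is that distcp assigns work per whole file, so -m only caps the number of maps and can never split one large file across several of them. A toy sketch of that assignment (not distcp's real code; the names are mine):

```python
# Toy model of per-file work assignment. Assumption: distcp gives each
# file to exactly one map task, so -m cannot split a single large file.

def assign_files_to_maps(file_sizes_gb, max_maps):
    """Greedily spread whole files across at most max_maps buckets by size."""
    maps = [[] for _ in range(min(max_maps, len(file_sizes_gb)))]
    loads = [0.0] * len(maps)
    for size in sorted(file_sizes_gb, reverse=True):
        i = loads.index(min(loads))  # least-loaded map gets the next file
        maps[i].append(size)
        loads[i] += size
    return maps

# One 20 GB file with -m 50: only one map ever gets any work.
print(assign_files_to_maps([20.0], 50))  # -> [[20.0]]
```

So if this model is right, raising -m would only matter when copying many files, not one big one.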

Any help would be appreciated. Thanks.