Posted to common-user@hadoop.apache.org by José Luis Larroque <la...@gmail.com> on 2016/01/16 19:07:19 UTC

Can't run hadoop examples with YARN Single node cluster

Hi there, I'm currently running a single-node YARN cluster, Hadoop 2.4.0,
and for some reason I can't run even one of the examples that come with
MapReduce (grep, wordcount, etc.). This is the command I use for grep:

    $HADOOP_HOME/bin/yarn jar \
        /usr/local/hadoop/share/hadoop/yarn/lib/hadoop-mapreduce-examples-2.4.0.jar \
        grep input output2 'dfs[a-z.]+'

This cluster was previously running Giraph programs, but right now I need a
MapReduce application, so I switched it back to pure YARN.

All failed containers had the same error:

    Container: container_1452447718890_0001_01_000002 on localhost_37976
    ======================================================================
    LogType: stderr
    LogLength: 45
    Log Contents:
*    Error: Could not find or load main class 256*

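For reference, log aggregation is enabled on this cluster
(yarn.log-aggregation-enable = true), so the full per-container logs for a
finished application can be pulled with something like this (using the
application id reported by the client):

    $HADOOP_HOME/bin/yarn logs -applicationId <application_id>
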
Main logs:

    SLF4J: Class path contains multiple SLF4J bindings.
    SLF4J: Found binding in [jar:file:/usr/local/hadoop/share/hadoop/common/lib/slf4j-log4j12-1.7.5.jar!/org/slf4j/impl/StaticLoggerBinder.class]
    SLF4J: Found binding in [jar:file:/usr/local/hadoop/share/hadoop/yarn/lib/giraph-examples-1.1.0-for-hadoop-2.4.0-jar-with-dependencies.jar!/org/slf4j/impl/StaticLoggerBinder.class]
    SLF4J: Found binding in [jar:file:/usr/local/hadoop/share/hadoop/mapreduce/giraph-examples-1.1.0-for-hadoop-2.4.0-jar-with-dependencies.jar!/org/slf4j/impl/StaticLoggerBinder.class]
    SLF4J: See http://www.slf4j.org/codes.html#multiple_bindings for an explanation.
    SLF4J: Actual binding is of type [org.slf4j.impl.Log4jLoggerFactory]
    16/01/15 21:53:50 WARN util.NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
    16/01/15 21:53:50 INFO client.RMProxy: Connecting to ResourceManager at hdnode01/192.168.0.10:8050
    16/01/15 21:53:51 WARN mapreduce.JobSubmitter: No job jar file set. User classes may not be found. See Job or Job#setJar(String).
    16/01/15 21:53:51 INFO input.FileInputFormat: Total input paths to process : 1
    16/01/15 21:53:52 INFO mapreduce.JobSubmitter: number of splits:1
    16/01/15 21:53:52 INFO mapreduce.JobSubmitter: Submitting tokens for job: job_1452905418747_0001
    16/01/15 21:53:53 INFO mapred.YARNRunner: Job jar is not present. Not adding any jar to the list of resources.
    16/01/15 21:53:53 INFO impl.YarnClientImpl: Submitted application application_1452905418747_0001
    16/01/15 21:53:54 INFO mapreduce.Job: The url to track the job: http://localhost:8088/proxy/application_1452905418747_0001/
    16/01/15 21:53:54 INFO mapreduce.Job: Running job: job_1452905418747_0001
    16/01/15 21:54:04 INFO mapreduce.Job: Job job_1452905418747_0001 running in uber mode : false
    16/01/15 21:54:04 INFO mapreduce.Job:  map 0% reduce 0%
    16/01/15 21:54:07 INFO mapreduce.Job: Task Id : attempt_1452905418747_0001_m_000000_0, Status : FAILED
    Exception from container-launch: org.apache.hadoop.util.Shell$ExitCodeException:
    org.apache.hadoop.util.Shell$ExitCodeException:
        at org.apache.hadoop.util.Shell.runCommand(Shell.java:505)
        at org.apache.hadoop.util.Shell.run(Shell.java:418)
        at org.apache.hadoop.util.Shell$ShellCommandExecutor.execute(Shell.java:650)
        at org.apache.hadoop.yarn.server.nodemanager.DefaultContainerExecutor.launchContainer(DefaultContainerExecutor.java:195)
        at org.apache.hadoop.yarn.server.nodemanager.containermanager.launcher.ContainerLaunch.call(ContainerLaunch.java:300)
        at org.apache.hadoop.yarn.server.nodemanager.containermanager.launcher.ContainerLaunch.call(ContainerLaunch.java:81)
        at java.util.concurrent.FutureTask.run(FutureTask.java:262)
        at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
        at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
        at java.lang.Thread.run(Thread.java:745)


    Container exited with a non-zero exit code 1

    16/01/15 21:54:11 INFO mapreduce.Job: Task Id : attempt_1452905418747_0001_m_000000_1, Status : FAILED
    Exception from container-launch: org.apache.hadoop.util.Shell$ExitCodeException:
    org.apache.hadoop.util.Shell$ExitCodeException:
        at org.apache.hadoop.util.Shell.runCommand(Shell.java:505)
        at org.apache.hadoop.util.Shell.run(Shell.java:418)
        at org.apache.hadoop.util.Shell$ShellCommandExecutor.execute(Shell.java:650)
        at org.apache.hadoop.yarn.server.nodemanager.DefaultContainerExecutor.launchContainer(DefaultContainerExecutor.java:195)
        at org.apache.hadoop.yarn.server.nodemanager.containermanager.launcher.ContainerLaunch.call(ContainerLaunch.java:300)
        at org.apache.hadoop.yarn.server.nodemanager.containermanager.launcher.ContainerLaunch.call(ContainerLaunch.java:81)
        at java.util.concurrent.FutureTask.run(FutureTask.java:262)
        at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
        at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
        at java.lang.Thread.run(Thread.java:745)


    Container exited with a non-zero exit code 1

    16/01/15 21:54:15 INFO mapreduce.Job: Task Id : attempt_1452905418747_0001_m_000000_2, Status : FAILED
    Exception from container-launch: org.apache.hadoop.util.Shell$ExitCodeException:
    org.apache.hadoop.util.Shell$ExitCodeException:
        at org.apache.hadoop.util.Shell.runCommand(Shell.java:505)
        at org.apache.hadoop.util.Shell.run(Shell.java:418)
        at org.apache.hadoop.util.Shell$ShellCommandExecutor.execute(Shell.java:650)
        at org.apache.hadoop.yarn.server.nodemanager.DefaultContainerExecutor.launchContainer(DefaultContainerExecutor.java:195)
        at org.apache.hadoop.yarn.server.nodemanager.containermanager.launcher.ContainerLaunch.call(ContainerLaunch.java:300)
        at org.apache.hadoop.yarn.server.nodemanager.containermanager.launcher.ContainerLaunch.call(ContainerLaunch.java:81)
        at java.util.concurrent.FutureTask.run(FutureTask.java:262)
        at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
        at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
        at java.lang.Thread.run(Thread.java:745)


    Container exited with a non-zero exit code 1

    16/01/15 21:54:21 INFO mapreduce.Job:  map 100% reduce 100%
    16/01/15 21:54:21 INFO mapreduce.Job: Job job_1452905418747_0001 failed with state FAILED due to: Task failed task_1452905418747_0001_m_000000
    Job failed as tasks failed. failedMaps:1 failedReduces:0

    16/01/15 21:54:21 INFO mapreduce.Job: Counters: 12
        Job Counters
            Failed map tasks=4
            Launched map tasks=4
            Other local map tasks=3
            Data-local map tasks=1
            Total time spent by all maps in occupied slots (ms)=15548
            Total time spent by all reduces in occupied slots (ms)=0
            Total time spent by all map tasks (ms)=7774
            Total vcore-seconds taken by all map tasks=7774
            Total megabyte-seconds taken by all map tasks=3980288
        Map-Reduce Framework
            CPU time spent (ms)=0
            Physical memory (bytes) snapshot=0
            Virtual memory (bytes) snapshot=0
    16/01/15 21:54:21 INFO client.RMProxy: Connecting to ResourceManager at hdnode01/192.168.0.10:8050
    16/01/15 21:54:22 WARN mapreduce.JobSubmitter: No job jar file set. User classes may not be found. See Job or Job#setJar(String).
    16/01/15 21:54:22 INFO input.FileInputFormat: Total input paths to process : 0
    16/01/15 21:54:22 INFO mapreduce.JobSubmitter: number of splits:0
    16/01/15 21:54:22 INFO mapreduce.JobSubmitter: Submitting tokens for job: job_1452905418747_0002
    16/01/15 21:54:22 INFO mapred.YARNRunner: Job jar is not present. Not adding any jar to the list of resources.
    16/01/15 21:54:22 INFO impl.YarnClientImpl: Submitted application application_1452905418747_0002
    16/01/15 21:54:22 INFO mapreduce.Job: The url to track the job: http://localhost:8088/proxy/application_1452905418747_0002/
    16/01/15 21:54:22 INFO mapreduce.Job: Running job: job_1452905418747_0002
    16/01/15 21:54:32 INFO mapreduce.Job: Job job_1452905418747_0002 running in uber mode : false
    16/01/15 21:54:32 INFO mapreduce.Job:  map 0% reduce 0%
    16/01/15 21:54:36 INFO mapreduce.Job: Task Id : attempt_1452905418747_0002_r_000000_0, Status : FAILED
    Exception from container-launch: org.apache.hadoop.util.Shell$ExitCodeException:
    org.apache.hadoop.util.Shell$ExitCodeException:
        at org.apache.hadoop.util.Shell.runCommand(Shell.java:505)
        at org.apache.hadoop.util.Shell.run(Shell.java:418)
        at org.apache.hadoop.util.Shell$ShellCommandExecutor.execute(Shell.java:650)
        at org.apache.hadoop.yarn.server.nodemanager.DefaultContainerExecutor.launchContainer(DefaultContainerExecutor.java:195)
        at org.apache.hadoop.yarn.server.nodemanager.containermanager.launcher.ContainerLaunch.call(ContainerLaunch.java:300)
        at org.apache.hadoop.yarn.server.nodemanager.containermanager.launcher.ContainerLaunch.call(ContainerLaunch.java:81)
        at java.util.concurrent.FutureTask.run(FutureTask.java:262)
        at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
        at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
        at java.lang.Thread.run(Thread.java:745)


    Container exited with a non-zero exit code 1

    16/01/15 21:54:41 INFO mapreduce.Job: Task Id : attempt_1452905418747_0002_r_000000_1, Status : FAILED
    Exception from container-launch: org.apache.hadoop.util.Shell$ExitCodeException:
    org.apache.hadoop.util.Shell$ExitCodeException:
        at org.apache.hadoop.util.Shell.runCommand(Shell.java:505)
        at org.apache.hadoop.util.Shell.run(Shell.java:418)
        at org.apache.hadoop.util.Shell$ShellCommandExecutor.execute(Shell.java:650)
        at org.apache.hadoop.yarn.server.nodemanager.DefaultContainerExecutor.launchContainer(DefaultContainerExecutor.java:195)
        at org.apache.hadoop.yarn.server.nodemanager.containermanager.launcher.ContainerLaunch.call(ContainerLaunch.java:300)
        at org.apache.hadoop.yarn.server.nodemanager.containermanager.launcher.ContainerLaunch.call(ContainerLaunch.java:81)
        at java.util.concurrent.FutureTask.run(FutureTask.java:262)
        at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
        at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
        at java.lang.Thread.run(Thread.java:745)


    Container exited with a non-zero exit code 1

    16/01/15 21:54:46 INFO mapreduce.Job: Task Id : attempt_1452905418747_0002_r_000000_2, Status : FAILED
    Exception from container-launch: org.apache.hadoop.util.Shell$ExitCodeException:
    org.apache.hadoop.util.Shell$ExitCodeException:
        at org.apache.hadoop.util.Shell.runCommand(Shell.java:505)
        at org.apache.hadoop.util.Shell.run(Shell.java:418)
        at org.apache.hadoop.util.Shell$ShellCommandExecutor.execute(Shell.java:650)
        at org.apache.hadoop.yarn.server.nodemanager.DefaultContainerExecutor.launchContainer(DefaultContainerExecutor.java:195)
        at org.apache.hadoop.yarn.server.nodemanager.containermanager.launcher.ContainerLaunch.call(ContainerLaunch.java:300)
        at org.apache.hadoop.yarn.server.nodemanager.containermanager.launcher.ContainerLaunch.call(ContainerLaunch.java:81)
        at java.util.concurrent.FutureTask.run(FutureTask.java:262)
        at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
        at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
        at java.lang.Thread.run(Thread.java:745)


    Container exited with a non-zero exit code 1

    16/01/15 21:54:51 INFO mapreduce.Job:  map 0% reduce 100%
    16/01/15 21:54:52 INFO mapreduce.Job: Job job_1452905418747_0002 failed with state FAILED due to: Task failed task_1452905418747_0002_r_000000
    Job failed as tasks failed. failedMaps:0 failedReduces:1

    16/01/15 21:54:52 INFO mapreduce.Job: Counters: 10
        Job Counters
            Failed reduce tasks=4
            Launched reduce tasks=4
            Total time spent by all maps in occupied slots (ms)=0
            Total time spent by all reduces in occupied slots (ms)=11882
            Total time spent by all reduce tasks (ms)=5941
            Total vcore-seconds taken by all reduce tasks=5941
            Total megabyte-seconds taken by all reduce tasks=3041792
        Map-Reduce Framework
            CPU time spent (ms)=0
            Physical memory (bytes) snapshot=0
            Virtual memory (bytes) snapshot=0

I switched mapreduce.framework.name from:

<property>
<name>mapreduce.framework.name</name>
<value>yarn</value>
</property>

To:

<property>
<name>mapreduce.framework.name</name>
<value>local</value>
</property>

and grep and other mapreduce jobs are working again.

I don't understand why it doesn't work with the *"yarn"* value in
*mapreduce.framework.name*, but does without it (using "local").

Any idea how to fix this without switching the value of
mapreduce.framework.name?


Bye!
Jose

Re: Can't run hadoop examples with YARN Single node cluster

Posted by Namikaze Minato <ll...@gmail.com>.
Heh, good job and thank you very much for posting the solution here, not
many people do that :)

I don't have the feeling I helped much, but finding a solution is what
*counts*, not only *words* :D

Regards,
LLoyd

On 7 March 2016 at 22:50, José Luis Larroque <la...@gmail.com> wrote:

> Hi again guys, I could finally find what the issue was!!!
>
> This is my mapred-site.xml; here is the problem:
> <?xml version="1.0"?>
> <?xml-stylesheet type="text/xsl" href="configuration.xsl"?>
>
> <configuration>
> <property>
> <name>mapred.job.tracker</name>
> <!--
> <value>local</value> For debugging
> <value>hdnode01:54311</value> For the real thing
> -->
> <value>hdnode01:54311</value>
> </property>
>
> <property>
> <name>mapred.tasktracker.map.tasks.maximum</name>
> <value>4</value>
> </property>
>
> <property>
> <name>mapreduce.job.maps</name>
> <value>4</value>
> </property>
>
> <property>
> <name>mapreduce.framework.name</name>
> <value>yarn</value>
> </property>
>
> <property>
> <name>mapreduce.map.memory.mb</name>
> <value>512</value>
> </property>
>
> <property>
> <name>mapreduce.reduce.memory.mb</name>
> <value>512</value>
> </property>
>
> <property>
> <name>mapreduce.map.java.opts</name>
> <value>256</value>
> </property>
>
> <property>
> <name>mapreduce.reduce.java.opts</name>
> <value>256</value>
> </property>
> </configuration>
>
> If I suppress the last two properties (mapreduce.map.java.opts,
> mapreduce.reduce.java.opts), wordcount works!
>
> I remember putting those last two properties in for a memory issue of some
> kind, but maybe for some reason they clash with the other two
> (mapreduce.map.memory.mb, mapreduce.reduce.memory.mb)?
>
> It would be great if someone could give me a short explanation to better
> understand the memory management of a YARN cluster.
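>
> In case anyone else hits this: the value of mapreduce.map.java.opts and
> mapreduce.reduce.java.opts goes straight onto the container's java command
> line, so a bare "256" ends up being read as the class to run, which matches
> the "Could not find or load main class 256" error in the container logs.
> A minimal sketch of what those two properties expect instead (assuming you
> want 256 MB heaps inside the 512 MB containers):
>
> <property>
> <name>mapreduce.map.java.opts</name>
> <value>-Xmx256m</value>
> </property>
>
> <property>
> <name>mapreduce.reduce.java.opts</name>
> <value>-Xmx256m</value>
> </property>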
>
>
> PS: Thanks again Namikaze and Gaurav for their help!!
>
> Bye!
> Jose
>
> 2016-01-25 21:19 GMT-03:00 José Luis Larroque <la...@gmail.com>:
>
>> Thanks Namikaze for keeping on trying, don't give up!! :D
>>
>> - I have these lines in *$HOME/.bashrc*
>>
>>
>> export HADOOP_PREFIX=/usr/local/hadoop
>>
>> # Others variables
>>
>> export HADOOP_COMMON_HOME=${HADOOP_PREFIX}
>>
>> export HADOOP_MAPRED_HOME=${HADOOP_PREFIX}
>>
>> export HADOOP_HDFS_HOME=${HADOOP_PREFIX}
>>
>> export HADOOP_YARN_HOME=${HADOOP_PREFIX}
>>
>>
>>   - In *hadoop-env.sh* I have:
>>
>> export HADOOP_CONF_DIR=${HADOOP_CONF_DIR:-"/usr/local/hadoop/etc/hadoop"}
>>
>>
>>   - I read that SO question and all the answers to it. The only useful
>> answer, in my opinion, was checking the yarn classpath. I have the
>> following line three times:
>>
>> /usr/local/hadoop/etc/hadoop:
>>
>>
>> I put yarn.application.classpath in yarn-site.xml, with the default value
>> recommended in
>> https://hadoop.apache.org/docs/r2.4.1/hadoop-yarn/hadoop-yarn-common/yarn-default.xml
>> (see yarn.application.classpath), because I don't know any other way to fix it:
>>
>>
>> $HADOOP_CONF_DIR, $HADOOP_COMMON_HOME/share/hadoop/common/*,
>> $HADOOP_COMMON_HOME/share/hadoop/common/lib/*, $HADOOP_HDFS_HOME/share/hadoop/hdfs/*,
>> $HADOOP_HDFS_HOME/share/hadoop/hdfs/lib/*, $HADOOP_YARN_HOME/share/hadoop/yarn/*,
>> $HADOOP_YARN_HOME/share/hadoop/yarn/lib/*
>>
>>
>> But the classpath remains the same, and I can't find any other way to fix
>> it. Maybe this is the problem?
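>>
>> For completeness, this is roughly how that property looks in my
>> yarn-site.xml (just the default value from the page above):
>>
>> <property>
>> <name>yarn.application.classpath</name>
>> <value>$HADOOP_CONF_DIR,$HADOOP_COMMON_HOME/share/hadoop/common/*,$HADOOP_COMMON_HOME/share/hadoop/common/lib/*,$HADOOP_HDFS_HOME/share/hadoop/hdfs/*,$HADOOP_HDFS_HOME/share/hadoop/hdfs/lib/*,$HADOOP_YARN_HOME/share/hadoop/yarn/*,$HADOOP_YARN_HOME/share/hadoop/yarn/lib/*</value>
>> </property>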
>>
>>
>>  - yarn.log-aggregation-enable was always set to true. I couldn't find
>> anything in the *datanode logs*; here they are:
>>
>> 2016-01-25 21:13:07,006 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: STARTUP_MSG:
>> /************************************************************
>> STARTUP_MSG: Starting DataNode
>> STARTUP_MSG:   host = jose-ubuntu/127.0.0.1
>> STARTUP_MSG:   args = []
>> STARTUP_MSG:   version = 2.4.0
>> STARTUP_MSG:   classpath = /usr/local/hadoop/etc/hadoop:/usr/local/hadoop/share/hadoop/common/lib/jackson-jaxrs-1.8.8.jar:/usr/local/hadoop/share/hadoop/common/lib/jersey-json-1.9.jar:/usr/local/hadoop/share/hadoop/common/lib/commons-compress-1.4.1.jar:/usr/local/hadoop/share/hadoop/common/lib/httpcore-4.2.5.jar:/usr/local/hadoop/share/hadoop/common/lib/httpclient-4.2.5.jar:/usr/local/hadoop/share/hadoop/common/lib/commons-beanutils-1.7.0.jar:/usr/local/hadoop/share/hadoop/common/lib/jettison-1.1.jar:/usr/local/hadoop/share/hadoop/common/lib/jaxb-impl-2.2.3-1.jar:/usr/local/hadoop/share/hadoop/common/lib/activation-1.1.jar:/usr/local/hadoop/share/hadoop/common/lib/jackson-core-asl-1.8.8.jar:/usr/local/hadoop/share/hadoop/common/lib/commons-net-3.1.jar:/usr/local/hadoop/share/hadoop/common/lib/commons-cli-1.2.jar:/usr/local/hadoop/share/hadoop/common/lib/commons-codec-1.4.jar:/usr/local/hadoop/share/hadoop/common/lib/junit-4.8.2.jar:/usr/local/hadoop/share/hadoop/common/lib/paranamer-2.3.jar:/usr/local/hadoop/share/hadoop/common/lib/jasper-compiler-5.5.23.jar:/usr/local/hadoop/share/hadoop/common/lib/protobuf-java-2.5.0.jar:/usr/local/hadoop/share/hadoop/common/lib/commons-digester-1.8.jar:/usr/local/hadoop/share/hadoop/common/lib/jets3t-0.9.0.jar:/usr/local/hadoop/share/hadoop/common/lib/jackson-xc-1.8.8.jar:/usr/local/hadoop/share/hadoop/common/lib/jersey-server-1.9.jar:/usr/local/hadoop/share/hadoop/common/lib/jetty-util-6.1.26.jar:/usr/local/hadoop/share/hadoop/common/lib/mockito-all-1.8.5.jar:/usr/local/hadoop/share/hadoop/common/lib/xmlenc-0.52.jar:/usr/local/hadoop/share/hadoop/common/lib/slf4j-api-1.7.5.jar:/usr/local/hadoop/share/hadoop/common/lib/stax-api-1.0-2.jar:/usr/local/hadoop/share/hadoop/common/lib/avro-1.7.4.jar:/usr/local/hadoop/share/hadoop/common/lib/hadoop-annotations-2.4.0.jar:/usr/local/hadoop/share/hadoop/common/lib/hadoop-auth-2.4.0.jar:/usr/local/hadoop/share/hadoop/common/lib/netty-3.6.2.Final.jar:/usr/local/hadoop/share/hadoop/common/lib/jasper-runtime-5.5.23.jar:/usr/local/hadoop/share/hadoop/common/lib/slf4j-log4j12-1.7.5.jar:/usr/local/hadoop/share/hadoop/common/lib/commons-logging-1.1.3.jar:/usr/local/hadoop/share/hadoop/common/lib/guava-11.0.2.jar:/usr/local/hadoop/share/hadoop/common/lib/jackson-mapper-asl-1.8.8.jar:/usr/local/hadoop/share/hadoop/common/lib/jsp-api-2.1.jar:/usr/local/hadoop/share/hadoop/common/lib/commons-configuration-1.6.jar:/usr/local/hadoop/share/hadoop/common/lib/snappy-java-1.0.4.1.jar:/usr/local/hadoop/share/hadoop/common/lib/log4j-1.2.17.jar:/usr/local/hadoop/share/hadoop/common/lib/jsch-0.1.42.jar:/usr/local/hadoop/share/hadoop/common/lib/commons-io-2.4.jar:/usr/local/hadoop/share/hadoop/common/lib/commons-beanutils-core-1.8.0.jar:/usr/local/hadoop/share/hadoop/common/lib/jsr305-1.3.9.jar:/usr/local/hadoop/share/hadoop/common/lib/jersey-core-1.9.jar:/usr/local/hadoop/share/hadoop/common/lib/asm-3.2.jar:/usr/local/hadoop/share/hadoop/common/lib/servlet-api-2.5.jar:/usr/local/hadoop/share/hadoop/common/lib/jaxb-api-2.2.2.jar:/usr/local/hadoop/share/hadoop/common/lib/jetty-6.1.26.jar:/usr/local/hadoop/share/hadoop/common/lib/commons-lang-2.6.jar:/usr/local/hadoop/share/hadoop/common/lib/java-xmlbuilder-0.4.jar:/usr/local/hadoop/share/hadoop/common/lib/commons-collections-3.2.1.jar:/usr/local/hadoop/share/hadoop/common/lib/commons-el-1.0.jar:/usr/local/hadoop/share/hadoop/common/lib/zookeeper-3.4.5.jar:/usr/local/hadoop/share/hadoop/common/lib/commons-httpclient-3.1.jar:/usr/local/hadoop/share/hadoop/common/lib/xz-1.0.jar:/usr/loca
l/hadoop/share/hadoop/common/lib/commons-math3-3.1.1.jar:/usr/local/hadoop/share/hadoop/common/hadoop-common-2.4.0-tests.jar:/usr/local/hadoop/share/hadoop/common/hadoop-nfs-2.4.0.jar:/usr/local/hadoop/share/hadoop/common/hadoop-common-2.4.0.jar:/usr/local/hadoop/share/hadoop/hdfs:/usr/local/hadoop/share/hadoop/hdfs/lib/jackson-core-asl-1.8.8.jar:/usr/local/hadoop/share/hadoop/hdfs/lib/commons-cli-1.2.jar:/usr/local/hadoop/share/hadoop/hdfs/lib/commons-codec-1.4.jar:/usr/local/hadoop/share/hadoop/hdfs/lib/protobuf-java-2.5.0.jar:/usr/local/hadoop/share/hadoop/hdfs/lib/jersey-server-1.9.jar:/usr/local/hadoop/share/hadoop/hdfs/lib/jetty-util-6.1.26.jar:/usr/local/hadoop/share/hadoop/hdfs/lib/xmlenc-0.52.jar:/usr/local/hadoop/share/hadoop/hdfs/lib/commons-daemon-1.0.13.jar:/usr/local/hadoop/share/hadoop/hdfs/lib/netty-3.6.2.Final.jar:/usr/local/hadoop/share/hadoop/hdfs/lib/jasper-runtime-5.5.23.jar:/usr/local/hadoop/share/hadoop/hdfs/lib/commons-logging-1.1.3.jar:/usr/local/hadoop/share/hadoop/hdfs/lib/guava-11.0.2.jar:/usr/local/hadoop/share/hadoop/hdfs/lib/jackson-mapper-asl-1.8.8.jar:/usr/local/hadoop/share/hadoop/hdfs/lib/jsp-api-2.1.jar:/usr/local/hadoop/share/hadoop/hdfs/lib/log4j-1.2.17.jar:/usr/local/hadoop/share/hadoop/hdfs/lib/commons-io-2.4.jar:/usr/local/hadoop/share/hadoop/hdfs/lib/jsr305-1.3.9.jar:/usr/local/hadoop/share/hadoop/hdfs/lib/jersey-core-1.9.jar:/usr/local/hadoop/share/hadoop/hdfs/lib/asm-3.2.jar:/usr/local/hadoop/share/hadoop/hdfs/lib/servlet-api-2.5.jar:/usr/local/hadoop/share/hadoop/hdfs/lib/jetty-6.1.26.jar:/usr/local/hadoop/share/hadoop/hdfs/lib/commons-lang-2.6.jar:/usr/local/hadoop/share/hadoop/hdfs/lib/commons-el-1.0.jar:/usr/local/hadoop/share/hadoop/hdfs/hadoop-hdfs-2.4.0-tests.jar:/usr/local/hadoop/share/hadoop/hdfs/hadoop-hdfs-2.4.0.jar:/usr/local/hadoop/share/hadoop/hdfs/hadoop-hdfs-nfs-2.4.0.jar:/usr/local/hadoop/share/hadoop/yarn/lib/jackson-jaxrs-1.8.8.jar:/usr/local/hadoop/share/hadoop/yarn/lib/jersey-json-1.9.jar:/usr/local/hadoop/share/hadoop/yarn/lib/commons-compress-1.4.1.jar:/usr/local/hadoop/share/hadoop/yarn/lib/jersey-client-1.9.jar:/usr/local/hadoop/share/hadoop/yarn/lib/guice-servlet-3.0.jar:/usr/local/hadoop/share/hadoop/yarn/lib/giraph-examples-1.1.0-for-hadoop-2.4.0-jar-with-dependencies.jar:/usr/local/hadoop/share/hadoop/yarn/lib/hadoop-mapreduce-examples-2.4.0.jar:/usr/local/hadoop/share/hadoop/yarn/lib/jettison-1.1.jar:/usr/local/hadoop/share/hadoop/yarn/lib/jaxb-impl-2.2.3-1.jar:/usr/local/hadoop/share/hadoop/yarn/lib/activation-1.1.jar:/usr/local/hadoop/share/hadoop/yarn/lib/jackson-core-asl-1.8.8.jar:/usr/local/hadoop/share/hadoop/yarn/lib/commons-cli-1.2.jar:/usr/local/hadoop/share/hadoop/yarn/lib/commons-codec-1.4.jar:/usr/local/hadoop/share/hadoop/yarn/lib/aopalliance-1.0.jar:/usr/local/hadoop/share/hadoop/yarn/lib/leveldbjni-all-1.8.jar:/usr/local/hadoop/share/hadoop/yarn/lib/protobuf-java-2.5.0.jar:/usr/local/hadoop/share/hadoop/yarn/lib/jackson-xc-1.8.8.jar:/usr/local/hadoop/share/hadoop/yarn/lib/jersey-server-1.9.jar:/usr/local/hadoop/share/hadoop/yarn/lib/jetty-util-6.1.26.jar:/usr/local/hadoop/share/hadoop/yarn/lib/jline-0.9.94.jar:/usr/local/hadoop/share/hadoop/yarn/lib/stax-api-1.0-2.jar:/usr/local/hadoop/share/hadoop/yarn/lib/guice-3.0.jar:/usr/local/hadoop/share/hadoop/yarn/lib/commons-logging-1.1.3.jar:/usr/local/hadoop/share/hadoop/yarn/lib/guava-11.0.2.jar:/usr/local/hadoop/share/hadoop/yarn/lib/jackson-mapper-asl-1.8.8.jar:/usr/local/hadoop/share/hadoop/yarn/lib/log4j-1.2.17.jar:/usr/local/hadoop/share/hadoop/yarn/li
b/commons-io-2.4.jar:/usr/local/hadoop/share/hadoop/yarn/lib/jsr305-1.3.9.jar:/usr/local/hadoop/share/hadoop/yarn/lib/jersey-core-1.9.jar:/usr/local/hadoop/share/hadoop/yarn/lib/asm-3.2.jar:/usr/local/hadoop/share/hadoop/yarn/lib/servlet-api-2.5.jar:/usr/local/hadoop/share/hadoop/yarn/lib/jaxb-api-2.2.2.jar:/usr/local/hadoop/share/hadoop/yarn/lib/jetty-6.1.26.jar:/usr/local/hadoop/share/hadoop/yarn/lib/commons-lang-2.6.jar:/usr/local/hadoop/share/hadoop/yarn/lib/commons-collections-3.2.1.jar:/usr/local/hadoop/share/hadoop/yarn/lib/jersey-guice-1.9.jar:/usr/local/hadoop/share/hadoop/yarn/lib/zookeeper-3.4.5.jar:/usr/local/hadoop/share/hadoop/yarn/lib/commons-httpclient-3.1.jar:/usr/local/hadoop/share/hadoop/yarn/lib/javax.inject-1.jar:/usr/local/hadoop/share/hadoop/yarn/lib/xz-1.0.jar:/usr/local/hadoop/share/hadoop/yarn/hadoop-yarn-server-common-2.4.0.jar:/usr/local/hadoop/share/hadoop/yarn/hadoop-yarn-client-2.4.0.jar:/usr/local/hadoop/share/hadoop/yarn/hadoop-yarn-api-2.4.0.jar:/usr/local/hadoop/share/hadoop/yarn/hadoop-yarn-applications-unmanaged-am-launcher-2.4.0.jar:/usr/local/hadoop/share/hadoop/yarn/hadoop-yarn-server-nodemanager-2.4.0.jar:/usr/local/hadoop/share/hadoop/yarn/hadoop-yarn-server-tests-2.4.0.jar:/usr/local/hadoop/share/hadoop/yarn/hadoop-yarn-server-resourcemanager-2.4.0.jar:/usr/local/hadoop/share/hadoop/yarn/hadoop-yarn-server-web-proxy-2.4.0.jar:/usr/local/hadoop/share/hadoop/yarn/hadoop-yarn-applications-distributedshell-2.4.0.jar:/usr/local/hadoop/share/hadoop/yarn/hadoop-yarn-server-applicationhistoryservice-2.4.0.jar:/usr/local/hadoop/share/hadoop/yarn/hadoop-yarn-common-2.4.0.jar:/usr/local/hadoop/share/hadoop/mapreduce/lib/commons-compress-1.4.1.jar:/usr/local/hadoop/share/hadoop/mapreduce/lib/guice-servlet-3.0.jar:/usr/local/hadoop/share/hadoop/mapreduce/lib/jackson-core-asl-1.8.8.jar:/usr/local/hadoop/share/hadoop/mapreduce/lib/hamcrest-core-1.1.jar:/usr/local/hadoop/share/hadoop/mapreduce/lib/paranamer-2.3.jar:/usr/local/hadoop/share/hadoop/mapreduce/lib/aopalliance-1.0.jar:/usr/local/hadoop/share/hadoop/mapreduce/lib/protobuf-java-2.5.0.jar:/usr/local/hadoop/share/hadoop/mapreduce/lib/jersey-server-1.9.jar:/usr/local/hadoop/share/hadoop/mapreduce/lib/avro-1.7.4.jar:/usr/local/hadoop/share/hadoop/mapreduce/lib/hadoop-annotations-2.4.0.jar:/usr/local/hadoop/share/hadoop/mapreduce/lib/guice-3.0.jar:/usr/local/hadoop/share/hadoop/mapreduce/lib/junit-4.10.jar:/usr/local/hadoop/share/hadoop/mapreduce/lib/netty-3.6.2.Final.jar:/usr/local/hadoop/share/hadoop/mapreduce/lib/jackson-mapper-asl-1.8.8.jar:/usr/local/hadoop/share/hadoop/mapreduce/lib/snappy-java-1.0.4.1.jar:/usr/local/hadoop/share/hadoop/mapreduce/lib/log4j-1.2.17.jar:/usr/local/hadoop/share/hadoop/mapreduce/lib/commons-io-2.4.jar:/usr/local/hadoop/share/hadoop/mapreduce/lib/jersey-core-1.9.jar:/usr/local/hadoop/share/hadoop/mapreduce/lib/asm-3.2.jar:/usr/local/hadoop/share/hadoop/mapreduce/lib/jersey-guice-1.9.jar:/usr/local/hadoop/share/hadoop/mapreduce/lib/javax.inject-1.jar:/usr/local/hadoop/share/hadoop/mapreduce/lib/xz-1.0.jar:/usr/local/hadoop/share/hadoop/mapreduce/giraph-examples-1.1.0-for-hadoop-2.4.0-jar-with-dependencies.jar:/usr/local/hadoop/share/hadoop/mapreduce/hadoop-mapreduce-examples-2.4.0.jar:/usr/local/hadoop/share/hadoop/mapreduce/procesamiento_salida_grafo_caminos_navegacionales-0.0.1.jar:/usr/local/hadoop/share/hadoop/mapreduce/zookeeper-3.4.6.jar:/usr/local/hadoop/share/hadoop/mapreduce/hadoop-mapreduce-client-shuffle-2.4.0.jar:/usr/local/hadoop/share/hadoop/mapreduce/procesamient
o_grafo_wikiquote-0.0.1.jar:/usr/local/hadoop/share/hadoop/mapreduce/hadoop-mapreduce-client-app-2.4.0.jar:/usr/local/hadoop/share/hadoop/mapreduce/hadoop-mapreduce-client-common-2.4.0.jar:/usr/local/hadoop/share/hadoop/mapreduce/hadoop-mapreduce-client-core-2.4.0.jar:/usr/local/hadoop/share/hadoop/mapreduce/generacion_grafo_wikiquote-0.0.1-SNAPSHOT.jar:/usr/local/hadoop/share/hadoop/mapreduce/hadoop-mapreduce-client-hs-plugins-2.4.0.jar:/usr/local/hadoop/share/hadoop/mapreduce/hadoop-mapreduce-client-jobclient-2.4.0.jar:/usr/local/hadoop/share/hadoop/mapreduce/hadoop-mapreduce-client-hs-2.4.0.jar:/usr/local/hadoop/share/hadoop/mapreduce/hadoop-mapreduce-client-jobclient-2.4.0-tests.jar:/usr/local/hadoop/contrib/capacity-scheduler/*.jar:/usr/local/hadoop/contrib/capacity-scheduler/*.jar:/usr/local/hadoop/contrib/capacity-scheduler/*.jar
>> STARTUP_MSG:   build = http://svn.apache.org/repos/asf/hadoop/common -r 1583262; compiled by 'jenkins' on 2014-03-31T08:29Z
>> STARTUP_MSG:   java = 1.7.0_79
>> ************************************************************/
>> 2016-01-25 21:13:07,015 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: registered UNIX signal handlers for [TERM, HUP, INT]
>> 2016-01-25 21:13:07,188 WARN org.apache.hadoop.hdfs.server.common.Util: Path /usr/local/hadoop/dfs/name/data should be specified as a URI in configuration files. Please update hdfs configuration.
>> 2016-01-25 21:13:07,356 WARN org.apache.hadoop.util.NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
>> 2016-01-25 21:13:07,648 INFO org.apache.hadoop.metrics2.impl.MetricsConfig: loaded properties from hadoop-metrics2.properties
>> 2016-01-25 21:13:07,723 INFO org.apache.hadoop.metrics2.impl.MetricsSystemImpl: Scheduled snapshot period at 10 second(s).
>> 2016-01-25 21:13:07,723 INFO org.apache.hadoop.metrics2.impl.MetricsSystemImpl: DataNode metrics system started
>> 2016-01-25 21:13:07,727 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: Configured hostname is localhost
>> 2016-01-25 21:13:07,728 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: Starting DataNode with maxLockedMemory = 0
>> 2016-01-25 21:13:07,757 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: Opened streaming server at /0.0.0.0:50010
>> 2016-01-25 21:13:07,760 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: Balancing bandwith is 1048576 bytes/s
>> 2016-01-25 21:13:07,839 INFO org.mortbay.log: Logging to org.slf4j.impl.Log4jLoggerAdapter(org.mortbay.log) via org.mortbay.log.Slf4jLog
>> 2016-01-25 21:13:07,843 INFO org.apache.hadoop.http.HttpRequestLog: Http request log for http.requests.datanode is not defined
>> 2016-01-25 21:13:07,853 INFO org.apache.hadoop.http.HttpServer2: Added global filter 'safety' (class=org.apache.hadoop.http.HttpServer2$QuotingInputFilter)
>> 2016-01-25 21:13:07,856 INFO org.apache.hadoop.http.HttpServer2: Added filter static_user_filter (class=org.apache.hadoop.http.lib.StaticUserWebFilter$StaticUserFilter) to context datanode
>> 2016-01-25 21:13:07,856 INFO org.apache.hadoop.http.HttpServer2: Added filter static_user_filter (class=org.apache.hadoop.http.lib.StaticUserWebFilter$StaticUserFilter) to context logs
>> 2016-01-25 21:13:07,856 INFO org.apache.hadoop.http.HttpServer2: Added filter static_user_filter (class=org.apache.hadoop.http.lib.StaticUserWebFilter$StaticUserFilter) to context static
>> 2016-01-25 21:13:07,872 INFO org.apache.hadoop.http.HttpServer2: addJerseyResourcePackage: packageName=org.apache.hadoop.hdfs.server.datanode.web.resources;org.apache.hadoop.hdfs.web.resources, pathSpec=/webhdfs/v1/*
>> 2016-01-25 21:13:07,875 INFO org.apache.hadoop.http.HttpServer2: Jetty bound to port 50075
>> 2016-01-25 21:13:07,875 INFO org.mortbay.log: jetty-6.1.26
>> 2016-01-25 21:13:08,137 INFO org.mortbay.log: Started SelectChannelConnector@0.0.0.0:50075
>> 2016-01-25 21:13:08,225 INFO org.apache.hadoop.ipc.CallQueueManager: Using callQueue class java.util.concurrent.LinkedBlockingQueue
>> 2016-01-25 21:13:08,239 INFO org.apache.hadoop.ipc.Server: Starting Socket Reader #1 for port 50020
>> 2016-01-25 21:13:08,288 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: Opened IPC server at /0.0.0.0:50020
>> 2016-01-25 21:13:08,300 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: Refresh request received for nameservices: null
>> 2016-01-25 21:13:08,316 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: Starting BPOfferServices for nameservices: <default>
>> 2016-01-25 21:13:08,321 WARN org.apache.hadoop.hdfs.server.common.Util: Path /usr/local/hadoop/dfs/name/data should be specified as a URI in configuration files. Please update hdfs configuration.
>> 2016-01-25 21:13:08,325 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: Block pool <registering> (Datanode Uuid unassigned) service to hdnode01/192.168.0.10:54310 starting to offer service
>> 2016-01-25 21:13:08,326 INFO org.apache.hadoop.ipc.Server: IPC Server Responder: starting
>> 2016-01-25 21:13:08,326 INFO org.apache.hadoop.ipc.Server: IPC Server listener on 50020: starting
>> 2016-01-25 21:13:08,719 INFO org.apache.hadoop.hdfs.server.common.Storage: Data-node version: -55 and name-node layout version: -56
>> 2016-01-25 21:13:08,828 INFO org.apache.hadoop.hdfs.server.common.Storage: Lock on /usr/local/hadoop/dfs/name/data/in_use.lock acquired by nodename 10365@jose-ubuntu
>> 2016-01-25 21:13:08,833 INFO org.apache.hadoop.hdfs.server.common.Storage: Storage directory /usr/local/hadoop/dfs/name/data is not formatted
>> 2016-01-25 21:13:08,833 INFO org.apache.hadoop.hdfs.server.common.Storage: Formatting ...
>> 2016-01-25 21:13:09,017 INFO org.apache.hadoop.hdfs.server.common.Storage: Analyzing storage directories for bpid BP-216406264-127.0.0.1-1453767164845
>> 2016-01-25 21:13:09,017 INFO org.apache.hadoop.hdfs.server.common.Storage: Locking is disabled
>> 2016-01-25 21:13:09,018 INFO org.apache.hadoop.hdfs.server.common.Storage: Storage directory /usr/local/hadoop/dfs/name/data/current/BP-216406264-127.0.0.1-1453767164845 is not formatted.
>> 2016-01-25 21:13:09,018 INFO org.apache.hadoop.hdfs.server.common.Storage: Formatting ...
>> 2016-01-25 21:13:09,018 INFO org.apache.hadoop.hdfs.server.common.Storage: Formatting block pool BP-216406264-127.0.0.1-1453767164845 directory /usr/local/hadoop/dfs/name/data/current/BP-216406264-127.0.0.1-1453767164845/current
>> 2016-01-25 21:13:09,072 INFO org.apache.hadoop.hdfs.server.common.Storage: Restored 0 block files from trash.
>> 2016-01-25 21:13:09,198 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: Setting up storage: nsid=1479061672;bpid=BP-216406264-127.0.0.1-1453767164845;lv=-55;nsInfo=lv=-56;cid=CID-8fa0e75b-6942-452a-a4e6-8cd0c24de652;nsid=1479061672;c=0;bpid=BP-216406264-127.0.0.1-1453767164845;dnuuid=null
>> 2016-01-25 21:13:09,248 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: Generated and persisted new Datanode UUID 6b4236c8-2183-49ba-84d7-a273298ba37a
>> 2016-01-25 21:13:09,268 WARN org.apache.hadoop.hdfs.server.common.Util: Path /usr/local/hadoop/dfs/name/data should be specified as a URI in configuration files. Please update hdfs configuration.
>> 2016-01-25 21:13:09,270 INFO org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsDatasetImpl: Added volume - /usr/local/hadoop/dfs/name/data/current, StorageType: DISK
>> 2016-01-25 21:13:09,279 INFO org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsDatasetImpl: Registered FSDatasetState MBean
>> 2016-01-25 21:13:09,282 INFO org.apache.hadoop.hdfs.server.datanode.DirectoryScanner: Periodic Directory Tree Verification scan starting at 1453784080282 with interval 21600000
>> 2016-01-25 21:13:09,283 INFO org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsDatasetImpl: Adding block pool BP-216406264-127.0.0.1-1453767164845
>> 2016-01-25 21:13:09,284 INFO org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsDatasetImpl: Scanning block pool BP-216406264-127.0.0.1-1453767164845 on volume /usr/local/hadoop/dfs/name/data/current...
>> 2016-01-25 21:13:09,299 INFO org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsDatasetImpl: Time taken to scan block pool BP-216406264-127.0.0.1-1453767164845 on /usr/local/hadoop/dfs/name/data/current: 15ms
>> 2016-01-25 21:13:09,300 INFO org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsDatasetImpl: Total time to scan all replicas for block pool BP-216406264-127.0.0.1-1453767164845: 17ms
>> 2016-01-25 21:13:09,301 INFO org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsDatasetImpl: Adding replicas to map for block pool BP-216406264-127.0.0.1-1453767164845 on volume /usr/local/hadoop/dfs/name/data/current...
>> 2016-01-25 21:13:09,301 INFO org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsDatasetImpl: Time to add replicas to map for block pool BP-216406264-127.0.0.1-1453767164845 on volume /usr/local/hadoop/dfs/name/data/current: 0ms
>> 2016-01-25 21:13:09,301 INFO org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsDatasetImpl: Total time to add all replicas to map: 1ms
>> 2016-01-25 21:13:09,305 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: Block pool BP-216406264-127.0.0.1-1453767164845 (Datanode Uuid null) service to hdnode01/192.168.0.10:54310 beginning handshake with NN
>> 2016-01-25 21:13:09,355 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: Block pool Block pool BP-216406264-127.0.0.1-1453767164845 (Datanode Uuid null) service to hdnode01/192.168.0.10:54310 successfully registered with NN
>> 2016-01-25 21:13:09,356 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: For namenode hdnode01/192.168.0.10:54310 using DELETEREPORT_INTERVAL of 300000 msec  BLOCKREPORT_INTERVAL of 21600000msec CACHEREPORT_INTERVAL of 10000msec Initial delay: 0msec; heartBeatInterval=3000
>> 2016-01-25 21:13:09,444 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: Namenode Block pool BP-216406264-127.0.0.1-1453767164845 (Datanode Uuid 6b4236c8-2183-49ba-84d7-a273298ba37a) service to hdnode01/192.168.0.10:54310 trying to claim ACTIVE state with txid=1
>> 2016-01-25 21:13:09,444 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: Acknowledging ACTIVE Namenode Block pool BP-216406264-127.0.0.1-1453767164845 (Datanode Uuid 6b4236c8-2183-49ba-84d7-a273298ba37a) service to hdnode01/192.168.0.10:54310
>> 2016-01-25 21:13:09,487 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: Sent 1 blockreports 0 blocks total. Took 1 msec to generate and 42 msecs for RPC and NN processing.  Got back commands none
>> 2016-01-25 21:13:09,492 INFO org.apache.hadoop.util.GSet: Computing capacity for map BlockMap
>> 2016-01-25 21:13:09,493 INFO org.apache.hadoop.util.GSet: VM type       = 64-bit
>> 2016-01-25 21:13:09,494 INFO org.apache.hadoop.util.GSet: 0.5% max memory 1.8 GB = 9.1 MB
>> 2016-01-25 21:13:09,494 INFO org.apache.hadoop.util.GSet: capacity      = 2^20 = 1048576 entries
>> 2016-01-25 21:13:09,495 INFO org.apache.hadoop.hdfs.server.datanode.BlockPoolSliceScanner: Periodic Block Verification Scanner initialized with interval 504 hours for block pool BP-216406264-127.0.0.1-1453767164845
>> 2016-01-25 21:13:09,499 INFO org.apache.hadoop.hdfs.server.datanode.DataBlockScanner: Added bpid=BP-216406264-127.0.0.1-1453767164845 to blockPoolScannerMap, new size=1
>> 2016-01-25 21:13:32,355 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: Receiving BP-216406264-127.0.0.1-1453767164845:blk_1073741825_1001 src: /192.168.0.10:58649 dest: /192.168.0.10:50010
>> 2016-01-25 21:13:32,482 INFO org.apache.hadoop.hdfs.server.datanode.DataNode.clienttrace: src: /192.168.0.10:58649, dest: /192.168.0.10:50010, bytes: 343691, op: HDFS_WRITE, cliID: DFSClient_NONMAPREDUCE_538002429_1, offset: 0, srvID: 6b4236c8-2183-49ba-84d7-a273298ba37a, blockid: BP-216406264-127.0.0.1-1453767164845:blk_1073741825_1001, duration: 98632367
>> 2016-01-25 21:13:32,482 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: PacketResponder: BP-216406264-127.0.0.1-1453767164845:blk_1073741825_1001, type=LAST_IN_PIPELINE, downstreams=0:[] terminating
>> 2016-01-25 21:13:34,291 INFO org.apache.hadoop.hdfs.server.datanode.BlockPoolSliceScanner: Verification succeeded for BP-216406264-127.0.0.1-1453767164845:blk_1073741825_1001
>> 2016-01-25 21:14:10,176 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: Receiving BP-216406264-127.0.0.1-1453767164845:blk_1073741826_1002 src: /192.168.0.10:58663 dest: /192.168.0.10:50010
>> 2016-01-25 21:14:10,220 INFO org.apache.hadoop.hdfs.server.datanode.DataNode.clienttrace: src: /192.168.0.10:58663, dest: /192.168.0.10:50010, bytes: 270263, op: HDFS_WRITE, cliID: DFSClient_NONMAPREDUCE_342504113_1, offset: 0, srvID: 6b4236c8-2183-49ba-84d7-a273298ba37a, blockid: BP-216406264-127.0.0.1-1453767164845:blk_1073741826_1002, duration: 42378742
>> 2016-01-25 21:14:10,221 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: PacketResponder: BP-216406264-127.0.0.1-1453767164845:blk_1073741826_1002, type=LAST_IN_PIPELINE, downstreams=0:[] terminating
>> 2016-01-25 21:14:10,714 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: Receiving BP-216406264-127.0.0.1-1453767164845:blk_1073741827_1003 src: /192.168.0.10:58664 dest: /192.168.0.10:50010
>> 2016-01-25 21:14:10,721 INFO org.apache.hadoop.hdfs.server.datanode.DataNode.clienttrace: src: /192.168.0.10:58664, dest: /192.168.0.10:50010, bytes: 121, op: HDFS_WRITE, cliID: DFSClient_NONMAPREDUCE_342504113_1, offset: 0, srvID: 6b4236c8-2183-49ba-84d7-a273298ba37a, blockid: BP-216406264-127.0.0.1-1453767164845:blk_1073741827_1003, duration: 2656758
>> 2016-01-25 21:14:10,721 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: PacketResponder: BP-216406264-127.0.0.1-1453767164845:blk_1073741827_1003, type=LAST_IN_PIPELINE, downstreams=0:[] terminating
>> 2016-01-25 21:14:10,853 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: Receiving BP-216406264-127.0.0.1-1453767164845:blk_1073741828_1004 src: /192.168.0.10:58665 dest: /192.168.0.10:50010
>> 2016-01-25 21:14:10,860 INFO org.apache.hadoop.hdfs.server.datanode.DataNode.clienttrace: src: /192.168.0.10:58665, dest: /192.168.0.10:50010, bytes: 26, op: HDFS_WRITE, cliID: DFSClient_NONMAPREDUCE_342504113_1, offset: 0, srvID: 6b4236c8-2183-49ba-84d7-a273298ba37a, blockid: BP-216406264-127.0.0.1-1453767164845:blk_1073741828_1004, duration: 3257396
>> 2016-01-25 21:14:10,861 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: PacketResponder: BP-216406264-127.0.0.1-1453767164845:blk_1073741828_1004, type=LAST_IN_PIPELINE, downstreams=0:[] terminating
>> 2016-01-25 21:14:11,717 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: Receiving BP-216406264-127.0.0.1-1453767164845:blk_1073741829_1005 src: /192.168.0.10:58666 dest: /192.168.0.10:50010
>> 2016-01-25 21:14:11,726 INFO org.apache.hadoop.hdfs.server.datanode.DataNode.clienttrace: src: /192.168.0.10:58666, dest: /192.168.0.10:50010, bytes: 77957, op: HDFS_WRITE, cliID: DFSClient_NONMAPREDUCE_342504113_1, offset: 0, srvID: 6b4236c8-2183-49ba-84d7-a273298ba37a, blockid: BP-216406264-127.0.0.1-1453767164845:blk_1073741829_1005, duration: 6180229
>> 2016-01-25 21:14:11,727 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: PacketResponder: BP-216406264-127.0.0.1-1453767164845:blk_1073741829_1005, type=LAST_IN_PIPELINE, downstreams=0:[] terminating
>> 2016-01-25 21:14:14,298 INFO org.apache.hadoop.hdfs.server.datanode.BlockPoolSliceScanner: Verification succeeded for BP-216406264-127.0.0.1-1453767164845:blk_1073741829_1005
>> 2016-01-25 21:14:14,299 INFO org.apache.hadoop.hdfs.server.datanode.BlockPoolSliceScanner: Verification succeeded for BP-216406264-127.0.0.1-1453767164845:blk_1073741827_1003
>> 2016-01-25 21:14:14,305 INFO org.apache.hadoop.hdfs.server.datanode.BlockPoolSliceScanner: Verification succeeded for BP-216406264-127.0.0.1-1453767164845:blk_1073741826_1002
>> 2016-01-25 21:14:14,305 INFO org.apache.hadoop.hdfs.server.datanode.BlockPoolSliceScanner: Verification succeeded for BP-216406264-127.0.0.1-1453767164845:blk_1073741828_1004
>> 2016-01-25 21:14:16,099 INFO org.apache.hadoop.hdfs.server.datanode.DataNode.clienttrace: src: /192.168.0.10:50010, dest: /192.168.0.10:58674, bytes: 272375, op: HDFS_READ, cliID: DFSClient_NONMAPREDUCE_76231625_102, offset: 0, srvID: 6b4236c8-2183-49ba-84d7-a273298ba37a, blockid: BP-216406264-127.0.0.1-1453767164845:blk_1073741826_1002, duration: 2878920
>> 2016-01-25 21:14:16,253 INFO org.apache.hadoop.hdfs.server.datanode.DataNode.clienttrace: src: /192.168.0.10:50010, dest: /192.168.0.10:58674, bytes: 30, op: HDFS_READ, cliID: DFSClient_NONMAPREDUCE_76231625_102, offset: 0, srvID: 6b4236c8-2183-49ba-84d7-a273298ba37a, blockid: BP-216406264-127.0.0.1-1453767164845:blk_1073741828_1004, duration: 236423
>> 2016-01-25 21:14:16,312 INFO org.apache.hadoop.hdfs.server.datanode.DataNode.clienttrace: src: /192.168.0.10:50010, dest: /192.168.0.10:58674, bytes: 125, op: HDFS_READ, cliID: DFSClient_NONMAPREDUCE_76231625_102, offset: 0, srvID: 6b4236c8-2183-49ba-84d7-a273298ba37a, blockid: BP-216406264-127.0.0.1-1453767164845:blk_1073741827_1003, duration: 909236
>> 2016-01-25 21:14:16,364 INFO org.apache.hadoop.hdfs.server.datanode.DataNode.clienttrace: src: /192.168.0.10:50010, dest: /192.168.0.10:58674, bytes: 78569, op: HDFS_READ, cliID: DFSClient_NONMAPREDUCE_76231625_102, offset: 0, srvID: 6b4236c8-2183-49ba-84d7-a273298ba37a, blockid: BP-216406264-127.0.0.1-1453767164845:blk_1073741829_1005, duration: 1489437
>> 2016-01-25 21:14:20,174 INFO org.apache.hadoop.hdfs.server.datanode.DataNode.clienttrace: src: /192.168.0.10:50010, dest: /192.168.0.10:58676, bytes: 30, op: HDFS_READ, cliID: DFSClient_NONMAPREDUCE_694066886_1, offset: 0, srvID: 6b4236c8-2183-49ba-84d7-a273298ba37a, blockid: BP-216406264-127.0.0.1-1453767164845:blk_1073741828_1004, duration: 899980
>> 2016-01-25 21:14:22,692 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: Receiving BP-216406264-127.0.0.1-1453767164845:blk_1073741830_1006 src: /192.168.0.10:58679 dest: /192.168.0.10:50010
>> 2016-01-25 21:14:22,754 INFO org.apache.hadoop.hdfs.server.datanode.DataNode.clienttrace: src: /192.168.0.10:58679, dest: /192.168.0.10:50010, bytes: 92684, op: HDFS_WRITE, cliID: DFSClient_NONMAPREDUCE_694066886_1, offset: 0, srvID: 6b4236c8-2183-49ba-84d7-a273298ba37a, blockid: BP-216406264-127.0.0.1-1453767164845:blk_1073741830_1006, duration: 60114851
>> 2016-01-25 21:14:22,754 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: PacketResponder: BP-216406264-127.0.0.1-1453767164845:blk_1073741830_1006, type=LAST_IN_PIPELINE, downstreams=0:[] terminating
>> 2016-01-25 21:14:24,319 INFO org.apache.hadoop.hdfs.server.datanode.BlockPoolSliceScanner: Verification succeeded for BP-216406264-127.0.0.1-1453767164845:blk_1073741830_1006
>> 2016-01-25 21:14:25,808 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: Receiving BP-216406264-127.0.0.1-1453767164845:blk_1073741831_1007 src: /192.168.0.10:58681 dest: /192.168.0.10:50010
>> 2016-01-25 21:14:35,846 INFO org.apache.hadoop.hdfs.server.datanode.DataNode.clienttrace: src: /192.168.0.10:58681, dest: /192.168.0.10:50010, bytes: 21176, op: HDFS_WRITE, cliID: DFSClient_NONMAPREDUCE_694066886_1, offset: 0, srvID: 6b4236c8-2183-49ba-84d7-a273298ba37a, blockid: BP-216406264-127.0.0.1-1453767164845:blk_1073741831_1007, duration: 9975409048
>> 2016-01-25 21:14:35,846 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: PacketResponder: BP-216406264-127.0.0.1-1453767164845:blk_1073741831_1007, type=LAST_IN_PIPELINE, downstreams=0:[] terminating
>> 2016-01-25 21:14:36,066 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: Receiving BP-216406264-127.0.0.1-1453767164845:blk_1073741832_1008 src: /192.168.0.10:58682 dest: /192.168.0.10:50010
>> 2016-01-25 21:14:36,075 INFO org.apache.hadoop.hdfs.server.datanode.DataNode.clienttrace: src: /192.168.0.10:58682, dest: /192.168.0.10:50010, bytes: 332, op: HDFS_WRITE, cliID: DFSClient_NONMAPREDUCE_694066886_1, offset: 0, srvID: 6b4236c8-2183-49ba-84d7-a273298ba37a, blockid: BP-216406264-127.0.0.1-1453767164845:blk_1073741832_1008, duration: 4992595
>> 2016-01-25 21:14:36,075 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: PacketResponder: BP-216406264-127.0.0.1-1453767164845:blk_1073741832_1008, type=LAST_IN_PIPELINE, downstreams=0:[] terminating
>> 2016-01-25 21:14:36,548 INFO org.apache.hadoop.hdfs.server.datanode.DataNode.clienttrace: src: /192.168.0.10:50010, dest: /192.168.0.10:58683, bytes: 21344, op: HDFS_READ, cliID: DFSClient_NONMAPREDUCE_694066886_1, offset: 0, srvID: 6b4236c8-2183-49ba-84d7-a273298ba37a, blockid: BP-216406264-127.0.0.1-1453767164845:blk_1073741831_1007, duration: 497225
>> 2016-01-25 21:14:36,564 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: Receiving BP-216406264-127.0.0.1-1453767164845:blk_1073741833_1009 src: /192.168.0.10:58684 dest: /192.168.0.10:50010
>> 2016-01-25 21:14:36,572 INFO org.apache.hadoop.hdfs.server.datanode.DataNode.clienttrace: src: /192.168.0.10:58684, dest: /192.168.0.10:50010, bytes: 21176, op: HDFS_WRITE, cliID: DFSClient_NONMAPREDUCE_694066886_1, offset: 0, srvID: 6b4236c8-2183-49ba-84d7-a273298ba37a, blockid: BP-216406264-127.0.0.1-1453767164845:blk_1073741833_1009, duration: 2649337
>> 2016-01-25 21:14:36,573 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: PacketResponder: BP-216406264-127.0.0.1-1453767164845:blk_1073741833_1009, type=LAST_IN_PIPELINE, downstreams=0:[] terminating
>> 2016-01-25 21:14:36,622 INFO org.apache.hadoop.hdfs.server.datanode.DataNode.clienttrace: src: /192.168.0.10:50010, dest: /192.168.0.10:58683, bytes: 93412, op: HDFS_READ, cliID: DFSClient_NONMAPREDUCE_694066886_1, offset: 0, srvID: 6b4236c8-2183-49ba-84d7-a273298ba37a, blockid: BP-216406264-127.0.0.1-1453767164845:blk_1073741830_1006, duration: 379439
>> 2016-01-25 21:14:36,638 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: Receiving BP-216406264-127.0.0.1-1453767164845:blk_1073741834_1010 src: /192.168.0.10:58685 dest: /192.168.0.10:50010
>> 2016-01-25 21:14:36,646 INFO org.apache.hadoop.hdfs.server.datanode.DataNode.clienttrace: src: /192.168.0.10:58685, dest: /192.168.0.10:50010, bytes: 92684, op: HDFS_WRITE, cliID: DFSClient_NONMAPREDUCE_694066886_1, offset: 0, srvID: 6b4236c8-2183-49ba-84d7-a273298ba37a, blockid: BP-216406264-127.0.0.1-1453767164845:blk_1073741834_1010, duration: 3135698
>> 2016-01-25 21:14:36,646 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: PacketResponder: BP-216406264-127.0.0.1-1453767164845:blk_1073741834_1010, type=LAST_IN_PIPELINE, downstreams=0:[] terminating
>> 2016-01-25 21:14:39,335 INFO org.apache.hadoop.hdfs.server.datanode.BlockPoolSliceScanner: Verification succeeded for BP-216406264-127.0.0.1-1453767164845:blk_1073741833_1009
>> 2016-01-25 21:14:39,336 INFO org.apache.hadoop.hdfs.server.datanode.BlockPoolSliceScanner: Verification succeeded for BP-216406264-127.0.0.1-1453767164845:blk_1073741831_1007
>> 2016-01-25 21:14:39,337 INFO org.apache.hadoop.hdfs.server.datanode.BlockPoolSliceScanner: Verification succeeded for BP-216406264-127.0.0.1-1453767164845:blk_1073741832_1008
>> 2016-01-25 21:14:39,338 INFO org.apache.hadoop.hdfs.server.datanode.BlockPoolSliceScanner: Verification succeeded for BP-216406264-127.0.0.1-1453767164845:blk_1073741834_1010
>> 2016-01-25 21:14:39,376 INFO org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsDatasetAsyncDiskService: Scheduling blk_1073741826_1002 file /usr/local/hadoop/dfs/name/data/current/BP-216406264-127.0.0.1-1453767164845/current/finalized/blk_1073741826 for deletion
>> 2016-01-25 21:14:39,379 INFO org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsDatasetAsyncDiskService: Scheduling blk_1073741827_1003 file /usr/local/hadoop/dfs/name/data/current/BP-216406264-127.0.0.1-1453767164845/current/finalized/blk_1073741827 for deletion
>> 2016-01-25 21:14:39,379 INFO org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsDatasetAsyncDiskService: Deleted BP-216406264-127.0.0.1-1453767164845 blk_1073741826_1002 file /usr/local/hadoop/dfs/name/data/current/BP-216406264-127.0.0.1-1453767164845/current/finalized/blk_1073741826
>> 2016-01-25 21:14:39,379 INFO org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsDatasetAsyncDiskService: Scheduling blk_1073741828_1004 file /usr/local/hadoop/dfs/name/data/current/BP-216406264-127.0.0.1-1453767164845/current/finalized/blk_1073741828 for deletion
>> 2016-01-25 21:14:39,380 INFO org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsDatasetAsyncDiskService: Deleted BP-216406264-127.0.0.1-1453767164845 blk_1073741827_1003 file /usr/local/hadoop/dfs/name/data/current/BP-216406264-127.0.0.1-1453767164845/current/finalized/blk_1073741827
>> 2016-01-25 21:14:39,380 INFO org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsDatasetAsyncDiskService: Scheduling blk_1073741829_1005 file /usr/local/hadoop/dfs/name/data/current/BP-216406264-127.0.0.1-1453767164845/current/finalized/blk_1073741829 for deletion
>> 2016-01-25 21:14:39,380 INFO org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsDatasetAsyncDiskService: Deleted BP-216406264-127.0.0.1-1453767164845 blk_1073741828_1004 file /usr/local/hadoop/dfs/name/data/current/BP-216406264-127.0.0.1-1453767164845/current/finalized/blk_1073741828
>> 2016-01-25 21:14:39,380 INFO org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsDatasetAsyncDiskService: Scheduling blk_1073741830_1006 file /usr/local/hadoop/dfs/name/data/current/BP-216406264-127.0.0.1-1453767164845/current/finalized/blk_1073741830 for deletion
>> 2016-01-25 21:14:39,381 INFO org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsDatasetAsyncDiskService: Deleted BP-216406264-127.0.0.1-1453767164845 blk_1073741829_1005 file /usr/local/hadoop/dfs/name/data/current/BP-216406264-127.0.0.1-1453767164845/current/finalized/blk_1073741829
>> 2016-01-25 21:14:39,381 INFO org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsDatasetAsyncDiskService: Scheduling blk_1073741831_1007 file /usr/local/hadoop/dfs/name/data/current/BP-216406264-127.0.0.1-1453767164845/current/finalized/blk_1073741831 for deletion
>> 2016-01-25 21:14:39,382 INFO org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsDatasetAsyncDiskService: Deleted BP-216406264-127.0.0.1-1453767164845 blk_1073741830_1006 file /usr/local/hadoop/dfs/name/data/current/BP-216406264-127.0.0.1-1453767164845/current/finalized/blk_1073741830
>> 2016-01-25 21:14:39,382 INFO org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsDatasetAsyncDiskService: Deleted BP-216406264-127.0.0.1-1453767164845 blk_1073741831_1007 file /usr/local/hadoop/dfs/name/data/current/BP-216406264-127.0.0.1-1453767164845/current/finalized/blk_1073741831
>> 2016-01-25 21:14:44,797 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: Receiving BP-216406264-127.0.0.1-1453767164845:blk_1073741835_1011 src: /192.168.0.10:58688 dest: /192.168.0.10:50010
>> 2016-01-25 21:14:44,834 INFO org.apache.hadoop.hdfs.server.datanode.DataNode.clienttrace: src: /192.168.0.10:58688, dest: /192.168.0.10:50010, bytes: 57450, op: HDFS_WRITE, cliID: DFSClient_NONMAPREDUCE_-433405715_88, offset: 0, srvID: 6b4236c8-2183-49ba-84d7-a273298ba37a, blockid: BP-216406264-127.0.0.1-1453767164845:blk_1073741835_1011, duration: 34522284
>> 2016-01-25 21:14:44,834 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: PacketResponder: BP-216406264-127.0.0.1-1453767164845:blk_1073741835_1011, type=LAST_IN_PIPELINE, downstreams=0:[] terminating
>> 2016-01-25 21:14:49,343 INFO org.apache.hadoop.hdfs.server.datanode.BlockPoolSliceScanner: Verification succeeded for BP-216406264-127.0.0.1-1453767164845:blk_1073741835_1011
>> 2016-01-25 21:16:33,785 INFO org.apache.hadoop.hdfs.server.datanode.DataNode.clienttrace: src: /192.168.0.10:50010, dest: /192.168.0.10:58694, bytes: 336, op: HDFS_READ, cliID: DFSClient_NONMAPREDUCE_-1832227986_1, offset: 0, srvID: 6b4236c8-2183-49ba-84d7-a273298ba37a, blockid: BP-216406264-127.0.0.1-1453767164845:blk_1073741832_1008, duration: 284719
>> 2016-01-25 21:16:36,371 INFO org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsDatasetAsyncDiskService: Scheduling blk_1073741832_1008 file /usr/local/hadoop/dfs/name/data/current/BP-216406264-127.0.0.1-1453767164845/current/finalized/blk_1073741832 for deletion
>> 2016-01-25 21:16:36,372 INFO org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsDatasetAsyncDiskService: Deleted BP-216406264-127.0.0.1-1453767164845 blk_1073741832_1008 file /usr/local/hadoop/dfs/name/data/current/BP-216406264-127.0.0.1-1453767164845/current/finalized/blk_1073741832
>>
>>
>>
>>
>> 2016-01-21 18:52 GMT-03:00 Namikaze Minato <ll...@gmail.com>:
>>
>>> It could be a classpath issue (see
>>> http://stackoverflow.com/a/25090151/4486184); I strongly suspect this is
>>> the case.
>>>
>>> You could drill down to the exact root cause by looking at the
>>> datanode logs (see
>>>
>>> http://mail-archives.apache.org/mod_mbox/hadoop-user/201410.mbox/%3CCAEMetGubzq12LXbLRk6N4ejOoKse9dLEWMW8_WE6aRj=+RQtVw@mail.gmail.com%3E
>>> ),
>>> but I'm not sure they would show a different error from the one we
>>> already have...
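>>>
>>> If you do want to look, a rough way to pull them on a single-node
>>> setup (only a sketch, assuming the default log directory under
>>> $HADOOP_HOME/logs) is:
>>>
>>>     # datanode log on the single node (default HADOOP_LOG_DIR)
>>>     tail -n 200 $HADOOP_HOME/logs/hadoop-*-datanode-*.log
>>>
>>>     # aggregated container logs for application_1453244277886_0001
>>>     # (the id from the logs below), only if
>>>     # yarn.log-aggregation-enable is set to true
>>>     $HADOOP_HOME/bin/yarn logs -applicationId application_1453244277886_0001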
>>>
>>> Check if your application has the correct values for the following
>>> variables (see the quick check sketched right after the list):
>>> HADOOP_CONF_DIR
>>> HADOOP_COMMON_HOME
>>> HADOOP_HDFS_HOME
>>> HADOOP_MAPRED_HOME
>>> HADOOP_YARN_HOME
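>>>
>>> For instance, one rough way to sanity-check them from the shell (just
>>> a sketch, assuming the /usr/local/hadoop layout used elsewhere in this
>>> thread) would be:
>>>
>>>     # show which HADOOP_* variables the shell actually exports
>>>     env | grep '^HADOOP_'
>>>
>>>     # show the classpath the hadoop wrapper scripts compute
>>>     $HADOOP_HOME/bin/hadoop classpath
>>>
>>> If any entry points at a stale or mixed-up install, that would line up
>>> with the classpath theory above.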
>>>
>>> I'm afraid I can't help you much more than this myself, sorry...
>>>
>>> LLoyd
>>>
>>> On 20 January 2016 at 02:08, José Luis Larroque <la...@gmail.com>
>>> wrote:
>>> > Hi guys, thanks for your answers.
>>> >
>>> > Wordcount logs:
>>> >
>>> > 16/01/19 21:58:32 INFO client.RMProxy: Connecting to ResourceManager at
>>> > hdnode01/192.168.0.10:8050
>>> > SLF4J: Class path contains multiple SLF4J bindings.
>>> > SLF4J: Found binding in
>>> >
>>> [jar:file:/usr/local/hadoop/share/hadoop/common/lib/slf4j-log4j12-1.7.5.jar!/org/slf4j/impl/StaticLoggerBinder.class]
>>> > SLF4J: Found binding in
>>> >
>>> [jar:file:/usr/local/hadoop/share/hadoop/yarn/lib/giraph-examples-1.1.0-for-hadoop-2.4.0-jar-with-dependencies.jar!/org/slf4j/impl/StaticLoggerBinder.class]
>>> > SLF4J: Found binding in
>>> >
>>> [jar:file:/usr/local/hadoop/share/hadoop/mapreduce/giraph-examples-1.1.0-for-hadoop-2.4.0-jar-with-dependencies.jar!/org/slf4j/impl/StaticLoggerBinder.class]
>>> > SLF4J: See http://www.slf4j.org/codes.html#multiple_bindings for an
>>> > explanation.
>>> > SLF4J: Actual binding is of type [org.slf4j.impl.Log4jLoggerFactory]
>>> > 16/01/19 21:58:32 WARN util.NativeCodeLoader: Unable to load
>>> native-hadoop
>>> > library for your platform... using builtin-java classes where
>>> applicable
>>> > hduser@jose-ubuntu:/usr/local/hadoop$ nano
>>> >
>>> /home/hduser/Desktop/Tesina/casos_de_prueba/resultados/resultado_cluster_modo_yarn_wordcount
>>> > hduser@jose-ubuntu:/usr/local/hadoop$ nano
>>> >
>>> /home/hduser/Desktop/Tesina/casos_de_prueba/resultados/resultado_cluster_modo_yarn_wordcount
>>> > hduser@jose-ubuntu:/usr/local/hadoop$ cat
>>> >
>>> /home/hduser/Desktop/Tesina/casos_de_prueba/resultados/resultado_cluster_modo_yarn_wordcount
>>> >
>>> >
>>> > Container: container_1453244277886_0001_01_000002 on localhost_35711
>>> > ======================================================================
>>> > LogType: stderr
>>> > LogLength: 45
>>> > Log Contents:
>>> > Error: Could not find or load main class 256
>>> >
>>> > LogType: stdout
>>> > LogLength: 0
>>> > Log Contents:
>>> >
>>> >
>>> >
>>> > Container: container_1453244277886_0001_01_000003 on localhost_35711
>>> > ======================================================================
>>> > LogType: stderr
>>> > LogLength: 45
>>> > Log Contents:
>>> > Error: Could not find or load main class 256
>>> >
>>> > LogType: stdout
>>> > LogLength: 0
>>> > Log Contents:
>>> >
>>> >
>>> >
>>> > Container: container_1453244277886_0001_01_000004 on localhost_35711
>>> > ======================================================================
>>> > LogType: stderr
>>> > LogLength: 45
>>> > Log Contents:
>>> > Error: Could not find or load main class 256
>>> >
>>> > LogType: stdout
>>> > LogLength: 0
>>> > Log Contents:
>>> >
>>> >
>>> >
>>> > Container: container_1453244277886_0001_01_000005 on localhost_35711
>>> > ======================================================================
>>> > LogType: stderr
>>> > LogLength: 45
>>> > Log Contents:
>>> > Error: Could not find or load main class 256
>>> >
>>> > LogType: stdout
>>> > LogLength: 0
>>> > Log Contents:
>>> >
>>> >
>>> >
>>> > Container: container_1453244277886_0001_01_000001 on localhost_35711
>>> > ======================================================================
>>> > LogType: stderr
>>> > LogLength: 929
>>> > Log Contents:
>>> > SLF4J: Class path contains multiple SLF4J bindings.
>>> > SLF4J: Found binding in
>>> >
>>> [jar:file:/usr/local/hadoop/share/hadoop/common/lib/slf4j-log4j12-1.7.5.jar!/org/slf4j/impl/StaticLoggerBinder.class]
>>> > SLF4J: Found binding in
>>> >
>>> [jar:file:/usr/local/hadoop/share/hadoop/yarn/lib/giraph-examples-1.1.0-for-hadoop-2.4.0-jar-with-dependencies.jar!/org/slf4j/impl/StaticLoggerBinder.class]
>>> > SLF4J: Found binding in
>>> >
>>> [jar:file:/usr/local/hadoop/share/hadoop/mapreduce/giraph-examples-1.1.0-for-hadoop-2.4.0-jar-with-dependencies.jar!/org/slf4j/impl/StaticLoggerBinder.class]
>>> > SLF4J: See http://www.slf4j.org/codes.html#multiple_bindings for an
>>> > explanation.
>>> > SLF4J: Actual binding is of type [org.slf4j.impl.Log4jLoggerFactory]
>>> > log4j:WARN No appenders could be found for logger
>>> > (org.apache.hadoop.ipc.Server).
>>> > log4j:WARN Please initialize the log4j system properly.
>>> > log4j:WARN See http://logging.apache.org/log4j/1.2/faq.html#noconfig
>>> for
>>> > more info.
>>> >
>>> > LogType: stdout
>>> > LogLength: 0
>>> > Log Contents:
>>> >
>>> > LogType: syslog
>>> > LogLength: 56780
>>> > Log Contents:
>>> > 2016-01-19 20:04:11,329 INFO [main]
>>> > org.apache.hadoop.mapreduce.v2.app.MRAppMaster: Created MRAppMaster for
>>> > application appattempt_1453244277886_0001_000001
>>> > 2016-01-19 20:04:11,657 WARN [main]
>>> org.apache.hadoop.conf.Configuration:
>>> > job.xml:an attempt to override final parameter:
>>> > mapreduce.job.end-notification.max.retry.interval;  Ignoring.
>>> > 2016-01-19 20:04:11,674 WARN [main]
>>> org.apache.hadoop.conf.Configuration:
>>> > job.xml:an attempt to override final parameter:
>>> > mapreduce.job.end-notification.max.attempts;  Ignoring.
>>> > 2016-01-19 20:04:11,765 WARN [main]
>>> org.apache.hadoop.util.NativeCodeLoader:
>>> > Unable to load native-hadoop library for your platform... using
>>> builtin-java
>>> > classes where applicable
>>> > 2016-01-19 20:04:11,776 INFO [main]
>>> > org.apache.hadoop.mapreduce.v2.app.MRAppMaster: Executing with tokens:
>>> > 2016-01-19 20:04:11,776 INFO [main]
>>> > org.apache.hadoop.mapreduce.v2.app.MRAppMaster: Kind: YARN_AM_RM_TOKEN,
>>> > Service: , Ident:
>>> > (org.apache.hadoop.yarn.security.AMRMTokenIdentifier@73e8f4b9)
>>> > 2016-01-19 20:04:11,801 INFO [main]
>>> > org.apache.hadoop.mapreduce.v2.app.MRAppMaster: The specific max
>>> attempts: 2
>>> > for application: 1. Attempt num: 1 is last retry: false
>>> > 2016-01-19 20:04:11,806 INFO [main]
>>> > org.apache.hadoop.mapreduce.v2.app.MRAppMaster: Using mapred
>>> > newApiCommitter.
>>> > 2016-01-19 20:04:11,934 WARN [main]
>>> org.apache.hadoop.conf.Configuration:
>>> > job.xml:an attempt to override final parameter: dfs.datanode.data.dir;
>>> > Ignoring.
>>> > 2016-01-19 20:04:11,939 WARN [main]
>>> org.apache.hadoop.conf.Configuration:
>>> > job.xml:an attempt to override final parameter:
>>> > mapreduce.job.end-notification.max.retry.interval;  Ignoring.
>>> > 2016-01-19 20:04:11,948 WARN [main]
>>> org.apache.hadoop.conf.Configuration:
>>> > job.xml:an attempt to override final parameter: dfs.namenode.name.dir;
>>> > Ignoring.
>>> > 2016-01-19 20:04:11,953 WARN [main]
>>> org.apache.hadoop.conf.Configuration:
>>> > job.xml:an attempt to override final parameter:
>>> > mapreduce.job.end-notification.max.attempts;  Ignoring.
>>> > 2016-01-19 20:04:12,464 INFO [main]
>>> > org.apache.hadoop.mapreduce.v2.app.MRAppMaster: OutputCommitter set in
>>> > config null
>>> > 2016-01-19 20:04:12,526 INFO [main]
>>> > org.apache.hadoop.mapreduce.v2.app.MRAppMaster: OutputCommitter is
>>> > org.apache.hadoop.mapreduce.lib.output.FileOutputCommitter
>>> > 2016-01-19 20:04:12,548 INFO [main]
>>> > org.apache.hadoop.yarn.event.AsyncDispatcher: Registering class
>>> > org.apache.hadoop.mapreduce.jobhistory.EventType for class
>>> > org.apache.hadoop.mapreduce.jobhistory.JobHistoryEventHandler
>>> > 2016-01-19 20:04:12,549 INFO [main]
>>> > org.apache.hadoop.yarn.event.AsyncDispatcher: Registering class
>>> > org.apache.hadoop.mapreduce.v2.app.job.event.JobEventType for class
>>> > org.apache.hadoop.mapreduce.v2.app.MRAppMaster$JobEventDispatcher
>>> > 2016-01-19 20:04:12,550 INFO [main]
>>> > org.apache.hadoop.yarn.event.AsyncDispatcher: Registering class
>>> > org.apache.hadoop.mapreduce.v2.app.job.event.TaskEventType for class
>>> > org.apache.hadoop.mapreduce.v2.app.MRAppMaster$TaskEventDispatcher
>>> > 2016-01-19 20:04:12,551 INFO [main]
>>> > org.apache.hadoop.yarn.event.AsyncDispatcher: Registering class
>>> > org.apache.hadoop.mapreduce.v2.app.job.event.TaskAttemptEventType for
>>> class
>>> >
>>> org.apache.hadoop.mapreduce.v2.app.MRAppMaster$TaskAttemptEventDispatcher
>>> > 2016-01-19 20:04:12,552 INFO [main]
>>> > org.apache.hadoop.yarn.event.AsyncDispatcher: Registering class
>>> > org.apache.hadoop.mapreduce.v2.app.commit.CommitterEventType for class
>>> > org.apache.hadoop.mapreduce.v2.app.commit.CommitterEventHandler
>>> > 2016-01-19 20:04:12,557 INFO [main]
>>> > org.apache.hadoop.yarn.event.AsyncDispatcher: Registering class
>>> > org.apache.hadoop.mapreduce.v2.app.speculate.Speculator$EventType for
>>> class
>>> >
>>> org.apache.hadoop.mapreduce.v2.app.MRAppMaster$SpeculatorEventDispatcher
>>> > 2016-01-19 20:04:12,558 INFO [main]
>>> > org.apache.hadoop.yarn.event.AsyncDispatcher: Registering class
>>> > org.apache.hadoop.mapreduce.v2.app.rm.ContainerAllocator$EventType for
>>> class
>>> > org.apache.hadoop.mapreduce.v2.app.MRAppMaster$ContainerAllocatorRouter
>>> > 2016-01-19 20:04:12,559 INFO [main]
>>> > org.apache.hadoop.yarn.event.AsyncDispatcher: Registering class
>>> >
>>> org.apache.hadoop.mapreduce.v2.app.launcher.ContainerLauncher$EventType for
>>> > class
>>> org.apache.hadoop.mapreduce.v2.app.MRAppMaster$ContainerLauncherRouter
>>> > 2016-01-19 20:04:12,615 INFO [main]
>>> > org.apache.hadoop.mapreduce.jobhistory.JobHistoryEventHandler: Perms
>>> after
>>> > creating 488, Expected: 504
>>> > 2016-01-19 20:04:12,615 INFO [main]
>>> > org.apache.hadoop.mapreduce.jobhistory.JobHistoryEventHandler:
>>> Explicitly
>>> > setting permissions to : 504, rwxrwx---
>>> > 2016-01-19 20:04:12,731 INFO [main]
>>> > org.apache.hadoop.yarn.event.AsyncDispatcher: Registering class
>>> > org.apache.hadoop.mapreduce.v2.app.job.event.JobFinishEvent$Type for
>>> class
>>> > org.apache.hadoop.mapreduce.v2.app.MRAppMaster$JobFinishEventHandler
>>> > 2016-01-19 20:04:12,956 INFO [main]
>>> > org.apache.hadoop.metrics2.impl.MetricsConfig: loaded properties from
>>> > hadoop-metrics2.properties
>>> > 2016-01-19 20:04:13,018 INFO [main]
>>> > org.apache.hadoop.metrics2.impl.MetricsSystemImpl: Scheduled snapshot
>>> period
>>> > at 10 second(s).
>>> > 2016-01-19 20:04:13,018 INFO [main]
>>> > org.apache.hadoop.metrics2.impl.MetricsSystemImpl: MRAppMaster metrics
>>> > system started
>>> > 2016-01-19 20:04:13,026 INFO [main]
>>> > org.apache.hadoop.mapreduce.v2.app.job.impl.JobImpl: Adding job token
>>> for
>>> > job_1453244277886_0001 to jobTokenSecretManager
>>> > 2016-01-19 20:04:13,139 INFO [main]
>>> > org.apache.hadoop.mapreduce.v2.app.job.impl.JobImpl: Not uberizing
>>> > job_1453244277886_0001 because: not enabled;
>>> > 2016-01-19 20:04:13,154 INFO [main]
>>> > org.apache.hadoop.mapreduce.v2.app.job.impl.JobImpl: Input size for job
>>> > job_1453244277886_0001 = 343691. Number of splits = 1
>>> > 2016-01-19 20:04:13,156 INFO [main]
>>> > org.apache.hadoop.mapreduce.v2.app.job.impl.JobImpl: Number of reduces
>>> for
>>> > job job_1453244277886_0001 = 1
>>> > 2016-01-19 20:04:13,156 INFO [main]
>>> > org.apache.hadoop.mapreduce.v2.app.job.impl.JobImpl:
>>> > job_1453244277886_0001Job Transitioned from NEW to INITED
>>> > 2016-01-19 20:04:13,157 INFO [main]
>>> > org.apache.hadoop.mapreduce.v2.app.MRAppMaster: MRAppMaster launching
>>> > normal, non-uberized, multi-container job job_1453244277886_0001.
>>> > 2016-01-19 20:04:13,186 INFO [main]
>>> org.apache.hadoop.ipc.CallQueueManager:
>>> > Using callQueue class java.util.concurrent.LinkedBlockingQueue
>>> > 2016-01-19 20:04:13,195 INFO [Socket Reader #1 for port 56461]
>>> > org.apache.hadoop.ipc.Server: Starting Socket Reader #1 for port 56461
>>> > 2016-01-19 20:04:13,237 INFO [main]
>>> > org.apache.hadoop.yarn.factories.impl.pb.RpcServerFactoryPBImpl: Adding
>>> > protocol org.apache.hadoop.mapreduce.v2.api.MRClientProtocolPB to the
>>> server
>>> > 2016-01-19 20:04:13,238 INFO [IPC Server Responder]
>>> > org.apache.hadoop.ipc.Server: IPC Server Responder: starting
>>> > 2016-01-19 20:04:13,238 INFO [IPC Server listener on 56461]
>>> > org.apache.hadoop.ipc.Server: IPC Server listener on 56461: starting
>>> > 2016-01-19 20:04:13,239 INFO [main]
>>> > org.apache.hadoop.mapreduce.v2.app.client.MRClientService: Instantiated
>>> > MRClientService at jose-ubuntu/127.0.0.1:56461
>>> > 2016-01-19 20:04:13,300 INFO [main] org.mortbay.log: Logging to
>>> > org.slf4j.impl.Log4jLoggerAdapter(org.mortbay.log) via
>>> > org.mortbay.log.Slf4jLog
>>> > 2016-01-19 20:04:13,304 INFO [main]
>>> org.apache.hadoop.http.HttpRequestLog:
>>> > Http request log for http.requests.mapreduce is not defined
>>> > 2016-01-19 20:04:13,315 INFO [main] org.apache.hadoop.http.HttpServer2:
>>> > Added global filter 'safety'
>>> > (class=org.apache.hadoop.http.HttpServer2$QuotingInputFilter)
>>> > 2016-01-19 20:04:13,320 INFO [main] org.apache.hadoop.http.HttpServer2:
>>> > Added filter AM_PROXY_FILTER
>>> > (class=org.apache.hadoop.yarn.server.webproxy.amfilter.AmIpFilter) to
>>> > context mapreduce
>>> > 2016-01-19 20:04:13,321 INFO [main] org.apache.hadoop.http.HttpServer2:
>>> > Added filter AM_PROXY_FILTER
>>> > (class=org.apache.hadoop.yarn.server.webproxy.amfilter.AmIpFilter) to
>>> > context static
>>> > 2016-01-19 20:04:13,324 INFO [main] org.apache.hadoop.http.HttpServer2:
>>> > adding path spec: /mapreduce/*
>>> > 2016-01-19 20:04:13,324 INFO [main] org.apache.hadoop.http.HttpServer2:
>>> > adding path spec: /ws/*
>>> > 2016-01-19 20:04:13,335 INFO [main] org.apache.hadoop.http.HttpServer2:
>>> > Jetty bound to port 44070
>>> > 2016-01-19 20:04:13,335 INFO [main] org.mortbay.log: jetty-6.1.26
>>> > 2016-01-19 20:04:13,370 INFO [main] org.mortbay.log: Extract
>>> >
>>> jar:file:/usr/local/hadoop/share/hadoop/yarn/hadoop-yarn-common-2.4.0.jar!/webapps/mapreduce
>>> > to /tmp/Jetty_0_0_0_0_44070_mapreduce____rdpvio/webapp
>>> > 2016-01-19 20:04:13,647 INFO [main] org.mortbay.log: Started
>>> > SelectChannelConnector@0.0.0.0:44070
>>> > 2016-01-19 20:04:13,647 INFO [main]
>>> org.apache.hadoop.yarn.webapp.WebApps:
>>> > Web app /mapreduce started at 44070
>>> > 2016-01-19 20:04:13,956 INFO [main]
>>> org.apache.hadoop.yarn.webapp.WebApps:
>>> > Registered webapp guice modules
>>> > 2016-01-19 20:04:13,960 INFO [AsyncDispatcher event handler]
>>> > org.apache.hadoop.mapreduce.v2.app.speculate.DefaultSpeculator:
>>> JOB_CREATE
>>> > job_1453244277886_0001
>>> > 2016-01-19 20:04:13,961 INFO [main]
>>> org.apache.hadoop.ipc.CallQueueManager:
>>> > Using callQueue class java.util.concurrent.LinkedBlockingQueue
>>> > 2016-01-19 20:04:13,961 INFO [Socket Reader #1 for port 45584]
>>> > org.apache.hadoop.ipc.Server: Starting Socket Reader #1 for port 45584
>>> > 2016-01-19 20:04:13,966 INFO [IPC Server Responder]
>>> > org.apache.hadoop.ipc.Server: IPC Server Responder: starting
>>> > 2016-01-19 20:04:13,966 INFO [IPC Server listener on 45584]
>>> > org.apache.hadoop.ipc.Server: IPC Server listener on 45584: starting
>>> > 2016-01-19 20:04:13,987 INFO [main]
>>> > org.apache.hadoop.mapreduce.v2.app.rm.RMContainerRequestor:
>>> > nodeBlacklistingEnabled:true
>>> > 2016-01-19 20:04:13,987 INFO [main]
>>> > org.apache.hadoop.mapreduce.v2.app.rm.RMContainerRequestor:
>>> > maxTaskFailuresPerNode is 3
>>> > 2016-01-19 20:04:13,988 INFO [main]
>>> > org.apache.hadoop.mapreduce.v2.app.rm.RMContainerRequestor:
>>> > blacklistDisablePercent is 33
>>> > 2016-01-19 20:04:14,052 WARN [main]
>>> org.apache.hadoop.conf.Configuration:
>>> > job.xml:an attempt to override final parameter: dfs.datanode.data.dir;
>>> > Ignoring.
>>> > 2016-01-19 20:04:14,054 WARN [main]
>>> org.apache.hadoop.conf.Configuration:
>>> > job.xml:an attempt to override final parameter:
>>> > mapreduce.job.end-notification.max.retry.interval;  Ignoring.
>>> > 2016-01-19 20:04:14,057 WARN [main]
>>> org.apache.hadoop.conf.Configuration:
>>> > job.xml:an attempt to override final parameter: dfs.namenode.name.dir;
>>> > Ignoring.
>>> > 2016-01-19 20:04:14,059 WARN [main]
>>> org.apache.hadoop.conf.Configuration:
>>> > job.xml:an attempt to override final parameter:
>>> > mapreduce.job.end-notification.max.attempts;  Ignoring.
>>> > 2016-01-19 20:04:14,062 INFO [main]
>>> org.apache.hadoop.yarn.client.RMProxy:
>>> > Connecting to ResourceManager at hdnode01/192.168.0.10:8030
>>> > 2016-01-19 20:04:14,158 INFO [main]
>>> > org.apache.hadoop.mapreduce.v2.app.rm.RMContainerAllocator:
>>> > maxContainerCapability: 2000
>>> > 2016-01-19 20:04:14,158 INFO [main]
>>> > org.apache.hadoop.mapreduce.v2.app.rm.RMContainerAllocator: queue:
>>> default
>>> > 2016-01-19 20:04:14,162 INFO [main]
>>> > org.apache.hadoop.mapreduce.v2.app.launcher.ContainerLauncherImpl:
>>> Upper
>>> > limit on the thread pool size is 500
>>> > 2016-01-19 20:04:14,164 INFO [main]
>>> >
>>> org.apache.hadoop.yarn.client.api.impl.ContainerManagementProtocolProxy:
>>> > yarn.client.max-nodemanagers-proxies : 500
>>> > 2016-01-19 20:04:14,172 INFO [AsyncDispatcher event handler]
>>> > org.apache.hadoop.mapreduce.v2.app.job.impl.JobImpl:
>>> > job_1453244277886_0001Job Transitioned from INITED to SETUP
>>> > 2016-01-19 20:04:14,174 INFO [CommitterEvent Processor #0]
>>> > org.apache.hadoop.mapreduce.v2.app.commit.CommitterEventHandler:
>>> Processing
>>> > the event EventType: JOB_SETUP
>>> > 2016-01-19 20:04:14,210 INFO [AsyncDispatcher event handler]
>>> > org.apache.hadoop.mapreduce.v2.app.job.impl.JobImpl:
>>> > job_1453244277886_0001Job Transitioned from SETUP to RUNNING
>>> > 2016-01-19 20:04:14,227 INFO [AsyncDispatcher event handler]
>>> > org.apache.hadoop.yarn.util.RackResolver: Resolved localhost to
>>> > /default-rack
>>> > 2016-01-19 20:04:14,230 INFO [AsyncDispatcher event handler]
>>> > org.apache.hadoop.mapreduce.v2.app.job.impl.TaskImpl:
>>> > task_1453244277886_0001_m_000000 Task Transitioned from NEW to
>>> SCHEDULED
>>> > 2016-01-19 20:04:14,230 INFO [AsyncDispatcher event handler]
>>> > org.apache.hadoop.mapreduce.v2.app.job.impl.TaskImpl:
>>> > task_1453244277886_0001_r_000000 Task Transitioned from NEW to
>>> SCHEDULED
>>> > 2016-01-19 20:04:14,232 INFO [AsyncDispatcher event handler]
>>> > org.apache.hadoop.mapreduce.v2.app.job.impl.TaskAttemptImpl:
>>> > attempt_1453244277886_0001_m_000000_0 TaskAttempt Transitioned from
>>> NEW to
>>> > UNASSIGNED
>>> > 2016-01-19 20:04:14,232 INFO [AsyncDispatcher event handler]
>>> > org.apache.hadoop.mapreduce.v2.app.job.impl.TaskAttemptImpl:
>>> > attempt_1453244277886_0001_r_000000_0 TaskAttempt Transitioned from
>>> NEW to
>>> > UNASSIGNED
>>> > 2016-01-19 20:04:14,233 INFO [Thread-50]
>>> > org.apache.hadoop.mapreduce.v2.app.rm.RMContainerAllocator:
>>> > mapResourceReqt:512
>>> > 2016-01-19 20:04:14,245 INFO [Thread-50]
>>> > org.apache.hadoop.mapreduce.v2.app.rm.RMContainerAllocator:
>>> > reduceResourceReqt:512
>>> > 2016-01-19 20:04:14,324 INFO [eventHandlingThread]
>>> > org.apache.hadoop.mapreduce.jobhistory.JobHistoryEventHandler: Event
>>> Writer
>>> > setup for JobId: job_1453244277886_0001, File:
>>> >
>>> hdfs://hdnode01:54310/tmp/hadoop-yarn/staging/hduser/.staging/job_1453244277886_0001/job_1453244277886_0001_1.jhist
>>> > 2016-01-19 20:04:15,162 INFO [RMCommunicator Allocator]
>>> > org.apache.hadoop.mapreduce.v2.app.rm.RMContainerAllocator: Before
>>> > Scheduling: PendingReds:1 ScheduledMaps:1 ScheduledReds:0
>>> AssignedMaps:0
>>> > AssignedReds:0 CompletedMaps:0 CompletedReds:0 ContAlloc:0 ContRel:0
>>> > HostLocal:0 RackLocal:0
>>> > 2016-01-19 20:04:15,217 INFO [RMCommunicator Allocator]
>>> > org.apache.hadoop.mapreduce.v2.app.rm.RMContainerRequestor:
>>> getResources()
>>> > for application_1453244277886_0001: ask=3 release= 0 newContainers=0
>>> > finishedContainers=0 resourcelimit=<memory:1280, vCores:0> knownNMs=1
>>> > 2016-01-19 20:04:15,217 INFO [RMCommunicator Allocator]
>>> > org.apache.hadoop.mapreduce.v2.app.rm.RMContainerAllocator:
>>> Recalculating
>>> > schedule, headroom=1280
>>> > 2016-01-19 20:04:15,217 INFO [RMCommunicator Allocator]
>>> > org.apache.hadoop.mapreduce.v2.app.rm.RMContainerAllocator: Reduce slow
>>> > start threshold not met. completedMapsForReduceSlowstart 1
>>> > 2016-01-19 20:04:16,240 INFO [RMCommunicator Allocator]
>>> > org.apache.hadoop.mapreduce.v2.app.rm.RMContainerAllocator: Got
>>> allocated
>>> > containers 1
>>> > 2016-01-19 20:04:16,241 INFO [RMCommunicator Allocator]
>>> > org.apache.hadoop.mapreduce.v2.app.rm.RMContainerAllocator: Assigned
>>> > container container_1453244277886_0001_01_000002 to
>>> > attempt_1453244277886_0001_m_000000_0
>>> > 2016-01-19 20:04:16,243 INFO [RMCommunicator Allocator]
>>> > org.apache.hadoop.mapreduce.v2.app.rm.RMContainerAllocator:
>>> Recalculating
>>> > schedule, headroom=768
>>> > 2016-01-19 20:04:16,243 INFO [RMCommunicator Allocator]
>>> > org.apache.hadoop.mapreduce.v2.app.rm.RMContainerAllocator: Reduce slow
>>> > start threshold not met. completedMapsForReduceSlowstart 1
>>> > 2016-01-19 20:04:16,243 INFO [RMCommunicator Allocator]
>>> > org.apache.hadoop.mapreduce.v2.app.rm.RMContainerAllocator: After
>>> > Scheduling: PendingReds:1 ScheduledMaps:0 ScheduledReds:0
>>> AssignedMaps:1
>>> > AssignedReds:0 CompletedMaps:0 CompletedReds:0 ContAlloc:1 ContRel:0
>>> > HostLocal:1 RackLocal:0
>>> > 2016-01-19 20:04:16,291 INFO [AsyncDispatcher event handler]
>>> > org.apache.hadoop.yarn.util.RackResolver: Resolved localhost to
>>> > /default-rack
>>> > 2016-01-19 20:04:16,316 INFO [AsyncDispatcher event handler]
>>> > org.apache.hadoop.mapreduce.v2.app.job.impl.TaskAttemptImpl: The
>>> job-jar
>>> > file on the remote FS is
>>> >
>>> hdfs://hdnode01:54310/tmp/hadoop-yarn/staging/hduser/.staging/job_1453244277886_0001/job.jar
>>> > 2016-01-19 20:04:16,322 INFO [AsyncDispatcher event handler]
>>> > org.apache.hadoop.mapreduce.v2.app.job.impl.TaskAttemptImpl: The
>>> job-conf
>>> > file on the remote FS is
>>> > /tmp/hadoop-yarn/staging/hduser/.staging/job_1453244277886_0001/job.xml
>>> > 2016-01-19 20:04:16,325 INFO [AsyncDispatcher event handler]
>>> > org.apache.hadoop.mapreduce.v2.app.job.impl.TaskAttemptImpl: Adding #0
>>> > tokens and #1 secret keys for NM use for launching container
>>> > 2016-01-19 20:04:16,325 INFO [AsyncDispatcher event handler]
>>> > org.apache.hadoop.mapreduce.v2.app.job.impl.TaskAttemptImpl: Size of
>>> > containertokens_dob is 1
>>> > 2016-01-19 20:04:16,325 INFO [AsyncDispatcher event handler]
>>> > org.apache.hadoop.mapreduce.v2.app.job.impl.TaskAttemptImpl: Putting
>>> shuffle
>>> > token in serviceData
>>> > 2016-01-19 20:04:16,350 INFO [AsyncDispatcher event handler]
>>> > org.apache.hadoop.mapreduce.v2.app.job.impl.TaskAttemptImpl:
>>> > attempt_1453244277886_0001_m_000000_0 TaskAttempt Transitioned from
>>> > UNASSIGNED to ASSIGNED
>>> > 2016-01-19 20:04:16,354 INFO [ContainerLauncher #0]
>>> > org.apache.hadoop.mapreduce.v2.app.launcher.ContainerLauncherImpl:
>>> > Processing the event EventType: CONTAINER_REMOTE_LAUNCH for container
>>> > container_1453244277886_0001_01_000002 taskAttempt
>>> > attempt_1453244277886_0001_m_000000_0
>>> > 2016-01-19 20:04:16,356 INFO [ContainerLauncher #0]
>>> > org.apache.hadoop.mapreduce.v2.app.launcher.ContainerLauncherImpl:
>>> Launching
>>> > attempt_1453244277886_0001_m_000000_0
>>> > 2016-01-19 20:04:16,357 INFO [ContainerLauncher #0]
>>> >
>>> org.apache.hadoop.yarn.client.api.impl.ContainerManagementProtocolProxy:
>>> > Opening proxy : localhost:35711
>>> > 2016-01-19 20:04:16,411 INFO [ContainerLauncher #0]
>>> > org.apache.hadoop.mapreduce.v2.app.launcher.ContainerLauncherImpl:
>>> Shuffle
>>> > port returned by ContainerManager for
>>> attempt_1453244277886_0001_m_000000_0
>>> > : 13562
>>> > 2016-01-19 20:04:16,413 INFO [AsyncDispatcher event handler]
>>> > org.apache.hadoop.mapreduce.v2.app.job.impl.TaskAttemptImpl:
>>> TaskAttempt:
>>> > [attempt_1453244277886_0001_m_000000_0] using containerId:
>>> > [container_1453244277886_0001_01_000002 on NM: [localhost:35711]
>>> > 2016-01-19 20:04:16,418 INFO [AsyncDispatcher event handler]
>>> > org.apache.hadoop.mapreduce.v2.app.job.impl.TaskAttemptImpl:
>>> > attempt_1453244277886_0001_m_000000_0 TaskAttempt Transitioned from
>>> ASSIGNED
>>> > to RUNNING
>>> > 2016-01-19 20:04:16,419 INFO [AsyncDispatcher event handler]
>>> > org.apache.hadoop.mapreduce.v2.app.speculate.DefaultSpeculator:
>>> > ATTEMPT_START task_1453244277886_0001_m_000000
>>> > 2016-01-19 20:04:16,419 INFO [AsyncDispatcher event handler]
>>> > org.apache.hadoop.mapreduce.v2.app.job.impl.TaskImpl:
>>> > task_1453244277886_0001_m_000000 Task Transitioned from SCHEDULED to
>>> RUNNING
>>> > 2016-01-19 20:04:17,251 INFO [RMCommunicator Allocator]
>>> > org.apache.hadoop.mapreduce.v2.app.rm.RMContainerRequestor:
>>> getResources()
>>> > for application_1453244277886_0001: ask=3 release= 0 newContainers=0
>>> > finishedContainers=0 resourcelimit=<memory:768, vCores:-1> knownNMs=1
>>> > 2016-01-19 20:04:18,269 INFO [RMCommunicator Allocator]
>>> > org.apache.hadoop.mapreduce.v2.app.rm.RMContainerAllocator: Received
>>> > completed container container_1453244277886_0001_01_000002
>>> > 2016-01-19 20:04:18,269 INFO [RMCommunicator Allocator]
>>> > org.apache.hadoop.mapreduce.v2.app.rm.RMContainerAllocator:
>>> Recalculating
>>> > schedule, headroom=768
>>> > 2016-01-19 20:04:18,269 INFO [RMCommunicator Allocator]
>>> > org.apache.hadoop.mapreduce.v2.app.rm.RMContainerAllocator: Reduce slow
>>> > start threshold not met. completedMapsForReduceSlowstart 1
>>> > 2016-01-19 20:04:18,270 INFO [RMCommunicator Allocator]
>>> > org.apache.hadoop.mapreduce.v2.app.rm.RMContainerAllocator: After
>>> > Scheduling: PendingReds:1 ScheduledMaps:0 ScheduledReds:0
>>> AssignedMaps:0
>>> > AssignedReds:0 CompletedMaps:0 CompletedReds:0 ContAlloc:1 ContRel:0
>>> > HostLocal:1 RackLocal:0
>>> > 2016-01-19 20:04:18,280 INFO [AsyncDispatcher event handler]
>>> > org.apache.hadoop.mapreduce.v2.app.job.impl.TaskAttemptImpl:
>>> > attempt_1453244277886_0001_m_000000_0 TaskAttempt Transitioned from
>>> RUNNING
>>> > to FAIL_CONTAINER_CLEANUP
>>> > 2016-01-19 20:04:18,280 INFO [AsyncDispatcher event handler]
>>> > org.apache.hadoop.mapreduce.v2.app.job.impl.TaskAttemptImpl:
>>> Diagnostics
>>> > report from attempt_1453244277886_0001_m_000000_0: Exception from
>>> > container-launch: org.apache.hadoop.util.Shell$ExitCodeException:
>>> > org.apache.hadoop.util.Shell$ExitCodeException:
>>> >     at org.apache.hadoop.util.Shell.runCommand(Shell.java:505)
>>> >     at org.apache.hadoop.util.Shell.run(Shell.java:418)
>>> >     at
>>> >
>>> org.apache.hadoop.util.Shell$ShellCommandExecutor.execute(Shell.java:650)
>>> >     at
>>> >
>>> org.apache.hadoop.yarn.server.nodemanager.DefaultContainerExecutor.launchContainer(DefaultContainerExecutor.java:195)
>>> >     at
>>> >
>>> org.apache.hadoop.yarn.server.nodemanager.containermanager.launcher.ContainerLaunch.call(ContainerLaunch.java:300)
>>> >     at
>>> >
>>> org.apache.hadoop.yarn.server.nodemanager.containermanager.launcher.ContainerLaunch.call(ContainerLaunch.java:81)
>>> >     at java.util.concurrent.FutureTask.run(FutureTask.java:262)
>>> >     at
>>> >
>>> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
>>> >     at
>>> >
>>> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
>>> >     at java.lang.Thread.run(Thread.java:745)
>>> >
>>> >
>>> > Container exited with a non-zero exit code 1
>>> >
>>> > 2016-01-19 20:04:18,281 INFO [ContainerLauncher #1]
>>> > org.apache.hadoop.mapreduce.v2.app.launcher.ContainerLauncherImpl:
>>> > Processing the event EventType: CONTAINER_REMOTE_CLEANUP for container
>>> > container_1453244277886_0001_01_000002 taskAttempt
>>> > attempt_1453244277886_0001_m_000000_0
>>> > 2016-01-19 20:04:18,281 INFO [ContainerLauncher #1]
>>> > org.apache.hadoop.mapreduce.v2.app.launcher.ContainerLauncherImpl:
>>> KILLING
>>> > attempt_1453244277886_0001_m_000000_0
>>> > 2016-01-19 20:04:18,299 INFO [AsyncDispatcher event handler]
>>> > org.apache.hadoop.mapreduce.v2.app.job.impl.TaskAttemptImpl:
>>> > attempt_1453244277886_0001_m_000000_0 TaskAttempt Transitioned from
>>> > FAIL_CONTAINER_CLEANUP to FAIL_TASK_CLEANUP
>>> > 2016-01-19 20:04:18,300 INFO [CommitterEvent Processor #1]
>>> > org.apache.hadoop.mapreduce.v2.app.commit.CommitterEventHandler:
>>> Processing
>>> > the event EventType: TASK_ABORT
>>> > 2016-01-19 20:04:18,313 WARN [CommitterEvent Processor #1]
>>> > org.apache.hadoop.mapreduce.lib.output.FileOutputCommitter: Could not
>>> delete
>>> >
>>> hdfs://hdnode01:54310/user/hduser/output/wordcount/_temporary/1/_temporary/attempt_1453244277886_0001_m_000000_0
>>> > 2016-01-19 20:04:18,317 INFO [AsyncDispatcher event handler]
>>> > org.apache.hadoop.mapreduce.v2.app.job.impl.TaskAttemptImpl:
>>> > attempt_1453244277886_0001_m_000000_0 TaskAttempt Transitioned from
>>> > FAIL_TASK_CLEANUP to FAILED
>>> > 2016-01-19 20:04:18,326 INFO [AsyncDispatcher event handler]
>>> > org.apache.hadoop.yarn.util.RackResolver: Resolved localhost to
>>> > /default-rack
>>> > 2016-01-19 20:04:18,327 INFO [Thread-50]
>>> > org.apache.hadoop.mapreduce.v2.app.rm.RMContainerRequestor: 1 failures
>>> on
>>> > node localhost
>>> > 2016-01-19 20:04:18,329 INFO [AsyncDispatcher event handler]
>>> > org.apache.hadoop.mapreduce.v2.app.job.impl.TaskAttemptImpl:
>>> > attempt_1453244277886_0001_m_000000_1 TaskAttempt Transitioned from
>>> NEW to
>>> > UNASSIGNED
>>> > 2016-01-19 20:04:18,329 INFO [Thread-50]
>>> > org.apache.hadoop.mapreduce.v2.app.rm.RMContainerAllocator: Added
>>> > attempt_1453244277886_0001_m_000000_1 to list of failed maps
>>> > 2016-01-19 20:04:19,270 INFO [RMCommunicator Allocator]
>>> > org.apache.hadoop.mapreduce.v2.app.rm.RMContainerAllocator: Before
>>> > Scheduling: PendingReds:1 ScheduledMaps:1 ScheduledReds:0
>>> AssignedMaps:0
>>> > AssignedReds:0 CompletedMaps:0 CompletedReds:0 ContAlloc:1 ContRel:0
>>> > HostLocal:1 RackLocal:0
>>> > 2016-01-19 20:04:19,277 INFO [RMCommunicator Allocator]
>>> > org.apache.hadoop.mapreduce.v2.app.rm.RMContainerRequestor:
>>> getResources()
>>> > for application_1453244277886_0001: ask=1 release= 0 newContainers=0
>>> > finishedContainers=0 resourcelimit=<memory:768, vCores:-1> knownNMs=1
>>> > 2016-01-19 20:04:19,277 INFO [RMCommunicator Allocator]
>>> > org.apache.hadoop.mapreduce.v2.app.rm.RMContainerAllocator:
>>> Recalculating
>>> > schedule, headroom=768
>>> > 2016-01-19 20:04:19,278 INFO [RMCommunicator Allocator]
>>> > org.apache.hadoop.mapreduce.v2.app.rm.RMContainerAllocator: Reduce slow
>>> > start threshold not met. completedMapsForReduceSlowstart 1
>>> > 2016-01-19 20:04:20,285 INFO [RMCommunicator Allocator]
>>> > org.apache.hadoop.mapreduce.v2.app.rm.RMContainerAllocator: Got
>>> allocated
>>> > containers 1
>>> > 2016-01-19 20:04:20,285 INFO [RMCommunicator Allocator]
>>> > org.apache.hadoop.mapreduce.v2.app.rm.RMContainerAllocator: Assigning
>>> > container Container: [ContainerId:
>>> container_1453244277886_0001_01_000003,
>>> > NodeId: localhost:35711, NodeHttpAddress: localhost:8042, Resource:
>>> > <memory:512, vCores:1>, Priority: 5, Token: Token { kind:
>>> ContainerToken,
>>> > service: 127.0.0.1:35711 }, ] to fast fail map
>>> > 2016-01-19 20:04:20,286 INFO [RMCommunicator Allocator]
>>> > org.apache.hadoop.mapreduce.v2.app.rm.RMContainerAllocator: Assigned
>>> from
>>> > earlierFailedMaps
>>> > 2016-01-19 20:04:20,286 INFO [RMCommunicator Allocator]
>>> > org.apache.hadoop.mapreduce.v2.app.rm.RMContainerAllocator: Assigned
>>> > container container_1453244277886_0001_01_000003 to
>>> > attempt_1453244277886_0001_m_000000_1
>>> > 2016-01-19 20:04:20,287 INFO [RMCommunicator Allocator]
>>> > org.apache.hadoop.mapreduce.v2.app.rm.RMContainerAllocator:
>>> Recalculating
>>> > schedule, headroom=768
>>> > 2016-01-19 20:04:20,287 INFO [RMCommunicator Allocator]
>>> > org.apache.hadoop.mapreduce.v2.app.rm.RMContainerAllocator: Reduce slow
>>> > start threshold not met. completedMapsForReduceSlowstart 1
>>> > 2016-01-19 20:04:20,287 INFO [RMCommunicator Allocator]
>>> > org.apache.hadoop.mapreduce.v2.app.rm.RMContainerAllocator: After
>>> > Scheduling: PendingReds:1 ScheduledMaps:0 ScheduledReds:0
>>> AssignedMaps:1
>>> > AssignedReds:0 CompletedMaps:0 CompletedReds:0 ContAlloc:2 ContRel:0
>>> > HostLocal:1 RackLocal:0
>>> > 2016-01-19 20:04:20,287 INFO [AsyncDispatcher event handler]
>>> > org.apache.hadoop.yarn.util.RackResolver: Resolved localhost to
>>> > /default-rack
>>> > 2016-01-19 20:04:20,289 INFO [AsyncDispatcher event handler]
>>> > org.apache.hadoop.mapreduce.v2.app.job.impl.TaskAttemptImpl:
>>> > attempt_1453244277886_0001_m_000000_1 TaskAttempt Transitioned from
>>> > UNASSIGNED to ASSIGNED
>>> > 2016-01-19 20:04:20,292 INFO [ContainerLauncher #2]
>>> > org.apache.hadoop.mapreduce.v2.app.launcher.ContainerLauncherImpl:
>>> > Processing the event EventType: CONTAINER_REMOTE_LAUNCH for container
>>> > container_1453244277886_0001_01_000003 taskAttempt
>>> > attempt_1453244277886_0001_m_000000_1
>>> > 2016-01-19 20:04:20,292 INFO [ContainerLauncher #2]
>>> > org.apache.hadoop.mapreduce.v2.app.launcher.ContainerLauncherImpl:
>>> Launching
>>> > attempt_1453244277886_0001_m_000000_1
>>> > 2016-01-19 20:04:20,301 INFO [ContainerLauncher #2]
>>> > org.apache.hadoop.mapreduce.v2.app.launcher.ContainerLauncherImpl:
>>> Shuffle
>>> > port returned by ContainerManager for
>>> attempt_1453244277886_0001_m_000000_1
>>> > : 13562
>>> > 2016-01-19 20:04:20,302 INFO [AsyncDispatcher event handler]
>>> > org.apache.hadoop.mapreduce.v2.app.job.impl.TaskAttemptImpl:
>>> TaskAttempt:
>>> > [attempt_1453244277886_0001_m_000000_1] using containerId:
>>> > [container_1453244277886_0001_01_000003 on NM: [localhost:35711]
>>> > 2016-01-19 20:04:20,303 INFO [AsyncDispatcher event handler]
>>> > org.apache.hadoop.mapreduce.v2.app.job.impl.TaskAttemptImpl:
>>> > attempt_1453244277886_0001_m_000000_1 TaskAttempt Transitioned from
>>> ASSIGNED
>>> > to RUNNING
>>> > 2016-01-19 20:04:20,304 INFO [AsyncDispatcher event handler]
>>> > org.apache.hadoop.mapreduce.v2.app.speculate.DefaultSpeculator:
>>> > ATTEMPT_START task_1453244277886_0001_m_000000
>>> > 2016-01-19 20:04:21,295 INFO [RMCommunicator Allocator]
>>> > org.apache.hadoop.mapreduce.v2.app.rm.RMContainerRequestor:
>>> getResources()
>>> > for application_1453244277886_0001: ask=1 release= 0 newContainers=0
>>> > finishedContainers=1 resourcelimit=<memory:768, vCores:-1> knownNMs=1
>>> > 2016-01-19 20:04:21,296 INFO [RMCommunicator Allocator]
>>> > org.apache.hadoop.mapreduce.v2.app.rm.RMContainerAllocator: Received
>>> > completed container container_1453244277886_0001_01_000003
>>> > 2016-01-19 20:04:21,296 INFO [RMCommunicator Allocator]
>>> > org.apache.hadoop.mapreduce.v2.app.rm.RMContainerAllocator:
>>> Recalculating
>>> > schedule, headroom=768
>>> > 2016-01-19 20:04:21,296 INFO [RMCommunicator Allocator]
>>> > org.apache.hadoop.mapreduce.v2.app.rm.RMContainerAllocator: Reduce slow
>>> > start threshold not met. completedMapsForReduceSlowstart 1
>>> > 2016-01-19 20:04:21,297 INFO [RMCommunicator Allocator]
>>> > org.apache.hadoop.mapreduce.v2.app.rm.RMContainerAllocator: After
>>> > Scheduling: PendingReds:1 ScheduledMaps:0 ScheduledReds:0
>>> AssignedMaps:0
>>> > AssignedReds:0 CompletedMaps:0 CompletedReds:0 ContAlloc:2 ContRel:0
>>> > HostLocal:1 RackLocal:0
>>> > 2016-01-19 20:04:21,297 INFO [AsyncDispatcher event handler]
>>> > org.apache.hadoop.mapreduce.v2.app.job.impl.TaskAttemptImpl:
>>> > attempt_1453244277886_0001_m_000000_1 TaskAttempt Transitioned from
>>> RUNNING
>>> > to FAIL_CONTAINER_CLEANUP
>>> > 2016-01-19 20:04:21,298 INFO [AsyncDispatcher event handler]
>>> > org.apache.hadoop.mapreduce.v2.app.job.impl.TaskAttemptImpl:
>>> Diagnostics
>>> > report from attempt_1453244277886_0001_m_000000_1: Exception from
>>> > container-launch: org.apache.hadoop.util.Shell$ExitCodeException:
>>> > org.apache.hadoop.util.Shell$ExitCodeException:
>>> >     at org.apache.hadoop.util.Shell.runCommand(Shell.java:505)
>>> >     at org.apache.hadoop.util.Shell.run(Shell.java:418)
>>> >     at
>>> >
>>> org.apache.hadoop.util.Shell$ShellCommandExecutor.execute(Shell.java:650)
>>> >     at
>>> >
>>> org.apache.hadoop.yarn.server.nodemanager.DefaultContainerExecutor.launchContainer(DefaultContainerExecutor.java:195)
>>> >     at
>>> >
>>> org.apache.hadoop.yarn.server.nodemanager.containermanager.launcher.ContainerLaunch.call(ContainerLaunch.java:300)
>>> >     at
>>> >
>>> org.apache.hadoop.yarn.server.nodemanager.containermanager.launcher.ContainerLaunch.call(ContainerLaunch.java:81)
>>> >     at java.util.concurrent.FutureTask.run(FutureTask.java:262)
>>> >     at
>>> >
>>> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
>>> >     at
>>> >
>>> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
>>> >     at java.lang.Thread.run(Thread.java:745)
>>> >
>>> >
>>> > Container exited with a non-zero exit code 1
>>> >
>>> > 2016-01-19 20:04:21,300 INFO [ContainerLauncher #3]
>>> > org.apache.hadoop.mapreduce.v2.app.launcher.ContainerLauncherImpl:
>>> > Processing the event EventType: CONTAINER_REMOTE_CLEANUP for container
>>> > container_1453244277886_0001_01_000003 taskAttempt
>>> > attempt_1453244277886_0001_m_000000_1
>>> > 2016-01-19 20:04:21,301 INFO [ContainerLauncher #3]
>>> > org.apache.hadoop.mapreduce.v2.app.launcher.ContainerLauncherImpl:
>>> KILLING
>>> > attempt_1453244277886_0001_m_000000_1
>>> > 2016-01-19 20:04:21,307 INFO [AsyncDispatcher event handler]
>>> > org.apache.hadoop.mapreduce.v2.app.job.impl.TaskAttemptImpl:
>>> > attempt_1453244277886_0001_m_000000_1 TaskAttempt Transitioned from
>>> > FAIL_CONTAINER_CLEANUP to FAIL_TASK_CLEANUP
>>> > 2016-01-19 20:04:21,308 INFO [CommitterEvent Processor #2]
>>> > org.apache.hadoop.mapreduce.v2.app.commit.CommitterEventHandler:
>>> Processing
>>> > the event EventType: TASK_ABORT
>>> > 2016-01-19 20:04:21,312 WARN [CommitterEvent Processor #2]
>>> > org.apache.hadoop.mapreduce.lib.output.FileOutputCommitter: Could not
>>> delete
>>> >
>>> hdfs://hdnode01:54310/user/hduser/output/wordcount/_temporary/1/_temporary/attempt_1453244277886_0001_m_000000_1
>>> > 2016-01-19 20:04:21,312 INFO [AsyncDispatcher event handler]
>>> > org.apache.hadoop.mapreduce.v2.app.job.impl.TaskAttemptImpl:
>>> > attempt_1453244277886_0001_m_000000_1 TaskAttempt Transitioned from
>>> > FAIL_TASK_CLEANUP to FAILED
>>> > 2016-01-19 20:04:21,313 INFO [AsyncDispatcher event handler]
>>> > org.apache.hadoop.yarn.util.RackResolver: Resolved localhost to
>>> > /default-rack
>>> > 2016-01-19 20:04:21,313 INFO [Thread-50]
>>> > org.apache.hadoop.mapreduce.v2.app.rm.RMContainerRequestor: 2 failures
>>> on
>>> > node localhost
>>> > 2016-01-19 20:04:21,313 INFO [AsyncDispatcher event handler]
>>> > org.apache.hadoop.mapreduce.v2.app.job.impl.TaskAttemptImpl:
>>> > attempt_1453244277886_0001_m_000000_2 TaskAttempt Transitioned from
>>> NEW to
>>> > UNASSIGNED
>>> > 2016-01-19 20:04:21,314 INFO [Thread-50]
>>> > org.apache.hadoop.mapreduce.v2.app.rm.RMContainerAllocator: Added
>>> > attempt_1453244277886_0001_m_000000_2 to list of failed maps
>>> > 2016-01-19 20:04:22,297 INFO [RMCommunicator Allocator]
>>> > org.apache.hadoop.mapreduce.v2.app.rm.RMContainerAllocator: Before
>>> > Scheduling: PendingReds:1 ScheduledMaps:1 ScheduledReds:0
>>> AssignedMaps:0
>>> > AssignedReds:0 CompletedMaps:0 CompletedReds:0 ContAlloc:2 ContRel:0
>>> > HostLocal:1 RackLocal:0
>>> > 2016-01-19 20:04:22,304 INFO [RMCommunicator Allocator]
>>> > org.apache.hadoop.mapreduce.v2.app.rm.RMContainerRequestor:
>>> getResources()
>>> > for application_1453244277886_0001: ask=1 release= 0 newContainers=0
>>> > finishedContainers=0 resourcelimit=<memory:768, vCores:-1> knownNMs=1
>>> > 2016-01-19 20:04:22,305 INFO [RMCommunicator Allocator]
>>> > org.apache.hadoop.mapreduce.v2.app.rm.RMContainerAllocator:
>>> Recalculating
>>> > schedule, headroom=768
>>> > 2016-01-19 20:04:22,305 INFO [RMCommunicator Allocator]
>>> > org.apache.hadoop.mapreduce.v2.app.rm.RMContainerAllocator: Reduce slow
>>> > start threshold not met. completedMapsForReduceSlowstart 1
>>> > 2016-01-19 20:04:23,316 INFO [RMCommunicator Allocator]
>>> > org.apache.hadoop.mapreduce.v2.app.rm.RMContainerAllocator: Got
>>> allocated
>>> > containers 1
>>> > 2016-01-19 20:04:23,316 INFO [RMCommunicator Allocator]
>>> > org.apache.hadoop.mapreduce.v2.app.rm.RMContainerAllocator: Assigning
>>> > container Container: [ContainerId:
>>> container_1453244277886_0001_01_000004,
>>> > NodeId: localhost:35711, NodeHttpAddress: localhost:8042, Resource:
>>> > <memory:512, vCores:1>, Priority: 5, Token: Token { kind:
>>> ContainerToken,
>>> > service: 127.0.0.1:35711 }, ] to fast fail map
>>> > 2016-01-19 20:04:23,317 INFO [RMCommunicator Allocator]
>>> > org.apache.hadoop.mapreduce.v2.app.rm.RMContainerAllocator: Assigned
>>> from
>>> > earlierFailedMaps
>>> > 2016-01-19 20:04:23,317 INFO [RMCommunicator Allocator]
>>> > org.apache.hadoop.mapreduce.v2.app.rm.RMContainerAllocator: Assigned
>>> > container container_1453244277886_0001_01_000004 to
>>> > attempt_1453244277886_0001_m_000000_2
>>> > 2016-01-19 20:04:23,318 INFO [RMCommunicator Allocator]
>>> > org.apache.hadoop.mapreduce.v2.app.rm.RMContainerAllocator:
>>> Recalculating
>>> > schedule, headroom=768
>>> > 2016-01-19 20:04:23,318 INFO [RMCommunicator Allocator]
>>> > org.apache.hadoop.mapreduce.v2.app.rm.RMContainerAllocator: Reduce slow
>>> > start threshold not met. completedMapsForReduceSlowstart 1
>>> > 2016-01-19 20:04:23,318 INFO [RMCommunicator Allocator]
>>> > org.apache.hadoop.mapreduce.v2.app.rm.RMContainerAllocator: After
>>> > Scheduling: PendingReds:1 ScheduledMaps:0 ScheduledReds:0
>>> AssignedMaps:1
>>> > AssignedReds:0 CompletedMaps:0 CompletedReds:0 ContAlloc:3 ContRel:0
>>> > HostLocal:1 RackLocal:0
>>> > 2016-01-19 20:04:23,318 INFO [AsyncDispatcher event handler]
>>> > org.apache.hadoop.yarn.util.RackResolver: Resolved localhost to
>>> > /default-rack
>>> > 2016-01-19 20:04:23,320 INFO [AsyncDispatcher event handler]
>>> > org.apache.hadoop.mapreduce.v2.app.job.impl.TaskAttemptImpl:
>>> > attempt_1453244277886_0001_m_000000_2 TaskAttempt Transitioned from
>>> > UNASSIGNED to ASSIGNED
>>> > 2016-01-19 20:04:23,323 INFO [ContainerLauncher #4]
>>> > org.apache.hadoop.mapreduce.v2.app.launcher.ContainerLauncherImpl:
>>> > Processing the event EventType: CONTAINER_REMOTE_LAUNCH for container
>>> > container_1453244277886_0001_01_000004 taskAttempt
>>> > attempt_1453244277886_0001_m_000000_2
>>> > 2016-01-19 20:04:23,323 INFO [ContainerLauncher #4]
>>> > org.apache.hadoop.mapreduce.v2.app.launcher.ContainerLauncherImpl:
>>> Launching
>>> > attempt_1453244277886_0001_m_000000_2
>>> > 2016-01-19 20:04:23,334 INFO [ContainerLauncher #4]
>>> > org.apache.hadoop.mapreduce.v2.app.launcher.ContainerLauncherImpl:
>>> Shuffle
>>> > port returned by ContainerManager for
>>> attempt_1453244277886_0001_m_000000_2
>>> > : 13562
>>> > 2016-01-19 20:04:23,335 INFO [AsyncDispatcher event handler]
>>> > org.apache.hadoop.mapreduce.v2.app.job.impl.TaskAttemptImpl:
>>> TaskAttempt:
>>> > [attempt_1453244277886_0001_m_000000_2] using containerId:
>>> > [container_1453244277886_0001_01_000004 on NM: [localhost:35711]
>>> > 2016-01-19 20:04:23,335 INFO [AsyncDispatcher event handler]
>>> > org.apache.hadoop.mapreduce.v2.app.job.impl.TaskAttemptImpl:
>>> > attempt_1453244277886_0001_m_000000_2 TaskAttempt Transitioned from
>>> ASSIGNED
>>> > to RUNNING
>>> > 2016-01-19 20:04:23,335 INFO [AsyncDispatcher event handler]
>>> > org.apache.hadoop.mapreduce.v2.app.speculate.DefaultSpeculator:
>>> > ATTEMPT_START task_1453244277886_0001_m_000000
>>> > 2016-01-19 20:04:24,326 INFO [RMCommunicator Allocator]
>>> > org.apache.hadoop.mapreduce.v2.app.rm.RMContainerRequestor:
>>> getResources()
>>> > for application_1453244277886_0001: ask=1 release= 0 newContainers=0
>>> > finishedContainers=1 resourcelimit=<memory:768, vCores:-1> knownNMs=1
>>> > 2016-01-19 20:04:24,327 INFO [RMCommunicator Allocator]
>>> > org.apache.hadoop.mapreduce.v2.app.rm.RMContainerAllocator: Received
>>> > completed container container_1453244277886_0001_01_000004
>>> > 2016-01-19 20:04:24,327 INFO [RMCommunicator Allocator]
>>> > org.apache.hadoop.mapreduce.v2.app.rm.RMContainerAllocator:
>>> Recalculating
>>> > schedule, headroom=768
>>> > 2016-01-19 20:04:24,327 INFO [RMCommunicator Allocator]
>>> > org.apache.hadoop.mapreduce.v2.app.rm.RMContainerAllocator: Reduce slow
>>> > start threshold not met. completedMapsForReduceSlowstart 1
>>> > 2016-01-19 20:04:24,328 INFO [RMCommunicator Allocator]
>>> > org.apache.hadoop.mapreduce.v2.app.rm.RMContainerAllocator: After
>>> > Scheduling: PendingReds:1 ScheduledMaps:0 ScheduledReds:0
>>> AssignedMaps:0
>>> > AssignedReds:0 CompletedMaps:0 CompletedReds:0 ContAlloc:3 ContRel:0
>>> > HostLocal:1 RackLocal:0
>>> > 2016-01-19 20:04:24,328 INFO [AsyncDispatcher event handler]
>>> > org.apache.hadoop.mapreduce.v2.app.job.impl.TaskAttemptImpl:
>>> > attempt_1453244277886_0001_m_000000_2 TaskAttempt Transitioned from
>>> RUNNING
>>> > to FAIL_CONTAINER_CLEANUP
>>> > 2016-01-19 20:04:24,328 INFO [AsyncDispatcher event handler]
>>> > org.apache.hadoop.mapreduce.v2.app.job.impl.TaskAttemptImpl:
>>> Diagnostics
>>> > report from attempt_1453244277886_0001_m_000000_2: Exception from
>>> > container-launch: org.apache.hadoop.util.Shell$ExitCodeException:
>>> > org.apache.hadoop.util.Shell$ExitCodeException:
>>> >     at org.apache.hadoop.util.Shell.runCommand(Shell.java:505)
>>> >     at org.apache.hadoop.util.Shell.run(Shell.java:418)
>>> >     at
>>> >
>>> org.apache.hadoop.util.Shell$ShellCommandExecutor.execute(Shell.java:650)
>>> >     at
>>> >
>>> org.apache.hadoop.yarn.server.nodemanager.DefaultContainerExecutor.launchContainer(DefaultContainerExecutor.java:195)
>>> >     at
>>> >
>>> org.apache.hadoop.yarn.server.nodemanager.containermanager.launcher.ContainerLaunch.call(ContainerLaunch.java:300)
>>> >     at
>>> >
>>> org.apache.hadoop.yarn.server.nodemanager.containermanager.launcher.ContainerLaunch.call(ContainerLaunch.java:81)
>>> >     at java.util.concurrent.FutureTask.run(FutureTask.java:262)
>>> >     at
>>> >
>>> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
>>> >     at
>>> >
>>> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
>>> >     at java.lang.Thread.run(Thread.java:745)
>>> >
>>> >
>>> > Container exited with a non-zero exit code 1
>>> >
>>> > 2016-01-19 20:04:24,331 INFO [ContainerLauncher #5]
>>> > org.apache.hadoop.mapreduce.v2.app.launcher.ContainerLauncherImpl:
>>> > Processing the event EventType: CONTAINER_REMOTE_CLEANUP for container
>>> > container_1453244277886_0001_01_000004 taskAttempt
>>> > attempt_1453244277886_0001_m_000000_2
>>> > 2016-01-19 20:04:24,331 INFO [ContainerLauncher #5]
>>> > org.apache.hadoop.mapreduce.v2.app.launcher.ContainerLauncherImpl:
>>> KILLING
>>> > attempt_1453244277886_0001_m_000000_2
>>> > 2016-01-19 20:04:24,335 INFO [AsyncDispatcher event handler]
>>> > org.apache.hadoop.mapreduce.v2.app.job.impl.TaskAttemptImpl:
>>> > attempt_1453244277886_0001_m_000000_2 TaskAttempt Transitioned from
>>> > FAIL_CONTAINER_CLEANUP to FAIL_TASK_CLEANUP
>>> > 2016-01-19 20:04:24,336 INFO [CommitterEvent Processor #3]
>>> > org.apache.hadoop.mapreduce.v2.app.commit.CommitterEventHandler:
>>> Processing
>>> > the event EventType: TASK_ABORT
>>> > 2016-01-19 20:04:24,340 WARN [CommitterEvent Processor #3]
>>> > org.apache.hadoop.mapreduce.lib.output.FileOutputCommitter: Could not
>>> delete
>>> >
>>> hdfs://hdnode01:54310/user/hduser/output/wordcount/_temporary/1/_temporary/attempt_1453244277886_0001_m_000000_2
>>> > 2016-01-19 20:04:24,341 INFO [AsyncDispatcher event handler]
>>> > org.apache.hadoop.mapreduce.v2.app.job.impl.TaskAttemptImpl:
>>> > attempt_1453244277886_0001_m_000000_2 TaskAttempt Transitioned from
>>> > FAIL_TASK_CLEANUP to FAILED
>>> > 2016-01-19 20:04:24,341 INFO [AsyncDispatcher event handler]
>>> > org.apache.hadoop.yarn.util.RackResolver: Resolved localhost to
>>> > /default-rack
>>> > 2016-01-19 20:04:24,342 INFO [Thread-50]
>>> > org.apache.hadoop.mapreduce.v2.app.rm.RMContainerRequestor: 3 failures
>>> on
>>> > node localhost
>>> > 2016-01-19 20:04:24,342 INFO [Thread-50]
>>> > org.apache.hadoop.mapreduce.v2.app.rm.RMContainerRequestor:
>>> Blacklisted host
>>> > localhost
>>> > 2016-01-19 20:04:24,342 INFO [AsyncDispatcher event handler]
>>> > org.apache.hadoop.mapreduce.v2.app.job.impl.TaskAttemptImpl:
>>> > attempt_1453244277886_0001_m_000000_3 TaskAttempt Transitioned from
>>> NEW to
>>> > UNASSIGNED
>>> > 2016-01-19 20:04:24,343 INFO [Thread-50]
>>> > org.apache.hadoop.mapreduce.v2.app.rm.RMContainerAllocator: Added
>>> > attempt_1453244277886_0001_m_000000_3 to list of failed maps
>>> > 2016-01-19 20:04:25,328 INFO [RMCommunicator Allocator]
>>> > org.apache.hadoop.mapreduce.v2.app.rm.RMContainerAllocator: Before
>>> > Scheduling: PendingReds:1 ScheduledMaps:1 ScheduledReds:0
>>> AssignedMaps:0
>>> > AssignedReds:0 CompletedMaps:0 CompletedReds:0 ContAlloc:3 ContRel:0
>>> > HostLocal:1 RackLocal:0
>>> > 2016-01-19 20:04:25,336 INFO [RMCommunicator Allocator]
>>> > org.apache.hadoop.mapreduce.v2.app.rm.RMContainerRequestor:
>>> getResources()
>>> > for application_1453244277886_0001: ask=1 release= 0 newContainers=0
>>> > finishedContainers=0 resourcelimit=<memory:768, vCores:-1> knownNMs=1
>>> > 2016-01-19 20:04:25,336 INFO [RMCommunicator Allocator]
>>> > org.apache.hadoop.mapreduce.v2.app.rm.RMContainerRequestor: Update the
>>> > blacklist for application_1453244277886_0001: blacklistAdditions=1
>>> > blacklistRemovals=0
>>> > 2016-01-19 20:04:25,337 INFO [RMCommunicator Allocator]
>>> > org.apache.hadoop.mapreduce.v2.app.rm.RMContainerRequestor: Ignore
>>> > blacklisting set to true. Known: 1, Blacklisted: 1, 100%
>>> > 2016-01-19 20:04:25,337 INFO [RMCommunicator Allocator]
>>> > org.apache.hadoop.mapreduce.v2.app.rm.RMContainerAllocator:
>>> Recalculating
>>> > schedule, headroom=768
>>> > 2016-01-19 20:04:25,337 INFO [RMCommunicator Allocator]
>>> > org.apache.hadoop.mapreduce.v2.app.rm.RMContainerAllocator: Reduce slow
>>> > start threshold not met. completedMapsForReduceSlowstart 1
>>> > 2016-01-19 20:04:26,342 INFO [RMCommunicator Allocator]
>>> > org.apache.hadoop.mapreduce.v2.app.rm.RMContainerRequestor: Update the
>>> > blacklist for application_1453244277886_0001: blacklistAdditions=0
>>> > blacklistRemovals=1
>>> > 2016-01-19 20:04:26,342 INFO [RMCommunicator Allocator]
>>> > org.apache.hadoop.mapreduce.v2.app.rm.RMContainerAllocator:
>>> Recalculating
>>> > schedule, headroom=768
>>> > 2016-01-19 20:04:26,342 INFO [RMCommunicator Allocator]
>>> > org.apache.hadoop.mapreduce.v2.app.rm.RMContainerAllocator: Reduce slow
>>> > start threshold not met. completedMapsForReduceSlowstart 1
>>> > 2016-01-19 20:04:27,351 INFO [RMCommunicator Allocator]
>>> > org.apache.hadoop.mapreduce.v2.app.rm.RMContainerAllocator: Got
>>> allocated
>>> > containers 1
>>> > 2016-01-19 20:04:27,352 INFO [RMCommunicator Allocator]
>>> > org.apache.hadoop.mapreduce.v2.app.rm.RMContainerAllocator: Assigning
>>> > container Container: [ContainerId:
>>> container_1453244277886_0001_01_000005,
>>> > NodeId: localhost:35711, NodeHttpAddress: localhost:8042, Resource:
>>> > <memory:512, vCores:1>, Priority: 5, Token: Token { kind:
>>> ContainerToken,
>>> > service: 127.0.0.1:35711 }, ] to fast fail map
>>> > 2016-01-19 20:04:27,352 INFO [RMCommunicator Allocator]
>>> > org.apache.hadoop.mapreduce.v2.app.rm.RMContainerAllocator: Assigned
>>> from
>>> > earlierFailedMaps
>>> > 2016-01-19 20:04:27,352 INFO [RMCommunicator Allocator]
>>> > org.apache.hadoop.mapreduce.v2.app.rm.RMContainerAllocator: Assigned
>>> > container container_1453244277886_0001_01_000005 to
>>> > attempt_1453244277886_0001_m_000000_3
>>> > 2016-01-19 20:04:27,353 INFO [RMCommunicator Allocator]
>>> > org.apache.hadoop.mapreduce.v2.app.rm.RMContainerAllocator:
>>> Recalculating
>>> > schedule, headroom=768
>>> > 2016-01-19 20:04:27,353 INFO [RMCommunicator Allocator]
>>> > org.apache.hadoop.mapreduce.v2.app.rm.RMContainerAllocator: Reduce slow
>>> > start threshold not met. completedMapsForReduceSlowstart 1
>>> > 2016-01-19 20:04:27,353 INFO [AsyncDispatcher event handler]
>>> > org.apache.hadoop.yarn.util.RackResolver: Resolved localhost to
>>> > /default-rack
>>> > 2016-01-19 20:04:27,353 INFO [RMCommunicator Allocator]
>>> > org.apache.hadoop.mapreduce.v2.app.rm.RMContainerAllocator: After
>>> > Scheduling: PendingReds:1 ScheduledMaps:0 ScheduledReds:0
>>> AssignedMaps:1
>>> > AssignedReds:0 CompletedMaps:0 CompletedReds:0 ContAlloc:4 ContRel:0
>>> > HostLocal:1 RackLocal:0
>>> > 2016-01-19 20:04:27,354 INFO [AsyncDispatcher event handler]
>>> > org.apache.hadoop.mapreduce.v2.app.job.impl.TaskAttemptImpl:
>>> > attempt_1453244277886_0001_m_000000_3 TaskAttempt Transitioned from
>>> > UNASSIGNED to ASSIGNED
>>> > 2016-01-19 20:04:27,355 INFO [ContainerLauncher #6]
>>> > org.apache.hadoop.mapreduce.v2.app.launcher.ContainerLauncherImpl:
>>> > Processing the event EventType: CONTAINER_REMOTE_LAUNCH for container
>>> > container_1453244277886_0001_01_000005 taskAttempt
>>> > attempt_1453244277886_0001_m_000000_3
>>> > 2016-01-19 20:04:27,355 INFO [ContainerLauncher #6]
>>> > org.apache.hadoop.mapreduce.v2.app.launcher.ContainerLauncherImpl:
>>> Launching
>>> > attempt_1453244277886_0001_m_000000_3
>>> > 2016-01-19 20:04:27,365 INFO [ContainerLauncher #6]
>>> > org.apache.hadoop.mapreduce.v2.app.launcher.ContainerLauncherImpl:
>>> Shuffle
>>> > port returned by ContainerManager for
>>> attempt_1453244277886_0001_m_000000_3
>>> > : 13562
>>> > 2016-01-19 20:04:27,365 INFO [AsyncDispatcher event handler]
>>> > org.apache.hadoop.mapreduce.v2.app.job.impl.TaskAttemptImpl:
>>> TaskAttempt:
>>> > [attempt_1453244277886_0001_m_000000_3] using containerId:
>>> > [container_1453244277886_0001_01_000005 on NM: [localhost:35711]
>>> > 2016-01-19 20:04:27,366 INFO [AsyncDispatcher event handler]
>>> > org.apache.hadoop.mapreduce.v2.app.job.impl.TaskAttemptImpl:
>>> > attempt_1453244277886_0001_m_000000_3 TaskAttempt Transitioned from
>>> ASSIGNED
>>> > to RUNNING
>>> > 2016-01-19 20:04:27,366 INFO [AsyncDispatcher event handler]
>>> > org.apache.hadoop.mapreduce.v2.app.speculate.DefaultSpeculator:
>>> > ATTEMPT_START task_1453244277886_0001_m_000000
>>> > 2016-01-19 20:04:28,361 INFO [RMCommunicator Allocator]
>>> > org.apache.hadoop.mapreduce.v2.app.rm.RMContainerRequestor:
>>> getResources()
>>> > for application_1453244277886_0001: ask=1 release= 0 newContainers=0
>>> > finishedContainers=1 resourcelimit=<memory:768, vCores:-1> knownNMs=1
>>> > 2016-01-19 20:04:28,361 INFO [RMCommunicator Allocator]
>>> > org.apache.hadoop.mapreduce.v2.app.rm.RMContainerAllocator: Received
>>> > completed container container_1453244277886_0001_01_000005
>>> > 2016-01-19 20:04:28,361 INFO [RMCommunicator Allocator]
>>> > org.apache.hadoop.mapreduce.v2.app.rm.RMContainerAllocator:
>>> Recalculating
>>> > schedule, headroom=768
>>> > 2016-01-19 20:04:28,361 INFO [RMCommunicator Allocator]
>>> > org.apache.hadoop.mapreduce.v2.app.rm.RMContainerAllocator: Reduce slow
>>> > start threshold not met. completedMapsForReduceSlowstart 1
>>> > 2016-01-19 20:04:28,362 INFO [RMCommunicator Allocator]
>>> > org.apache.hadoop.mapreduce.v2.app.rm.RMContainerAllocator: After
>>> > Scheduling: PendingReds:1 ScheduledMaps:0 ScheduledReds:0
>>> AssignedMaps:0
>>> > AssignedReds:0 CompletedMaps:0 CompletedReds:0 ContAlloc:4 ContRel:0
>>> > HostLocal:1 RackLocal:0
>>> > 2016-01-19 20:04:28,362 INFO [AsyncDispatcher event handler]
>>> > org.apache.hadoop.mapreduce.v2.app.job.impl.TaskAttemptImpl:
>>> > attempt_1453244277886_0001_m_000000_3 TaskAttempt Transitioned from
>>> RUNNING
>>> > to FAIL_CONTAINER_CLEANUP
>>> > 2016-01-19 20:04:28,362 INFO [AsyncDispatcher event handler]
>>> > org.apache.hadoop.mapreduce.v2.app.job.impl.TaskAttemptImpl:
>>> Diagnostics
>>> > report from attempt_1453244277886_0001_m_000000_3: Exception from
>>> > container-launch: org.apache.hadoop.util.Shell$ExitCodeException:
>>> > org.apache.hadoop.util.Shell$ExitCodeException:
>>> >     at org.apache.hadoop.util.Shell.runCommand(Shell.java:505)
>>> >     at org.apache.hadoop.util.Shell.run(Shell.java:418)
>>> >     at
>>> >
>>> org.apache.hadoop.util.Shell$ShellCommandExecutor.execute(Shell.java:650)
>>> >     at
>>> >
>>> org.apache.hadoop.yarn.server.nodemanager.DefaultContainerExecutor.launchContainer(DefaultContainerExecutor.java:195)
>>> >     at
>>> >
>>> org.apache.hadoop.yarn.server.nodemanager.containermanager.launcher.ContainerLaunch.call(ContainerLaunch.java:300)
>>> >     at
>>> >
>>> org.apache.hadoop.yarn.server.nodemanager.containermanager.launcher.ContainerLaunch.call(ContainerLaunch.java:81)
>>> >     at java.util.concurrent.FutureTask.run(FutureTask.java:262)
>>> >     at
>>> >
>>> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
>>> >     at
>>> >
>>> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
>>> >     at java.lang.Thread.run(Thread.java:745)
>>> >
>>> >
>>> > Container exited with a non-zero exit code 1
>>> >
>>> > 2016-01-19 20:04:28,364 INFO [ContainerLauncher #7]
>>> > org.apache.hadoop.mapreduce.v2.app.launcher.ContainerLauncherImpl:
>>> > Processing the event EventType: CONTAINER_REMOTE_CLEANUP for container
>>> > container_1453244277886_0001_01_000005 taskAttempt
>>> > attempt_1453244277886_0001_m_000000_3
>>> > 2016-01-19 20:04:28,365 INFO [ContainerLauncher #7]
>>> > org.apache.hadoop.mapreduce.v2.app.launcher.ContainerLauncherImpl:
>>> KILLING
>>> > attempt_1453244277886_0001_m_000000_3
>>> > 2016-01-19 20:04:28,373 INFO [AsyncDispatcher event handler]
>>> > org.apache.hadoop.mapreduce.v2.app.job.impl.TaskAttemptImpl:
>>> > attempt_1453244277886_0001_m_000000_3 TaskAttempt Transitioned from
>>> > FAIL_CONTAINER_CLEANUP to FAIL_TASK_CLEANUP
>>> > 2016-01-19 20:04:28,374 INFO [CommitterEvent Processor #4]
>>> > org.apache.hadoop.mapreduce.v2.app.commit.CommitterEventHandler:
>>> Processing
>>> > the event EventType: TASK_ABORT
>>> > 2016-01-19 20:04:28,377 WARN [CommitterEvent Processor #4]
>>> > org.apache.hadoop.mapreduce.lib.output.FileOutputCommitter: Could not
>>> delete
>>> >
>>> hdfs://hdnode01:54310/user/hduser/output/wordcount/_temporary/1/_temporary/attempt_1453244277886_0001_m_000000_3
>>> > 2016-01-19 20:04:28,378 INFO [AsyncDispatcher event handler]
>>> > org.apache.hadoop.mapreduce.v2.app.job.impl.TaskAttemptImpl:
>>> > attempt_1453244277886_0001_m_000000_3 TaskAttempt Transitioned from
>>> > FAIL_TASK_CLEANUP to FAILED
>>> > 2016-01-19 20:04:28,380 INFO [AsyncDispatcher event handler]
>>> > org.apache.hadoop.mapreduce.v2.app.job.impl.TaskImpl:
>>> > task_1453244277886_0001_m_000000 Task Transitioned from RUNNING to
>>> FAILED
>>> > 2016-01-19 20:04:28,381 INFO [AsyncDispatcher event handler]
>>> > org.apache.hadoop.mapreduce.v2.app.job.impl.JobImpl: Num completed
>>> Tasks: 1
>>> > 2016-01-19 20:04:28,381 INFO [AsyncDispatcher event handler]
>>> > org.apache.hadoop.mapreduce.v2.app.job.impl.JobImpl: Job failed as
>>> tasks
>>> > failed. failedMaps:1 failedReduces:0
>>> > 2016-01-19 20:04:28,382 INFO [AsyncDispatcher event handler]
>>> > org.apache.hadoop.mapreduce.v2.app.job.impl.JobImpl:
>>> > job_1453244277886_0001Job Transitioned from RUNNING to FAIL_WAIT
>>> > 2016-01-19 20:04:28,383 INFO [AsyncDispatcher event handler]
>>> > org.apache.hadoop.mapreduce.v2.app.job.impl.TaskImpl:
>>> > task_1453244277886_0001_r_000000 Task Transitioned from SCHEDULED to
>>> > KILL_WAIT
>>> > 2016-01-19 20:04:28,383 INFO [AsyncDispatcher event handler]
>>> > org.apache.hadoop.mapreduce.v2.app.job.impl.TaskAttemptImpl:
>>> > attempt_1453244277886_0001_r_000000_0 TaskAttempt Transitioned from
>>> > UNASSIGNED to KILLED
>>> > 2016-01-19 20:04:28,383 INFO [Thread-50]
>>> > org.apache.hadoop.mapreduce.v2.app.rm.RMContainerAllocator: Processing
>>> the
>>> > event EventType: CONTAINER_DEALLOCATE
>>> > 2016-01-19 20:04:28,383 ERROR [Thread-50]
>>> > org.apache.hadoop.mapreduce.v2.app.rm.RMContainerAllocator: Could not
>>> > deallocate container for task attemptId
>>> > attempt_1453244277886_0001_r_000000_0
>>> > 2016-01-19 20:04:28,383 INFO [AsyncDispatcher event handler]
>>> > org.apache.hadoop.mapreduce.v2.app.job.impl.TaskImpl:
>>> > task_1453244277886_0001_r_000000 Task Transitioned from KILL_WAIT to
>>> KILLED
>>> > 2016-01-19 20:04:28,384 INFO [AsyncDispatcher event handler]
>>> > org.apache.hadoop.mapreduce.v2.app.job.impl.JobImpl:
>>> > job_1453244277886_0001Job Transitioned from FAIL_WAIT to FAIL_ABORT
>>> > 2016-01-19 20:04:28,390 INFO [CommitterEvent Processor #0]
>>> > org.apache.hadoop.mapreduce.v2.app.commit.CommitterEventHandler:
>>> Processing
>>> > the event EventType: JOB_ABORT
>>> > 2016-01-19 20:04:28,432 INFO [AsyncDispatcher event handler]
>>> > org.apache.hadoop.mapreduce.v2.app.job.impl.JobImpl:
>>> > job_1453244277886_0001Job Transitioned from FAIL_ABORT to FAILED
>>> > 2016-01-19 20:04:28,433 INFO [Thread-61]
>>> > org.apache.hadoop.mapreduce.v2.app.MRAppMaster: We are finishing
>>> cleanly so
>>> > this is the last retry
>>> > 2016-01-19 20:04:28,433 INFO [Thread-61]
>>> > org.apache.hadoop.mapreduce.v2.app.MRAppMaster: Notify RMCommunicator
>>> > isAMLastRetry: true
>>> > 2016-01-19 20:04:28,433 INFO [Thread-61]
>>> > org.apache.hadoop.mapreduce.v2.app.rm.RMContainerAllocator:
>>> RMCommunicator
>>> > notified that shouldUnregistered is: true
>>> > 2016-01-19 20:04:28,433 INFO [Thread-61]
>>> > org.apache.hadoop.mapreduce.v2.app.MRAppMaster: Notify JHEH
>>> isAMLastRetry:
>>> > true
>>> > 2016-01-19 20:04:28,433 INFO [Thread-61]
>>> > org.apache.hadoop.mapreduce.jobhistory.JobHistoryEventHandler:
>>> > JobHistoryEventHandler notified that forceJobCompletion is true
>>> > 2016-01-19 20:04:28,434 INFO [Thread-61]
>>> > org.apache.hadoop.mapreduce.v2.app.MRAppMaster: Calling stop for all
>>> the
>>> > services
>>> > 2016-01-19 20:04:28,435 INFO [Thread-61]
>>> > org.apache.hadoop.mapreduce.jobhistory.JobHistoryEventHandler: Stopping
>>> > JobHistoryEventHandler. Size of the outstanding queue size is 0
>>> > 2016-01-19 20:04:29,362 INFO [RMCommunicator Allocator]
>>> > org.apache.hadoop.mapreduce.v2.app.rm.RMContainerAllocator: Before
>>> > Scheduling: PendingReds:1 ScheduledMaps:0 ScheduledReds:0
>>> AssignedMaps:0
>>> > AssignedReds:0 CompletedMaps:1 CompletedReds:0 ContAlloc:4 ContRel:0
>>> > HostLocal:1 RackLocal:0
>>> > 2016-01-19 20:04:29,362 INFO [eventHandlingThread]
>>> > org.apache.hadoop.mapreduce.jobhistory.JobHistoryEventHandler: Copying
>>> >
>>> hdfs://hdnode01:54310/tmp/hadoop-yarn/staging/hduser/.staging/job_1453244277886_0001/job_1453244277886_0001_1.jhist
>>> > to
>>> >
>>> hdfs://hdnode01:54310/tmp/hadoop-yarn/staging/history/done_intermediate/hduser/job_1453244277886_0001-1453244648033-hduser-word+count-1453244668381-0-0-FAILED-default-1453244654166.jhist_tmp
>>> > 2016-01-19 20:04:29,366 INFO [RMCommunicator Allocator]
>>> > org.apache.hadoop.mapreduce.v2.app.rm.RMContainerAllocator:
>>> Recalculating
>>> > schedule, headroom=768
>>> > 2016-01-19 20:04:29,366 INFO [RMCommunicator Allocator]
>>> > org.apache.hadoop.mapreduce.v2.app.rm.RMContainerAllocator: Reduce slow
>>> > start threshold reached. Scheduling reduces.
>>> > 2016-01-19 20:04:29,366 INFO [RMCommunicator Allocator]
>>> > org.apache.hadoop.mapreduce.v2.app.rm.RMContainerAllocator: All maps
>>> > assigned. Ramping up all remaining reduces:1
>>> > 2016-01-19 20:04:29,366 INFO [RMCommunicator Allocator]
>>> > org.apache.hadoop.mapreduce.v2.app.rm.RMContainerAllocator: After
>>> > Scheduling: PendingReds:0 ScheduledMaps:0 ScheduledReds:1
>>> AssignedMaps:0
>>> > AssignedReds:0 CompletedMaps:1 CompletedReds:0 ContAlloc:4 ContRel:0
>>> > HostLocal:1 RackLocal:0
>>> > 2016-01-19 20:04:29,544 INFO [eventHandlingThread]
>>> > org.apache.hadoop.mapreduce.jobhistory.JobHistoryEventHandler: Copied
>>> to
>>> > done location:
>>> >
>>> hdfs://hdnode01:54310/tmp/hadoop-yarn/staging/history/done_intermediate/hduser/job_1453244277886_0001-1453244648033-hduser-word+count-1453244668381-0-0-FAILED-default-1453244654166.jhist_tmp
>>> > 2016-01-19 20:04:29,598 INFO [eventHandlingThread]
>>> > org.apache.hadoop.mapreduce.jobhistory.JobHistoryEventHandler: Copying
>>> >
>>> hdfs://hdnode01:54310/tmp/hadoop-yarn/staging/hduser/.staging/job_1453244277886_0001/job_1453244277886_0001_1_conf.xml
>>> > to
>>> >
>>> hdfs://hdnode01:54310/tmp/hadoop-yarn/staging/history/done_intermediate/hduser/job_1453244277886_0001_conf.xml_tmp
>>> > 2016-01-19 20:04:29,801 INFO [eventHandlingThread]
>>> > org.apache.hadoop.mapreduce.jobhistory.JobHistoryEventHandler: Copied
>>> to
>>> > done location:
>>> >
>>> hdfs://hdnode01:54310/tmp/hadoop-yarn/staging/history/done_intermediate/hduser/job_1453244277886_0001_conf.xml_tmp
>>> > 2016-01-19 20:04:29,907 INFO [eventHandlingThread]
>>> > org.apache.hadoop.mapreduce.jobhistory.JobHistoryEventHandler: Moved
>>> tmp to
>>> > done:
>>> >
>>> hdfs://hdnode01:54310/tmp/hadoop-yarn/staging/history/done_intermediate/hduser/job_1453244277886_0001.summary_tmp
>>> > to
>>> >
>>> hdfs://hdnode01:54310/tmp/hadoop-yarn/staging/history/done_intermediate/hduser/job_1453244277886_0001.summary
>>> > 2016-01-19 20:04:30,008 INFO [eventHandlingThread]
>>> > org.apache.hadoop.mapreduce.jobhistory.JobHistoryEventHandler: Moved
>>> tmp to
>>> > done:
>>> >
>>> hdfs://hdnode01:54310/tmp/hadoop-yarn/staging/history/done_intermediate/hduser/job_1453244277886_0001_conf.xml_tmp
>>> > to
>>> >
>>> hdfs://hdnode01:54310/tmp/hadoop-yarn/staging/history/done_intermediate/hduser/job_1453244277886_0001_conf.xml
>>> > 2016-01-19 20:04:30,070 INFO [eventHandlingThread]
>>> > org.apache.hadoop.mapreduce.jobhistory.JobHistoryEventHandler: Moved
>>> tmp to
>>> > done:
>>> >
>>> hdfs://hdnode01:54310/tmp/hadoop-yarn/staging/history/done_intermediate/hduser/job_1453244277886_0001-1453244648033-hduser-word+count-1453244668381-0-0-FAILED-default-1453244654166.jhist_tmp
>>> > to
>>> >
>>> hdfs://hdnode01:54310/tmp/hadoop-yarn/staging/history/done_intermediate/hduser/job_1453244277886_0001-1453244648033-hduser-word+count-1453244668381-0-0-FAILED-default-1453244654166.jhist
>>> > 2016-01-19 20:04:30,071 INFO [Thread-61]
>>> > org.apache.hadoop.mapreduce.jobhistory.JobHistoryEventHandler: Stopped
>>> > JobHistoryEventHandler. super.stop()
>>> > 2016-01-19 20:04:30,078 INFO [Thread-61]
>>> > org.apache.hadoop.mapreduce.v2.app.rm.RMContainerAllocator: Setting job
>>> > diagnostics to Task failed task_1453244277886_0001_m_000000
>>> > Job failed as tasks failed. failedMaps:1 failedReduces:0
>>> >
>>> > 2016-01-19 20:04:30,080 INFO [Thread-61]
>>> > org.apache.hadoop.mapreduce.v2.app.rm.RMContainerAllocator: History
>>> url is
>>> > http://localhost:19888/jobhistory/job/job_1453244277886_0001
>>> > 2016-01-19 20:04:30,094 INFO [Thread-61]
>>> > org.apache.hadoop.mapreduce.v2.app.rm.RMContainerAllocator: Waiting for
>>> > application to be successfully unregistered.
>>> > 2016-01-19 20:04:31,099 INFO [Thread-61]
>>> > org.apache.hadoop.mapreduce.v2.app.rm.RMContainerAllocator: Final
>>> Stats:
>>> > PendingReds:0 ScheduledMaps:0 ScheduledReds:1 AssignedMaps:0
>>> AssignedReds:0
>>> > CompletedMaps:1 CompletedReds:0 ContAlloc:4 ContRel:0 HostLocal:1
>>> > RackLocal:0
>>> > 2016-01-19 20:04:31,104 INFO [Thread-61]
>>> > org.apache.hadoop.mapreduce.v2.app.MRAppMaster: Deleting staging
>>> directory
>>> > hdfs://hdnode01:54310
>>> > /tmp/hadoop-yarn/staging/hduser/.staging/job_1453244277886_0001
>>> > 2016-01-19 20:04:31,133 INFO [Thread-61] org.apache.hadoop.ipc.Server:
>>> > Stopping server on 45584
>>> > 2016-01-19 20:04:31,135 INFO [IPC Server listener on 45584]
>>> > org.apache.hadoop.ipc.Server: Stopping IPC Server listener on 45584
>>> > 2016-01-19 20:04:31,135 INFO [IPC Server Responder]
>>> > org.apache.hadoop.ipc.Server: Stopping IPC Server Responder
>>> > 2016-01-19 20:04:31,136 INFO [TaskHeartbeatHandler PingChecker]
>>> > org.apache.hadoop.mapreduce.v2.app.TaskHeartbeatHandler:
>>> > TaskHeartbeatHandler thread interrupted
>>> >
>>> >
>>> > Jps results; I believe everything is OK, right?:
>>> > 21267 DataNode
>>> > 21609 ResourceManager
>>> > 21974 JobHistoryServer
>>> > 21735 NodeManager
>>> > 24546 Jps
>>> > 16532 org.eclipse.equinox.launcher_1.3.0.v20140415-2008.jar
>>> > 21121 NameNode
>>> > 22098 QuorumPeerMain
>>> > 21456 SecondaryNameNode
>>> >
>>> >
>>>
>>
>>
>

Re: Can't run hadoop examples with YARN Single node cluster

Posted by Namikaze Minato <ll...@gmail.com>.
Heh, good job, and thank you very much for posting the solution here; not
many people do that :)

I don't have the feeling I helped much, but finding a solution is what
*counts*, not only *words* :D

Regards,
LLoyd

On 7 March 2016 at 22:50, José Luis Larroque <la...@gmail.com> wrote:

> Hi again guys, I finally found what the issue was!!!
>
> This is my mapred-site.xml; here is the problem:
> <?xml version="1.0"?>
> <?xml-stylesheet type="text/xsl" href="configuration.xsl"?>
>
> <configuration>
> <property>
> <name>mapred.job.tracker</name>
> <!--
> <value>local</value> For debugging
> <value>hdnode01:54311</value> For real runs
> -->
> <value>hdnode01:54311</value>
> </property>
>
> <property>
> <name>mapred.tasktracker.map.tasks.maximum</name>
> <value>4</value>
> </property>
>
> <property>
> <name>mapreduce.job.maps</name>
> <value>4</value>
> </property>
>
> <property>
> <name>mapreduce.framework.name</name>
> <value>yarn</value>
> </property>
>
> <property>
> <name>mapreduce.map.memory.mb</name>
> <value>512</value>
> </property>
>
> <property>
> <name>mapreduce.reduce.memory.mb</name>
> <value>512</value>
> </property>
>
> <property>
> <name>mapreduce.map.java.opts</name>
> <value>256</value>
> </property>
>
> <property>
> <name>mapreduce.reduce.java.opts</name>
> <value>256</value>
> </property>
> </configuration>
>
> If I remove the last two properties (mapreduce.map.java.opts,
> mapreduce.reduce.java.opts), wordcount works!
>
> I remember adding those two properties for some kind of memory issue, but
> maybe for some reason they clash with the other two
> (mapreduce.map.memory.mb, mapreduce.reduce.memory.mb)?
>
> It would be great if someone could give me a short explanation so I can
> better understand how memory management works in a YARN cluster.
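>
> (My current guess is that the *.java.opts values are passed straight onto
> the java command line of each task container, so a bare 256 gets read as a
> main class name, which would explain the "Could not find or load main
> class 256" error. If that is right, a sketch of how those two properties
> should probably look, with a heap somewhat below the 512 MB container
> size, is:
>
> <property>
> <name>mapreduce.map.java.opts</name>
> <value>-Xmx400m</value> <!-- JVM heap, kept below mapreduce.map.memory.mb -->
> </property>
>
> <property>
> <name>mapreduce.reduce.java.opts</name>
> <value>-Xmx400m</value> <!-- JVM heap, kept below mapreduce.reduce.memory.mb -->
> </property>
>
> The -Xmx400m here is only an illustrative value, not something I have
> tested.)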
>
>
> PS: Thanks again to Namikaze and Gaurav for their help!!
>
> Bye!
> Jose
>
> 2016-01-25 21:19 GMT-03:00 José Luis Larroque <la...@gmail.com>:
>
>> Thanks, Namikaze, for keeping on trying; don't give up!! :D
>>
>> - I have these lines in *$HOME/.bashrc*
>>
>>
>> export HADOOP_PREFIX=/usr/local/hadoop
>>
>> # Other variables
>>
>> export HADOOP_COMMON_HOME=${HADOOP_PREFIX}
>>
>> export HADOOP_MAPRED_HOME=${HADOOP_PREFIX}
>>
>> export HADOOP_HDFS_HOME=${HADOOP_PREFIX}
>>
>> export HADOOP_YARN_HOME=${HADOOP_PREFIX}
>>
>>
>>   - In *hadoop-env.sh* I have:
>>
>> export HADOOP_CONF_DIR=${HADOOP_CONF_DIR:-"/usr/local/hadoop/etc/hadoop"}
>>
>>
>>   - I read that SO question and all the answers to it. The only useful
>> answer, in my opinion, was checking the yarn classpath. I have the
>> following line three times:
>>
>> /usr/local/hadoop/etc/hadoop:
>>
>>
>> I put yarn.application.classpath in yarn-site.xml because I don't know
>> any other way to fix it, using the default value recommended in
>> <https://hadoop.apache.org/docs/r2.4.1/hadoop-yarn/hadoop-yarn-common/yarn-default.xml>
>> (see yarn.application.classpath):
>>
>>
>> $HADOOP_CONF_DIR, $HADOOP_COMMON_HOME/share/hadoop/common/*,
>> $HADOOP_COMMON_HOME/share/hadoop/common/lib/*, $HADOOP_HDFS_HOME/share/hadoop/hdfs/*,
>> $HADOOP_HDFS_HOME/share/hadoop/hdfs/lib/*, $HADOOP_YARN_HOME/share/hadoop/yarn/*,
>> $HADOOP_YARN_HOME/share/hadoop/yarn/lib/*
>>
>>
>> But the classpath remains the same, and I can't find any other way to fix
>> it. Maybe this is the problem?
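>>
>> (For reference, this is roughly what that property entry looks like in my
>> yarn-site.xml; the value is just the default recommended on that page:
>>
>> <property>
>> <name>yarn.application.classpath</name>
>> <value>$HADOOP_CONF_DIR,$HADOOP_COMMON_HOME/share/hadoop/common/*,$HADOOP_COMMON_HOME/share/hadoop/common/lib/*,$HADOOP_HDFS_HOME/share/hadoop/hdfs/*,$HADOOP_HDFS_HOME/share/hadoop/hdfs/lib/*,$HADOOP_YARN_HOME/share/hadoop/yarn/*,$HADOOP_YARN_HOME/share/hadoop/yarn/lib/*</value>
>> </property>)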
>>
>>
>>  - yarn.log-aggregation-enable was always set to true. I couldn't find
>> anything in the *datanode logs*; here they are:
>>
>> 2016-01-25 21:13:07,006 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: STARTUP_MSG:
>> /************************************************************
>> STARTUP_MSG: Starting DataNode
>> STARTUP_MSG:   host = jose-ubuntu/127.0.0.1
>> STARTUP_MSG:   args = []
>> STARTUP_MSG:   version = 2.4.0
>> STARTUP_MSG:   classpath = /usr/local/hadoop/etc/hadoop:/usr/local/hadoop/share/hadoop/common/lib/jackson-jaxrs-1.8.8.jar:/usr/local/hadoop/share/hadoop/common/lib/jersey-json-1.9.jar:/usr/local/hadoop/share/hadoop/common/lib/commons-compress-1.4.1.jar:/usr/local/hadoop/share/hadoop/common/lib/httpcore-4.2.5.jar:/usr/local/hadoop/share/hadoop/common/lib/httpclient-4.2.5.jar:/usr/local/hadoop/share/hadoop/common/lib/commons-beanutils-1.7.0.jar:/usr/local/hadoop/share/hadoop/common/lib/jettison-1.1.jar:/usr/local/hadoop/share/hadoop/common/lib/jaxb-impl-2.2.3-1.jar:/usr/local/hadoop/share/hadoop/common/lib/activation-1.1.jar:/usr/local/hadoop/share/hadoop/common/lib/jackson-core-asl-1.8.8.jar:/usr/local/hadoop/share/hadoop/common/lib/commons-net-3.1.jar:/usr/local/hadoop/share/hadoop/common/lib/commons-cli-1.2.jar:/usr/local/hadoop/share/hadoop/common/lib/commons-codec-1.4.jar:/usr/local/hadoop/share/hadoop/common/lib/junit-4.8.2.jar:/usr/local/hadoop/share/hadoop/common/lib/paranamer-2.3.jar:/usr/local/hadoop/share/hadoop/common/lib/jasper-compiler-5.5.23.jar:/usr/local/hadoop/share/hadoop/common/lib/protobuf-java-2.5.0.jar:/usr/local/hadoop/share/hadoop/common/lib/commons-digester-1.8.jar:/usr/local/hadoop/share/hadoop/common/lib/jets3t-0.9.0.jar:/usr/local/hadoop/share/hadoop/common/lib/jackson-xc-1.8.8.jar:/usr/local/hadoop/share/hadoop/common/lib/jersey-server-1.9.jar:/usr/local/hadoop/share/hadoop/common/lib/jetty-util-6.1.26.jar:/usr/local/hadoop/share/hadoop/common/lib/mockito-all-1.8.5.jar:/usr/local/hadoop/share/hadoop/common/lib/xmlenc-0.52.jar:/usr/local/hadoop/share/hadoop/common/lib/slf4j-api-1.7.5.jar:/usr/local/hadoop/share/hadoop/common/lib/stax-api-1.0-2.jar:/usr/local/hadoop/share/hadoop/common/lib/avro-1.7.4.jar:/usr/local/hadoop/share/hadoop/common/lib/hadoop-annotations-2.4.0.jar:/usr/local/hadoop/share/hadoop/common/lib/hadoop-auth-2.4.0.jar:/usr/local/hadoop/share/hadoop/common/lib/netty-3.6.2.Final.jar:/usr/local/hadoop/share/hadoop/common/lib/jasper-runtime-5.5.23.jar:/usr/local/hadoop/share/hadoop/common/lib/slf4j-log4j12-1.7.5.jar:/usr/local/hadoop/share/hadoop/common/lib/commons-logging-1.1.3.jar:/usr/local/hadoop/share/hadoop/common/lib/guava-11.0.2.jar:/usr/local/hadoop/share/hadoop/common/lib/jackson-mapper-asl-1.8.8.jar:/usr/local/hadoop/share/hadoop/common/lib/jsp-api-2.1.jar:/usr/local/hadoop/share/hadoop/common/lib/commons-configuration-1.6.jar:/usr/local/hadoop/share/hadoop/common/lib/snappy-java-1.0.4.1.jar:/usr/local/hadoop/share/hadoop/common/lib/log4j-1.2.17.jar:/usr/local/hadoop/share/hadoop/common/lib/jsch-0.1.42.jar:/usr/local/hadoop/share/hadoop/common/lib/commons-io-2.4.jar:/usr/local/hadoop/share/hadoop/common/lib/commons-beanutils-core-1.8.0.jar:/usr/local/hadoop/share/hadoop/common/lib/jsr305-1.3.9.jar:/usr/local/hadoop/share/hadoop/common/lib/jersey-core-1.9.jar:/usr/local/hadoop/share/hadoop/common/lib/asm-3.2.jar:/usr/local/hadoop/share/hadoop/common/lib/servlet-api-2.5.jar:/usr/local/hadoop/share/hadoop/common/lib/jaxb-api-2.2.2.jar:/usr/local/hadoop/share/hadoop/common/lib/jetty-6.1.26.jar:/usr/local/hadoop/share/hadoop/common/lib/commons-lang-2.6.jar:/usr/local/hadoop/share/hadoop/common/lib/java-xmlbuilder-0.4.jar:/usr/local/hadoop/share/hadoop/common/lib/commons-collections-3.2.1.jar:/usr/local/hadoop/share/hadoop/common/lib/commons-el-1.0.jar:/usr/local/hadoop/share/hadoop/common/lib/zookeeper-3.4.5.jar:/usr/local/hadoop/share/hadoop/common/lib/commons-httpclient-3.1.jar:/usr/local/hadoop/share/hadoop/common/lib/xz-1.0.jar:/usr/loca
l/hadoop/share/hadoop/common/lib/commons-math3-3.1.1.jar:/usr/local/hadoop/share/hadoop/common/hadoop-common-2.4.0-tests.jar:/usr/local/hadoop/share/hadoop/common/hadoop-nfs-2.4.0.jar:/usr/local/hadoop/share/hadoop/common/hadoop-common-2.4.0.jar:/usr/local/hadoop/share/hadoop/hdfs:/usr/local/hadoop/share/hadoop/hdfs/lib/jackson-core-asl-1.8.8.jar:/usr/local/hadoop/share/hadoop/hdfs/lib/commons-cli-1.2.jar:/usr/local/hadoop/share/hadoop/hdfs/lib/commons-codec-1.4.jar:/usr/local/hadoop/share/hadoop/hdfs/lib/protobuf-java-2.5.0.jar:/usr/local/hadoop/share/hadoop/hdfs/lib/jersey-server-1.9.jar:/usr/local/hadoop/share/hadoop/hdfs/lib/jetty-util-6.1.26.jar:/usr/local/hadoop/share/hadoop/hdfs/lib/xmlenc-0.52.jar:/usr/local/hadoop/share/hadoop/hdfs/lib/commons-daemon-1.0.13.jar:/usr/local/hadoop/share/hadoop/hdfs/lib/netty-3.6.2.Final.jar:/usr/local/hadoop/share/hadoop/hdfs/lib/jasper-runtime-5.5.23.jar:/usr/local/hadoop/share/hadoop/hdfs/lib/commons-logging-1.1.3.jar:/usr/local/hadoop/share/hadoop/hdfs/lib/guava-11.0.2.jar:/usr/local/hadoop/share/hadoop/hdfs/lib/jackson-mapper-asl-1.8.8.jar:/usr/local/hadoop/share/hadoop/hdfs/lib/jsp-api-2.1.jar:/usr/local/hadoop/share/hadoop/hdfs/lib/log4j-1.2.17.jar:/usr/local/hadoop/share/hadoop/hdfs/lib/commons-io-2.4.jar:/usr/local/hadoop/share/hadoop/hdfs/lib/jsr305-1.3.9.jar:/usr/local/hadoop/share/hadoop/hdfs/lib/jersey-core-1.9.jar:/usr/local/hadoop/share/hadoop/hdfs/lib/asm-3.2.jar:/usr/local/hadoop/share/hadoop/hdfs/lib/servlet-api-2.5.jar:/usr/local/hadoop/share/hadoop/hdfs/lib/jetty-6.1.26.jar:/usr/local/hadoop/share/hadoop/hdfs/lib/commons-lang-2.6.jar:/usr/local/hadoop/share/hadoop/hdfs/lib/commons-el-1.0.jar:/usr/local/hadoop/share/hadoop/hdfs/hadoop-hdfs-2.4.0-tests.jar:/usr/local/hadoop/share/hadoop/hdfs/hadoop-hdfs-2.4.0.jar:/usr/local/hadoop/share/hadoop/hdfs/hadoop-hdfs-nfs-2.4.0.jar:/usr/local/hadoop/share/hadoop/yarn/lib/jackson-jaxrs-1.8.8.jar:/usr/local/hadoop/share/hadoop/yarn/lib/jersey-json-1.9.jar:/usr/local/hadoop/share/hadoop/yarn/lib/commons-compress-1.4.1.jar:/usr/local/hadoop/share/hadoop/yarn/lib/jersey-client-1.9.jar:/usr/local/hadoop/share/hadoop/yarn/lib/guice-servlet-3.0.jar:/usr/local/hadoop/share/hadoop/yarn/lib/giraph-examples-1.1.0-for-hadoop-2.4.0-jar-with-dependencies.jar:/usr/local/hadoop/share/hadoop/yarn/lib/hadoop-mapreduce-examples-2.4.0.jar:/usr/local/hadoop/share/hadoop/yarn/lib/jettison-1.1.jar:/usr/local/hadoop/share/hadoop/yarn/lib/jaxb-impl-2.2.3-1.jar:/usr/local/hadoop/share/hadoop/yarn/lib/activation-1.1.jar:/usr/local/hadoop/share/hadoop/yarn/lib/jackson-core-asl-1.8.8.jar:/usr/local/hadoop/share/hadoop/yarn/lib/commons-cli-1.2.jar:/usr/local/hadoop/share/hadoop/yarn/lib/commons-codec-1.4.jar:/usr/local/hadoop/share/hadoop/yarn/lib/aopalliance-1.0.jar:/usr/local/hadoop/share/hadoop/yarn/lib/leveldbjni-all-1.8.jar:/usr/local/hadoop/share/hadoop/yarn/lib/protobuf-java-2.5.0.jar:/usr/local/hadoop/share/hadoop/yarn/lib/jackson-xc-1.8.8.jar:/usr/local/hadoop/share/hadoop/yarn/lib/jersey-server-1.9.jar:/usr/local/hadoop/share/hadoop/yarn/lib/jetty-util-6.1.26.jar:/usr/local/hadoop/share/hadoop/yarn/lib/jline-0.9.94.jar:/usr/local/hadoop/share/hadoop/yarn/lib/stax-api-1.0-2.jar:/usr/local/hadoop/share/hadoop/yarn/lib/guice-3.0.jar:/usr/local/hadoop/share/hadoop/yarn/lib/commons-logging-1.1.3.jar:/usr/local/hadoop/share/hadoop/yarn/lib/guava-11.0.2.jar:/usr/local/hadoop/share/hadoop/yarn/lib/jackson-mapper-asl-1.8.8.jar:/usr/local/hadoop/share/hadoop/yarn/lib/log4j-1.2.17.jar:/usr/local/hadoop/share/hadoop/yarn/li
b/commons-io-2.4.jar:/usr/local/hadoop/share/hadoop/yarn/lib/jsr305-1.3.9.jar:/usr/local/hadoop/share/hadoop/yarn/lib/jersey-core-1.9.jar:/usr/local/hadoop/share/hadoop/yarn/lib/asm-3.2.jar:/usr/local/hadoop/share/hadoop/yarn/lib/servlet-api-2.5.jar:/usr/local/hadoop/share/hadoop/yarn/lib/jaxb-api-2.2.2.jar:/usr/local/hadoop/share/hadoop/yarn/lib/jetty-6.1.26.jar:/usr/local/hadoop/share/hadoop/yarn/lib/commons-lang-2.6.jar:/usr/local/hadoop/share/hadoop/yarn/lib/commons-collections-3.2.1.jar:/usr/local/hadoop/share/hadoop/yarn/lib/jersey-guice-1.9.jar:/usr/local/hadoop/share/hadoop/yarn/lib/zookeeper-3.4.5.jar:/usr/local/hadoop/share/hadoop/yarn/lib/commons-httpclient-3.1.jar:/usr/local/hadoop/share/hadoop/yarn/lib/javax.inject-1.jar:/usr/local/hadoop/share/hadoop/yarn/lib/xz-1.0.jar:/usr/local/hadoop/share/hadoop/yarn/hadoop-yarn-server-common-2.4.0.jar:/usr/local/hadoop/share/hadoop/yarn/hadoop-yarn-client-2.4.0.jar:/usr/local/hadoop/share/hadoop/yarn/hadoop-yarn-api-2.4.0.jar:/usr/local/hadoop/share/hadoop/yarn/hadoop-yarn-applications-unmanaged-am-launcher-2.4.0.jar:/usr/local/hadoop/share/hadoop/yarn/hadoop-yarn-server-nodemanager-2.4.0.jar:/usr/local/hadoop/share/hadoop/yarn/hadoop-yarn-server-tests-2.4.0.jar:/usr/local/hadoop/share/hadoop/yarn/hadoop-yarn-server-resourcemanager-2.4.0.jar:/usr/local/hadoop/share/hadoop/yarn/hadoop-yarn-server-web-proxy-2.4.0.jar:/usr/local/hadoop/share/hadoop/yarn/hadoop-yarn-applications-distributedshell-2.4.0.jar:/usr/local/hadoop/share/hadoop/yarn/hadoop-yarn-server-applicationhistoryservice-2.4.0.jar:/usr/local/hadoop/share/hadoop/yarn/hadoop-yarn-common-2.4.0.jar:/usr/local/hadoop/share/hadoop/mapreduce/lib/commons-compress-1.4.1.jar:/usr/local/hadoop/share/hadoop/mapreduce/lib/guice-servlet-3.0.jar:/usr/local/hadoop/share/hadoop/mapreduce/lib/jackson-core-asl-1.8.8.jar:/usr/local/hadoop/share/hadoop/mapreduce/lib/hamcrest-core-1.1.jar:/usr/local/hadoop/share/hadoop/mapreduce/lib/paranamer-2.3.jar:/usr/local/hadoop/share/hadoop/mapreduce/lib/aopalliance-1.0.jar:/usr/local/hadoop/share/hadoop/mapreduce/lib/protobuf-java-2.5.0.jar:/usr/local/hadoop/share/hadoop/mapreduce/lib/jersey-server-1.9.jar:/usr/local/hadoop/share/hadoop/mapreduce/lib/avro-1.7.4.jar:/usr/local/hadoop/share/hadoop/mapreduce/lib/hadoop-annotations-2.4.0.jar:/usr/local/hadoop/share/hadoop/mapreduce/lib/guice-3.0.jar:/usr/local/hadoop/share/hadoop/mapreduce/lib/junit-4.10.jar:/usr/local/hadoop/share/hadoop/mapreduce/lib/netty-3.6.2.Final.jar:/usr/local/hadoop/share/hadoop/mapreduce/lib/jackson-mapper-asl-1.8.8.jar:/usr/local/hadoop/share/hadoop/mapreduce/lib/snappy-java-1.0.4.1.jar:/usr/local/hadoop/share/hadoop/mapreduce/lib/log4j-1.2.17.jar:/usr/local/hadoop/share/hadoop/mapreduce/lib/commons-io-2.4.jar:/usr/local/hadoop/share/hadoop/mapreduce/lib/jersey-core-1.9.jar:/usr/local/hadoop/share/hadoop/mapreduce/lib/asm-3.2.jar:/usr/local/hadoop/share/hadoop/mapreduce/lib/jersey-guice-1.9.jar:/usr/local/hadoop/share/hadoop/mapreduce/lib/javax.inject-1.jar:/usr/local/hadoop/share/hadoop/mapreduce/lib/xz-1.0.jar:/usr/local/hadoop/share/hadoop/mapreduce/giraph-examples-1.1.0-for-hadoop-2.4.0-jar-with-dependencies.jar:/usr/local/hadoop/share/hadoop/mapreduce/hadoop-mapreduce-examples-2.4.0.jar:/usr/local/hadoop/share/hadoop/mapreduce/procesamiento_salida_grafo_caminos_navegacionales-0.0.1.jar:/usr/local/hadoop/share/hadoop/mapreduce/zookeeper-3.4.6.jar:/usr/local/hadoop/share/hadoop/mapreduce/hadoop-mapreduce-client-shuffle-2.4.0.jar:/usr/local/hadoop/share/hadoop/mapreduce/procesamient
o_grafo_wikiquote-0.0.1.jar:/usr/local/hadoop/share/hadoop/mapreduce/hadoop-mapreduce-client-app-2.4.0.jar:/usr/local/hadoop/share/hadoop/mapreduce/hadoop-mapreduce-client-common-2.4.0.jar:/usr/local/hadoop/share/hadoop/mapreduce/hadoop-mapreduce-client-core-2.4.0.jar:/usr/local/hadoop/share/hadoop/mapreduce/generacion_grafo_wikiquote-0.0.1-SNAPSHOT.jar:/usr/local/hadoop/share/hadoop/mapreduce/hadoop-mapreduce-client-hs-plugins-2.4.0.jar:/usr/local/hadoop/share/hadoop/mapreduce/hadoop-mapreduce-client-jobclient-2.4.0.jar:/usr/local/hadoop/share/hadoop/mapreduce/hadoop-mapreduce-client-hs-2.4.0.jar:/usr/local/hadoop/share/hadoop/mapreduce/hadoop-mapreduce-client-jobclient-2.4.0-tests.jar:/usr/local/hadoop/contrib/capacity-scheduler/*.jar:/usr/local/hadoop/contrib/capacity-scheduler/*.jar:/usr/local/hadoop/contrib/capacity-scheduler/*.jar
>> STARTUP_MSG:   build = http://svn.apache.org/repos/asf/hadoop/common -r 1583262; compiled by 'jenkins' on 2014-03-31T08:29Z
>> STARTUP_MSG:   java = 1.7.0_79
>> ************************************************************/
>> 2016-01-25 21:13:07,015 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: registered UNIX signal handlers for [TERM, HUP, INT]
>> 2016-01-25 21:13:07,188 WARN org.apache.hadoop.hdfs.server.common.Util: Path /usr/local/hadoop/dfs/name/data should be specified as a URI in configuration files. Please update hdfs configuration.
>> 2016-01-25 21:13:07,356 WARN org.apache.hadoop.util.NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
>> 2016-01-25 21:13:07,648 INFO org.apache.hadoop.metrics2.impl.MetricsConfig: loaded properties from hadoop-metrics2.properties
>> 2016-01-25 21:13:07,723 INFO org.apache.hadoop.metrics2.impl.MetricsSystemImpl: Scheduled snapshot period at 10 second(s).
>> 2016-01-25 21:13:07,723 INFO org.apache.hadoop.metrics2.impl.MetricsSystemImpl: DataNode metrics system started
>> 2016-01-25 21:13:07,727 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: Configured hostname is localhost
>> 2016-01-25 21:13:07,728 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: Starting DataNode with maxLockedMemory = 0
>> 2016-01-25 21:13:07,757 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: Opened streaming server at /0.0.0.0:50010
>> 2016-01-25 21:13:07,760 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: Balancing bandwith is 1048576 bytes/s
>> 2016-01-25 21:13:07,839 INFO org.mortbay.log: Logging to org.slf4j.impl.Log4jLoggerAdapter(org.mortbay.log) via org.mortbay.log.Slf4jLog
>> 2016-01-25 21:13:07,843 INFO org.apache.hadoop.http.HttpRequestLog: Http request log for http.requests.datanode is not defined
>> 2016-01-25 21:13:07,853 INFO org.apache.hadoop.http.HttpServer2: Added global filter 'safety' (class=org.apache.hadoop.http.HttpServer2$QuotingInputFilter)
>> 2016-01-25 21:13:07,856 INFO org.apache.hadoop.http.HttpServer2: Added filter static_user_filter (class=org.apache.hadoop.http.lib.StaticUserWebFilter$StaticUserFilter) to context datanode
>> 2016-01-25 21:13:07,856 INFO org.apache.hadoop.http.HttpServer2: Added filter static_user_filter (class=org.apache.hadoop.http.lib.StaticUserWebFilter$StaticUserFilter) to context logs
>> 2016-01-25 21:13:07,856 INFO org.apache.hadoop.http.HttpServer2: Added filter static_user_filter (class=org.apache.hadoop.http.lib.StaticUserWebFilter$StaticUserFilter) to context static
>> 2016-01-25 21:13:07,872 INFO org.apache.hadoop.http.HttpServer2: addJerseyResourcePackage: packageName=org.apache.hadoop.hdfs.server.datanode.web.resources;org.apache.hadoop.hdfs.web.resources, pathSpec=/webhdfs/v1/*
>> 2016-01-25 21:13:07,875 INFO org.apache.hadoop.http.HttpServer2: Jetty bound to port 50075
>> 2016-01-25 21:13:07,875 INFO org.mortbay.log: jetty-6.1.26
>> 2016-01-25 21:13:08,137 INFO org.mortbay.log: Started SelectChannelConnector@0.0.0.0:50075
>> 2016-01-25 21:13:08,225 INFO org.apache.hadoop.ipc.CallQueueManager: Using callQueue class java.util.concurrent.LinkedBlockingQueue
>> 2016-01-25 21:13:08,239 INFO org.apache.hadoop.ipc.Server: Starting Socket Reader #1 for port 50020
>> 2016-01-25 21:13:08,288 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: Opened IPC server at /0.0.0.0:50020
>> 2016-01-25 21:13:08,300 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: Refresh request received for nameservices: null
>> 2016-01-25 21:13:08,316 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: Starting BPOfferServices for nameservices: <default>
>> 2016-01-25 21:13:08,321 WARN org.apache.hadoop.hdfs.server.common.Util: Path /usr/local/hadoop/dfs/name/data should be specified as a URI in configuration files. Please update hdfs configuration.
>> 2016-01-25 21:13:08,325 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: Block pool <registering> (Datanode Uuid unassigned) service to hdnode01/192.168.0.10:54310 starting to offer service
>> 2016-01-25 21:13:08,326 INFO org.apache.hadoop.ipc.Server: IPC Server Responder: starting
>> 2016-01-25 21:13:08,326 INFO org.apache.hadoop.ipc.Server: IPC Server listener on 50020: starting
>> 2016-01-25 21:13:08,719 INFO org.apache.hadoop.hdfs.server.common.Storage: Data-node version: -55 and name-node layout version: -56
>> 2016-01-25 21:13:08,828 INFO org.apache.hadoop.hdfs.server.common.Storage: Lock on /usr/local/hadoop/dfs/name/data/in_use.lock acquired by nodename 10365@jose-ubuntu
>> 2016-01-25 21:13:08,833 INFO org.apache.hadoop.hdfs.server.common.Storage: Storage directory /usr/local/hadoop/dfs/name/data is not formatted
>> 2016-01-25 21:13:08,833 INFO org.apache.hadoop.hdfs.server.common.Storage: Formatting ...
>> 2016-01-25 21:13:09,017 INFO org.apache.hadoop.hdfs.server.common.Storage: Analyzing storage directories for bpid BP-216406264-127.0.0.1-1453767164845
>> 2016-01-25 21:13:09,017 INFO org.apache.hadoop.hdfs.server.common.Storage: Locking is disabled
>> 2016-01-25 21:13:09,018 INFO org.apache.hadoop.hdfs.server.common.Storage: Storage directory /usr/local/hadoop/dfs/name/data/current/BP-216406264-127.0.0.1-1453767164845 is not formatted.
>> 2016-01-25 21:13:09,018 INFO org.apache.hadoop.hdfs.server.common.Storage: Formatting ...
>> 2016-01-25 21:13:09,018 INFO org.apache.hadoop.hdfs.server.common.Storage: Formatting block pool BP-216406264-127.0.0.1-1453767164845 directory /usr/local/hadoop/dfs/name/data/current/BP-216406264-127.0.0.1-1453767164845/current
>> 2016-01-25 21:13:09,072 INFO org.apache.hadoop.hdfs.server.common.Storage: Restored 0 block files from trash.
>> 2016-01-25 21:13:09,198 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: Setting up storage: nsid=1479061672;bpid=BP-216406264-127.0.0.1-1453767164845;lv=-55;nsInfo=lv=-56;cid=CID-8fa0e75b-6942-452a-a4e6-8cd0c24de652;nsid=1479061672;c=0;bpid=BP-216406264-127.0.0.1-1453767164845;dnuuid=null
>> 2016-01-25 21:13:09,248 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: Generated and persisted new Datanode UUID 6b4236c8-2183-49ba-84d7-a273298ba37a
>> 2016-01-25 21:13:09,268 WARN org.apache.hadoop.hdfs.server.common.Util: Path /usr/local/hadoop/dfs/name/data should be specified as a URI in configuration files. Please update hdfs configuration.
>> 2016-01-25 21:13:09,270 INFO org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsDatasetImpl: Added volume - /usr/local/hadoop/dfs/name/data/current, StorageType: DISK
>> 2016-01-25 21:13:09,279 INFO org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsDatasetImpl: Registered FSDatasetState MBean
>> 2016-01-25 21:13:09,282 INFO org.apache.hadoop.hdfs.server.datanode.DirectoryScanner: Periodic Directory Tree Verification scan starting at 1453784080282 with interval 21600000
>> 2016-01-25 21:13:09,283 INFO org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsDatasetImpl: Adding block pool BP-216406264-127.0.0.1-1453767164845
>> 2016-01-25 21:13:09,284 INFO org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsDatasetImpl: Scanning block pool BP-216406264-127.0.0.1-1453767164845 on volume /usr/local/hadoop/dfs/name/data/current...
>> 2016-01-25 21:13:09,299 INFO org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsDatasetImpl: Time taken to scan block pool BP-216406264-127.0.0.1-1453767164845 on /usr/local/hadoop/dfs/name/data/current: 15ms
>> 2016-01-25 21:13:09,300 INFO org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsDatasetImpl: Total time to scan all replicas for block pool BP-216406264-127.0.0.1-1453767164845: 17ms
>> 2016-01-25 21:13:09,301 INFO org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsDatasetImpl: Adding replicas to map for block pool BP-216406264-127.0.0.1-1453767164845 on volume /usr/local/hadoop/dfs/name/data/current...
>> 2016-01-25 21:13:09,301 INFO org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsDatasetImpl: Time to add replicas to map for block pool BP-216406264-127.0.0.1-1453767164845 on volume /usr/local/hadoop/dfs/name/data/current: 0ms
>> 2016-01-25 21:13:09,301 INFO org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsDatasetImpl: Total time to add all replicas to map: 1ms
>> 2016-01-25 21:13:09,305 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: Block pool BP-216406264-127.0.0.1-1453767164845 (Datanode Uuid null) service to hdnode01/192.168.0.10:54310 beginning handshake with NN
>> 2016-01-25 21:13:09,355 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: Block pool Block pool BP-216406264-127.0.0.1-1453767164845 (Datanode Uuid null) service to hdnode01/192.168.0.10:54310 successfully registered with NN
>> 2016-01-25 21:13:09,356 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: For namenode hdnode01/192.168.0.10:54310 using DELETEREPORT_INTERVAL of 300000 msec  BLOCKREPORT_INTERVAL of 21600000msec CACHEREPORT_INTERVAL of 10000msec Initial delay: 0msec; heartBeatInterval=3000
>> 2016-01-25 21:13:09,444 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: Namenode Block pool BP-216406264-127.0.0.1-1453767164845 (Datanode Uuid 6b4236c8-2183-49ba-84d7-a273298ba37a) service to hdnode01/192.168.0.10:54310 trying to claim ACTIVE state with txid=1
>> 2016-01-25 21:13:09,444 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: Acknowledging ACTIVE Namenode Block pool BP-216406264-127.0.0.1-1453767164845 (Datanode Uuid 6b4236c8-2183-49ba-84d7-a273298ba37a) service to hdnode01/192.168.0.10:54310
>> 2016-01-25 21:13:09,487 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: Sent 1 blockreports 0 blocks total. Took 1 msec to generate and 42 msecs for RPC and NN processing.  Got back commands none
>> 2016-01-25 21:13:09,492 INFO org.apache.hadoop.util.GSet: Computing capacity for map BlockMap
>> 2016-01-25 21:13:09,493 INFO org.apache.hadoop.util.GSet: VM type       = 64-bit
>> 2016-01-25 21:13:09,494 INFO org.apache.hadoop.util.GSet: 0.5% max memory 1.8 GB = 9.1 MB
>> 2016-01-25 21:13:09,494 INFO org.apache.hadoop.util.GSet: capacity      = 2^20 = 1048576 entries
>> 2016-01-25 21:13:09,495 INFO org.apache.hadoop.hdfs.server.datanode.BlockPoolSliceScanner: Periodic Block Verification Scanner initialized with interval 504 hours for block pool BP-216406264-127.0.0.1-1453767164845
>> 2016-01-25 21:13:09,499 INFO org.apache.hadoop.hdfs.server.datanode.DataBlockScanner: Added bpid=BP-216406264-127.0.0.1-1453767164845 to blockPoolScannerMap, new size=1
>> 2016-01-25 21:13:32,355 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: Receiving BP-216406264-127.0.0.1-1453767164845:blk_1073741825_1001 src: /192.168.0.10:58649 dest: /192.168.0.10:50010
>> 2016-01-25 21:13:32,482 INFO org.apache.hadoop.hdfs.server.datanode.DataNode.clienttrace: src: /192.168.0.10:58649, dest: /192.168.0.10:50010, bytes: 343691, op: HDFS_WRITE, cliID: DFSClient_NONMAPREDUCE_538002429_1, offset: 0, srvID: 6b4236c8-2183-49ba-84d7-a273298ba37a, blockid: BP-216406264-127.0.0.1-1453767164845:blk_1073741825_1001, duration: 98632367
>> 2016-01-25 21:13:32,482 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: PacketResponder: BP-216406264-127.0.0.1-1453767164845:blk_1073741825_1001, type=LAST_IN_PIPELINE, downstreams=0:[] terminating
>> 2016-01-25 21:13:34,291 INFO org.apache.hadoop.hdfs.server.datanode.BlockPoolSliceScanner: Verification succeeded for BP-216406264-127.0.0.1-1453767164845:blk_1073741825_1001
>> 2016-01-25 21:14:10,176 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: Receiving BP-216406264-127.0.0.1-1453767164845:blk_1073741826_1002 src: /192.168.0.10:58663 dest: /192.168.0.10:50010
>> 2016-01-25 21:14:10,220 INFO org.apache.hadoop.hdfs.server.datanode.DataNode.clienttrace: src: /192.168.0.10:58663, dest: /192.168.0.10:50010, bytes: 270263, op: HDFS_WRITE, cliID: DFSClient_NONMAPREDUCE_342504113_1, offset: 0, srvID: 6b4236c8-2183-49ba-84d7-a273298ba37a, blockid: BP-216406264-127.0.0.1-1453767164845:blk_1073741826_1002, duration: 42378742
>> 2016-01-25 21:14:10,221 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: PacketResponder: BP-216406264-127.0.0.1-1453767164845:blk_1073741826_1002, type=LAST_IN_PIPELINE, downstreams=0:[] terminating
>> 2016-01-25 21:14:10,714 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: Receiving BP-216406264-127.0.0.1-1453767164845:blk_1073741827_1003 src: /192.168.0.10:58664 dest: /192.168.0.10:50010
>> 2016-01-25 21:14:10,721 INFO org.apache.hadoop.hdfs.server.datanode.DataNode.clienttrace: src: /192.168.0.10:58664, dest: /192.168.0.10:50010, bytes: 121, op: HDFS_WRITE, cliID: DFSClient_NONMAPREDUCE_342504113_1, offset: 0, srvID: 6b4236c8-2183-49ba-84d7-a273298ba37a, blockid: BP-216406264-127.0.0.1-1453767164845:blk_1073741827_1003, duration: 2656758
>> 2016-01-25 21:14:10,721 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: PacketResponder: BP-216406264-127.0.0.1-1453767164845:blk_1073741827_1003, type=LAST_IN_PIPELINE, downstreams=0:[] terminating
>> 2016-01-25 21:14:10,853 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: Receiving BP-216406264-127.0.0.1-1453767164845:blk_1073741828_1004 src: /192.168.0.10:58665 dest: /192.168.0.10:50010
>> 2016-01-25 21:14:10,860 INFO org.apache.hadoop.hdfs.server.datanode.DataNode.clienttrace: src: /192.168.0.10:58665, dest: /192.168.0.10:50010, bytes: 26, op: HDFS_WRITE, cliID: DFSClient_NONMAPREDUCE_342504113_1, offset: 0, srvID: 6b4236c8-2183-49ba-84d7-a273298ba37a, blockid: BP-216406264-127.0.0.1-1453767164845:blk_1073741828_1004, duration: 3257396
>> 2016-01-25 21:14:10,861 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: PacketResponder: BP-216406264-127.0.0.1-1453767164845:blk_1073741828_1004, type=LAST_IN_PIPELINE, downstreams=0:[] terminating
>> 2016-01-25 21:14:11,717 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: Receiving BP-216406264-127.0.0.1-1453767164845:blk_1073741829_1005 src: /192.168.0.10:58666 dest: /192.168.0.10:50010
>> 2016-01-25 21:14:11,726 INFO org.apache.hadoop.hdfs.server.datanode.DataNode.clienttrace: src: /192.168.0.10:58666, dest: /192.168.0.10:50010, bytes: 77957, op: HDFS_WRITE, cliID: DFSClient_NONMAPREDUCE_342504113_1, offset: 0, srvID: 6b4236c8-2183-49ba-84d7-a273298ba37a, blockid: BP-216406264-127.0.0.1-1453767164845:blk_1073741829_1005, duration: 6180229
>> 2016-01-25 21:14:11,727 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: PacketResponder: BP-216406264-127.0.0.1-1453767164845:blk_1073741829_1005, type=LAST_IN_PIPELINE, downstreams=0:[] terminating
>> 2016-01-25 21:14:14,298 INFO org.apache.hadoop.hdfs.server.datanode.BlockPoolSliceScanner: Verification succeeded for BP-216406264-127.0.0.1-1453767164845:blk_1073741829_1005
>> 2016-01-25 21:14:14,299 INFO org.apache.hadoop.hdfs.server.datanode.BlockPoolSliceScanner: Verification succeeded for BP-216406264-127.0.0.1-1453767164845:blk_1073741827_1003
>> 2016-01-25 21:14:14,305 INFO org.apache.hadoop.hdfs.server.datanode.BlockPoolSliceScanner: Verification succeeded for BP-216406264-127.0.0.1-1453767164845:blk_1073741826_1002
>> 2016-01-25 21:14:14,305 INFO org.apache.hadoop.hdfs.server.datanode.BlockPoolSliceScanner: Verification succeeded for BP-216406264-127.0.0.1-1453767164845:blk_1073741828_1004
>> 2016-01-25 21:14:16,099 INFO org.apache.hadoop.hdfs.server.datanode.DataNode.clienttrace: src: /192.168.0.10:50010, dest: /192.168.0.10:58674, bytes: 272375, op: HDFS_READ, cliID: DFSClient_NONMAPREDUCE_76231625_102, offset: 0, srvID: 6b4236c8-2183-49ba-84d7-a273298ba37a, blockid: BP-216406264-127.0.0.1-1453767164845:blk_1073741826_1002, duration: 2878920
>> 2016-01-25 21:14:16,253 INFO org.apache.hadoop.hdfs.server.datanode.DataNode.clienttrace: src: /192.168.0.10:50010, dest: /192.168.0.10:58674, bytes: 30, op: HDFS_READ, cliID: DFSClient_NONMAPREDUCE_76231625_102, offset: 0, srvID: 6b4236c8-2183-49ba-84d7-a273298ba37a, blockid: BP-216406264-127.0.0.1-1453767164845:blk_1073741828_1004, duration: 236423
>> 2016-01-25 21:14:16,312 INFO org.apache.hadoop.hdfs.server.datanode.DataNode.clienttrace: src: /192.168.0.10:50010, dest: /192.168.0.10:58674, bytes: 125, op: HDFS_READ, cliID: DFSClient_NONMAPREDUCE_76231625_102, offset: 0, srvID: 6b4236c8-2183-49ba-84d7-a273298ba37a, blockid: BP-216406264-127.0.0.1-1453767164845:blk_1073741827_1003, duration: 909236
>> 2016-01-25 21:14:16,364 INFO org.apache.hadoop.hdfs.server.datanode.DataNode.clienttrace: src: /192.168.0.10:50010, dest: /192.168.0.10:58674, bytes: 78569, op: HDFS_READ, cliID: DFSClient_NONMAPREDUCE_76231625_102, offset: 0, srvID: 6b4236c8-2183-49ba-84d7-a273298ba37a, blockid: BP-216406264-127.0.0.1-1453767164845:blk_1073741829_1005, duration: 1489437
>> 2016-01-25 21:14:20,174 INFO org.apache.hadoop.hdfs.server.datanode.DataNode.clienttrace: src: /192.168.0.10:50010, dest: /192.168.0.10:58676, bytes: 30, op: HDFS_READ, cliID: DFSClient_NONMAPREDUCE_694066886_1, offset: 0, srvID: 6b4236c8-2183-49ba-84d7-a273298ba37a, blockid: BP-216406264-127.0.0.1-1453767164845:blk_1073741828_1004, duration: 899980
>> 2016-01-25 21:14:22,692 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: Receiving BP-216406264-127.0.0.1-1453767164845:blk_1073741830_1006 src: /192.168.0.10:58679 dest: /192.168.0.10:50010
>> 2016-01-25 21:14:22,754 INFO org.apache.hadoop.hdfs.server.datanode.DataNode.clienttrace: src: /192.168.0.10:58679, dest: /192.168.0.10:50010, bytes: 92684, op: HDFS_WRITE, cliID: DFSClient_NONMAPREDUCE_694066886_1, offset: 0, srvID: 6b4236c8-2183-49ba-84d7-a273298ba37a, blockid: BP-216406264-127.0.0.1-1453767164845:blk_1073741830_1006, duration: 60114851
>> 2016-01-25 21:14:22,754 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: PacketResponder: BP-216406264-127.0.0.1-1453767164845:blk_1073741830_1006, type=LAST_IN_PIPELINE, downstreams=0:[] terminating
>> 2016-01-25 21:14:24,319 INFO org.apache.hadoop.hdfs.server.datanode.BlockPoolSliceScanner: Verification succeeded for BP-216406264-127.0.0.1-1453767164845:blk_1073741830_1006
>> 2016-01-25 21:14:25,808 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: Receiving BP-216406264-127.0.0.1-1453767164845:blk_1073741831_1007 src: /192.168.0.10:58681 dest: /192.168.0.10:50010
>> 2016-01-25 21:14:35,846 INFO org.apache.hadoop.hdfs.server.datanode.DataNode.clienttrace: src: /192.168.0.10:58681, dest: /192.168.0.10:50010, bytes: 21176, op: HDFS_WRITE, cliID: DFSClient_NONMAPREDUCE_694066886_1, offset: 0, srvID: 6b4236c8-2183-49ba-84d7-a273298ba37a, blockid: BP-216406264-127.0.0.1-1453767164845:blk_1073741831_1007, duration: 9975409048
>> 2016-01-25 21:14:35,846 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: PacketResponder: BP-216406264-127.0.0.1-1453767164845:blk_1073741831_1007, type=LAST_IN_PIPELINE, downstreams=0:[] terminating
>> 2016-01-25 21:14:36,066 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: Receiving BP-216406264-127.0.0.1-1453767164845:blk_1073741832_1008 src: /192.168.0.10:58682 dest: /192.168.0.10:50010
>> 2016-01-25 21:14:36,075 INFO org.apache.hadoop.hdfs.server.datanode.DataNode.clienttrace: src: /192.168.0.10:58682, dest: /192.168.0.10:50010, bytes: 332, op: HDFS_WRITE, cliID: DFSClient_NONMAPREDUCE_694066886_1, offset: 0, srvID: 6b4236c8-2183-49ba-84d7-a273298ba37a, blockid: BP-216406264-127.0.0.1-1453767164845:blk_1073741832_1008, duration: 4992595
>> 2016-01-25 21:14:36,075 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: PacketResponder: BP-216406264-127.0.0.1-1453767164845:blk_1073741832_1008, type=LAST_IN_PIPELINE, downstreams=0:[] terminating
>> 2016-01-25 21:14:36,548 INFO org.apache.hadoop.hdfs.server.datanode.DataNode.clienttrace: src: /192.168.0.10:50010, dest: /192.168.0.10:58683, bytes: 21344, op: HDFS_READ, cliID: DFSClient_NONMAPREDUCE_694066886_1, offset: 0, srvID: 6b4236c8-2183-49ba-84d7-a273298ba37a, blockid: BP-216406264-127.0.0.1-1453767164845:blk_1073741831_1007, duration: 497225
>> 2016-01-25 21:14:36,564 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: Receiving BP-216406264-127.0.0.1-1453767164845:blk_1073741833_1009 src: /192.168.0.10:58684 dest: /192.168.0.10:50010
>> 2016-01-25 21:14:36,572 INFO org.apache.hadoop.hdfs.server.datanode.DataNode.clienttrace: src: /192.168.0.10:58684, dest: /192.168.0.10:50010, bytes: 21176, op: HDFS_WRITE, cliID: DFSClient_NONMAPREDUCE_694066886_1, offset: 0, srvID: 6b4236c8-2183-49ba-84d7-a273298ba37a, blockid: BP-216406264-127.0.0.1-1453767164845:blk_1073741833_1009, duration: 2649337
>> 2016-01-25 21:14:36,573 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: PacketResponder: BP-216406264-127.0.0.1-1453767164845:blk_1073741833_1009, type=LAST_IN_PIPELINE, downstreams=0:[] terminating
>> 2016-01-25 21:14:36,622 INFO org.apache.hadoop.hdfs.server.datanode.DataNode.clienttrace: src: /192.168.0.10:50010, dest: /192.168.0.10:58683, bytes: 93412, op: HDFS_READ, cliID: DFSClient_NONMAPREDUCE_694066886_1, offset: 0, srvID: 6b4236c8-2183-49ba-84d7-a273298ba37a, blockid: BP-216406264-127.0.0.1-1453767164845:blk_1073741830_1006, duration: 379439
>> 2016-01-25 21:14:36,638 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: Receiving BP-216406264-127.0.0.1-1453767164845:blk_1073741834_1010 src: /192.168.0.10:58685 dest: /192.168.0.10:50010
>> 2016-01-25 21:14:36,646 INFO org.apache.hadoop.hdfs.server.datanode.DataNode.clienttrace: src: /192.168.0.10:58685, dest: /192.168.0.10:50010, bytes: 92684, op: HDFS_WRITE, cliID: DFSClient_NONMAPREDUCE_694066886_1, offset: 0, srvID: 6b4236c8-2183-49ba-84d7-a273298ba37a, blockid: BP-216406264-127.0.0.1-1453767164845:blk_1073741834_1010, duration: 3135698
>> 2016-01-25 21:14:36,646 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: PacketResponder: BP-216406264-127.0.0.1-1453767164845:blk_1073741834_1010, type=LAST_IN_PIPELINE, downstreams=0:[] terminating
>> 2016-01-25 21:14:39,335 INFO org.apache.hadoop.hdfs.server.datanode.BlockPoolSliceScanner: Verification succeeded for BP-216406264-127.0.0.1-1453767164845:blk_1073741833_1009
>> 2016-01-25 21:14:39,336 INFO org.apache.hadoop.hdfs.server.datanode.BlockPoolSliceScanner: Verification succeeded for BP-216406264-127.0.0.1-1453767164845:blk_1073741831_1007
>> 2016-01-25 21:14:39,337 INFO org.apache.hadoop.hdfs.server.datanode.BlockPoolSliceScanner: Verification succeeded for BP-216406264-127.0.0.1-1453767164845:blk_1073741832_1008
>> 2016-01-25 21:14:39,338 INFO org.apache.hadoop.hdfs.server.datanode.BlockPoolSliceScanner: Verification succeeded for BP-216406264-127.0.0.1-1453767164845:blk_1073741834_1010
>> 2016-01-25 21:14:39,376 INFO org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsDatasetAsyncDiskService: Scheduling blk_1073741826_1002 file /usr/local/hadoop/dfs/name/data/current/BP-216406264-127.0.0.1-1453767164845/current/finalized/blk_1073741826 for deletion
>> 2016-01-25 21:14:39,379 INFO org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsDatasetAsyncDiskService: Scheduling blk_1073741827_1003 file /usr/local/hadoop/dfs/name/data/current/BP-216406264-127.0.0.1-1453767164845/current/finalized/blk_1073741827 for deletion
>> 2016-01-25 21:14:39,379 INFO org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsDatasetAsyncDiskService: Deleted BP-216406264-127.0.0.1-1453767164845 blk_1073741826_1002 file /usr/local/hadoop/dfs/name/data/current/BP-216406264-127.0.0.1-1453767164845/current/finalized/blk_1073741826
>> 2016-01-25 21:14:39,379 INFO org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsDatasetAsyncDiskService: Scheduling blk_1073741828_1004 file /usr/local/hadoop/dfs/name/data/current/BP-216406264-127.0.0.1-1453767164845/current/finalized/blk_1073741828 for deletion
>> 2016-01-25 21:14:39,380 INFO org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsDatasetAsyncDiskService: Deleted BP-216406264-127.0.0.1-1453767164845 blk_1073741827_1003 file /usr/local/hadoop/dfs/name/data/current/BP-216406264-127.0.0.1-1453767164845/current/finalized/blk_1073741827
>> 2016-01-25 21:14:39,380 INFO org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsDatasetAsyncDiskService: Scheduling blk_1073741829_1005 file /usr/local/hadoop/dfs/name/data/current/BP-216406264-127.0.0.1-1453767164845/current/finalized/blk_1073741829 for deletion
>> 2016-01-25 21:14:39,380 INFO org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsDatasetAsyncDiskService: Deleted BP-216406264-127.0.0.1-1453767164845 blk_1073741828_1004 file /usr/local/hadoop/dfs/name/data/current/BP-216406264-127.0.0.1-1453767164845/current/finalized/blk_1073741828
>> 2016-01-25 21:14:39,380 INFO org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsDatasetAsyncDiskService: Scheduling blk_1073741830_1006 file /usr/local/hadoop/dfs/name/data/current/BP-216406264-127.0.0.1-1453767164845/current/finalized/blk_1073741830 for deletion
>> 2016-01-25 21:14:39,381 INFO org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsDatasetAsyncDiskService: Deleted BP-216406264-127.0.0.1-1453767164845 blk_1073741829_1005 file /usr/local/hadoop/dfs/name/data/current/BP-216406264-127.0.0.1-1453767164845/current/finalized/blk_1073741829
>> 2016-01-25 21:14:39,381 INFO org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsDatasetAsyncDiskService: Scheduling blk_1073741831_1007 file /usr/local/hadoop/dfs/name/data/current/BP-216406264-127.0.0.1-1453767164845/current/finalized/blk_1073741831 for deletion
>> 2016-01-25 21:14:39,382 INFO org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsDatasetAsyncDiskService: Deleted BP-216406264-127.0.0.1-1453767164845 blk_1073741830_1006 file /usr/local/hadoop/dfs/name/data/current/BP-216406264-127.0.0.1-1453767164845/current/finalized/blk_1073741830
>> 2016-01-25 21:14:39,382 INFO org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsDatasetAsyncDiskService: Deleted BP-216406264-127.0.0.1-1453767164845 blk_1073741831_1007 file /usr/local/hadoop/dfs/name/data/current/BP-216406264-127.0.0.1-1453767164845/current/finalized/blk_1073741831
>> 2016-01-25 21:14:44,797 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: Receiving BP-216406264-127.0.0.1-1453767164845:blk_1073741835_1011 src: /192.168.0.10:58688 dest: /192.168.0.10:50010
>> 2016-01-25 21:14:44,834 INFO org.apache.hadoop.hdfs.server.datanode.DataNode.clienttrace: src: /192.168.0.10:58688, dest: /192.168.0.10:50010, bytes: 57450, op: HDFS_WRITE, cliID: DFSClient_NONMAPREDUCE_-433405715_88, offset: 0, srvID: 6b4236c8-2183-49ba-84d7-a273298ba37a, blockid: BP-216406264-127.0.0.1-1453767164845:blk_1073741835_1011, duration: 34522284
>> 2016-01-25 21:14:44,834 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: PacketResponder: BP-216406264-127.0.0.1-1453767164845:blk_1073741835_1011, type=LAST_IN_PIPELINE, downstreams=0:[] terminating
>> 2016-01-25 21:14:49,343 INFO org.apache.hadoop.hdfs.server.datanode.BlockPoolSliceScanner: Verification succeeded for BP-216406264-127.0.0.1-1453767164845:blk_1073741835_1011
>> 2016-01-25 21:16:33,785 INFO org.apache.hadoop.hdfs.server.datanode.DataNode.clienttrace: src: /192.168.0.10:50010, dest: /192.168.0.10:58694, bytes: 336, op: HDFS_READ, cliID: DFSClient_NONMAPREDUCE_-1832227986_1, offset: 0, srvID: 6b4236c8-2183-49ba-84d7-a273298ba37a, blockid: BP-216406264-127.0.0.1-1453767164845:blk_1073741832_1008, duration: 284719
>> 2016-01-25 21:16:36,371 INFO org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsDatasetAsyncDiskService: Scheduling blk_1073741832_1008 file /usr/local/hadoop/dfs/name/data/current/BP-216406264-127.0.0.1-1453767164845/current/finalized/blk_1073741832 for deletion
>> 2016-01-25 21:16:36,372 INFO org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsDatasetAsyncDiskService: Deleted BP-216406264-127.0.0.1-1453767164845 blk_1073741832_1008 file /usr/local/hadoop/dfs/name/data/current/BP-216406264-127.0.0.1-1453767164845/current/finalized/blk_1073741832
>>
>>
>>
>>
>> 2016-01-21 18:52 GMT-03:00 Namikaze Minato <ll...@gmail.com>:
>>
>>> It could be a classpath issue (see
>>> http://stackoverflow.com/a/25090151/4486184); I strongly suspect
>>> this is the case.
>>>
>>> You could drill down to the exact root cause by looking at the
>>> datanode logs (see
>>>
>>> http://mail-archives.apache.org/mod_mbox/hadoop-user/201410.mbox/%3CCAEMetGubzq12LXbLRk6N4ejOoKse9dLEWMW8_WE6aRj=+RQtVw@mail.gmail.com%3E
>>> )
>>> But I'm not sure we would see a different error from the one we already have...
>>>
>>> Check if your application has the correct values for the following
>>> variables (a quick way to print them is sketched just after this list):
>>> HADOOP_CONF_DIR
>>> HADOOP_COMMON_HOME
>>> HADOOP_HDFS_HOME
>>> HADOOP_MAPRED_HOME
>>> HADOOP_YARN_HOME
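>>>
>>> For example, something like this from the shell that launches the job
>>> should show whether they are set (and yarn classpath should then include
>>> the matching directories):
>>>
>>> echo "$HADOOP_CONF_DIR"
>>> echo "$HADOOP_COMMON_HOME"
>>> echo "$HADOOP_HDFS_HOME"
>>> echo "$HADOOP_MAPRED_HOME"
>>> echo "$HADOOP_YARN_HOME"
>>> yarn classpath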
>>>
>>> I'm afraid I can't help you much more than this myself, sorry...
>>>
>>> LLoyd
>>>
>>> On 20 January 2016 at 02:08, José Luis Larroque <la...@gmail.com>
>>> wrote:
>>> > Hi guys, thanks for your answers.
>>> >
>>> > Wordcount logs:
>>> >
>>> > 16/01/19 21:58:32 INFO client.RMProxy: Connecting to ResourceManager at
>>> > hdnode01/192.168.0.10:8050
>>> > SLF4J: Class path contains multiple SLF4J bindings.
>>> > SLF4J: Found binding in
>>> >
>>> [jar:file:/usr/local/hadoop/share/hadoop/common/lib/slf4j-log4j12-1.7.5.jar!/org/slf4j/impl/StaticLoggerBinder.class]
>>> > SLF4J: Found binding in
>>> >
>>> [jar:file:/usr/local/hadoop/share/hadoop/yarn/lib/giraph-examples-1.1.0-for-hadoop-2.4.0-jar-with-dependencies.jar!/org/slf4j/impl/StaticLoggerBinder.class]
>>> > SLF4J: Found binding in
>>> >
>>> [jar:file:/usr/local/hadoop/share/hadoop/mapreduce/giraph-examples-1.1.0-for-hadoop-2.4.0-jar-with-dependencies.jar!/org/slf4j/impl/StaticLoggerBinder.class]
>>> > SLF4J: See http://www.slf4j.org/codes.html#multiple_bindings for an
>>> > explanation.
>>> > SLF4J: Actual binding is of type [org.slf4j.impl.Log4jLoggerFactory]
>>> > 16/01/19 21:58:32 WARN util.NativeCodeLoader: Unable to load
>>> native-hadoop
>>> > library for your platform... using builtin-java classes where
>>> applicable
>>> > hduser@jose-ubuntu:/usr/local/hadoop$ cat
>>> >
>>> /home/hduser/Desktop/Tesina/casos_de_prueba/resultados/resultado_cluster_modo_yarn_wordcount
>>> >
>>> >
>>> > Container: container_1453244277886_0001_01_000002 on localhost_35711
>>> > ======================================================================
>>> > LogType: stderr
>>> > LogLength: 45
>>> > Log Contents:
>>> > Error: Could not find or load main class 256
>>> >
>>> > LogType: stdout
>>> > LogLength: 0
>>> > Log Contents:
>>> >
>>> >
>>> >
>>> > Container: container_1453244277886_0001_01_000003 on localhost_35711
>>> > ======================================================================
>>> > LogType: stderr
>>> > LogLength: 45
>>> > Log Contents:
>>> > Error: Could not find or load main class 256
>>> >
>>> > LogType: stdout
>>> > LogLength: 0
>>> > Log Contents:
>>> >
>>> >
>>> >
>>> > Container: container_1453244277886_0001_01_000004 on localhost_35711
>>> > ======================================================================
>>> > LogType: stderr
>>> > LogLength: 45
>>> > Log Contents:
>>> > Error: Could not find or load main class 256
>>> >
>>> > LogType: stdout
>>> > LogLength: 0
>>> > Log Contents:
>>> >
>>> >
>>> >
>>> > Container: container_1453244277886_0001_01_000005 on localhost_35711
>>> > ======================================================================
>>> > LogType: stderr
>>> > LogLength: 45
>>> > Log Contents:
>>> > Error: Could not find or load main class 256
>>> >
>>> > LogType: stdout
>>> > LogLength: 0
>>> > Log Contents:
>>> >
>>> >
>>> >
>>> > Container: container_1453244277886_0001_01_000001 on localhost_35711
>>> > ======================================================================
>>> > LogType: stderr
>>> > LogLength: 929
>>> > Log Contents:
>>> > SLF4J: Class path contains multiple SLF4J bindings.
>>> > SLF4J: Found binding in
>>> >
>>> [jar:file:/usr/local/hadoop/share/hadoop/common/lib/slf4j-log4j12-1.7.5.jar!/org/slf4j/impl/StaticLoggerBinder.class]
>>> > SLF4J: Found binding in
>>> >
>>> [jar:file:/usr/local/hadoop/share/hadoop/yarn/lib/giraph-examples-1.1.0-for-hadoop-2.4.0-jar-with-dependencies.jar!/org/slf4j/impl/StaticLoggerBinder.class]
>>> > SLF4J: Found binding in
>>> >
>>> [jar:file:/usr/local/hadoop/share/hadoop/mapreduce/giraph-examples-1.1.0-for-hadoop-2.4.0-jar-with-dependencies.jar!/org/slf4j/impl/StaticLoggerBinder.class]
>>> > SLF4J: See http://www.slf4j.org/codes.html#multiple_bindings for an
>>> > explanation.
>>> > SLF4J: Actual binding is of type [org.slf4j.impl.Log4jLoggerFactory]
>>> > log4j:WARN No appenders could be found for logger
>>> > (org.apache.hadoop.ipc.Server).
>>> > log4j:WARN Please initialize the log4j system properly.
>>> > log4j:WARN See http://logging.apache.org/log4j/1.2/faq.html#noconfig
>>> for
>>> > more info.
>>> >
>>> > LogType: stdout
>>> > LogLength: 0
>>> > Log Contents:
>>> >
>>> > LogType: syslog
>>> > LogLength: 56780
>>> > Log Contents:
>>> > 2016-01-19 20:04:11,329 INFO [main]
>>> > org.apache.hadoop.mapreduce.v2.app.MRAppMaster: Created MRAppMaster for
>>> > application appattempt_1453244277886_0001_000001
>>> > 2016-01-19 20:04:11,657 WARN [main]
>>> org.apache.hadoop.conf.Configuration:
>>> > job.xml:an attempt to override final parameter:
>>> > mapreduce.job.end-notification.max.retry.interval;  Ignoring.
>>> > 2016-01-19 20:04:11,674 WARN [main]
>>> org.apache.hadoop.conf.Configuration:
>>> > job.xml:an attempt to override final parameter:
>>> > mapreduce.job.end-notification.max.attempts;  Ignoring.
>>> > 2016-01-19 20:04:11,765 WARN [main]
>>> org.apache.hadoop.util.NativeCodeLoader:
>>> > Unable to load native-hadoop library for your platform... using
>>> builtin-java
>>> > classes where applicable
>>> > 2016-01-19 20:04:11,776 INFO [main]
>>> > org.apache.hadoop.mapreduce.v2.app.MRAppMaster: Executing with tokens:
>>> > 2016-01-19 20:04:11,776 INFO [main]
>>> > org.apache.hadoop.mapreduce.v2.app.MRAppMaster: Kind: YARN_AM_RM_TOKEN,
>>> > Service: , Ident:
>>> > (org.apache.hadoop.yarn.security.AMRMTokenIdentifier@73e8f4b9)
>>> > 2016-01-19 20:04:11,801 INFO [main]
>>> > org.apache.hadoop.mapreduce.v2.app.MRAppMaster: The specific max
>>> attempts: 2
>>> > for application: 1. Attempt num: 1 is last retry: false
>>> > 2016-01-19 20:04:11,806 INFO [main]
>>> > org.apache.hadoop.mapreduce.v2.app.MRAppMaster: Using mapred
>>> > newApiCommitter.
>>> > 2016-01-19 20:04:11,934 WARN [main]
>>> org.apache.hadoop.conf.Configuration:
>>> > job.xml:an attempt to override final parameter: dfs.datanode.data.dir;
>>> > Ignoring.
>>> > 2016-01-19 20:04:11,939 WARN [main]
>>> org.apache.hadoop.conf.Configuration:
>>> > job.xml:an attempt to override final parameter:
>>> > mapreduce.job.end-notification.max.retry.interval;  Ignoring.
>>> > 2016-01-19 20:04:11,948 WARN [main]
>>> org.apache.hadoop.conf.Configuration:
>>> > job.xml:an attempt to override final parameter: dfs.namenode.name.dir;
>>> > Ignoring.
>>> > 2016-01-19 20:04:11,953 WARN [main]
>>> org.apache.hadoop.conf.Configuration:
>>> > job.xml:an attempt to override final parameter:
>>> > mapreduce.job.end-notification.max.attempts;  Ignoring.
>>> > 2016-01-19 20:04:12,464 INFO [main]
>>> > org.apache.hadoop.mapreduce.v2.app.MRAppMaster: OutputCommitter set in
>>> > config null
>>> > 2016-01-19 20:04:12,526 INFO [main]
>>> > org.apache.hadoop.mapreduce.v2.app.MRAppMaster: OutputCommitter is
>>> > org.apache.hadoop.mapreduce.lib.output.FileOutputCommitter
>>> > 2016-01-19 20:04:12,548 INFO [main]
>>> > org.apache.hadoop.yarn.event.AsyncDispatcher: Registering class
>>> > org.apache.hadoop.mapreduce.jobhistory.EventType for class
>>> > org.apache.hadoop.mapreduce.jobhistory.JobHistoryEventHandler
>>> > 2016-01-19 20:04:12,549 INFO [main]
>>> > org.apache.hadoop.yarn.event.AsyncDispatcher: Registering class
>>> > org.apache.hadoop.mapreduce.v2.app.job.event.JobEventType for class
>>> > org.apache.hadoop.mapreduce.v2.app.MRAppMaster$JobEventDispatcher
>>> > 2016-01-19 20:04:12,550 INFO [main]
>>> > org.apache.hadoop.yarn.event.AsyncDispatcher: Registering class
>>> > org.apache.hadoop.mapreduce.v2.app.job.event.TaskEventType for class
>>> > org.apache.hadoop.mapreduce.v2.app.MRAppMaster$TaskEventDispatcher
>>> > 2016-01-19 20:04:12,551 INFO [main]
>>> > org.apache.hadoop.yarn.event.AsyncDispatcher: Registering class
>>> > org.apache.hadoop.mapreduce.v2.app.job.event.TaskAttemptEventType for
>>> class
>>> >
>>> org.apache.hadoop.mapreduce.v2.app.MRAppMaster$TaskAttemptEventDispatcher
>>> > 2016-01-19 20:04:12,552 INFO [main]
>>> > org.apache.hadoop.yarn.event.AsyncDispatcher: Registering class
>>> > org.apache.hadoop.mapreduce.v2.app.commit.CommitterEventType for class
>>> > org.apache.hadoop.mapreduce.v2.app.commit.CommitterEventHandler
>>> > 2016-01-19 20:04:12,557 INFO [main]
>>> > org.apache.hadoop.yarn.event.AsyncDispatcher: Registering class
>>> > org.apache.hadoop.mapreduce.v2.app.speculate.Speculator$EventType for
>>> class
>>> >
>>> org.apache.hadoop.mapreduce.v2.app.MRAppMaster$SpeculatorEventDispatcher
>>> > 2016-01-19 20:04:12,558 INFO [main]
>>> > org.apache.hadoop.yarn.event.AsyncDispatcher: Registering class
>>> > org.apache.hadoop.mapreduce.v2.app.rm.ContainerAllocator$EventType for
>>> class
>>> > org.apache.hadoop.mapreduce.v2.app.MRAppMaster$ContainerAllocatorRouter
>>> > 2016-01-19 20:04:12,559 INFO [main]
>>> > org.apache.hadoop.yarn.event.AsyncDispatcher: Registering class
>>> >
>>> org.apache.hadoop.mapreduce.v2.app.launcher.ContainerLauncher$EventType for
>>> > class
>>> org.apache.hadoop.mapreduce.v2.app.MRAppMaster$ContainerLauncherRouter
>>> > 2016-01-19 20:04:12,615 INFO [main]
>>> > org.apache.hadoop.mapreduce.jobhistory.JobHistoryEventHandler: Perms
>>> after
>>> > creating 488, Expected: 504
>>> > 2016-01-19 20:04:12,615 INFO [main]
>>> > org.apache.hadoop.mapreduce.jobhistory.JobHistoryEventHandler:
>>> Explicitly
>>> > setting permissions to : 504, rwxrwx---
>>> > 2016-01-19 20:04:12,731 INFO [main]
>>> > org.apache.hadoop.yarn.event.AsyncDispatcher: Registering class
>>> > org.apache.hadoop.mapreduce.v2.app.job.event.JobFinishEvent$Type for
>>> class
>>> > org.apache.hadoop.mapreduce.v2.app.MRAppMaster$JobFinishEventHandler
>>> > 2016-01-19 20:04:12,956 INFO [main]
>>> > org.apache.hadoop.metrics2.impl.MetricsConfig: loaded properties from
>>> > hadoop-metrics2.properties
>>> > 2016-01-19 20:04:13,018 INFO [main]
>>> > org.apache.hadoop.metrics2.impl.MetricsSystemImpl: Scheduled snapshot
>>> period
>>> > at 10 second(s).
>>> > 2016-01-19 20:04:13,018 INFO [main]
>>> > org.apache.hadoop.metrics2.impl.MetricsSystemImpl: MRAppMaster metrics
>>> > system started
>>> > 2016-01-19 20:04:13,026 INFO [main]
>>> > org.apache.hadoop.mapreduce.v2.app.job.impl.JobImpl: Adding job token
>>> for
>>> > job_1453244277886_0001 to jobTokenSecretManager
>>> > 2016-01-19 20:04:13,139 INFO [main]
>>> > org.apache.hadoop.mapreduce.v2.app.job.impl.JobImpl: Not uberizing
>>> > job_1453244277886_0001 because: not enabled;
>>> > 2016-01-19 20:04:13,154 INFO [main]
>>> > org.apache.hadoop.mapreduce.v2.app.job.impl.JobImpl: Input size for job
>>> > job_1453244277886_0001 = 343691. Number of splits = 1
>>> > 2016-01-19 20:04:13,156 INFO [main]
>>> > org.apache.hadoop.mapreduce.v2.app.job.impl.JobImpl: Number of reduces
>>> for
>>> > job job_1453244277886_0001 = 1
>>> > 2016-01-19 20:04:13,156 INFO [main]
>>> > org.apache.hadoop.mapreduce.v2.app.job.impl.JobImpl:
>>> > job_1453244277886_0001Job Transitioned from NEW to INITED
>>> > 2016-01-19 20:04:13,157 INFO [main]
>>> > org.apache.hadoop.mapreduce.v2.app.MRAppMaster: MRAppMaster launching
>>> > normal, non-uberized, multi-container job job_1453244277886_0001.
>>> > 2016-01-19 20:04:13,186 INFO [main]
>>> org.apache.hadoop.ipc.CallQueueManager:
>>> > Using callQueue class java.util.concurrent.LinkedBlockingQueue
>>> > 2016-01-19 20:04:13,195 INFO [Socket Reader #1 for port 56461]
>>> > org.apache.hadoop.ipc.Server: Starting Socket Reader #1 for port 56461
>>> > 2016-01-19 20:04:13,237 INFO [main]
>>> > org.apache.hadoop.yarn.factories.impl.pb.RpcServerFactoryPBImpl: Adding
>>> > protocol org.apache.hadoop.mapreduce.v2.api.MRClientProtocolPB to the
>>> server
>>> > 2016-01-19 20:04:13,238 INFO [IPC Server Responder]
>>> > org.apache.hadoop.ipc.Server: IPC Server Responder: starting
>>> > 2016-01-19 20:04:13,238 INFO [IPC Server listener on 56461]
>>> > org.apache.hadoop.ipc.Server: IPC Server listener on 56461: starting
>>> > 2016-01-19 20:04:13,239 INFO [main]
>>> > org.apache.hadoop.mapreduce.v2.app.client.MRClientService: Instantiated
>>> > MRClientService at jose-ubuntu/127.0.0.1:56461
>>> > 2016-01-19 20:04:13,300 INFO [main] org.mortbay.log: Logging to
>>> > org.slf4j.impl.Log4jLoggerAdapter(org.mortbay.log) via
>>> > org.mortbay.log.Slf4jLog
>>> > 2016-01-19 20:04:13,304 INFO [main]
>>> org.apache.hadoop.http.HttpRequestLog:
>>> > Http request log for http.requests.mapreduce is not defined
>>> > 2016-01-19 20:04:13,315 INFO [main] org.apache.hadoop.http.HttpServer2:
>>> > Added global filter 'safety'
>>> > (class=org.apache.hadoop.http.HttpServer2$QuotingInputFilter)
>>> > 2016-01-19 20:04:13,320 INFO [main] org.apache.hadoop.http.HttpServer2:
>>> > Added filter AM_PROXY_FILTER
>>> > (class=org.apache.hadoop.yarn.server.webproxy.amfilter.AmIpFilter) to
>>> > context mapreduce
>>> > 2016-01-19 20:04:13,321 INFO [main] org.apache.hadoop.http.HttpServer2:
>>> > Added filter AM_PROXY_FILTER
>>> > (class=org.apache.hadoop.yarn.server.webproxy.amfilter.AmIpFilter) to
>>> > context static
>>> > 2016-01-19 20:04:13,324 INFO [main] org.apache.hadoop.http.HttpServer2:
>>> > adding path spec: /mapreduce/*
>>> > 2016-01-19 20:04:13,324 INFO [main] org.apache.hadoop.http.HttpServer2:
>>> > adding path spec: /ws/*
>>> > 2016-01-19 20:04:13,335 INFO [main] org.apache.hadoop.http.HttpServer2:
>>> > Jetty bound to port 44070
>>> > 2016-01-19 20:04:13,335 INFO [main] org.mortbay.log: jetty-6.1.26
>>> > 2016-01-19 20:04:13,370 INFO [main] org.mortbay.log: Extract
>>> >
>>> jar:file:/usr/local/hadoop/share/hadoop/yarn/hadoop-yarn-common-2.4.0.jar!/webapps/mapreduce
>>> > to /tmp/Jetty_0_0_0_0_44070_mapreduce____rdpvio/webapp
>>> > 2016-01-19 20:04:13,647 INFO [main] org.mortbay.log: Started
>>> > SelectChannelConnector@0.0.0.0:44070
>>> > 2016-01-19 20:04:13,647 INFO [main]
>>> org.apache.hadoop.yarn.webapp.WebApps:
>>> > Web app /mapreduce started at 44070
>>> > 2016-01-19 20:04:13,956 INFO [main]
>>> org.apache.hadoop.yarn.webapp.WebApps:
>>> > Registered webapp guice modules
>>> > 2016-01-19 20:04:13,960 INFO [AsyncDispatcher event handler]
>>> > org.apache.hadoop.mapreduce.v2.app.speculate.DefaultSpeculator:
>>> JOB_CREATE
>>> > job_1453244277886_0001
>>> > 2016-01-19 20:04:13,961 INFO [main]
>>> org.apache.hadoop.ipc.CallQueueManager:
>>> > Using callQueue class java.util.concurrent.LinkedBlockingQueue
>>> > 2016-01-19 20:04:13,961 INFO [Socket Reader #1 for port 45584]
>>> > org.apache.hadoop.ipc.Server: Starting Socket Reader #1 for port 45584
>>> > 2016-01-19 20:04:13,966 INFO [IPC Server Responder]
>>> > org.apache.hadoop.ipc.Server: IPC Server Responder: starting
>>> > 2016-01-19 20:04:13,966 INFO [IPC Server listener on 45584]
>>> > org.apache.hadoop.ipc.Server: IPC Server listener on 45584: starting
>>> > 2016-01-19 20:04:13,987 INFO [main]
>>> > org.apache.hadoop.mapreduce.v2.app.rm.RMContainerRequestor:
>>> > nodeBlacklistingEnabled:true
>>> > 2016-01-19 20:04:13,987 INFO [main]
>>> > org.apache.hadoop.mapreduce.v2.app.rm.RMContainerRequestor:
>>> > maxTaskFailuresPerNode is 3
>>> > 2016-01-19 20:04:13,988 INFO [main]
>>> > org.apache.hadoop.mapreduce.v2.app.rm.RMContainerRequestor:
>>> > blacklistDisablePercent is 33
>>> > 2016-01-19 20:04:14,052 WARN [main]
>>> org.apache.hadoop.conf.Configuration:
>>> > job.xml:an attempt to override final parameter: dfs.datanode.data.dir;
>>> > Ignoring.
>>> > 2016-01-19 20:04:14,054 WARN [main]
>>> org.apache.hadoop.conf.Configuration:
>>> > job.xml:an attempt to override final parameter:
>>> > mapreduce.job.end-notification.max.retry.interval;  Ignoring.
>>> > 2016-01-19 20:04:14,057 WARN [main]
>>> org.apache.hadoop.conf.Configuration:
>>> > job.xml:an attempt to override final parameter: dfs.namenode.name.dir;
>>> > Ignoring.
>>> > 2016-01-19 20:04:14,059 WARN [main]
>>> org.apache.hadoop.conf.Configuration:
>>> > job.xml:an attempt to override final parameter:
>>> > mapreduce.job.end-notification.max.attempts;  Ignoring.
>>> > 2016-01-19 20:04:14,062 INFO [main]
>>> org.apache.hadoop.yarn.client.RMProxy:
>>> > Connecting to ResourceManager at hdnode01/192.168.0.10:8030
>>> > 2016-01-19 20:04:14,158 INFO [main]
>>> > org.apache.hadoop.mapreduce.v2.app.rm.RMContainerAllocator:
>>> > maxContainerCapability: 2000
>>> > 2016-01-19 20:04:14,158 INFO [main]
>>> > org.apache.hadoop.mapreduce.v2.app.rm.RMContainerAllocator: queue:
>>> default
>>> > 2016-01-19 20:04:14,162 INFO [main]
>>> > org.apache.hadoop.mapreduce.v2.app.launcher.ContainerLauncherImpl:
>>> Upper
>>> > limit on the thread pool size is 500
>>> > 2016-01-19 20:04:14,164 INFO [main]
>>> >
>>> org.apache.hadoop.yarn.client.api.impl.ContainerManagementProtocolProxy:
>>> > yarn.client.max-nodemanagers-proxies : 500
>>> > 2016-01-19 20:04:14,172 INFO [AsyncDispatcher event handler]
>>> > org.apache.hadoop.mapreduce.v2.app.job.impl.JobImpl:
>>> > job_1453244277886_0001Job Transitioned from INITED to SETUP
>>> > 2016-01-19 20:04:14,174 INFO [CommitterEvent Processor #0]
>>> > org.apache.hadoop.mapreduce.v2.app.commit.CommitterEventHandler:
>>> Processing
>>> > the event EventType: JOB_SETUP
>>> > 2016-01-19 20:04:14,210 INFO [AsyncDispatcher event handler]
>>> > org.apache.hadoop.mapreduce.v2.app.job.impl.JobImpl:
>>> > job_1453244277886_0001Job Transitioned from SETUP to RUNNING
>>> > 2016-01-19 20:04:14,227 INFO [AsyncDispatcher event handler]
>>> > org.apache.hadoop.yarn.util.RackResolver: Resolved localhost to
>>> > /default-rack
>>> > 2016-01-19 20:04:14,230 INFO [AsyncDispatcher event handler]
>>> > org.apache.hadoop.mapreduce.v2.app.job.impl.TaskImpl:
>>> > task_1453244277886_0001_m_000000 Task Transitioned from NEW to
>>> SCHEDULED
>>> > 2016-01-19 20:04:14,230 INFO [AsyncDispatcher event handler]
>>> > org.apache.hadoop.mapreduce.v2.app.job.impl.TaskImpl:
>>> > task_1453244277886_0001_r_000000 Task Transitioned from NEW to
>>> SCHEDULED
>>> > 2016-01-19 20:04:14,232 INFO [AsyncDispatcher event handler]
>>> > org.apache.hadoop.mapreduce.v2.app.job.impl.TaskAttemptImpl:
>>> > attempt_1453244277886_0001_m_000000_0 TaskAttempt Transitioned from
>>> NEW to
>>> > UNASSIGNED
>>> > 2016-01-19 20:04:14,232 INFO [AsyncDispatcher event handler]
>>> > org.apache.hadoop.mapreduce.v2.app.job.impl.TaskAttemptImpl:
>>> > attempt_1453244277886_0001_r_000000_0 TaskAttempt Transitioned from
>>> NEW to
>>> > UNASSIGNED
>>> > 2016-01-19 20:04:14,233 INFO [Thread-50]
>>> > org.apache.hadoop.mapreduce.v2.app.rm.RMContainerAllocator:
>>> > mapResourceReqt:512
>>> > 2016-01-19 20:04:14,245 INFO [Thread-50]
>>> > org.apache.hadoop.mapreduce.v2.app.rm.RMContainerAllocator:
>>> > reduceResourceReqt:512
>>> > 2016-01-19 20:04:14,324 INFO [eventHandlingThread]
>>> > org.apache.hadoop.mapreduce.jobhistory.JobHistoryEventHandler: Event
>>> Writer
>>> > setup for JobId: job_1453244277886_0001, File:
>>> >
>>> hdfs://hdnode01:54310/tmp/hadoop-yarn/staging/hduser/.staging/job_1453244277886_0001/job_1453244277886_0001_1.jhist
>>> > 2016-01-19 20:04:15,162 INFO [RMCommunicator Allocator]
>>> > org.apache.hadoop.mapreduce.v2.app.rm.RMContainerAllocator: Before
>>> > Scheduling: PendingReds:1 ScheduledMaps:1 ScheduledReds:0
>>> AssignedMaps:0
>>> > AssignedReds:0 CompletedMaps:0 CompletedReds:0 ContAlloc:0 ContRel:0
>>> > HostLocal:0 RackLocal:0
>>> > 2016-01-19 20:04:15,217 INFO [RMCommunicator Allocator]
>>> > org.apache.hadoop.mapreduce.v2.app.rm.RMContainerRequestor:
>>> getResources()
>>> > for application_1453244277886_0001: ask=3 release= 0 newContainers=0
>>> > finishedContainers=0 resourcelimit=<memory:1280, vCores:0> knownNMs=1
>>> > 2016-01-19 20:04:15,217 INFO [RMCommunicator Allocator]
>>> > org.apache.hadoop.mapreduce.v2.app.rm.RMContainerAllocator:
>>> Recalculating
>>> > schedule, headroom=1280
>>> > 2016-01-19 20:04:15,217 INFO [RMCommunicator Allocator]
>>> > org.apache.hadoop.mapreduce.v2.app.rm.RMContainerAllocator: Reduce slow
>>> > start threshold not met. completedMapsForReduceSlowstart 1
>>> > 2016-01-19 20:04:16,240 INFO [RMCommunicator Allocator]
>>> > org.apache.hadoop.mapreduce.v2.app.rm.RMContainerAllocator: Got
>>> allocated
>>> > containers 1
>>> > 2016-01-19 20:04:16,241 INFO [RMCommunicator Allocator]
>>> > org.apache.hadoop.mapreduce.v2.app.rm.RMContainerAllocator: Assigned
>>> > container container_1453244277886_0001_01_000002 to
>>> > attempt_1453244277886_0001_m_000000_0
>>> > 2016-01-19 20:04:16,243 INFO [RMCommunicator Allocator]
>>> > org.apache.hadoop.mapreduce.v2.app.rm.RMContainerAllocator:
>>> Recalculating
>>> > schedule, headroom=768
>>> > 2016-01-19 20:04:16,243 INFO [RMCommunicator Allocator]
>>> > org.apache.hadoop.mapreduce.v2.app.rm.RMContainerAllocator: Reduce slow
>>> > start threshold not met. completedMapsForReduceSlowstart 1
>>> > 2016-01-19 20:04:16,243 INFO [RMCommunicator Allocator]
>>> > org.apache.hadoop.mapreduce.v2.app.rm.RMContainerAllocator: After
>>> > Scheduling: PendingReds:1 ScheduledMaps:0 ScheduledReds:0
>>> AssignedMaps:1
>>> > AssignedReds:0 CompletedMaps:0 CompletedReds:0 ContAlloc:1 ContRel:0
>>> > HostLocal:1 RackLocal:0
>>> > 2016-01-19 20:04:16,291 INFO [AsyncDispatcher event handler]
>>> > org.apache.hadoop.yarn.util.RackResolver: Resolved localhost to
>>> > /default-rack
>>> > 2016-01-19 20:04:16,316 INFO [AsyncDispatcher event handler]
>>> > org.apache.hadoop.mapreduce.v2.app.job.impl.TaskAttemptImpl: The
>>> job-jar
>>> > file on the remote FS is
>>> >
>>> hdfs://hdnode01:54310/tmp/hadoop-yarn/staging/hduser/.staging/job_1453244277886_0001/job.jar
>>> > 2016-01-19 20:04:16,322 INFO [AsyncDispatcher event handler]
>>> > org.apache.hadoop.mapreduce.v2.app.job.impl.TaskAttemptImpl: The
>>> job-conf
>>> > file on the remote FS is
>>> > /tmp/hadoop-yarn/staging/hduser/.staging/job_1453244277886_0001/job.xml
>>> > 2016-01-19 20:04:16,325 INFO [AsyncDispatcher event handler]
>>> > org.apache.hadoop.mapreduce.v2.app.job.impl.TaskAttemptImpl: Adding #0
>>> > tokens and #1 secret keys for NM use for launching container
>>> > 2016-01-19 20:04:16,325 INFO [AsyncDispatcher event handler]
>>> > org.apache.hadoop.mapreduce.v2.app.job.impl.TaskAttemptImpl: Size of
>>> > containertokens_dob is 1
>>> > 2016-01-19 20:04:16,325 INFO [AsyncDispatcher event handler]
>>> > org.apache.hadoop.mapreduce.v2.app.job.impl.TaskAttemptImpl: Putting
>>> shuffle
>>> > token in serviceData
>>> > 2016-01-19 20:04:16,350 INFO [AsyncDispatcher event handler]
>>> > org.apache.hadoop.mapreduce.v2.app.job.impl.TaskAttemptImpl:
>>> > attempt_1453244277886_0001_m_000000_0 TaskAttempt Transitioned from
>>> > UNASSIGNED to ASSIGNED
>>> > 2016-01-19 20:04:16,354 INFO [ContainerLauncher #0]
>>> > org.apache.hadoop.mapreduce.v2.app.launcher.ContainerLauncherImpl:
>>> > Processing the event EventType: CONTAINER_REMOTE_LAUNCH for container
>>> > container_1453244277886_0001_01_000002 taskAttempt
>>> > attempt_1453244277886_0001_m_000000_0
>>> > 2016-01-19 20:04:16,356 INFO [ContainerLauncher #0]
>>> > org.apache.hadoop.mapreduce.v2.app.launcher.ContainerLauncherImpl:
>>> Launching
>>> > attempt_1453244277886_0001_m_000000_0
>>> > 2016-01-19 20:04:16,357 INFO [ContainerLauncher #0]
>>> >
>>> org.apache.hadoop.yarn.client.api.impl.ContainerManagementProtocolProxy:
>>> > Opening proxy : localhost:35711
>>> > 2016-01-19 20:04:16,411 INFO [ContainerLauncher #0]
>>> > org.apache.hadoop.mapreduce.v2.app.launcher.ContainerLauncherImpl:
>>> Shuffle
>>> > port returned by ContainerManager for
>>> attempt_1453244277886_0001_m_000000_0
>>> > : 13562
>>> > 2016-01-19 20:04:16,413 INFO [AsyncDispatcher event handler]
>>> > org.apache.hadoop.mapreduce.v2.app.job.impl.TaskAttemptImpl:
>>> TaskAttempt:
>>> > [attempt_1453244277886_0001_m_000000_0] using containerId:
>>> > [container_1453244277886_0001_01_000002 on NM: [localhost:35711]
>>> > 2016-01-19 20:04:16,418 INFO [AsyncDispatcher event handler]
>>> > org.apache.hadoop.mapreduce.v2.app.job.impl.TaskAttemptImpl:
>>> > attempt_1453244277886_0001_m_000000_0 TaskAttempt Transitioned from
>>> ASSIGNED
>>> > to RUNNING
>>> > 2016-01-19 20:04:16,419 INFO [AsyncDispatcher event handler]
>>> > org.apache.hadoop.mapreduce.v2.app.speculate.DefaultSpeculator:
>>> > ATTEMPT_START task_1453244277886_0001_m_000000
>>> > 2016-01-19 20:04:16,419 INFO [AsyncDispatcher event handler]
>>> > org.apache.hadoop.mapreduce.v2.app.job.impl.TaskImpl:
>>> > task_1453244277886_0001_m_000000 Task Transitioned from SCHEDULED to
>>> RUNNING
>>> > 2016-01-19 20:04:17,251 INFO [RMCommunicator Allocator]
>>> > org.apache.hadoop.mapreduce.v2.app.rm.RMContainerRequestor:
>>> getResources()
>>> > for application_1453244277886_0001: ask=3 release= 0 newContainers=0
>>> > finishedContainers=0 resourcelimit=<memory:768, vCores:-1> knownNMs=1
>>> > 2016-01-19 20:04:18,269 INFO [RMCommunicator Allocator]
>>> > org.apache.hadoop.mapreduce.v2.app.rm.RMContainerAllocator: Received
>>> > completed container container_1453244277886_0001_01_000002
>>> > 2016-01-19 20:04:18,269 INFO [RMCommunicator Allocator]
>>> > org.apache.hadoop.mapreduce.v2.app.rm.RMContainerAllocator:
>>> Recalculating
>>> > schedule, headroom=768
>>> > 2016-01-19 20:04:18,269 INFO [RMCommunicator Allocator]
>>> > org.apache.hadoop.mapreduce.v2.app.rm.RMContainerAllocator: Reduce slow
>>> > start threshold not met. completedMapsForReduceSlowstart 1
>>> > 2016-01-19 20:04:18,270 INFO [RMCommunicator Allocator]
>>> > org.apache.hadoop.mapreduce.v2.app.rm.RMContainerAllocator: After
>>> > Scheduling: PendingReds:1 ScheduledMaps:0 ScheduledReds:0
>>> AssignedMaps:0
>>> > AssignedReds:0 CompletedMaps:0 CompletedReds:0 ContAlloc:1 ContRel:0
>>> > HostLocal:1 RackLocal:0
>>> > 2016-01-19 20:04:18,280 INFO [AsyncDispatcher event handler]
>>> > org.apache.hadoop.mapreduce.v2.app.job.impl.TaskAttemptImpl:
>>> > attempt_1453244277886_0001_m_000000_0 TaskAttempt Transitioned from
>>> RUNNING
>>> > to FAIL_CONTAINER_CLEANUP
>>> > 2016-01-19 20:04:18,280 INFO [AsyncDispatcher event handler]
>>> > org.apache.hadoop.mapreduce.v2.app.job.impl.TaskAttemptImpl:
>>> Diagnostics
>>> > report from attempt_1453244277886_0001_m_000000_0: Exception from
>>> > container-launch: org.apache.hadoop.util.Shell$ExitCodeException:
>>> > org.apache.hadoop.util.Shell$ExitCodeException:
>>> >     at org.apache.hadoop.util.Shell.runCommand(Shell.java:505)
>>> >     at org.apache.hadoop.util.Shell.run(Shell.java:418)
>>> >     at
>>> >
>>> org.apache.hadoop.util.Shell$ShellCommandExecutor.execute(Shell.java:650)
>>> >     at
>>> >
>>> org.apache.hadoop.yarn.server.nodemanager.DefaultContainerExecutor.launchContainer(DefaultContainerExecutor.java:195)
>>> >     at
>>> >
>>> org.apache.hadoop.yarn.server.nodemanager.containermanager.launcher.ContainerLaunch.call(ContainerLaunch.java:300)
>>> >     at
>>> >
>>> org.apache.hadoop.yarn.server.nodemanager.containermanager.launcher.ContainerLaunch.call(ContainerLaunch.java:81)
>>> >     at java.util.concurrent.FutureTask.run(FutureTask.java:262)
>>> >     at
>>> >
>>> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
>>> >     at
>>> >
>>> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
>>> >     at java.lang.Thread.run(Thread.java:745)
>>> >
>>> >
>>> > Container exited with a non-zero exit code 1
>>> >
>>> > 2016-01-19 20:04:18,281 INFO [ContainerLauncher #1]
>>> > org.apache.hadoop.mapreduce.v2.app.launcher.ContainerLauncherImpl:
>>> > Processing the event EventType: CONTAINER_REMOTE_CLEANUP for container
>>> > container_1453244277886_0001_01_000002 taskAttempt
>>> > attempt_1453244277886_0001_m_000000_0
>>> > 2016-01-19 20:04:18,281 INFO [ContainerLauncher #1]
>>> > org.apache.hadoop.mapreduce.v2.app.launcher.ContainerLauncherImpl:
>>> KILLING
>>> > attempt_1453244277886_0001_m_000000_0
>>> > 2016-01-19 20:04:18,299 INFO [AsyncDispatcher event handler]
>>> > org.apache.hadoop.mapreduce.v2.app.job.impl.TaskAttemptImpl:
>>> > attempt_1453244277886_0001_m_000000_0 TaskAttempt Transitioned from
>>> > FAIL_CONTAINER_CLEANUP to FAIL_TASK_CLEANUP
>>> > 2016-01-19 20:04:18,300 INFO [CommitterEvent Processor #1]
>>> > org.apache.hadoop.mapreduce.v2.app.commit.CommitterEventHandler:
>>> Processing
>>> > the event EventType: TASK_ABORT
>>> > 2016-01-19 20:04:18,313 WARN [CommitterEvent Processor #1]
>>> > org.apache.hadoop.mapreduce.lib.output.FileOutputCommitter: Could not
>>> delete
>>> >
>>> hdfs://hdnode01:54310/user/hduser/output/wordcount/_temporary/1/_temporary/attempt_1453244277886_0001_m_000000_0
>>> > 2016-01-19 20:04:18,317 INFO [AsyncDispatcher event handler]
>>> > org.apache.hadoop.mapreduce.v2.app.job.impl.TaskAttemptImpl:
>>> > attempt_1453244277886_0001_m_000000_0 TaskAttempt Transitioned from
>>> > FAIL_TASK_CLEANUP to FAILED
>>> > 2016-01-19 20:04:18,326 INFO [AsyncDispatcher event handler]
>>> > org.apache.hadoop.yarn.util.RackResolver: Resolved localhost to
>>> > /default-rack
>>> > 2016-01-19 20:04:18,327 INFO [Thread-50]
>>> > org.apache.hadoop.mapreduce.v2.app.rm.RMContainerRequestor: 1 failures
>>> on
>>> > node localhost
>>> > 2016-01-19 20:04:18,329 INFO [AsyncDispatcher event handler]
>>> > org.apache.hadoop.mapreduce.v2.app.job.impl.TaskAttemptImpl:
>>> > attempt_1453244277886_0001_m_000000_1 TaskAttempt Transitioned from
>>> NEW to
>>> > UNASSIGNED
>>> > 2016-01-19 20:04:18,329 INFO [Thread-50]
>>> > org.apache.hadoop.mapreduce.v2.app.rm.RMContainerAllocator: Added
>>> > attempt_1453244277886_0001_m_000000_1 to list of failed maps
>>> > 2016-01-19 20:04:19,270 INFO [RMCommunicator Allocator]
>>> > org.apache.hadoop.mapreduce.v2.app.rm.RMContainerAllocator: Before
>>> > Scheduling: PendingReds:1 ScheduledMaps:1 ScheduledReds:0
>>> AssignedMaps:0
>>> > AssignedReds:0 CompletedMaps:0 CompletedReds:0 ContAlloc:1 ContRel:0
>>> > HostLocal:1 RackLocal:0
>>> > 2016-01-19 20:04:19,277 INFO [RMCommunicator Allocator]
>>> > org.apache.hadoop.mapreduce.v2.app.rm.RMContainerRequestor:
>>> getResources()
>>> > for application_1453244277886_0001: ask=1 release= 0 newContainers=0
>>> > finishedContainers=0 resourcelimit=<memory:768, vCores:-1> knownNMs=1
>>> > 2016-01-19 20:04:19,277 INFO [RMCommunicator Allocator]
>>> > org.apache.hadoop.mapreduce.v2.app.rm.RMContainerAllocator:
>>> Recalculating
>>> > schedule, headroom=768
>>> > 2016-01-19 20:04:19,278 INFO [RMCommunicator Allocator]
>>> > org.apache.hadoop.mapreduce.v2.app.rm.RMContainerAllocator: Reduce slow
>>> > start threshold not met. completedMapsForReduceSlowstart 1
>>> > 2016-01-19 20:04:20,285 INFO [RMCommunicator Allocator]
>>> > org.apache.hadoop.mapreduce.v2.app.rm.RMContainerAllocator: Got
>>> allocated
>>> > containers 1
>>> > 2016-01-19 20:04:20,285 INFO [RMCommunicator Allocator]
>>> > org.apache.hadoop.mapreduce.v2.app.rm.RMContainerAllocator: Assigning
>>> > container Container: [ContainerId:
>>> container_1453244277886_0001_01_000003,
>>> > NodeId: localhost:35711, NodeHttpAddress: localhost:8042, Resource:
>>> > <memory:512, vCores:1>, Priority: 5, Token: Token { kind:
>>> ContainerToken,
>>> > service: 127.0.0.1:35711 }, ] to fast fail map
>>> > 2016-01-19 20:04:20,286 INFO [RMCommunicator Allocator]
>>> > org.apache.hadoop.mapreduce.v2.app.rm.RMContainerAllocator: Assigned
>>> from
>>> > earlierFailedMaps
>>> > 2016-01-19 20:04:20,286 INFO [RMCommunicator Allocator]
>>> > org.apache.hadoop.mapreduce.v2.app.rm.RMContainerAllocator: Assigned
>>> > container container_1453244277886_0001_01_000003 to
>>> > attempt_1453244277886_0001_m_000000_1
>>> > 2016-01-19 20:04:20,287 INFO [RMCommunicator Allocator]
>>> > org.apache.hadoop.mapreduce.v2.app.rm.RMContainerAllocator:
>>> Recalculating
>>> > schedule, headroom=768
>>> > 2016-01-19 20:04:20,287 INFO [RMCommunicator Allocator]
>>> > org.apache.hadoop.mapreduce.v2.app.rm.RMContainerAllocator: Reduce slow
>>> > start threshold not met. completedMapsForReduceSlowstart 1
>>> > 2016-01-19 20:04:20,287 INFO [RMCommunicator Allocator]
>>> > org.apache.hadoop.mapreduce.v2.app.rm.RMContainerAllocator: After
>>> > Scheduling: PendingReds:1 ScheduledMaps:0 ScheduledReds:0
>>> AssignedMaps:1
>>> > AssignedReds:0 CompletedMaps:0 CompletedReds:0 ContAlloc:2 ContRel:0
>>> > HostLocal:1 RackLocal:0
>>> > 2016-01-19 20:04:20,287 INFO [AsyncDispatcher event handler]
>>> > org.apache.hadoop.yarn.util.RackResolver: Resolved localhost to
>>> > /default-rack
>>> > 2016-01-19 20:04:20,289 INFO [AsyncDispatcher event handler]
>>> > org.apache.hadoop.mapreduce.v2.app.job.impl.TaskAttemptImpl:
>>> > attempt_1453244277886_0001_m_000000_1 TaskAttempt Transitioned from
>>> > UNASSIGNED to ASSIGNED
>>> > 2016-01-19 20:04:20,292 INFO [ContainerLauncher #2]
>>> > org.apache.hadoop.mapreduce.v2.app.launcher.ContainerLauncherImpl:
>>> > Processing the event EventType: CONTAINER_REMOTE_LAUNCH for container
>>> > container_1453244277886_0001_01_000003 taskAttempt
>>> > attempt_1453244277886_0001_m_000000_1
>>> > 2016-01-19 20:04:20,292 INFO [ContainerLauncher #2]
>>> > org.apache.hadoop.mapreduce.v2.app.launcher.ContainerLauncherImpl:
>>> Launching
>>> > attempt_1453244277886_0001_m_000000_1
>>> > 2016-01-19 20:04:20,301 INFO [ContainerLauncher #2]
>>> > org.apache.hadoop.mapreduce.v2.app.launcher.ContainerLauncherImpl:
>>> Shuffle
>>> > port returned by ContainerManager for
>>> attempt_1453244277886_0001_m_000000_1
>>> > : 13562
>>> > 2016-01-19 20:04:20,302 INFO [AsyncDispatcher event handler]
>>> > org.apache.hadoop.mapreduce.v2.app.job.impl.TaskAttemptImpl:
>>> TaskAttempt:
>>> > [attempt_1453244277886_0001_m_000000_1] using containerId:
>>> > [container_1453244277886_0001_01_000003 on NM: [localhost:35711]
>>> > 2016-01-19 20:04:20,303 INFO [AsyncDispatcher event handler]
>>> > org.apache.hadoop.mapreduce.v2.app.job.impl.TaskAttemptImpl:
>>> > attempt_1453244277886_0001_m_000000_1 TaskAttempt Transitioned from
>>> ASSIGNED
>>> > to RUNNING
>>> > 2016-01-19 20:04:20,304 INFO [AsyncDispatcher event handler]
>>> > org.apache.hadoop.mapreduce.v2.app.speculate.DefaultSpeculator:
>>> > ATTEMPT_START task_1453244277886_0001_m_000000
>>> > 2016-01-19 20:04:21,295 INFO [RMCommunicator Allocator]
>>> > org.apache.hadoop.mapreduce.v2.app.rm.RMContainerRequestor:
>>> getResources()
>>> > for application_1453244277886_0001: ask=1 release= 0 newContainers=0
>>> > finishedContainers=1 resourcelimit=<memory:768, vCores:-1> knownNMs=1
>>> > 2016-01-19 20:04:21,296 INFO [RMCommunicator Allocator]
>>> > org.apache.hadoop.mapreduce.v2.app.rm.RMContainerAllocator: Received
>>> > completed container container_1453244277886_0001_01_000003
>>> > 2016-01-19 20:04:21,296 INFO [RMCommunicator Allocator]
>>> > org.apache.hadoop.mapreduce.v2.app.rm.RMContainerAllocator:
>>> Recalculating
>>> > schedule, headroom=768
>>> > 2016-01-19 20:04:21,296 INFO [RMCommunicator Allocator]
>>> > org.apache.hadoop.mapreduce.v2.app.rm.RMContainerAllocator: Reduce slow
>>> > start threshold not met. completedMapsForReduceSlowstart 1
>>> > 2016-01-19 20:04:21,297 INFO [RMCommunicator Allocator]
>>> > org.apache.hadoop.mapreduce.v2.app.rm.RMContainerAllocator: After
>>> > Scheduling: PendingReds:1 ScheduledMaps:0 ScheduledReds:0
>>> AssignedMaps:0
>>> > AssignedReds:0 CompletedMaps:0 CompletedReds:0 ContAlloc:2 ContRel:0
>>> > HostLocal:1 RackLocal:0
>>> > 2016-01-19 20:04:21,297 INFO [AsyncDispatcher event handler]
>>> > org.apache.hadoop.mapreduce.v2.app.job.impl.TaskAttemptImpl:
>>> > attempt_1453244277886_0001_m_000000_1 TaskAttempt Transitioned from
>>> RUNNING
>>> > to FAIL_CONTAINER_CLEANUP
>>> > 2016-01-19 20:04:21,298 INFO [AsyncDispatcher event handler]
>>> > org.apache.hadoop.mapreduce.v2.app.job.impl.TaskAttemptImpl:
>>> Diagnostics
>>> > report from attempt_1453244277886_0001_m_000000_1: Exception from
>>> > container-launch: org.apache.hadoop.util.Shell$ExitCodeException:
>>> > org.apache.hadoop.util.Shell$ExitCodeException:
>>> >     at org.apache.hadoop.util.Shell.runCommand(Shell.java:505)
>>> >     at org.apache.hadoop.util.Shell.run(Shell.java:418)
>>> >     at
>>> >
>>> org.apache.hadoop.util.Shell$ShellCommandExecutor.execute(Shell.java:650)
>>> >     at
>>> >
>>> org.apache.hadoop.yarn.server.nodemanager.DefaultContainerExecutor.launchContainer(DefaultContainerExecutor.java:195)
>>> >     at
>>> >
>>> org.apache.hadoop.yarn.server.nodemanager.containermanager.launcher.ContainerLaunch.call(ContainerLaunch.java:300)
>>> >     at
>>> >
>>> org.apache.hadoop.yarn.server.nodemanager.containermanager.launcher.ContainerLaunch.call(ContainerLaunch.java:81)
>>> >     at java.util.concurrent.FutureTask.run(FutureTask.java:262)
>>> >     at
>>> >
>>> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
>>> >     at
>>> >
>>> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
>>> >     at java.lang.Thread.run(Thread.java:745)
>>> >
>>> >
>>> > Container exited with a non-zero exit code 1
>>> >
>>> > 2016-01-19 20:04:21,300 INFO [ContainerLauncher #3]
>>> > org.apache.hadoop.mapreduce.v2.app.launcher.ContainerLauncherImpl:
>>> > Processing the event EventType: CONTAINER_REMOTE_CLEANUP for container
>>> > container_1453244277886_0001_01_000003 taskAttempt
>>> > attempt_1453244277886_0001_m_000000_1
>>> > 2016-01-19 20:04:21,301 INFO [ContainerLauncher #3]
>>> > org.apache.hadoop.mapreduce.v2.app.launcher.ContainerLauncherImpl:
>>> KILLING
>>> > attempt_1453244277886_0001_m_000000_1
>>> > 2016-01-19 20:04:21,307 INFO [AsyncDispatcher event handler]
>>> > org.apache.hadoop.mapreduce.v2.app.job.impl.TaskAttemptImpl:
>>> > attempt_1453244277886_0001_m_000000_1 TaskAttempt Transitioned from
>>> > FAIL_CONTAINER_CLEANUP to FAIL_TASK_CLEANUP
>>> > 2016-01-19 20:04:21,308 INFO [CommitterEvent Processor #2]
>>> > org.apache.hadoop.mapreduce.v2.app.commit.CommitterEventHandler:
>>> Processing
>>> > the event EventType: TASK_ABORT
>>> > 2016-01-19 20:04:21,312 WARN [CommitterEvent Processor #2]
>>> > org.apache.hadoop.mapreduce.lib.output.FileOutputCommitter: Could not
>>> delete
>>> >
>>> hdfs://hdnode01:54310/user/hduser/output/wordcount/_temporary/1/_temporary/attempt_1453244277886_0001_m_000000_1
>>> > 2016-01-19 20:04:21,312 INFO [AsyncDispatcher event handler]
>>> > org.apache.hadoop.mapreduce.v2.app.job.impl.TaskAttemptImpl:
>>> > attempt_1453244277886_0001_m_000000_1 TaskAttempt Transitioned from
>>> > FAIL_TASK_CLEANUP to FAILED
>>> > 2016-01-19 20:04:21,313 INFO [AsyncDispatcher event handler]
>>> > org.apache.hadoop.yarn.util.RackResolver: Resolved localhost to
>>> > /default-rack
>>> > 2016-01-19 20:04:21,313 INFO [Thread-50]
>>> > org.apache.hadoop.mapreduce.v2.app.rm.RMContainerRequestor: 2 failures
>>> on
>>> > node localhost
>>> > 2016-01-19 20:04:21,313 INFO [AsyncDispatcher event handler]
>>> > org.apache.hadoop.mapreduce.v2.app.job.impl.TaskAttemptImpl:
>>> > attempt_1453244277886_0001_m_000000_2 TaskAttempt Transitioned from
>>> NEW to
>>> > UNASSIGNED
>>> > 2016-01-19 20:04:21,314 INFO [Thread-50]
>>> > org.apache.hadoop.mapreduce.v2.app.rm.RMContainerAllocator: Added
>>> > attempt_1453244277886_0001_m_000000_2 to list of failed maps
>>> > 2016-01-19 20:04:22,297 INFO [RMCommunicator Allocator]
>>> > org.apache.hadoop.mapreduce.v2.app.rm.RMContainerAllocator: Before
>>> > Scheduling: PendingReds:1 ScheduledMaps:1 ScheduledReds:0
>>> AssignedMaps:0
>>> > AssignedReds:0 CompletedMaps:0 CompletedReds:0 ContAlloc:2 ContRel:0
>>> > HostLocal:1 RackLocal:0
>>> > 2016-01-19 20:04:22,304 INFO [RMCommunicator Allocator]
>>> > org.apache.hadoop.mapreduce.v2.app.rm.RMContainerRequestor:
>>> getResources()
>>> > for application_1453244277886_0001: ask=1 release= 0 newContainers=0
>>> > finishedContainers=0 resourcelimit=<memory:768, vCores:-1> knownNMs=1
>>> > 2016-01-19 20:04:22,305 INFO [RMCommunicator Allocator]
>>> > org.apache.hadoop.mapreduce.v2.app.rm.RMContainerAllocator:
>>> Recalculating
>>> > schedule, headroom=768
>>> > 2016-01-19 20:04:22,305 INFO [RMCommunicator Allocator]
>>> > org.apache.hadoop.mapreduce.v2.app.rm.RMContainerAllocator: Reduce slow
>>> > start threshold not met. completedMapsForReduceSlowstart 1
>>> > 2016-01-19 20:04:23,316 INFO [RMCommunicator Allocator]
>>> > org.apache.hadoop.mapreduce.v2.app.rm.RMContainerAllocator: Got
>>> allocated
>>> > containers 1
>>> > 2016-01-19 20:04:23,316 INFO [RMCommunicator Allocator]
>>> > org.apache.hadoop.mapreduce.v2.app.rm.RMContainerAllocator: Assigning
>>> > container Container: [ContainerId:
>>> container_1453244277886_0001_01_000004,
>>> > NodeId: localhost:35711, NodeHttpAddress: localhost:8042, Resource:
>>> > <memory:512, vCores:1>, Priority: 5, Token: Token { kind:
>>> ContainerToken,
>>> > service: 127.0.0.1:35711 }, ] to fast fail map
>>> > 2016-01-19 20:04:23,317 INFO [RMCommunicator Allocator]
>>> > org.apache.hadoop.mapreduce.v2.app.rm.RMContainerAllocator: Assigned
>>> from
>>> > earlierFailedMaps
>>> > 2016-01-19 20:04:23,317 INFO [RMCommunicator Allocator]
>>> > org.apache.hadoop.mapreduce.v2.app.rm.RMContainerAllocator: Assigned
>>> > container container_1453244277886_0001_01_000004 to
>>> > attempt_1453244277886_0001_m_000000_2
>>> > 2016-01-19 20:04:23,318 INFO [RMCommunicator Allocator]
>>> > org.apache.hadoop.mapreduce.v2.app.rm.RMContainerAllocator:
>>> Recalculating
>>> > schedule, headroom=768
>>> > 2016-01-19 20:04:23,318 INFO [RMCommunicator Allocator]
>>> > org.apache.hadoop.mapreduce.v2.app.rm.RMContainerAllocator: Reduce slow
>>> > start threshold not met. completedMapsForReduceSlowstart 1
>>> > 2016-01-19 20:04:23,318 INFO [RMCommunicator Allocator]
>>> > org.apache.hadoop.mapreduce.v2.app.rm.RMContainerAllocator: After
>>> > Scheduling: PendingReds:1 ScheduledMaps:0 ScheduledReds:0
>>> AssignedMaps:1
>>> > AssignedReds:0 CompletedMaps:0 CompletedReds:0 ContAlloc:3 ContRel:0
>>> > HostLocal:1 RackLocal:0
>>> > 2016-01-19 20:04:23,318 INFO [AsyncDispatcher event handler]
>>> > org.apache.hadoop.yarn.util.RackResolver: Resolved localhost to
>>> > /default-rack
>>> > 2016-01-19 20:04:23,320 INFO [AsyncDispatcher event handler]
>>> > org.apache.hadoop.mapreduce.v2.app.job.impl.TaskAttemptImpl:
>>> > attempt_1453244277886_0001_m_000000_2 TaskAttempt Transitioned from
>>> > UNASSIGNED to ASSIGNED
>>> > 2016-01-19 20:04:23,323 INFO [ContainerLauncher #4]
>>> > org.apache.hadoop.mapreduce.v2.app.launcher.ContainerLauncherImpl:
>>> > Processing the event EventType: CONTAINER_REMOTE_LAUNCH for container
>>> > container_1453244277886_0001_01_000004 taskAttempt
>>> > attempt_1453244277886_0001_m_000000_2
>>> > 2016-01-19 20:04:23,323 INFO [ContainerLauncher #4]
>>> > org.apache.hadoop.mapreduce.v2.app.launcher.ContainerLauncherImpl:
>>> Launching
>>> > attempt_1453244277886_0001_m_000000_2
>>> > 2016-01-19 20:04:23,334 INFO [ContainerLauncher #4]
>>> > org.apache.hadoop.mapreduce.v2.app.launcher.ContainerLauncherImpl:
>>> Shuffle
>>> > port returned by ContainerManager for
>>> attempt_1453244277886_0001_m_000000_2
>>> > : 13562
>>> > 2016-01-19 20:04:23,335 INFO [AsyncDispatcher event handler]
>>> > org.apache.hadoop.mapreduce.v2.app.job.impl.TaskAttemptImpl:
>>> TaskAttempt:
>>> > [attempt_1453244277886_0001_m_000000_2] using containerId:
>>> > [container_1453244277886_0001_01_000004 on NM: [localhost:35711]
>>> > 2016-01-19 20:04:23,335 INFO [AsyncDispatcher event handler]
>>> > org.apache.hadoop.mapreduce.v2.app.job.impl.TaskAttemptImpl:
>>> > attempt_1453244277886_0001_m_000000_2 TaskAttempt Transitioned from
>>> ASSIGNED
>>> > to RUNNING
>>> > 2016-01-19 20:04:23,335 INFO [AsyncDispatcher event handler]
>>> > org.apache.hadoop.mapreduce.v2.app.speculate.DefaultSpeculator:
>>> > ATTEMPT_START task_1453244277886_0001_m_000000
>>> > 2016-01-19 20:04:24,326 INFO [RMCommunicator Allocator]
>>> > org.apache.hadoop.mapreduce.v2.app.rm.RMContainerRequestor:
>>> getResources()
>>> > for application_1453244277886_0001: ask=1 release= 0 newContainers=0
>>> > finishedContainers=1 resourcelimit=<memory:768, vCores:-1> knownNMs=1
>>> > 2016-01-19 20:04:24,327 INFO [RMCommunicator Allocator]
>>> > org.apache.hadoop.mapreduce.v2.app.rm.RMContainerAllocator: Received
>>> > completed container container_1453244277886_0001_01_000004
>>> > 2016-01-19 20:04:24,327 INFO [RMCommunicator Allocator]
>>> > org.apache.hadoop.mapreduce.v2.app.rm.RMContainerAllocator:
>>> Recalculating
>>> > schedule, headroom=768
>>> > 2016-01-19 20:04:24,327 INFO [RMCommunicator Allocator]
>>> > org.apache.hadoop.mapreduce.v2.app.rm.RMContainerAllocator: Reduce slow
>>> > start threshold not met. completedMapsForReduceSlowstart 1
>>> > 2016-01-19 20:04:24,328 INFO [RMCommunicator Allocator]
>>> > org.apache.hadoop.mapreduce.v2.app.rm.RMContainerAllocator: After
>>> > Scheduling: PendingReds:1 ScheduledMaps:0 ScheduledReds:0
>>> AssignedMaps:0
>>> > AssignedReds:0 CompletedMaps:0 CompletedReds:0 ContAlloc:3 ContRel:0
>>> > HostLocal:1 RackLocal:0
>>> > 2016-01-19 20:04:24,328 INFO [AsyncDispatcher event handler]
>>> > org.apache.hadoop.mapreduce.v2.app.job.impl.TaskAttemptImpl:
>>> > attempt_1453244277886_0001_m_000000_2 TaskAttempt Transitioned from
>>> RUNNING
>>> > to FAIL_CONTAINER_CLEANUP
>>> > 2016-01-19 20:04:24,328 INFO [AsyncDispatcher event handler]
>>> > org.apache.hadoop.mapreduce.v2.app.job.impl.TaskAttemptImpl:
>>> Diagnostics
>>> > report from attempt_1453244277886_0001_m_000000_2: Exception from
>>> > container-launch: org.apache.hadoop.util.Shell$ExitCodeException:
>>> > org.apache.hadoop.util.Shell$ExitCodeException:
>>> >     at org.apache.hadoop.util.Shell.runCommand(Shell.java:505)
>>> >     at org.apache.hadoop.util.Shell.run(Shell.java:418)
>>> >     at
>>> >
>>> org.apache.hadoop.util.Shell$ShellCommandExecutor.execute(Shell.java:650)
>>> >     at
>>> >
>>> org.apache.hadoop.yarn.server.nodemanager.DefaultContainerExecutor.launchContainer(DefaultContainerExecutor.java:195)
>>> >     at
>>> >
>>> org.apache.hadoop.yarn.server.nodemanager.containermanager.launcher.ContainerLaunch.call(ContainerLaunch.java:300)
>>> >     at
>>> >
>>> org.apache.hadoop.yarn.server.nodemanager.containermanager.launcher.ContainerLaunch.call(ContainerLaunch.java:81)
>>> >     at java.util.concurrent.FutureTask.run(FutureTask.java:262)
>>> >     at
>>> >
>>> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
>>> >     at
>>> >
>>> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
>>> >     at java.lang.Thread.run(Thread.java:745)
>>> >
>>> >
>>> > Container exited with a non-zero exit code 1
>>> >
>>> > 2016-01-19 20:04:24,331 INFO [ContainerLauncher #5]
>>> > org.apache.hadoop.mapreduce.v2.app.launcher.ContainerLauncherImpl:
>>> > Processing the event EventType: CONTAINER_REMOTE_CLEANUP for container
>>> > container_1453244277886_0001_01_000004 taskAttempt
>>> > attempt_1453244277886_0001_m_000000_2
>>> > 2016-01-19 20:04:24,331 INFO [ContainerLauncher #5]
>>> > org.apache.hadoop.mapreduce.v2.app.launcher.ContainerLauncherImpl:
>>> KILLING
>>> > attempt_1453244277886_0001_m_000000_2
>>> > 2016-01-19 20:04:24,335 INFO [AsyncDispatcher event handler]
>>> > org.apache.hadoop.mapreduce.v2.app.job.impl.TaskAttemptImpl:
>>> > attempt_1453244277886_0001_m_000000_2 TaskAttempt Transitioned from
>>> > FAIL_CONTAINER_CLEANUP to FAIL_TASK_CLEANUP
>>> > 2016-01-19 20:04:24,336 INFO [CommitterEvent Processor #3]
>>> > org.apache.hadoop.mapreduce.v2.app.commit.CommitterEventHandler:
>>> Processing
>>> > the event EventType: TASK_ABORT
>>> > 2016-01-19 20:04:24,340 WARN [CommitterEvent Processor #3]
>>> > org.apache.hadoop.mapreduce.lib.output.FileOutputCommitter: Could not
>>> delete
>>> >
>>> hdfs://hdnode01:54310/user/hduser/output/wordcount/_temporary/1/_temporary/attempt_1453244277886_0001_m_000000_2
>>> > 2016-01-19 20:04:24,341 INFO [AsyncDispatcher event handler]
>>> > org.apache.hadoop.mapreduce.v2.app.job.impl.TaskAttemptImpl:
>>> > attempt_1453244277886_0001_m_000000_2 TaskAttempt Transitioned from
>>> > FAIL_TASK_CLEANUP to FAILED
>>> > 2016-01-19 20:04:24,341 INFO [AsyncDispatcher event handler]
>>> > org.apache.hadoop.yarn.util.RackResolver: Resolved localhost to
>>> > /default-rack
>>> > 2016-01-19 20:04:24,342 INFO [Thread-50]
>>> > org.apache.hadoop.mapreduce.v2.app.rm.RMContainerRequestor: 3 failures
>>> on
>>> > node localhost
>>> > 2016-01-19 20:04:24,342 INFO [Thread-50]
>>> > org.apache.hadoop.mapreduce.v2.app.rm.RMContainerRequestor:
>>> Blacklisted host
>>> > localhost
>>> > 2016-01-19 20:04:24,342 INFO [AsyncDispatcher event handler]
>>> > org.apache.hadoop.mapreduce.v2.app.job.impl.TaskAttemptImpl:
>>> > attempt_1453244277886_0001_m_000000_3 TaskAttempt Transitioned from
>>> NEW to
>>> > UNASSIGNED
>>> > 2016-01-19 20:04:24,343 INFO [Thread-50]
>>> > org.apache.hadoop.mapreduce.v2.app.rm.RMContainerAllocator: Added
>>> > attempt_1453244277886_0001_m_000000_3 to list of failed maps
>>> > 2016-01-19 20:04:25,328 INFO [RMCommunicator Allocator]
>>> > org.apache.hadoop.mapreduce.v2.app.rm.RMContainerAllocator: Before
>>> > Scheduling: PendingReds:1 ScheduledMaps:1 ScheduledReds:0
>>> AssignedMaps:0
>>> > AssignedReds:0 CompletedMaps:0 CompletedReds:0 ContAlloc:3 ContRel:0
>>> > HostLocal:1 RackLocal:0
>>> > 2016-01-19 20:04:25,336 INFO [RMCommunicator Allocator]
>>> > org.apache.hadoop.mapreduce.v2.app.rm.RMContainerRequestor:
>>> getResources()
>>> > for application_1453244277886_0001: ask=1 release= 0 newContainers=0
>>> > finishedContainers=0 resourcelimit=<memory:768, vCores:-1> knownNMs=1
>>> > 2016-01-19 20:04:25,336 INFO [RMCommunicator Allocator]
>>> > org.apache.hadoop.mapreduce.v2.app.rm.RMContainerRequestor: Update the
>>> > blacklist for application_1453244277886_0001: blacklistAdditions=1
>>> > blacklistRemovals=0
>>> > 2016-01-19 20:04:25,337 INFO [RMCommunicator Allocator]
>>> > org.apache.hadoop.mapreduce.v2.app.rm.RMContainerRequestor: Ignore
>>> > blacklisting set to true. Known: 1, Blacklisted: 1, 100%
>>> > 2016-01-19 20:04:25,337 INFO [RMCommunicator Allocator]
>>> > org.apache.hadoop.mapreduce.v2.app.rm.RMContainerAllocator:
>>> Recalculating
>>> > schedule, headroom=768
>>> > 2016-01-19 20:04:25,337 INFO [RMCommunicator Allocator]
>>> > org.apache.hadoop.mapreduce.v2.app.rm.RMContainerAllocator: Reduce slow
>>> > start threshold not met. completedMapsForReduceSlowstart 1
>>> > 2016-01-19 20:04:26,342 INFO [RMCommunicator Allocator]
>>> > org.apache.hadoop.mapreduce.v2.app.rm.RMContainerRequestor: Update the
>>> > blacklist for application_1453244277886_0001: blacklistAdditions=0
>>> > blacklistRemovals=1
>>> > 2016-01-19 20:04:26,342 INFO [RMCommunicator Allocator]
>>> > org.apache.hadoop.mapreduce.v2.app.rm.RMContainerAllocator:
>>> Recalculating
>>> > schedule, headroom=768
>>> > 2016-01-19 20:04:26,342 INFO [RMCommunicator Allocator]
>>> > org.apache.hadoop.mapreduce.v2.app.rm.RMContainerAllocator: Reduce slow
>>> > start threshold not met. completedMapsForReduceSlowstart 1
>>> > 2016-01-19 20:04:27,351 INFO [RMCommunicator Allocator]
>>> > org.apache.hadoop.mapreduce.v2.app.rm.RMContainerAllocator: Got
>>> allocated
>>> > containers 1
>>> > 2016-01-19 20:04:27,352 INFO [RMCommunicator Allocator]
>>> > org.apache.hadoop.mapreduce.v2.app.rm.RMContainerAllocator: Assigning
>>> > container Container: [ContainerId:
>>> container_1453244277886_0001_01_000005,
>>> > NodeId: localhost:35711, NodeHttpAddress: localhost:8042, Resource:
>>> > <memory:512, vCores:1>, Priority: 5, Token: Token { kind:
>>> ContainerToken,
>>> > service: 127.0.0.1:35711 }, ] to fast fail map
>>> > 2016-01-19 20:04:27,352 INFO [RMCommunicator Allocator]
>>> > org.apache.hadoop.mapreduce.v2.app.rm.RMContainerAllocator: Assigned
>>> from
>>> > earlierFailedMaps
>>> > 2016-01-19 20:04:27,352 INFO [RMCommunicator Allocator]
>>> > org.apache.hadoop.mapreduce.v2.app.rm.RMContainerAllocator: Assigned
>>> > container container_1453244277886_0001_01_000005 to
>>> > attempt_1453244277886_0001_m_000000_3
>>> > 2016-01-19 20:04:27,353 INFO [RMCommunicator Allocator]
>>> > org.apache.hadoop.mapreduce.v2.app.rm.RMContainerAllocator:
>>> Recalculating
>>> > schedule, headroom=768
>>> > 2016-01-19 20:04:27,353 INFO [RMCommunicator Allocator]
>>> > org.apache.hadoop.mapreduce.v2.app.rm.RMContainerAllocator: Reduce slow
>>> > start threshold not met. completedMapsForReduceSlowstart 1
>>> > 2016-01-19 20:04:27,353 INFO [AsyncDispatcher event handler]
>>> > org.apache.hadoop.yarn.util.RackResolver: Resolved localhost to
>>> > /default-rack
>>> > 2016-01-19 20:04:27,353 INFO [RMCommunicator Allocator]
>>> > org.apache.hadoop.mapreduce.v2.app.rm.RMContainerAllocator: After
>>> > Scheduling: PendingReds:1 ScheduledMaps:0 ScheduledReds:0
>>> AssignedMaps:1
>>> > AssignedReds:0 CompletedMaps:0 CompletedReds:0 ContAlloc:4 ContRel:0
>>> > HostLocal:1 RackLocal:0
>>> > 2016-01-19 20:04:27,354 INFO [AsyncDispatcher event handler]
>>> > org.apache.hadoop.mapreduce.v2.app.job.impl.TaskAttemptImpl:
>>> > attempt_1453244277886_0001_m_000000_3 TaskAttempt Transitioned from
>>> > UNASSIGNED to ASSIGNED
>>> > 2016-01-19 20:04:27,355 INFO [ContainerLauncher #6]
>>> > org.apache.hadoop.mapreduce.v2.app.launcher.ContainerLauncherImpl:
>>> > Processing the event EventType: CONTAINER_REMOTE_LAUNCH for container
>>> > container_1453244277886_0001_01_000005 taskAttempt
>>> > attempt_1453244277886_0001_m_000000_3
>>> > 2016-01-19 20:04:27,355 INFO [ContainerLauncher #6]
>>> > org.apache.hadoop.mapreduce.v2.app.launcher.ContainerLauncherImpl:
>>> Launching
>>> > attempt_1453244277886_0001_m_000000_3
>>> > 2016-01-19 20:04:27,365 INFO [ContainerLauncher #6]
>>> > org.apache.hadoop.mapreduce.v2.app.launcher.ContainerLauncherImpl:
>>> Shuffle
>>> > port returned by ContainerManager for
>>> attempt_1453244277886_0001_m_000000_3
>>> > : 13562
>>> > 2016-01-19 20:04:27,365 INFO [AsyncDispatcher event handler]
>>> > org.apache.hadoop.mapreduce.v2.app.job.impl.TaskAttemptImpl:
>>> TaskAttempt:
>>> > [attempt_1453244277886_0001_m_000000_3] using containerId:
>>> > [container_1453244277886_0001_01_000005 on NM: [localhost:35711]
>>> > 2016-01-19 20:04:27,366 INFO [AsyncDispatcher event handler]
>>> > org.apache.hadoop.mapreduce.v2.app.job.impl.TaskAttemptImpl:
>>> > attempt_1453244277886_0001_m_000000_3 TaskAttempt Transitioned from
>>> ASSIGNED
>>> > to RUNNING
>>> > 2016-01-19 20:04:27,366 INFO [AsyncDispatcher event handler]
>>> > org.apache.hadoop.mapreduce.v2.app.speculate.DefaultSpeculator:
>>> > ATTEMPT_START task_1453244277886_0001_m_000000
>>> > 2016-01-19 20:04:28,361 INFO [RMCommunicator Allocator]
>>> > org.apache.hadoop.mapreduce.v2.app.rm.RMContainerRequestor:
>>> getResources()
>>> > for application_1453244277886_0001: ask=1 release= 0 newContainers=0
>>> > finishedContainers=1 resourcelimit=<memory:768, vCores:-1> knownNMs=1
>>> > 2016-01-19 20:04:28,361 INFO [RMCommunicator Allocator]
>>> > org.apache.hadoop.mapreduce.v2.app.rm.RMContainerAllocator: Received
>>> > completed container container_1453244277886_0001_01_000005
>>> > 2016-01-19 20:04:28,361 INFO [RMCommunicator Allocator]
>>> > org.apache.hadoop.mapreduce.v2.app.rm.RMContainerAllocator:
>>> Recalculating
>>> > schedule, headroom=768
>>> > 2016-01-19 20:04:28,361 INFO [RMCommunicator Allocator]
>>> > org.apache.hadoop.mapreduce.v2.app.rm.RMContainerAllocator: Reduce slow
>>> > start threshold not met. completedMapsForReduceSlowstart 1
>>> > 2016-01-19 20:04:28,362 INFO [RMCommunicator Allocator]
>>> > org.apache.hadoop.mapreduce.v2.app.rm.RMContainerAllocator: After
>>> > Scheduling: PendingReds:1 ScheduledMaps:0 ScheduledReds:0
>>> AssignedMaps:0
>>> > AssignedReds:0 CompletedMaps:0 CompletedReds:0 ContAlloc:4 ContRel:0
>>> > HostLocal:1 RackLocal:0
>>> > 2016-01-19 20:04:28,362 INFO [AsyncDispatcher event handler]
>>> > org.apache.hadoop.mapreduce.v2.app.job.impl.TaskAttemptImpl:
>>> > attempt_1453244277886_0001_m_000000_3 TaskAttempt Transitioned from
>>> RUNNING
>>> > to FAIL_CONTAINER_CLEANUP
>>> > 2016-01-19 20:04:28,362 INFO [AsyncDispatcher event handler]
>>> > org.apache.hadoop.mapreduce.v2.app.job.impl.TaskAttemptImpl:
>>> Diagnostics
>>> > report from attempt_1453244277886_0001_m_000000_3: Exception from
>>> > container-launch: org.apache.hadoop.util.Shell$ExitCodeException:
>>> > org.apache.hadoop.util.Shell$ExitCodeException:
>>> >     at org.apache.hadoop.util.Shell.runCommand(Shell.java:505)
>>> >     at org.apache.hadoop.util.Shell.run(Shell.java:418)
>>> >     at
>>> >
>>> org.apache.hadoop.util.Shell$ShellCommandExecutor.execute(Shell.java:650)
>>> >     at
>>> >
>>> org.apache.hadoop.yarn.server.nodemanager.DefaultContainerExecutor.launchContainer(DefaultContainerExecutor.java:195)
>>> >     at
>>> >
>>> org.apache.hadoop.yarn.server.nodemanager.containermanager.launcher.ContainerLaunch.call(ContainerLaunch.java:300)
>>> >     at
>>> >
>>> org.apache.hadoop.yarn.server.nodemanager.containermanager.launcher.ContainerLaunch.call(ContainerLaunch.java:81)
>>> >     at java.util.concurrent.FutureTask.run(FutureTask.java:262)
>>> >     at
>>> >
>>> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
>>> >     at
>>> >
>>> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
>>> >     at java.lang.Thread.run(Thread.java:745)
>>> >
>>> >
>>> > Container exited with a non-zero exit code 1
>>> >
>>> > 2016-01-19 20:04:28,364 INFO [ContainerLauncher #7]
>>> > org.apache.hadoop.mapreduce.v2.app.launcher.ContainerLauncherImpl:
>>> > Processing the event EventType: CONTAINER_REMOTE_CLEANUP for container
>>> > container_1453244277886_0001_01_000005 taskAttempt
>>> > attempt_1453244277886_0001_m_000000_3
>>> > 2016-01-19 20:04:28,365 INFO [ContainerLauncher #7]
>>> > org.apache.hadoop.mapreduce.v2.app.launcher.ContainerLauncherImpl:
>>> KILLING
>>> > attempt_1453244277886_0001_m_000000_3
>>> > 2016-01-19 20:04:28,373 INFO [AsyncDispatcher event handler]
>>> > org.apache.hadoop.mapreduce.v2.app.job.impl.TaskAttemptImpl:
>>> > attempt_1453244277886_0001_m_000000_3 TaskAttempt Transitioned from
>>> > FAIL_CONTAINER_CLEANUP to FAIL_TASK_CLEANUP
>>> > 2016-01-19 20:04:28,374 INFO [CommitterEvent Processor #4]
>>> > org.apache.hadoop.mapreduce.v2.app.commit.CommitterEventHandler:
>>> Processing
>>> > the event EventType: TASK_ABORT
>>> > 2016-01-19 20:04:28,377 WARN [CommitterEvent Processor #4]
>>> > org.apache.hadoop.mapreduce.lib.output.FileOutputCommitter: Could not
>>> delete
>>> >
>>> hdfs://hdnode01:54310/user/hduser/output/wordcount/_temporary/1/_temporary/attempt_1453244277886_0001_m_000000_3
>>> > 2016-01-19 20:04:28,378 INFO [AsyncDispatcher event handler]
>>> > org.apache.hadoop.mapreduce.v2.app.job.impl.TaskAttemptImpl:
>>> > attempt_1453244277886_0001_m_000000_3 TaskAttempt Transitioned from
>>> > FAIL_TASK_CLEANUP to FAILED
>>> > 2016-01-19 20:04:28,380 INFO [AsyncDispatcher event handler]
>>> > org.apache.hadoop.mapreduce.v2.app.job.impl.TaskImpl:
>>> > task_1453244277886_0001_m_000000 Task Transitioned from RUNNING to
>>> FAILED
>>> > 2016-01-19 20:04:28,381 INFO [AsyncDispatcher event handler]
>>> > org.apache.hadoop.mapreduce.v2.app.job.impl.JobImpl: Num completed
>>> Tasks: 1
>>> > 2016-01-19 20:04:28,381 INFO [AsyncDispatcher event handler]
>>> > org.apache.hadoop.mapreduce.v2.app.job.impl.JobImpl: Job failed as
>>> tasks
>>> > failed. failedMaps:1 failedReduces:0
>>> > 2016-01-19 20:04:28,382 INFO [AsyncDispatcher event handler]
>>> > org.apache.hadoop.mapreduce.v2.app.job.impl.JobImpl:
>>> > job_1453244277886_0001Job Transitioned from RUNNING to FAIL_WAIT
>>> > 2016-01-19 20:04:28,383 INFO [AsyncDispatcher event handler]
>>> > org.apache.hadoop.mapreduce.v2.app.job.impl.TaskImpl:
>>> > task_1453244277886_0001_r_000000 Task Transitioned from SCHEDULED to
>>> > KILL_WAIT
>>> > 2016-01-19 20:04:28,383 INFO [AsyncDispatcher event handler]
>>> > org.apache.hadoop.mapreduce.v2.app.job.impl.TaskAttemptImpl:
>>> > attempt_1453244277886_0001_r_000000_0 TaskAttempt Transitioned from
>>> > UNASSIGNED to KILLED
>>> > 2016-01-19 20:04:28,383 INFO [Thread-50]
>>> > org.apache.hadoop.mapreduce.v2.app.rm.RMContainerAllocator: Processing
>>> the
>>> > event EventType: CONTAINER_DEALLOCATE
>>> > 2016-01-19 20:04:28,383 ERROR [Thread-50]
>>> > org.apache.hadoop.mapreduce.v2.app.rm.RMContainerAllocator: Could not
>>> > deallocate container for task attemptId
>>> > attempt_1453244277886_0001_r_000000_0
>>> > 2016-01-19 20:04:28,383 INFO [AsyncDispatcher event handler]
>>> > org.apache.hadoop.mapreduce.v2.app.job.impl.TaskImpl:
>>> > task_1453244277886_0001_r_000000 Task Transitioned from KILL_WAIT to
>>> KILLED
>>> > 2016-01-19 20:04:28,384 INFO [AsyncDispatcher event handler]
>>> > org.apache.hadoop.mapreduce.v2.app.job.impl.JobImpl:
>>> > job_1453244277886_0001Job Transitioned from FAIL_WAIT to FAIL_ABORT
>>> > 2016-01-19 20:04:28,390 INFO [CommitterEvent Processor #0]
>>> > org.apache.hadoop.mapreduce.v2.app.commit.CommitterEventHandler:
>>> Processing
>>> > the event EventType: JOB_ABORT
>>> > 2016-01-19 20:04:28,432 INFO [AsyncDispatcher event handler]
>>> > org.apache.hadoop.mapreduce.v2.app.job.impl.JobImpl:
>>> > job_1453244277886_0001Job Transitioned from FAIL_ABORT to FAILED
>>> > 2016-01-19 20:04:28,433 INFO [Thread-61]
>>> > org.apache.hadoop.mapreduce.v2.app.MRAppMaster: We are finishing
>>> cleanly so
>>> > this is the last retry
>>> > 2016-01-19 20:04:28,433 INFO [Thread-61]
>>> > org.apache.hadoop.mapreduce.v2.app.MRAppMaster: Notify RMCommunicator
>>> > isAMLastRetry: true
>>> > 2016-01-19 20:04:28,433 INFO [Thread-61]
>>> > org.apache.hadoop.mapreduce.v2.app.rm.RMContainerAllocator:
>>> RMCommunicator
>>> > notified that shouldUnregistered is: true
>>> > 2016-01-19 20:04:28,433 INFO [Thread-61]
>>> > org.apache.hadoop.mapreduce.v2.app.MRAppMaster: Notify JHEH
>>> isAMLastRetry:
>>> > true
>>> > 2016-01-19 20:04:28,433 INFO [Thread-61]
>>> > org.apache.hadoop.mapreduce.jobhistory.JobHistoryEventHandler:
>>> > JobHistoryEventHandler notified that forceJobCompletion is true
>>> > 2016-01-19 20:04:28,434 INFO [Thread-61]
>>> > org.apache.hadoop.mapreduce.v2.app.MRAppMaster: Calling stop for all
>>> the
>>> > services
>>> > 2016-01-19 20:04:28,435 INFO [Thread-61]
>>> > org.apache.hadoop.mapreduce.jobhistory.JobHistoryEventHandler: Stopping
>>> > JobHistoryEventHandler. Size of the outstanding queue size is 0
>>> > 2016-01-19 20:04:29,362 INFO [RMCommunicator Allocator]
>>> > org.apache.hadoop.mapreduce.v2.app.rm.RMContainerAllocator: Before
>>> > Scheduling: PendingReds:1 ScheduledMaps:0 ScheduledReds:0
>>> AssignedMaps:0
>>> > AssignedReds:0 CompletedMaps:1 CompletedReds:0 ContAlloc:4 ContRel:0
>>> > HostLocal:1 RackLocal:0
>>> > 2016-01-19 20:04:29,362 INFO [eventHandlingThread]
>>> > org.apache.hadoop.mapreduce.jobhistory.JobHistoryEventHandler: Copying
>>> >
>>> hdfs://hdnode01:54310/tmp/hadoop-yarn/staging/hduser/.staging/job_1453244277886_0001/job_1453244277886_0001_1.jhist
>>> > to
>>> >
>>> hdfs://hdnode01:54310/tmp/hadoop-yarn/staging/history/done_intermediate/hduser/job_1453244277886_0001-1453244648033-hduser-word+count-1453244668381-0-0-FAILED-default-1453244654166.jhist_tmp
>>> > 2016-01-19 20:04:29,366 INFO [RMCommunicator Allocator]
>>> > org.apache.hadoop.mapreduce.v2.app.rm.RMContainerAllocator:
>>> Recalculating
>>> > schedule, headroom=768
>>> > 2016-01-19 20:04:29,366 INFO [RMCommunicator Allocator]
>>> > org.apache.hadoop.mapreduce.v2.app.rm.RMContainerAllocator: Reduce slow
>>> > start threshold reached. Scheduling reduces.
>>> > 2016-01-19 20:04:29,366 INFO [RMCommunicator Allocator]
>>> > org.apache.hadoop.mapreduce.v2.app.rm.RMContainerAllocator: All maps
>>> > assigned. Ramping up all remaining reduces:1
>>> > 2016-01-19 20:04:29,366 INFO [RMCommunicator Allocator]
>>> > org.apache.hadoop.mapreduce.v2.app.rm.RMContainerAllocator: After
>>> > Scheduling: PendingReds:0 ScheduledMaps:0 ScheduledReds:1
>>> AssignedMaps:0
>>> > AssignedReds:0 CompletedMaps:1 CompletedReds:0 ContAlloc:4 ContRel:0
>>> > HostLocal:1 RackLocal:0
>>> > 2016-01-19 20:04:29,544 INFO [eventHandlingThread]
>>> > org.apache.hadoop.mapreduce.jobhistory.JobHistoryEventHandler: Copied
>>> to
>>> > done location:
>>> >
>>> hdfs://hdnode01:54310/tmp/hadoop-yarn/staging/history/done_intermediate/hduser/job_1453244277886_0001-1453244648033-hduser-word+count-1453244668381-0-0-FAILED-default-1453244654166.jhist_tmp
>>> > 2016-01-19 20:04:29,598 INFO [eventHandlingThread]
>>> > org.apache.hadoop.mapreduce.jobhistory.JobHistoryEventHandler: Copying
>>> >
>>> hdfs://hdnode01:54310/tmp/hadoop-yarn/staging/hduser/.staging/job_1453244277886_0001/job_1453244277886_0001_1_conf.xml
>>> > to
>>> >
>>> hdfs://hdnode01:54310/tmp/hadoop-yarn/staging/history/done_intermediate/hduser/job_1453244277886_0001_conf.xml_tmp
>>> > 2016-01-19 20:04:29,801 INFO [eventHandlingThread]
>>> > org.apache.hadoop.mapreduce.jobhistory.JobHistoryEventHandler: Copied
>>> to
>>> > done location:
>>> >
>>> hdfs://hdnode01:54310/tmp/hadoop-yarn/staging/history/done_intermediate/hduser/job_1453244277886_0001_conf.xml_tmp
>>> > 2016-01-19 20:04:29,907 INFO [eventHandlingThread]
>>> > org.apache.hadoop.mapreduce.jobhistory.JobHistoryEventHandler: Moved
>>> tmp to
>>> > done:
>>> >
>>> hdfs://hdnode01:54310/tmp/hadoop-yarn/staging/history/done_intermediate/hduser/job_1453244277886_0001.summary_tmp
>>> > to
>>> >
>>> hdfs://hdnode01:54310/tmp/hadoop-yarn/staging/history/done_intermediate/hduser/job_1453244277886_0001.summary
>>> > 2016-01-19 20:04:30,008 INFO [eventHandlingThread]
>>> > org.apache.hadoop.mapreduce.jobhistory.JobHistoryEventHandler: Moved
>>> tmp to
>>> > done:
>>> >
>>> hdfs://hdnode01:54310/tmp/hadoop-yarn/staging/history/done_intermediate/hduser/job_1453244277886_0001_conf.xml_tmp
>>> > to
>>> >
>>> hdfs://hdnode01:54310/tmp/hadoop-yarn/staging/history/done_intermediate/hduser/job_1453244277886_0001_conf.xml
>>> > 2016-01-19 20:04:30,070 INFO [eventHandlingThread]
>>> > org.apache.hadoop.mapreduce.jobhistory.JobHistoryEventHandler: Moved
>>> tmp to
>>> > done:
>>> >
>>> hdfs://hdnode01:54310/tmp/hadoop-yarn/staging/history/done_intermediate/hduser/job_1453244277886_0001-1453244648033-hduser-word+count-1453244668381-0-0-FAILED-default-1453244654166.jhist_tmp
>>> > to
>>> >
>>> hdfs://hdnode01:54310/tmp/hadoop-yarn/staging/history/done_intermediate/hduser/job_1453244277886_0001-1453244648033-hduser-word+count-1453244668381-0-0-FAILED-default-1453244654166.jhist
>>> > 2016-01-19 20:04:30,071 INFO [Thread-61]
>>> > org.apache.hadoop.mapreduce.jobhistory.JobHistoryEventHandler: Stopped
>>> > JobHistoryEventHandler. super.stop()
>>> > 2016-01-19 20:04:30,078 INFO [Thread-61]
>>> > org.apache.hadoop.mapreduce.v2.app.rm.RMContainerAllocator: Setting job
>>> > diagnostics to Task failed task_1453244277886_0001_m_000000
>>> > Job failed as tasks failed. failedMaps:1 failedReduces:0
>>> >
>>> > 2016-01-19 20:04:30,080 INFO [Thread-61]
>>> > org.apache.hadoop.mapreduce.v2.app.rm.RMContainerAllocator: History
>>> url is
>>> > http://localhost:19888/jobhistory/job/job_1453244277886_0001
>>> > 2016-01-19 20:04:30,094 INFO [Thread-61]
>>> > org.apache.hadoop.mapreduce.v2.app.rm.RMContainerAllocator: Waiting for
>>> > application to be successfully unregistered.
>>> > 2016-01-19 20:04:31,099 INFO [Thread-61]
>>> > org.apache.hadoop.mapreduce.v2.app.rm.RMContainerAllocator: Final
>>> Stats:
>>> > PendingReds:0 ScheduledMaps:0 ScheduledReds:1 AssignedMaps:0
>>> AssignedReds:0
>>> > CompletedMaps:1 CompletedReds:0 ContAlloc:4 ContRel:0 HostLocal:1
>>> > RackLocal:0
>>> > 2016-01-19 20:04:31,104 INFO [Thread-61]
>>> > org.apache.hadoop.mapreduce.v2.app.MRAppMaster: Deleting staging
>>> directory
>>> > hdfs://hdnode01:54310
>>> > /tmp/hadoop-yarn/staging/hduser/.staging/job_1453244277886_0001
>>> > 2016-01-19 20:04:31,133 INFO [Thread-61] org.apache.hadoop.ipc.Server:
>>> > Stopping server on 45584
>>> > 2016-01-19 20:04:31,135 INFO [IPC Server listener on 45584]
>>> > org.apache.hadoop.ipc.Server: Stopping IPC Server listener on 45584
>>> > 2016-01-19 20:04:31,135 INFO [IPC Server Responder]
>>> > org.apache.hadoop.ipc.Server: Stopping IPC Server Responder
>>> > 2016-01-19 20:04:31,136 INFO [TaskHeartbeatHandler PingChecker]
>>> > org.apache.hadoop.mapreduce.v2.app.TaskHeartbeatHandler:
>>> > TaskHeartbeatHandler thread interrupted
>>> >
>>> >
>>> > Jps results; I believe everything is OK, right?:
>>> > 21267 DataNode
>>> > 21609 ResourceManager
>>> > 21974 JobHistoryServer
>>> > 21735 NodeManager
>>> > 24546 Jps
>>> > 16532 org.eclipse.equinox.launcher_1.3.0.v20140415-2008.jar
>>> > 21121 NameNode
>>> > 22098 QuorumPeerMain
>>> > 21456 SecondaryNameNode
>>> >
>>> >
>>>
>>
>>
>

Re: Can't run hadoop examples with YARN Single node cluster

Posted by Namikaze Minato <ll...@gmail.com>.
Heh, good job and thank you very much for posting the solution here, not
many people do that :)

I don't have the feeling I helped much, but finding a solution is what
*counts*, not only *words* :D

Regards,
LLoyd

On 7 March 2016 at 22:50, José Luis Larroque <la...@gmail.com> wrote:

> Hi again guys, I could finally find what the issue was!!!
>
> This is my mapred-site.xml; here is the problem:
> <?xml version="1.0"?>
> <?xml-stylesheet type="text/xsl" href="configuration.xsl"?>
>
> <configuration>
> <property>
> <name>mapred.job.tracker</name>
> <!--
> <value>local</value> For debugging
> <value>hdnode01:54311</value> For the real thing
> -->
> <value>hdnode01:54311</value>
> </property>
>
> <property>
> <name>mapred.tasktracker.map.tasks.maximum</name>
> <value>4</value>
> </property>
>
> <property>
> <name>mapreduce.job.maps</name>
> <value>4</value>
> </property>
>
> <property>
> <name>mapreduce.framework.name</name>
> <value>yarn</value>
> </property>
>
> <property>
> <name>mapreduce.map.memory.mb</name>
> <value>512</value>
> </property>
>
> <property>
> <name>mapreduce.reduce.memory.mb</name>
> <value>512</value>
> </property>
>
> <property>
> <name>mapreduce.map.java.opts</name>
> <value>256</value>
> </property>
>
> <property>
> <name>mapreduce.reduce.java.opts</name>
> <value>256</value>
> </property>
> </configuration>
>
> If I remove the last two properties (mapreduce.map.java.opts,
> mapreduce.reduce.java.opts), wordcount works!
>
> I remember adding those last two properties because of some kind of memory
> issue, but maybe for some reason they clash with the other two
> (mapreduce.map.memory.mb, mapreduce.reduce.memory.mb)?
>
> It would be great if someone could give me a short explanation so I can
> better understand the memory management of a YARN cluster.
>
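> (For reference: the "Could not find or load main class 256" errors in the
> container logs fit this exactly. The mapreduce.map.java.opts and
> mapreduce.reduce.java.opts values are placed verbatim on the container's
> java command line, before the main class, so a bare "256" is interpreted
> by the JVM as the class to launch. Those properties are meant to hold JVM
> flags, usually a heap size kept below the container sizes given by
> mapreduce.map.memory.mb and mapreduce.reduce.memory.mb. A minimal sketch
> of the four properties together, with purely illustrative -Xmx values:
>
> <property>
>   <name>mapreduce.map.memory.mb</name>
>   <value>512</value>
> </property>
> <property>
>   <name>mapreduce.map.java.opts</name>
>   <value>-Xmx400m</value>
> </property>
> <property>
>   <name>mapreduce.reduce.memory.mb</name>
>   <value>512</value>
> </property>
> <property>
>   <name>mapreduce.reduce.java.opts</name>
>   <value>-Xmx400m</value>
> </property>
>
> The memory.mb values are what YARN reserves for each container, while the
> -Xmx in java.opts caps the JVM heap inside it, so it is commonly set to
> roughly 75-80% of the container size.)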
>
> PS: Thanks again to Namikaze and Gaurav for their help!!
>
> Bye!
> Jose
>
> 2016-01-25 21:19 GMT-03:00 José Luis Larroque <la...@gmail.com>:
>
>> Thanks Namikaze for keeping at it, don't give up!! :D
>>
>> - I have these lines in *$HOME/.bashrc*
>>
>>
>> export HADOOP_PREFIX=/usr/local/hadoop
>>
>> # Others variables
>>
>> export HADOOP_COMMON_HOME=${HADOOP_PREFIX}
>>
>> export HADOOP_MAPRED_HOME=${HADOOP_PREFIX}
>>
>> export HADOOP_HDFS_HOME=${HADOOP_PREFIX}
>>
>> export HADOOP_YARN_HOME=${HADOOP_PREFIX}
>>
>>
>>   - In *hadoop-env.sh* I have:
>>
>> export HADOOP_CONF_DIR=${HADOOP_CONF_DIR:-"/usr/local/hadoop/etc/hadoop"}
>>
>>
>>   - I read that SO question and all the answers to it. The only useful answer,
>> in my opinion, was to check the yarn classpath. The following line appears
>> three times:
>>
>> /usr/local/hadoop/etc/hadoop:
>>
>>
>> I put yarn.application.classpath in yarn-site.xml because I don't know
>> any other way to fix it, with the value recommended as the default in
>> <https://hadoop.apache.org/docs/r2.4.1/hadoop-yarn/hadoop-yarn-common/yarn-default.xml>
>> (look for yarn.application.classpath):
>>
>>
>> $HADOOP_CONF_DIR, $HADOOP_COMMON_HOME/share/hadoop/common/*,
>> $HADOOP_COMMON_HOME/share/hadoop/common/lib/*, $HADOOP_HDFS_HOME/share/hadoop/hdfs/*,
>> $HADOOP_HDFS_HOME/share/hadoop/hdfs/lib/*, $HADOOP_YARN_HOME/share/hadoop/yarn/*,
>> $HADOOP_YARN_HOME/share/hadoop/yarn/lib/*
>>
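>> (In yarn-site.xml that recommendation corresponds to a single property
>> along these lines — just a sketch, with the value wrapped here for
>> readability; the entries are comma-separated:
>>
>> <property>
>>   <name>yarn.application.classpath</name>
>>   <value>$HADOOP_CONF_DIR,$HADOOP_COMMON_HOME/share/hadoop/common/*,
>>     $HADOOP_COMMON_HOME/share/hadoop/common/lib/*,
>>     $HADOOP_HDFS_HOME/share/hadoop/hdfs/*,
>>     $HADOOP_HDFS_HOME/share/hadoop/hdfs/lib/*,
>>     $HADOOP_YARN_HOME/share/hadoop/yarn/*,
>>     $HADOOP_YARN_HOME/share/hadoop/yarn/lib/*</value>
>> </property>
>>
>> The environment variables in it are expanded on the node where each
>> container is launched, so they also need to be visible to the NodeManager.)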
>>
>> But the classpath remains the same, and I can't find any other way to fix
>> it. Maybe this is the problem?
>>
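>> (One quick way to see which classpath entries are duplicated — a small
>> sketch, assuming the yarn command is on the PATH — is to split the output
>> of "yarn classpath" and count repeats:
>>
>> yarn classpath | tr ':' '\n' | sort | uniq -c | sort -rn | head
>>
>> Duplicated entries are harmless by themselves; they usually just mean the
>> same directory is listed more than once between HADOOP_CONF_DIR and
>> yarn.application.classpath.)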
>>
>>  - yarn.log-aggregation-enable was always set to true. I couldn't find
>> anything in the *datanode logs*; here they are:
>>
>> 2016-01-25 21:13:07,006 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: STARTUP_MSG:
>> /************************************************************
>> STARTUP_MSG: Starting DataNode
>> STARTUP_MSG:   host = jose-ubuntu/127.0.0.1
>> STARTUP_MSG:   args = []
>> STARTUP_MSG:   version = 2.4.0
>> STARTUP_MSG:   classpath = /usr/local/hadoop/etc/hadoop:/usr/local/hadoop/share/hadoop/common/lib/jackson-jaxrs-1.8.8.jar:/usr/local/hadoop/share/hadoop/common/lib/jersey-json-1.9.jar:/usr/local/hadoop/share/hadoop/common/lib/commons-compress-1.4.1.jar:/usr/local/hadoop/share/hadoop/common/lib/httpcore-4.2.5.jar:/usr/local/hadoop/share/hadoop/common/lib/httpclient-4.2.5.jar:/usr/local/hadoop/share/hadoop/common/lib/commons-beanutils-1.7.0.jar:/usr/local/hadoop/share/hadoop/common/lib/jettison-1.1.jar:/usr/local/hadoop/share/hadoop/common/lib/jaxb-impl-2.2.3-1.jar:/usr/local/hadoop/share/hadoop/common/lib/activation-1.1.jar:/usr/local/hadoop/share/hadoop/common/lib/jackson-core-asl-1.8.8.jar:/usr/local/hadoop/share/hadoop/common/lib/commons-net-3.1.jar:/usr/local/hadoop/share/hadoop/common/lib/commons-cli-1.2.jar:/usr/local/hadoop/share/hadoop/common/lib/commons-codec-1.4.jar:/usr/local/hadoop/share/hadoop/common/lib/junit-4.8.2.jar:/usr/local/hadoop/share/hadoop/common/lib/paranamer-2.3.jar:/usr/local/hadoop/share/hadoop/common/lib/jasper-compiler-5.5.23.jar:/usr/local/hadoop/share/hadoop/common/lib/protobuf-java-2.5.0.jar:/usr/local/hadoop/share/hadoop/common/lib/commons-digester-1.8.jar:/usr/local/hadoop/share/hadoop/common/lib/jets3t-0.9.0.jar:/usr/local/hadoop/share/hadoop/common/lib/jackson-xc-1.8.8.jar:/usr/local/hadoop/share/hadoop/common/lib/jersey-server-1.9.jar:/usr/local/hadoop/share/hadoop/common/lib/jetty-util-6.1.26.jar:/usr/local/hadoop/share/hadoop/common/lib/mockito-all-1.8.5.jar:/usr/local/hadoop/share/hadoop/common/lib/xmlenc-0.52.jar:/usr/local/hadoop/share/hadoop/common/lib/slf4j-api-1.7.5.jar:/usr/local/hadoop/share/hadoop/common/lib/stax-api-1.0-2.jar:/usr/local/hadoop/share/hadoop/common/lib/avro-1.7.4.jar:/usr/local/hadoop/share/hadoop/common/lib/hadoop-annotations-2.4.0.jar:/usr/local/hadoop/share/hadoop/common/lib/hadoop-auth-2.4.0.jar:/usr/local/hadoop/share/hadoop/common/lib/netty-3.6.2.Final.jar:/usr/local/hadoop/share/hadoop/common/lib/jasper-runtime-5.5.23.jar:/usr/local/hadoop/share/hadoop/common/lib/slf4j-log4j12-1.7.5.jar:/usr/local/hadoop/share/hadoop/common/lib/commons-logging-1.1.3.jar:/usr/local/hadoop/share/hadoop/common/lib/guava-11.0.2.jar:/usr/local/hadoop/share/hadoop/common/lib/jackson-mapper-asl-1.8.8.jar:/usr/local/hadoop/share/hadoop/common/lib/jsp-api-2.1.jar:/usr/local/hadoop/share/hadoop/common/lib/commons-configuration-1.6.jar:/usr/local/hadoop/share/hadoop/common/lib/snappy-java-1.0.4.1.jar:/usr/local/hadoop/share/hadoop/common/lib/log4j-1.2.17.jar:/usr/local/hadoop/share/hadoop/common/lib/jsch-0.1.42.jar:/usr/local/hadoop/share/hadoop/common/lib/commons-io-2.4.jar:/usr/local/hadoop/share/hadoop/common/lib/commons-beanutils-core-1.8.0.jar:/usr/local/hadoop/share/hadoop/common/lib/jsr305-1.3.9.jar:/usr/local/hadoop/share/hadoop/common/lib/jersey-core-1.9.jar:/usr/local/hadoop/share/hadoop/common/lib/asm-3.2.jar:/usr/local/hadoop/share/hadoop/common/lib/servlet-api-2.5.jar:/usr/local/hadoop/share/hadoop/common/lib/jaxb-api-2.2.2.jar:/usr/local/hadoop/share/hadoop/common/lib/jetty-6.1.26.jar:/usr/local/hadoop/share/hadoop/common/lib/commons-lang-2.6.jar:/usr/local/hadoop/share/hadoop/common/lib/java-xmlbuilder-0.4.jar:/usr/local/hadoop/share/hadoop/common/lib/commons-collections-3.2.1.jar:/usr/local/hadoop/share/hadoop/common/lib/commons-el-1.0.jar:/usr/local/hadoop/share/hadoop/common/lib/zookeeper-3.4.5.jar:/usr/local/hadoop/share/hadoop/common/lib/commons-httpclient-3.1.jar:/usr/local/hadoop/share/hadoop/common/lib/xz-1.0.jar:/usr/loca
l/hadoop/share/hadoop/common/lib/commons-math3-3.1.1.jar:/usr/local/hadoop/share/hadoop/common/hadoop-common-2.4.0-tests.jar:/usr/local/hadoop/share/hadoop/common/hadoop-nfs-2.4.0.jar:/usr/local/hadoop/share/hadoop/common/hadoop-common-2.4.0.jar:/usr/local/hadoop/share/hadoop/hdfs:/usr/local/hadoop/share/hadoop/hdfs/lib/jackson-core-asl-1.8.8.jar:/usr/local/hadoop/share/hadoop/hdfs/lib/commons-cli-1.2.jar:/usr/local/hadoop/share/hadoop/hdfs/lib/commons-codec-1.4.jar:/usr/local/hadoop/share/hadoop/hdfs/lib/protobuf-java-2.5.0.jar:/usr/local/hadoop/share/hadoop/hdfs/lib/jersey-server-1.9.jar:/usr/local/hadoop/share/hadoop/hdfs/lib/jetty-util-6.1.26.jar:/usr/local/hadoop/share/hadoop/hdfs/lib/xmlenc-0.52.jar:/usr/local/hadoop/share/hadoop/hdfs/lib/commons-daemon-1.0.13.jar:/usr/local/hadoop/share/hadoop/hdfs/lib/netty-3.6.2.Final.jar:/usr/local/hadoop/share/hadoop/hdfs/lib/jasper-runtime-5.5.23.jar:/usr/local/hadoop/share/hadoop/hdfs/lib/commons-logging-1.1.3.jar:/usr/local/hadoop/share/hadoop/hdfs/lib/guava-11.0.2.jar:/usr/local/hadoop/share/hadoop/hdfs/lib/jackson-mapper-asl-1.8.8.jar:/usr/local/hadoop/share/hadoop/hdfs/lib/jsp-api-2.1.jar:/usr/local/hadoop/share/hadoop/hdfs/lib/log4j-1.2.17.jar:/usr/local/hadoop/share/hadoop/hdfs/lib/commons-io-2.4.jar:/usr/local/hadoop/share/hadoop/hdfs/lib/jsr305-1.3.9.jar:/usr/local/hadoop/share/hadoop/hdfs/lib/jersey-core-1.9.jar:/usr/local/hadoop/share/hadoop/hdfs/lib/asm-3.2.jar:/usr/local/hadoop/share/hadoop/hdfs/lib/servlet-api-2.5.jar:/usr/local/hadoop/share/hadoop/hdfs/lib/jetty-6.1.26.jar:/usr/local/hadoop/share/hadoop/hdfs/lib/commons-lang-2.6.jar:/usr/local/hadoop/share/hadoop/hdfs/lib/commons-el-1.0.jar:/usr/local/hadoop/share/hadoop/hdfs/hadoop-hdfs-2.4.0-tests.jar:/usr/local/hadoop/share/hadoop/hdfs/hadoop-hdfs-2.4.0.jar:/usr/local/hadoop/share/hadoop/hdfs/hadoop-hdfs-nfs-2.4.0.jar:/usr/local/hadoop/share/hadoop/yarn/lib/jackson-jaxrs-1.8.8.jar:/usr/local/hadoop/share/hadoop/yarn/lib/jersey-json-1.9.jar:/usr/local/hadoop/share/hadoop/yarn/lib/commons-compress-1.4.1.jar:/usr/local/hadoop/share/hadoop/yarn/lib/jersey-client-1.9.jar:/usr/local/hadoop/share/hadoop/yarn/lib/guice-servlet-3.0.jar:/usr/local/hadoop/share/hadoop/yarn/lib/giraph-examples-1.1.0-for-hadoop-2.4.0-jar-with-dependencies.jar:/usr/local/hadoop/share/hadoop/yarn/lib/hadoop-mapreduce-examples-2.4.0.jar:/usr/local/hadoop/share/hadoop/yarn/lib/jettison-1.1.jar:/usr/local/hadoop/share/hadoop/yarn/lib/jaxb-impl-2.2.3-1.jar:/usr/local/hadoop/share/hadoop/yarn/lib/activation-1.1.jar:/usr/local/hadoop/share/hadoop/yarn/lib/jackson-core-asl-1.8.8.jar:/usr/local/hadoop/share/hadoop/yarn/lib/commons-cli-1.2.jar:/usr/local/hadoop/share/hadoop/yarn/lib/commons-codec-1.4.jar:/usr/local/hadoop/share/hadoop/yarn/lib/aopalliance-1.0.jar:/usr/local/hadoop/share/hadoop/yarn/lib/leveldbjni-all-1.8.jar:/usr/local/hadoop/share/hadoop/yarn/lib/protobuf-java-2.5.0.jar:/usr/local/hadoop/share/hadoop/yarn/lib/jackson-xc-1.8.8.jar:/usr/local/hadoop/share/hadoop/yarn/lib/jersey-server-1.9.jar:/usr/local/hadoop/share/hadoop/yarn/lib/jetty-util-6.1.26.jar:/usr/local/hadoop/share/hadoop/yarn/lib/jline-0.9.94.jar:/usr/local/hadoop/share/hadoop/yarn/lib/stax-api-1.0-2.jar:/usr/local/hadoop/share/hadoop/yarn/lib/guice-3.0.jar:/usr/local/hadoop/share/hadoop/yarn/lib/commons-logging-1.1.3.jar:/usr/local/hadoop/share/hadoop/yarn/lib/guava-11.0.2.jar:/usr/local/hadoop/share/hadoop/yarn/lib/jackson-mapper-asl-1.8.8.jar:/usr/local/hadoop/share/hadoop/yarn/lib/log4j-1.2.17.jar:/usr/local/hadoop/share/hadoop/yarn/li
b/commons-io-2.4.jar:/usr/local/hadoop/share/hadoop/yarn/lib/jsr305-1.3.9.jar:/usr/local/hadoop/share/hadoop/yarn/lib/jersey-core-1.9.jar:/usr/local/hadoop/share/hadoop/yarn/lib/asm-3.2.jar:/usr/local/hadoop/share/hadoop/yarn/lib/servlet-api-2.5.jar:/usr/local/hadoop/share/hadoop/yarn/lib/jaxb-api-2.2.2.jar:/usr/local/hadoop/share/hadoop/yarn/lib/jetty-6.1.26.jar:/usr/local/hadoop/share/hadoop/yarn/lib/commons-lang-2.6.jar:/usr/local/hadoop/share/hadoop/yarn/lib/commons-collections-3.2.1.jar:/usr/local/hadoop/share/hadoop/yarn/lib/jersey-guice-1.9.jar:/usr/local/hadoop/share/hadoop/yarn/lib/zookeeper-3.4.5.jar:/usr/local/hadoop/share/hadoop/yarn/lib/commons-httpclient-3.1.jar:/usr/local/hadoop/share/hadoop/yarn/lib/javax.inject-1.jar:/usr/local/hadoop/share/hadoop/yarn/lib/xz-1.0.jar:/usr/local/hadoop/share/hadoop/yarn/hadoop-yarn-server-common-2.4.0.jar:/usr/local/hadoop/share/hadoop/yarn/hadoop-yarn-client-2.4.0.jar:/usr/local/hadoop/share/hadoop/yarn/hadoop-yarn-api-2.4.0.jar:/usr/local/hadoop/share/hadoop/yarn/hadoop-yarn-applications-unmanaged-am-launcher-2.4.0.jar:/usr/local/hadoop/share/hadoop/yarn/hadoop-yarn-server-nodemanager-2.4.0.jar:/usr/local/hadoop/share/hadoop/yarn/hadoop-yarn-server-tests-2.4.0.jar:/usr/local/hadoop/share/hadoop/yarn/hadoop-yarn-server-resourcemanager-2.4.0.jar:/usr/local/hadoop/share/hadoop/yarn/hadoop-yarn-server-web-proxy-2.4.0.jar:/usr/local/hadoop/share/hadoop/yarn/hadoop-yarn-applications-distributedshell-2.4.0.jar:/usr/local/hadoop/share/hadoop/yarn/hadoop-yarn-server-applicationhistoryservice-2.4.0.jar:/usr/local/hadoop/share/hadoop/yarn/hadoop-yarn-common-2.4.0.jar:/usr/local/hadoop/share/hadoop/mapreduce/lib/commons-compress-1.4.1.jar:/usr/local/hadoop/share/hadoop/mapreduce/lib/guice-servlet-3.0.jar:/usr/local/hadoop/share/hadoop/mapreduce/lib/jackson-core-asl-1.8.8.jar:/usr/local/hadoop/share/hadoop/mapreduce/lib/hamcrest-core-1.1.jar:/usr/local/hadoop/share/hadoop/mapreduce/lib/paranamer-2.3.jar:/usr/local/hadoop/share/hadoop/mapreduce/lib/aopalliance-1.0.jar:/usr/local/hadoop/share/hadoop/mapreduce/lib/protobuf-java-2.5.0.jar:/usr/local/hadoop/share/hadoop/mapreduce/lib/jersey-server-1.9.jar:/usr/local/hadoop/share/hadoop/mapreduce/lib/avro-1.7.4.jar:/usr/local/hadoop/share/hadoop/mapreduce/lib/hadoop-annotations-2.4.0.jar:/usr/local/hadoop/share/hadoop/mapreduce/lib/guice-3.0.jar:/usr/local/hadoop/share/hadoop/mapreduce/lib/junit-4.10.jar:/usr/local/hadoop/share/hadoop/mapreduce/lib/netty-3.6.2.Final.jar:/usr/local/hadoop/share/hadoop/mapreduce/lib/jackson-mapper-asl-1.8.8.jar:/usr/local/hadoop/share/hadoop/mapreduce/lib/snappy-java-1.0.4.1.jar:/usr/local/hadoop/share/hadoop/mapreduce/lib/log4j-1.2.17.jar:/usr/local/hadoop/share/hadoop/mapreduce/lib/commons-io-2.4.jar:/usr/local/hadoop/share/hadoop/mapreduce/lib/jersey-core-1.9.jar:/usr/local/hadoop/share/hadoop/mapreduce/lib/asm-3.2.jar:/usr/local/hadoop/share/hadoop/mapreduce/lib/jersey-guice-1.9.jar:/usr/local/hadoop/share/hadoop/mapreduce/lib/javax.inject-1.jar:/usr/local/hadoop/share/hadoop/mapreduce/lib/xz-1.0.jar:/usr/local/hadoop/share/hadoop/mapreduce/giraph-examples-1.1.0-for-hadoop-2.4.0-jar-with-dependencies.jar:/usr/local/hadoop/share/hadoop/mapreduce/hadoop-mapreduce-examples-2.4.0.jar:/usr/local/hadoop/share/hadoop/mapreduce/procesamiento_salida_grafo_caminos_navegacionales-0.0.1.jar:/usr/local/hadoop/share/hadoop/mapreduce/zookeeper-3.4.6.jar:/usr/local/hadoop/share/hadoop/mapreduce/hadoop-mapreduce-client-shuffle-2.4.0.jar:/usr/local/hadoop/share/hadoop/mapreduce/procesamient
o_grafo_wikiquote-0.0.1.jar:/usr/local/hadoop/share/hadoop/mapreduce/hadoop-mapreduce-client-app-2.4.0.jar:/usr/local/hadoop/share/hadoop/mapreduce/hadoop-mapreduce-client-common-2.4.0.jar:/usr/local/hadoop/share/hadoop/mapreduce/hadoop-mapreduce-client-core-2.4.0.jar:/usr/local/hadoop/share/hadoop/mapreduce/generacion_grafo_wikiquote-0.0.1-SNAPSHOT.jar:/usr/local/hadoop/share/hadoop/mapreduce/hadoop-mapreduce-client-hs-plugins-2.4.0.jar:/usr/local/hadoop/share/hadoop/mapreduce/hadoop-mapreduce-client-jobclient-2.4.0.jar:/usr/local/hadoop/share/hadoop/mapreduce/hadoop-mapreduce-client-hs-2.4.0.jar:/usr/local/hadoop/share/hadoop/mapreduce/hadoop-mapreduce-client-jobclient-2.4.0-tests.jar:/usr/local/hadoop/contrib/capacity-scheduler/*.jar:/usr/local/hadoop/contrib/capacity-scheduler/*.jar:/usr/local/hadoop/contrib/capacity-scheduler/*.jar
>> STARTUP_MSG:   build = http://svn.apache.org/repos/asf/hadoop/common -r 1583262; compiled by 'jenkins' on 2014-03-31T08:29Z
>> STARTUP_MSG:   java = 1.7.0_79
>> ************************************************************/
>> 2016-01-25 21:13:07,015 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: registered UNIX signal handlers for [TERM, HUP, INT]
>> 2016-01-25 21:13:07,188 WARN org.apache.hadoop.hdfs.server.common.Util: Path /usr/local/hadoop/dfs/name/data should be specified as a URI in configuration files. Please update hdfs configuration.
>> 2016-01-25 21:13:07,356 WARN org.apache.hadoop.util.NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
>> 2016-01-25 21:13:07,648 INFO org.apache.hadoop.metrics2.impl.MetricsConfig: loaded properties from hadoop-metrics2.properties
>> 2016-01-25 21:13:07,723 INFO org.apache.hadoop.metrics2.impl.MetricsSystemImpl: Scheduled snapshot period at 10 second(s).
>> 2016-01-25 21:13:07,723 INFO org.apache.hadoop.metrics2.impl.MetricsSystemImpl: DataNode metrics system started
>> 2016-01-25 21:13:07,727 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: Configured hostname is localhost
>> 2016-01-25 21:13:07,728 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: Starting DataNode with maxLockedMemory = 0
>> 2016-01-25 21:13:07,757 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: Opened streaming server at /0.0.0.0:50010
>> 2016-01-25 21:13:07,760 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: Balancing bandwith is 1048576 bytes/s
>> 2016-01-25 21:13:07,839 INFO org.mortbay.log: Logging to org.slf4j.impl.Log4jLoggerAdapter(org.mortbay.log) via org.mortbay.log.Slf4jLog
>> 2016-01-25 21:13:07,843 INFO org.apache.hadoop.http.HttpRequestLog: Http request log for http.requests.datanode is not defined
>> 2016-01-25 21:13:07,853 INFO org.apache.hadoop.http.HttpServer2: Added global filter 'safety' (class=org.apache.hadoop.http.HttpServer2$QuotingInputFilter)
>> 2016-01-25 21:13:07,856 INFO org.apache.hadoop.http.HttpServer2: Added filter static_user_filter (class=org.apache.hadoop.http.lib.StaticUserWebFilter$StaticUserFilter) to context datanode
>> 2016-01-25 21:13:07,856 INFO org.apache.hadoop.http.HttpServer2: Added filter static_user_filter (class=org.apache.hadoop.http.lib.StaticUserWebFilter$StaticUserFilter) to context logs
>> 2016-01-25 21:13:07,856 INFO org.apache.hadoop.http.HttpServer2: Added filter static_user_filter (class=org.apache.hadoop.http.lib.StaticUserWebFilter$StaticUserFilter) to context static
>> 2016-01-25 21:13:07,872 INFO org.apache.hadoop.http.HttpServer2: addJerseyResourcePackage: packageName=org.apache.hadoop.hdfs.server.datanode.web.resources;org.apache.hadoop.hdfs.web.resources, pathSpec=/webhdfs/v1/*
>> 2016-01-25 21:13:07,875 INFO org.apache.hadoop.http.HttpServer2: Jetty bound to port 50075
>> 2016-01-25 21:13:07,875 INFO org.mortbay.log: jetty-6.1.26
>> 2016-01-25 21:13:08,137 INFO org.mortbay.log: Started SelectChannelConnector@0.0.0.0:50075
>> 2016-01-25 21:13:08,225 INFO org.apache.hadoop.ipc.CallQueueManager: Using callQueue class java.util.concurrent.LinkedBlockingQueue
>> 2016-01-25 21:13:08,239 INFO org.apache.hadoop.ipc.Server: Starting Socket Reader #1 for port 50020
>> 2016-01-25 21:13:08,288 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: Opened IPC server at /0.0.0.0:50020
>> 2016-01-25 21:13:08,300 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: Refresh request received for nameservices: null
>> 2016-01-25 21:13:08,316 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: Starting BPOfferServices for nameservices: <default>
>> 2016-01-25 21:13:08,321 WARN org.apache.hadoop.hdfs.server.common.Util: Path /usr/local/hadoop/dfs/name/data should be specified as a URI in configuration files. Please update hdfs configuration.
>> 2016-01-25 21:13:08,325 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: Block pool <registering> (Datanode Uuid unassigned) service to hdnode01/192.168.0.10:54310 starting to offer service
>> 2016-01-25 21:13:08,326 INFO org.apache.hadoop.ipc.Server: IPC Server Responder: starting
>> 2016-01-25 21:13:08,326 INFO org.apache.hadoop.ipc.Server: IPC Server listener on 50020: starting
>> 2016-01-25 21:13:08,719 INFO org.apache.hadoop.hdfs.server.common.Storage: Data-node version: -55 and name-node layout version: -56
>> 2016-01-25 21:13:08,828 INFO org.apache.hadoop.hdfs.server.common.Storage: Lock on /usr/local/hadoop/dfs/name/data/in_use.lock acquired by nodename 10365@jose-ubuntu
>> 2016-01-25 21:13:08,833 INFO org.apache.hadoop.hdfs.server.common.Storage: Storage directory /usr/local/hadoop/dfs/name/data is not formatted
>> 2016-01-25 21:13:08,833 INFO org.apache.hadoop.hdfs.server.common.Storage: Formatting ...
>> 2016-01-25 21:13:09,017 INFO org.apache.hadoop.hdfs.server.common.Storage: Analyzing storage directories for bpid BP-216406264-127.0.0.1-1453767164845
>> 2016-01-25 21:13:09,017 INFO org.apache.hadoop.hdfs.server.common.Storage: Locking is disabled
>> 2016-01-25 21:13:09,018 INFO org.apache.hadoop.hdfs.server.common.Storage: Storage directory /usr/local/hadoop/dfs/name/data/current/BP-216406264-127.0.0.1-1453767164845 is not formatted.
>> 2016-01-25 21:13:09,018 INFO org.apache.hadoop.hdfs.server.common.Storage: Formatting ...
>> 2016-01-25 21:13:09,018 INFO org.apache.hadoop.hdfs.server.common.Storage: Formatting block pool BP-216406264-127.0.0.1-1453767164845 directory /usr/local/hadoop/dfs/name/data/current/BP-216406264-127.0.0.1-1453767164845/current
>> 2016-01-25 21:13:09,072 INFO org.apache.hadoop.hdfs.server.common.Storage: Restored 0 block files from trash.
>> 2016-01-25 21:13:09,198 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: Setting up storage: nsid=1479061672;bpid=BP-216406264-127.0.0.1-1453767164845;lv=-55;nsInfo=lv=-56;cid=CID-8fa0e75b-6942-452a-a4e6-8cd0c24de652;nsid=1479061672;c=0;bpid=BP-216406264-127.0.0.1-1453767164845;dnuuid=null
>> 2016-01-25 21:13:09,248 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: Generated and persisted new Datanode UUID 6b4236c8-2183-49ba-84d7-a273298ba37a
>> 2016-01-25 21:13:09,268 WARN org.apache.hadoop.hdfs.server.common.Util: Path /usr/local/hadoop/dfs/name/data should be specified as a URI in configuration files. Please update hdfs configuration.
>> 2016-01-25 21:13:09,270 INFO org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsDatasetImpl: Added volume - /usr/local/hadoop/dfs/name/data/current, StorageType: DISK
>> 2016-01-25 21:13:09,279 INFO org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsDatasetImpl: Registered FSDatasetState MBean
>> 2016-01-25 21:13:09,282 INFO org.apache.hadoop.hdfs.server.datanode.DirectoryScanner: Periodic Directory Tree Verification scan starting at 1453784080282 with interval 21600000
>> 2016-01-25 21:13:09,283 INFO org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsDatasetImpl: Adding block pool BP-216406264-127.0.0.1-1453767164845
>> 2016-01-25 21:13:09,284 INFO org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsDatasetImpl: Scanning block pool BP-216406264-127.0.0.1-1453767164845 on volume /usr/local/hadoop/dfs/name/data/current...
>> 2016-01-25 21:13:09,299 INFO org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsDatasetImpl: Time taken to scan block pool BP-216406264-127.0.0.1-1453767164845 on /usr/local/hadoop/dfs/name/data/current: 15ms
>> 2016-01-25 21:13:09,300 INFO org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsDatasetImpl: Total time to scan all replicas for block pool BP-216406264-127.0.0.1-1453767164845: 17ms
>> 2016-01-25 21:13:09,301 INFO org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsDatasetImpl: Adding replicas to map for block pool BP-216406264-127.0.0.1-1453767164845 on volume /usr/local/hadoop/dfs/name/data/current...
>> 2016-01-25 21:13:09,301 INFO org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsDatasetImpl: Time to add replicas to map for block pool BP-216406264-127.0.0.1-1453767164845 on volume /usr/local/hadoop/dfs/name/data/current: 0ms
>> 2016-01-25 21:13:09,301 INFO org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsDatasetImpl: Total time to add all replicas to map: 1ms
>> 2016-01-25 21:13:09,305 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: Block pool BP-216406264-127.0.0.1-1453767164845 (Datanode Uuid null) service to hdnode01/192.168.0.10:54310 beginning handshake with NN
>> 2016-01-25 21:13:09,355 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: Block pool Block pool BP-216406264-127.0.0.1-1453767164845 (Datanode Uuid null) service to hdnode01/192.168.0.10:54310 successfully registered with NN
>> 2016-01-25 21:13:09,356 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: For namenode hdnode01/192.168.0.10:54310 using DELETEREPORT_INTERVAL of 300000 msec  BLOCKREPORT_INTERVAL of 21600000msec CACHEREPORT_INTERVAL of 10000msec Initial delay: 0msec; heartBeatInterval=3000
>> 2016-01-25 21:13:09,444 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: Namenode Block pool BP-216406264-127.0.0.1-1453767164845 (Datanode Uuid 6b4236c8-2183-49ba-84d7-a273298ba37a) service to hdnode01/192.168.0.10:54310 trying to claim ACTIVE state with txid=1
>> 2016-01-25 21:13:09,444 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: Acknowledging ACTIVE Namenode Block pool BP-216406264-127.0.0.1-1453767164845 (Datanode Uuid 6b4236c8-2183-49ba-84d7-a273298ba37a) service to hdnode01/192.168.0.10:54310
>> 2016-01-25 21:13:09,487 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: Sent 1 blockreports 0 blocks total. Took 1 msec to generate and 42 msecs for RPC and NN processing.  Got back commands none
>> 2016-01-25 21:13:09,492 INFO org.apache.hadoop.util.GSet: Computing capacity for map BlockMap
>> 2016-01-25 21:13:09,493 INFO org.apache.hadoop.util.GSet: VM type       = 64-bit
>> 2016-01-25 21:13:09,494 INFO org.apache.hadoop.util.GSet: 0.5% max memory 1.8 GB = 9.1 MB
>> 2016-01-25 21:13:09,494 INFO org.apache.hadoop.util.GSet: capacity      = 2^20 = 1048576 entries
>> 2016-01-25 21:13:09,495 INFO org.apache.hadoop.hdfs.server.datanode.BlockPoolSliceScanner: Periodic Block Verification Scanner initialized with interval 504 hours for block pool BP-216406264-127.0.0.1-1453767164845
>> 2016-01-25 21:13:09,499 INFO org.apache.hadoop.hdfs.server.datanode.DataBlockScanner: Added bpid=BP-216406264-127.0.0.1-1453767164845 to blockPoolScannerMap, new size=1
>> 2016-01-25 21:13:32,355 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: Receiving BP-216406264-127.0.0.1-1453767164845:blk_1073741825_1001 src: /192.168.0.10:58649 dest: /192.168.0.10:50010
>> 2016-01-25 21:13:32,482 INFO org.apache.hadoop.hdfs.server.datanode.DataNode.clienttrace: src: /192.168.0.10:58649, dest: /192.168.0.10:50010, bytes: 343691, op: HDFS_WRITE, cliID: DFSClient_NONMAPREDUCE_538002429_1, offset: 0, srvID: 6b4236c8-2183-49ba-84d7-a273298ba37a, blockid: BP-216406264-127.0.0.1-1453767164845:blk_1073741825_1001, duration: 98632367
>> 2016-01-25 21:13:32,482 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: PacketResponder: BP-216406264-127.0.0.1-1453767164845:blk_1073741825_1001, type=LAST_IN_PIPELINE, downstreams=0:[] terminating
>> 2016-01-25 21:13:34,291 INFO org.apache.hadoop.hdfs.server.datanode.BlockPoolSliceScanner: Verification succeeded for BP-216406264-127.0.0.1-1453767164845:blk_1073741825_1001
>> 2016-01-25 21:14:10,176 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: Receiving BP-216406264-127.0.0.1-1453767164845:blk_1073741826_1002 src: /192.168.0.10:58663 dest: /192.168.0.10:50010
>> 2016-01-25 21:14:10,220 INFO org.apache.hadoop.hdfs.server.datanode.DataNode.clienttrace: src: /192.168.0.10:58663, dest: /192.168.0.10:50010, bytes: 270263, op: HDFS_WRITE, cliID: DFSClient_NONMAPREDUCE_342504113_1, offset: 0, srvID: 6b4236c8-2183-49ba-84d7-a273298ba37a, blockid: BP-216406264-127.0.0.1-1453767164845:blk_1073741826_1002, duration: 42378742
>> 2016-01-25 21:14:10,221 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: PacketResponder: BP-216406264-127.0.0.1-1453767164845:blk_1073741826_1002, type=LAST_IN_PIPELINE, downstreams=0:[] terminating
>> 2016-01-25 21:14:10,714 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: Receiving BP-216406264-127.0.0.1-1453767164845:blk_1073741827_1003 src: /192.168.0.10:58664 dest: /192.168.0.10:50010
>> 2016-01-25 21:14:10,721 INFO org.apache.hadoop.hdfs.server.datanode.DataNode.clienttrace: src: /192.168.0.10:58664, dest: /192.168.0.10:50010, bytes: 121, op: HDFS_WRITE, cliID: DFSClient_NONMAPREDUCE_342504113_1, offset: 0, srvID: 6b4236c8-2183-49ba-84d7-a273298ba37a, blockid: BP-216406264-127.0.0.1-1453767164845:blk_1073741827_1003, duration: 2656758
>> 2016-01-25 21:14:10,721 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: PacketResponder: BP-216406264-127.0.0.1-1453767164845:blk_1073741827_1003, type=LAST_IN_PIPELINE, downstreams=0:[] terminating
>> 2016-01-25 21:14:10,853 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: Receiving BP-216406264-127.0.0.1-1453767164845:blk_1073741828_1004 src: /192.168.0.10:58665 dest: /192.168.0.10:50010
>> 2016-01-25 21:14:10,860 INFO org.apache.hadoop.hdfs.server.datanode.DataNode.clienttrace: src: /192.168.0.10:58665, dest: /192.168.0.10:50010, bytes: 26, op: HDFS_WRITE, cliID: DFSClient_NONMAPREDUCE_342504113_1, offset: 0, srvID: 6b4236c8-2183-49ba-84d7-a273298ba37a, blockid: BP-216406264-127.0.0.1-1453767164845:blk_1073741828_1004, duration: 3257396
>> 2016-01-25 21:14:10,861 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: PacketResponder: BP-216406264-127.0.0.1-1453767164845:blk_1073741828_1004, type=LAST_IN_PIPELINE, downstreams=0:[] terminating
>> 2016-01-25 21:14:11,717 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: Receiving BP-216406264-127.0.0.1-1453767164845:blk_1073741829_1005 src: /192.168.0.10:58666 dest: /192.168.0.10:50010
>> 2016-01-25 21:14:11,726 INFO org.apache.hadoop.hdfs.server.datanode.DataNode.clienttrace: src: /192.168.0.10:58666, dest: /192.168.0.10:50010, bytes: 77957, op: HDFS_WRITE, cliID: DFSClient_NONMAPREDUCE_342504113_1, offset: 0, srvID: 6b4236c8-2183-49ba-84d7-a273298ba37a, blockid: BP-216406264-127.0.0.1-1453767164845:blk_1073741829_1005, duration: 6180229
>> 2016-01-25 21:14:11,727 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: PacketResponder: BP-216406264-127.0.0.1-1453767164845:blk_1073741829_1005, type=LAST_IN_PIPELINE, downstreams=0:[] terminating
>> 2016-01-25 21:14:14,298 INFO org.apache.hadoop.hdfs.server.datanode.BlockPoolSliceScanner: Verification succeeded for BP-216406264-127.0.0.1-1453767164845:blk_1073741829_1005
>> 2016-01-25 21:14:14,299 INFO org.apache.hadoop.hdfs.server.datanode.BlockPoolSliceScanner: Verification succeeded for BP-216406264-127.0.0.1-1453767164845:blk_1073741827_1003
>> 2016-01-25 21:14:14,305 INFO org.apache.hadoop.hdfs.server.datanode.BlockPoolSliceScanner: Verification succeeded for BP-216406264-127.0.0.1-1453767164845:blk_1073741826_1002
>> 2016-01-25 21:14:14,305 INFO org.apache.hadoop.hdfs.server.datanode.BlockPoolSliceScanner: Verification succeeded for BP-216406264-127.0.0.1-1453767164845:blk_1073741828_1004
>> 2016-01-25 21:14:16,099 INFO org.apache.hadoop.hdfs.server.datanode.DataNode.clienttrace: src: /192.168.0.10:50010, dest: /192.168.0.10:58674, bytes: 272375, op: HDFS_READ, cliID: DFSClient_NONMAPREDUCE_76231625_102, offset: 0, srvID: 6b4236c8-2183-49ba-84d7-a273298ba37a, blockid: BP-216406264-127.0.0.1-1453767164845:blk_1073741826_1002, duration: 2878920
>> 2016-01-25 21:14:16,253 INFO org.apache.hadoop.hdfs.server.datanode.DataNode.clienttrace: src: /192.168.0.10:50010, dest: /192.168.0.10:58674, bytes: 30, op: HDFS_READ, cliID: DFSClient_NONMAPREDUCE_76231625_102, offset: 0, srvID: 6b4236c8-2183-49ba-84d7-a273298ba37a, blockid: BP-216406264-127.0.0.1-1453767164845:blk_1073741828_1004, duration: 236423
>> 2016-01-25 21:14:16,312 INFO org.apache.hadoop.hdfs.server.datanode.DataNode.clienttrace: src: /192.168.0.10:50010, dest: /192.168.0.10:58674, bytes: 125, op: HDFS_READ, cliID: DFSClient_NONMAPREDUCE_76231625_102, offset: 0, srvID: 6b4236c8-2183-49ba-84d7-a273298ba37a, blockid: BP-216406264-127.0.0.1-1453767164845:blk_1073741827_1003, duration: 909236
>> 2016-01-25 21:14:16,364 INFO org.apache.hadoop.hdfs.server.datanode.DataNode.clienttrace: src: /192.168.0.10:50010, dest: /192.168.0.10:58674, bytes: 78569, op: HDFS_READ, cliID: DFSClient_NONMAPREDUCE_76231625_102, offset: 0, srvID: 6b4236c8-2183-49ba-84d7-a273298ba37a, blockid: BP-216406264-127.0.0.1-1453767164845:blk_1073741829_1005, duration: 1489437
>> 2016-01-25 21:14:20,174 INFO org.apache.hadoop.hdfs.server.datanode.DataNode.clienttrace: src: /192.168.0.10:50010, dest: /192.168.0.10:58676, bytes: 30, op: HDFS_READ, cliID: DFSClient_NONMAPREDUCE_694066886_1, offset: 0, srvID: 6b4236c8-2183-49ba-84d7-a273298ba37a, blockid: BP-216406264-127.0.0.1-1453767164845:blk_1073741828_1004, duration: 899980
>> 2016-01-25 21:14:22,692 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: Receiving BP-216406264-127.0.0.1-1453767164845:blk_1073741830_1006 src: /192.168.0.10:58679 dest: /192.168.0.10:50010
>> 2016-01-25 21:14:22,754 INFO org.apache.hadoop.hdfs.server.datanode.DataNode.clienttrace: src: /192.168.0.10:58679, dest: /192.168.0.10:50010, bytes: 92684, op: HDFS_WRITE, cliID: DFSClient_NONMAPREDUCE_694066886_1, offset: 0, srvID: 6b4236c8-2183-49ba-84d7-a273298ba37a, blockid: BP-216406264-127.0.0.1-1453767164845:blk_1073741830_1006, duration: 60114851
>> 2016-01-25 21:14:22,754 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: PacketResponder: BP-216406264-127.0.0.1-1453767164845:blk_1073741830_1006, type=LAST_IN_PIPELINE, downstreams=0:[] terminating
>> 2016-01-25 21:14:24,319 INFO org.apache.hadoop.hdfs.server.datanode.BlockPoolSliceScanner: Verification succeeded for BP-216406264-127.0.0.1-1453767164845:blk_1073741830_1006
>> 2016-01-25 21:14:25,808 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: Receiving BP-216406264-127.0.0.1-1453767164845:blk_1073741831_1007 src: /192.168.0.10:58681 dest: /192.168.0.10:50010
>> 2016-01-25 21:14:35,846 INFO org.apache.hadoop.hdfs.server.datanode.DataNode.clienttrace: src: /192.168.0.10:58681, dest: /192.168.0.10:50010, bytes: 21176, op: HDFS_WRITE, cliID: DFSClient_NONMAPREDUCE_694066886_1, offset: 0, srvID: 6b4236c8-2183-49ba-84d7-a273298ba37a, blockid: BP-216406264-127.0.0.1-1453767164845:blk_1073741831_1007, duration: 9975409048
>> 2016-01-25 21:14:35,846 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: PacketResponder: BP-216406264-127.0.0.1-1453767164845:blk_1073741831_1007, type=LAST_IN_PIPELINE, downstreams=0:[] terminating
>> 2016-01-25 21:14:36,066 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: Receiving BP-216406264-127.0.0.1-1453767164845:blk_1073741832_1008 src: /192.168.0.10:58682 dest: /192.168.0.10:50010
>> 2016-01-25 21:14:36,075 INFO org.apache.hadoop.hdfs.server.datanode.DataNode.clienttrace: src: /192.168.0.10:58682, dest: /192.168.0.10:50010, bytes: 332, op: HDFS_WRITE, cliID: DFSClient_NONMAPREDUCE_694066886_1, offset: 0, srvID: 6b4236c8-2183-49ba-84d7-a273298ba37a, blockid: BP-216406264-127.0.0.1-1453767164845:blk_1073741832_1008, duration: 4992595
>> 2016-01-25 21:14:36,075 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: PacketResponder: BP-216406264-127.0.0.1-1453767164845:blk_1073741832_1008, type=LAST_IN_PIPELINE, downstreams=0:[] terminating
>> 2016-01-25 21:14:36,548 INFO org.apache.hadoop.hdfs.server.datanode.DataNode.clienttrace: src: /192.168.0.10:50010, dest: /192.168.0.10:58683, bytes: 21344, op: HDFS_READ, cliID: DFSClient_NONMAPREDUCE_694066886_1, offset: 0, srvID: 6b4236c8-2183-49ba-84d7-a273298ba37a, blockid: BP-216406264-127.0.0.1-1453767164845:blk_1073741831_1007, duration: 497225
>> 2016-01-25 21:14:36,564 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: Receiving BP-216406264-127.0.0.1-1453767164845:blk_1073741833_1009 src: /192.168.0.10:58684 dest: /192.168.0.10:50010
>> 2016-01-25 21:14:36,572 INFO org.apache.hadoop.hdfs.server.datanode.DataNode.clienttrace: src: /192.168.0.10:58684, dest: /192.168.0.10:50010, bytes: 21176, op: HDFS_WRITE, cliID: DFSClient_NONMAPREDUCE_694066886_1, offset: 0, srvID: 6b4236c8-2183-49ba-84d7-a273298ba37a, blockid: BP-216406264-127.0.0.1-1453767164845:blk_1073741833_1009, duration: 2649337
>> 2016-01-25 21:14:36,573 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: PacketResponder: BP-216406264-127.0.0.1-1453767164845:blk_1073741833_1009, type=LAST_IN_PIPELINE, downstreams=0:[] terminating
>> 2016-01-25 21:14:36,622 INFO org.apache.hadoop.hdfs.server.datanode.DataNode.clienttrace: src: /192.168.0.10:50010, dest: /192.168.0.10:58683, bytes: 93412, op: HDFS_READ, cliID: DFSClient_NONMAPREDUCE_694066886_1, offset: 0, srvID: 6b4236c8-2183-49ba-84d7-a273298ba37a, blockid: BP-216406264-127.0.0.1-1453767164845:blk_1073741830_1006, duration: 379439
>> 2016-01-25 21:14:36,638 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: Receiving BP-216406264-127.0.0.1-1453767164845:blk_1073741834_1010 src: /192.168.0.10:58685 dest: /192.168.0.10:50010
>> 2016-01-25 21:14:36,646 INFO org.apache.hadoop.hdfs.server.datanode.DataNode.clienttrace: src: /192.168.0.10:58685, dest: /192.168.0.10:50010, bytes: 92684, op: HDFS_WRITE, cliID: DFSClient_NONMAPREDUCE_694066886_1, offset: 0, srvID: 6b4236c8-2183-49ba-84d7-a273298ba37a, blockid: BP-216406264-127.0.0.1-1453767164845:blk_1073741834_1010, duration: 3135698
>> 2016-01-25 21:14:36,646 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: PacketResponder: BP-216406264-127.0.0.1-1453767164845:blk_1073741834_1010, type=LAST_IN_PIPELINE, downstreams=0:[] terminating
>> 2016-01-25 21:14:39,335 INFO org.apache.hadoop.hdfs.server.datanode.BlockPoolSliceScanner: Verification succeeded for BP-216406264-127.0.0.1-1453767164845:blk_1073741833_1009
>> 2016-01-25 21:14:39,336 INFO org.apache.hadoop.hdfs.server.datanode.BlockPoolSliceScanner: Verification succeeded for BP-216406264-127.0.0.1-1453767164845:blk_1073741831_1007
>> 2016-01-25 21:14:39,337 INFO org.apache.hadoop.hdfs.server.datanode.BlockPoolSliceScanner: Verification succeeded for BP-216406264-127.0.0.1-1453767164845:blk_1073741832_1008
>> 2016-01-25 21:14:39,338 INFO org.apache.hadoop.hdfs.server.datanode.BlockPoolSliceScanner: Verification succeeded for BP-216406264-127.0.0.1-1453767164845:blk_1073741834_1010
>> 2016-01-25 21:14:39,376 INFO org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsDatasetAsyncDiskService: Scheduling blk_1073741826_1002 file /usr/local/hadoop/dfs/name/data/current/BP-216406264-127.0.0.1-1453767164845/current/finalized/blk_1073741826 for deletion
>> 2016-01-25 21:14:39,379 INFO org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsDatasetAsyncDiskService: Scheduling blk_1073741827_1003 file /usr/local/hadoop/dfs/name/data/current/BP-216406264-127.0.0.1-1453767164845/current/finalized/blk_1073741827 for deletion
>> 2016-01-25 21:14:39,379 INFO org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsDatasetAsyncDiskService: Deleted BP-216406264-127.0.0.1-1453767164845 blk_1073741826_1002 file /usr/local/hadoop/dfs/name/data/current/BP-216406264-127.0.0.1-1453767164845/current/finalized/blk_1073741826
>> 2016-01-25 21:14:39,379 INFO org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsDatasetAsyncDiskService: Scheduling blk_1073741828_1004 file /usr/local/hadoop/dfs/name/data/current/BP-216406264-127.0.0.1-1453767164845/current/finalized/blk_1073741828 for deletion
>> 2016-01-25 21:14:39,380 INFO org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsDatasetAsyncDiskService: Deleted BP-216406264-127.0.0.1-1453767164845 blk_1073741827_1003 file /usr/local/hadoop/dfs/name/data/current/BP-216406264-127.0.0.1-1453767164845/current/finalized/blk_1073741827
>> 2016-01-25 21:14:39,380 INFO org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsDatasetAsyncDiskService: Scheduling blk_1073741829_1005 file /usr/local/hadoop/dfs/name/data/current/BP-216406264-127.0.0.1-1453767164845/current/finalized/blk_1073741829 for deletion
>> 2016-01-25 21:14:39,380 INFO org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsDatasetAsyncDiskService: Deleted BP-216406264-127.0.0.1-1453767164845 blk_1073741828_1004 file /usr/local/hadoop/dfs/name/data/current/BP-216406264-127.0.0.1-1453767164845/current/finalized/blk_1073741828
>> 2016-01-25 21:14:39,380 INFO org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsDatasetAsyncDiskService: Scheduling blk_1073741830_1006 file /usr/local/hadoop/dfs/name/data/current/BP-216406264-127.0.0.1-1453767164845/current/finalized/blk_1073741830 for deletion
>> 2016-01-25 21:14:39,381 INFO org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsDatasetAsyncDiskService: Deleted BP-216406264-127.0.0.1-1453767164845 blk_1073741829_1005 file /usr/local/hadoop/dfs/name/data/current/BP-216406264-127.0.0.1-1453767164845/current/finalized/blk_1073741829
>> 2016-01-25 21:14:39,381 INFO org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsDatasetAsyncDiskService: Scheduling blk_1073741831_1007 file /usr/local/hadoop/dfs/name/data/current/BP-216406264-127.0.0.1-1453767164845/current/finalized/blk_1073741831 for deletion
>> 2016-01-25 21:14:39,382 INFO org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsDatasetAsyncDiskService: Deleted BP-216406264-127.0.0.1-1453767164845 blk_1073741830_1006 file /usr/local/hadoop/dfs/name/data/current/BP-216406264-127.0.0.1-1453767164845/current/finalized/blk_1073741830
>> 2016-01-25 21:14:39,382 INFO org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsDatasetAsyncDiskService: Deleted BP-216406264-127.0.0.1-1453767164845 blk_1073741831_1007 file /usr/local/hadoop/dfs/name/data/current/BP-216406264-127.0.0.1-1453767164845/current/finalized/blk_1073741831
>> 2016-01-25 21:14:44,797 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: Receiving BP-216406264-127.0.0.1-1453767164845:blk_1073741835_1011 src: /192.168.0.10:58688 dest: /192.168.0.10:50010
>> 2016-01-25 21:14:44,834 INFO org.apache.hadoop.hdfs.server.datanode.DataNode.clienttrace: src: /192.168.0.10:58688, dest: /192.168.0.10:50010, bytes: 57450, op: HDFS_WRITE, cliID: DFSClient_NONMAPREDUCE_-433405715_88, offset: 0, srvID: 6b4236c8-2183-49ba-84d7-a273298ba37a, blockid: BP-216406264-127.0.0.1-1453767164845:blk_1073741835_1011, duration: 34522284
>> 2016-01-25 21:14:44,834 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: PacketResponder: BP-216406264-127.0.0.1-1453767164845:blk_1073741835_1011, type=LAST_IN_PIPELINE, downstreams=0:[] terminating
>> 2016-01-25 21:14:49,343 INFO org.apache.hadoop.hdfs.server.datanode.BlockPoolSliceScanner: Verification succeeded for BP-216406264-127.0.0.1-1453767164845:blk_1073741835_1011
>> 2016-01-25 21:16:33,785 INFO org.apache.hadoop.hdfs.server.datanode.DataNode.clienttrace: src: /192.168.0.10:50010, dest: /192.168.0.10:58694, bytes: 336, op: HDFS_READ, cliID: DFSClient_NONMAPREDUCE_-1832227986_1, offset: 0, srvID: 6b4236c8-2183-49ba-84d7-a273298ba37a, blockid: BP-216406264-127.0.0.1-1453767164845:blk_1073741832_1008, duration: 284719
>> 2016-01-25 21:16:36,371 INFO org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsDatasetAsyncDiskService: Scheduling blk_1073741832_1008 file /usr/local/hadoop/dfs/name/data/current/BP-216406264-127.0.0.1-1453767164845/current/finalized/blk_1073741832 for deletion
>> 2016-01-25 21:16:36,372 INFO org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsDatasetAsyncDiskService: Deleted BP-216406264-127.0.0.1-1453767164845 blk_1073741832_1008 file /usr/local/hadoop/dfs/name/data/current/BP-216406264-127.0.0.1-1453767164845/current/finalized/blk_1073741832
>>
>>
>>
>>
>> 2016-01-21 18:52 GMT-03:00 Namikaze Minato <ll...@gmail.com>:
>>
>>> It could be a classpath issue (see
>>> http://stackoverflow.com/a/25090151/4486184); I strongly suspect
>>> this is the case.
>>>
>>> You could drill down to the exact root cause by looking at the
>>> datanode logs (see
>>>
>>> http://mail-archives.apache.org/mod_mbox/hadoop-user/201410.mbox/%3CCAEMetGubzq12LXbLRk6N4ejOoKse9dLEWMW8_WE6aRj=+RQtVw@mail.gmail.com%3E
>>> )
>>> But I'm not sure we would see a different error from the one we already had...
>>>
>>> Check if your application has the correct values for the following
>>> variables (a quick shell check is sketched right after this list):
>>> HADOOP_CONF_DIR
>>> HADOOP_COMMON_HOME
>>> HADOOP_HDFS_HOME
>>> HADOOP_MAPRED_HOME
>>> HADOOP_YARN_HOME
>>>
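>>> (A quick check from the shell that submits the job — just a sketch using
>>> bash indirect expansion:
>>>
>>> for v in HADOOP_CONF_DIR HADOOP_COMMON_HOME HADOOP_HDFS_HOME \
>>>          HADOOP_MAPRED_HOME HADOOP_YARN_HOME; do
>>>   echo "$v=${!v}"
>>> done
>>>
>>> Each of them should point at the Hadoop install, with HADOOP_CONF_DIR
>>> pointing at its etc/hadoop directory.)
>>>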
>>> I'm afraid I can't help you much more than this myself, sorry...
>>>
>>> LLoyd
>>>
>>> On 20 January 2016 at 02:08, José Luis Larroque <la...@gmail.com>
>>> wrote:
>>> > Hi guys, thanks for your answers.
>>> >
>>> > Wordcount logs:
>>> >
>>> > 16/01/19 21:58:32 INFO client.RMProxy: Connecting to ResourceManager at
>>> > hdnode01/192.168.0.10:8050
>>> > SLF4J: Class path contains multiple SLF4J bindings.
>>> > SLF4J: Found binding in
>>> >
>>> [jar:file:/usr/local/hadoop/share/hadoop/common/lib/slf4j-log4j12-1.7.5.jar!/org/slf4j/impl/StaticLoggerBinder.class]
>>> > SLF4J: Found binding in
>>> >
>>> [jar:file:/usr/local/hadoop/share/hadoop/yarn/lib/giraph-examples-1.1.0-for-hadoop-2.4.0-jar-with-dependencies.jar!/org/slf4j/impl/StaticLoggerBinder.class]
>>> > SLF4J: Found binding in
>>> >
>>> [jar:file:/usr/local/hadoop/share/hadoop/mapreduce/giraph-examples-1.1.0-for-hadoop-2.4.0-jar-with-dependencies.jar!/org/slf4j/impl/StaticLoggerBinder.class]
>>> > SLF4J: See http://www.slf4j.org/codes.html#multiple_bindings for an
>>> > explanation.
>>> > SLF4J: Actual binding is of type [org.slf4j.impl.Log4jLoggerFactory]
>>> > 16/01/19 21:58:32 WARN util.NativeCodeLoader: Unable to load
>>> native-hadoop
>>> > library for your platform... using builtin-java classes where
>>> applicable
>>> > hduser@jose-ubuntu:/usr/local/hadoop$ nano
>>> >
>>> /home/hduser/Desktop/Tesina/casos_de_prueba/resultados/resultado_cluster_modo_yarn_wordcount
>>> > hduser@jose-ubuntu:/usr/local/hadoop$ nano
>>> >
>>> /home/hduser/Desktop/Tesina/casos_de_prueba/resultados/resultado_cluster_modo_yarn_wordcount
>>> > hduser@jose-ubuntu:/usr/local/hadoop$ cat
>>> >
>>> /home/hduser/Desktop/Tesina/casos_de_prueba/resultados/resultado_cluster_modo_yarn_wordcount
>>> >
>>> >
>>> > Container: container_1453244277886_0001_01_000002 on localhost_35711
>>> > ======================================================================
>>> > LogType: stderr
>>> > LogLength: 45
>>> > Log Contents:
>>> > Error: Could not find or load main class 256
>>> >
>>> > LogType: stdout
>>> > LogLength: 0
>>> > Log Contents:
>>> >
>>> >
>>> >
>>> > Container: container_1453244277886_0001_01_000003 on localhost_35711
>>> > ======================================================================
>>> > LogType: stderr
>>> > LogLength: 45
>>> > Log Contents:
>>> > Error: Could not find or load main class 256
>>> >
>>> > LogType: stdout
>>> > LogLength: 0
>>> > Log Contents:
>>> >
>>> >
>>> >
>>> > Container: container_1453244277886_0001_01_000004 on localhost_35711
>>> > ======================================================================
>>> > LogType: stderr
>>> > LogLength: 45
>>> > Log Contents:
>>> > Error: Could not find or load main class 256
>>> >
>>> > LogType: stdout
>>> > LogLength: 0
>>> > Log Contents:
>>> >
>>> >
>>> >
>>> > Container: container_1453244277886_0001_01_000005 on localhost_35711
>>> > ======================================================================
>>> > LogType: stderr
>>> > LogLength: 45
>>> > Log Contents:
>>> > Error: Could not find or load main class 256
>>> >
>>> > LogType: stdout
>>> > LogLength: 0
>>> > Log Contents:
>>> >
>>> >
>>> >
>>> > Container: container_1453244277886_0001_01_000001 on localhost_35711
>>> > ======================================================================
>>> > LogType: stderr
>>> > LogLength: 929
>>> > Log Contents:
>>> > SLF4J: Class path contains multiple SLF4J bindings.
>>> > SLF4J: Found binding in
>>> >
>>> [jar:file:/usr/local/hadoop/share/hadoop/common/lib/slf4j-log4j12-1.7.5.jar!/org/slf4j/impl/StaticLoggerBinder.class]
>>> > SLF4J: Found binding in
>>> >
>>> [jar:file:/usr/local/hadoop/share/hadoop/yarn/lib/giraph-examples-1.1.0-for-hadoop-2.4.0-jar-with-dependencies.jar!/org/slf4j/impl/StaticLoggerBinder.class]
>>> > SLF4J: Found binding in
>>> >
>>> [jar:file:/usr/local/hadoop/share/hadoop/mapreduce/giraph-examples-1.1.0-for-hadoop-2.4.0-jar-with-dependencies.jar!/org/slf4j/impl/StaticLoggerBinder.class]
>>> > SLF4J: See http://www.slf4j.org/codes.html#multiple_bindings for an
>>> > explanation.
>>> > SLF4J: Actual binding is of type [org.slf4j.impl.Log4jLoggerFactory]
>>> > log4j:WARN No appenders could be found for logger
>>> > (org.apache.hadoop.ipc.Server).
>>> > log4j:WARN Please initialize the log4j system properly.
>>> > log4j:WARN See http://logging.apache.org/log4j/1.2/faq.html#noconfig
>>> for
>>> > more info.
>>> >
>>> > LogType: stdout
>>> > LogLength: 0
>>> > Log Contents:
>>> >
>>> > LogType: syslog
>>> > LogLength: 56780
>>> > Log Contents:
>>> > 2016-01-19 20:04:11,329 INFO [main]
>>> > org.apache.hadoop.mapreduce.v2.app.MRAppMaster: Created MRAppMaster for
>>> > application appattempt_1453244277886_0001_000001
>>> > 2016-01-19 20:04:11,657 WARN [main]
>>> org.apache.hadoop.conf.Configuration:
>>> > job.xml:an attempt to override final parameter:
>>> > mapreduce.job.end-notification.max.retry.interval;  Ignoring.
>>> > 2016-01-19 20:04:11,674 WARN [main]
>>> org.apache.hadoop.conf.Configuration:
>>> > job.xml:an attempt to override final parameter:
>>> > mapreduce.job.end-notification.max.attempts;  Ignoring.
>>> > 2016-01-19 20:04:11,765 WARN [main]
>>> org.apache.hadoop.util.NativeCodeLoader:
>>> > Unable to load native-hadoop library for your platform... using
>>> builtin-java
>>> > classes where applicable
>>> > 2016-01-19 20:04:11,776 INFO [main]
>>> > org.apache.hadoop.mapreduce.v2.app.MRAppMaster: Executing with tokens:
>>> > 2016-01-19 20:04:11,776 INFO [main]
>>> > org.apache.hadoop.mapreduce.v2.app.MRAppMaster: Kind: YARN_AM_RM_TOKEN,
>>> > Service: , Ident:
>>> > (org.apache.hadoop.yarn.security.AMRMTokenIdentifier@73e8f4b9)
>>> > 2016-01-19 20:04:11,801 INFO [main]
>>> > org.apache.hadoop.mapreduce.v2.app.MRAppMaster: The specific max
>>> attempts: 2
>>> > for application: 1. Attempt num: 1 is last retry: false
>>> > 2016-01-19 20:04:11,806 INFO [main]
>>> > org.apache.hadoop.mapreduce.v2.app.MRAppMaster: Using mapred
>>> > newApiCommitter.
>>> > 2016-01-19 20:04:11,934 WARN [main]
>>> org.apache.hadoop.conf.Configuration:
>>> > job.xml:an attempt to override final parameter: dfs.datanode.data.dir;
>>> > Ignoring.
>>> > 2016-01-19 20:04:11,939 WARN [main]
>>> org.apache.hadoop.conf.Configuration:
>>> > job.xml:an attempt to override final parameter:
>>> > mapreduce.job.end-notification.max.retry.interval;  Ignoring.
>>> > 2016-01-19 20:04:11,948 WARN [main]
>>> org.apache.hadoop.conf.Configuration:
>>> > job.xml:an attempt to override final parameter: dfs.namenode.name.dir;
>>> > Ignoring.
>>> > 2016-01-19 20:04:11,953 WARN [main]
>>> org.apache.hadoop.conf.Configuration:
>>> > job.xml:an attempt to override final parameter:
>>> > mapreduce.job.end-notification.max.attempts;  Ignoring.
>>> > 2016-01-19 20:04:12,464 INFO [main]
>>> > org.apache.hadoop.mapreduce.v2.app.MRAppMaster: OutputCommitter set in
>>> > config null
>>> > 2016-01-19 20:04:12,526 INFO [main]
>>> > org.apache.hadoop.mapreduce.v2.app.MRAppMaster: OutputCommitter is
>>> > org.apache.hadoop.mapreduce.lib.output.FileOutputCommitter
>>> > 2016-01-19 20:04:12,548 INFO [main]
>>> > org.apache.hadoop.yarn.event.AsyncDispatcher: Registering class
>>> > org.apache.hadoop.mapreduce.jobhistory.EventType for class
>>> > org.apache.hadoop.mapreduce.jobhistory.JobHistoryEventHandler
>>> > 2016-01-19 20:04:12,549 INFO [main]
>>> > org.apache.hadoop.yarn.event.AsyncDispatcher: Registering class
>>> > org.apache.hadoop.mapreduce.v2.app.job.event.JobEventType for class
>>> > org.apache.hadoop.mapreduce.v2.app.MRAppMaster$JobEventDispatcher
>>> > 2016-01-19 20:04:12,550 INFO [main]
>>> > org.apache.hadoop.yarn.event.AsyncDispatcher: Registering class
>>> > org.apache.hadoop.mapreduce.v2.app.job.event.TaskEventType for class
>>> > org.apache.hadoop.mapreduce.v2.app.MRAppMaster$TaskEventDispatcher
>>> > 2016-01-19 20:04:12,551 INFO [main]
>>> > org.apache.hadoop.yarn.event.AsyncDispatcher: Registering class
>>> > org.apache.hadoop.mapreduce.v2.app.job.event.TaskAttemptEventType for
>>> class
>>> >
>>> org.apache.hadoop.mapreduce.v2.app.MRAppMaster$TaskAttemptEventDispatcher
>>> > 2016-01-19 20:04:12,552 INFO [main]
>>> > org.apache.hadoop.yarn.event.AsyncDispatcher: Registering class
>>> > org.apache.hadoop.mapreduce.v2.app.commit.CommitterEventType for class
>>> > org.apache.hadoop.mapreduce.v2.app.commit.CommitterEventHandler
>>> > 2016-01-19 20:04:12,557 INFO [main]
>>> > org.apache.hadoop.yarn.event.AsyncDispatcher: Registering class
>>> > org.apache.hadoop.mapreduce.v2.app.speculate.Speculator$EventType for
>>> class
>>> >
>>> org.apache.hadoop.mapreduce.v2.app.MRAppMaster$SpeculatorEventDispatcher
>>> > 2016-01-19 20:04:12,558 INFO [main]
>>> > org.apache.hadoop.yarn.event.AsyncDispatcher: Registering class
>>> > org.apache.hadoop.mapreduce.v2.app.rm.ContainerAllocator$EventType for
>>> class
>>> > org.apache.hadoop.mapreduce.v2.app.MRAppMaster$ContainerAllocatorRouter
>>> > 2016-01-19 20:04:12,559 INFO [main]
>>> > org.apache.hadoop.yarn.event.AsyncDispatcher: Registering class
>>> >
>>> org.apache.hadoop.mapreduce.v2.app.launcher.ContainerLauncher$EventType for
>>> > class
>>> org.apache.hadoop.mapreduce.v2.app.MRAppMaster$ContainerLauncherRouter
>>> > 2016-01-19 20:04:12,615 INFO [main]
>>> > org.apache.hadoop.mapreduce.jobhistory.JobHistoryEventHandler: Perms
>>> after
>>> > creating 488, Expected: 504
>>> > 2016-01-19 20:04:12,615 INFO [main]
>>> > org.apache.hadoop.mapreduce.jobhistory.JobHistoryEventHandler:
>>> Explicitly
>>> > setting permissions to : 504, rwxrwx---
>>> > 2016-01-19 20:04:12,731 INFO [main]
>>> > org.apache.hadoop.yarn.event.AsyncDispatcher: Registering class
>>> > org.apache.hadoop.mapreduce.v2.app.job.event.JobFinishEvent$Type for
>>> class
>>> > org.apache.hadoop.mapreduce.v2.app.MRAppMaster$JobFinishEventHandler
>>> > 2016-01-19 20:04:12,956 INFO [main]
>>> > org.apache.hadoop.metrics2.impl.MetricsConfig: loaded properties from
>>> > hadoop-metrics2.properties
>>> > 2016-01-19 20:04:13,018 INFO [main]
>>> > org.apache.hadoop.metrics2.impl.MetricsSystemImpl: Scheduled snapshot
>>> period
>>> > at 10 second(s).
>>> > 2016-01-19 20:04:13,018 INFO [main]
>>> > org.apache.hadoop.metrics2.impl.MetricsSystemImpl: MRAppMaster metrics
>>> > system started
>>> > 2016-01-19 20:04:13,026 INFO [main]
>>> > org.apache.hadoop.mapreduce.v2.app.job.impl.JobImpl: Adding job token
>>> for
>>> > job_1453244277886_0001 to jobTokenSecretManager
>>> > 2016-01-19 20:04:13,139 INFO [main]
>>> > org.apache.hadoop.mapreduce.v2.app.job.impl.JobImpl: Not uberizing
>>> > job_1453244277886_0001 because: not enabled;
>>> > 2016-01-19 20:04:13,154 INFO [main]
>>> > org.apache.hadoop.mapreduce.v2.app.job.impl.JobImpl: Input size for job
>>> > job_1453244277886_0001 = 343691. Number of splits = 1
>>> > 2016-01-19 20:04:13,156 INFO [main]
>>> > org.apache.hadoop.mapreduce.v2.app.job.impl.JobImpl: Number of reduces
>>> for
>>> > job job_1453244277886_0001 = 1
>>> > 2016-01-19 20:04:13,156 INFO [main]
>>> > org.apache.hadoop.mapreduce.v2.app.job.impl.JobImpl:
>>> > job_1453244277886_0001Job Transitioned from NEW to INITED
>>> > 2016-01-19 20:04:13,157 INFO [main]
>>> > org.apache.hadoop.mapreduce.v2.app.MRAppMaster: MRAppMaster launching
>>> > normal, non-uberized, multi-container job job_1453244277886_0001.
>>> > 2016-01-19 20:04:13,186 INFO [main]
>>> org.apache.hadoop.ipc.CallQueueManager:
>>> > Using callQueue class java.util.concurrent.LinkedBlockingQueue
>>> > 2016-01-19 20:04:13,195 INFO [Socket Reader #1 for port 56461]
>>> > org.apache.hadoop.ipc.Server: Starting Socket Reader #1 for port 56461
>>> > 2016-01-19 20:04:13,237 INFO [main]
>>> > org.apache.hadoop.yarn.factories.impl.pb.RpcServerFactoryPBImpl: Adding
>>> > protocol org.apache.hadoop.mapreduce.v2.api.MRClientProtocolPB to the
>>> server
>>> > 2016-01-19 20:04:13,238 INFO [IPC Server Responder]
>>> > org.apache.hadoop.ipc.Server: IPC Server Responder: starting
>>> > 2016-01-19 20:04:13,238 INFO [IPC Server listener on 56461]
>>> > org.apache.hadoop.ipc.Server: IPC Server listener on 56461: starting
>>> > 2016-01-19 20:04:13,239 INFO [main]
>>> > org.apache.hadoop.mapreduce.v2.app.client.MRClientService: Instantiated
>>> > MRClientService at jose-ubuntu/127.0.0.1:56461
>>> > 2016-01-19 20:04:13,300 INFO [main] org.mortbay.log: Logging to
>>> > org.slf4j.impl.Log4jLoggerAdapter(org.mortbay.log) via
>>> > org.mortbay.log.Slf4jLog
>>> > 2016-01-19 20:04:13,304 INFO [main]
>>> org.apache.hadoop.http.HttpRequestLog:
>>> > Http request log for http.requests.mapreduce is not defined
>>> > 2016-01-19 20:04:13,315 INFO [main] org.apache.hadoop.http.HttpServer2:
>>> > Added global filter 'safety'
>>> > (class=org.apache.hadoop.http.HttpServer2$QuotingInputFilter)
>>> > 2016-01-19 20:04:13,320 INFO [main] org.apache.hadoop.http.HttpServer2:
>>> > Added filter AM_PROXY_FILTER
>>> > (class=org.apache.hadoop.yarn.server.webproxy.amfilter.AmIpFilter) to
>>> > context mapreduce
>>> > 2016-01-19 20:04:13,321 INFO [main] org.apache.hadoop.http.HttpServer2:
>>> > Added filter AM_PROXY_FILTER
>>> > (class=org.apache.hadoop.yarn.server.webproxy.amfilter.AmIpFilter) to
>>> > context static
>>> > 2016-01-19 20:04:13,324 INFO [main] org.apache.hadoop.http.HttpServer2:
>>> > adding path spec: /mapreduce/*
>>> > 2016-01-19 20:04:13,324 INFO [main] org.apache.hadoop.http.HttpServer2:
>>> > adding path spec: /ws/*
>>> > 2016-01-19 20:04:13,335 INFO [main] org.apache.hadoop.http.HttpServer2:
>>> > Jetty bound to port 44070
>>> > 2016-01-19 20:04:13,335 INFO [main] org.mortbay.log: jetty-6.1.26
>>> > 2016-01-19 20:04:13,370 INFO [main] org.mortbay.log: Extract
>>> >
>>> jar:file:/usr/local/hadoop/share/hadoop/yarn/hadoop-yarn-common-2.4.0.jar!/webapps/mapreduce
>>> > to /tmp/Jetty_0_0_0_0_44070_mapreduce____rdpvio/webapp
>>> > 2016-01-19 20:04:13,647 INFO [main] org.mortbay.log: Started
>>> > SelectChannelConnector@0.0.0.0:44070
>>> > 2016-01-19 20:04:13,647 INFO [main]
>>> org.apache.hadoop.yarn.webapp.WebApps:
>>> > Web app /mapreduce started at 44070
>>> > 2016-01-19 20:04:13,956 INFO [main]
>>> org.apache.hadoop.yarn.webapp.WebApps:
>>> > Registered webapp guice modules
>>> > 2016-01-19 20:04:13,960 INFO [AsyncDispatcher event handler]
>>> > org.apache.hadoop.mapreduce.v2.app.speculate.DefaultSpeculator:
>>> JOB_CREATE
>>> > job_1453244277886_0001
>>> > 2016-01-19 20:04:13,961 INFO [main]
>>> org.apache.hadoop.ipc.CallQueueManager:
>>> > Using callQueue class java.util.concurrent.LinkedBlockingQueue
>>> > 2016-01-19 20:04:13,961 INFO [Socket Reader #1 for port 45584]
>>> > org.apache.hadoop.ipc.Server: Starting Socket Reader #1 for port 45584
>>> > 2016-01-19 20:04:13,966 INFO [IPC Server Responder]
>>> > org.apache.hadoop.ipc.Server: IPC Server Responder: starting
>>> > 2016-01-19 20:04:13,966 INFO [IPC Server listener on 45584]
>>> > org.apache.hadoop.ipc.Server: IPC Server listener on 45584: starting
>>> > 2016-01-19 20:04:13,987 INFO [main]
>>> > org.apache.hadoop.mapreduce.v2.app.rm.RMContainerRequestor:
>>> > nodeBlacklistingEnabled:true
>>> > 2016-01-19 20:04:13,987 INFO [main]
>>> > org.apache.hadoop.mapreduce.v2.app.rm.RMContainerRequestor:
>>> > maxTaskFailuresPerNode is 3
>>> > 2016-01-19 20:04:13,988 INFO [main]
>>> > org.apache.hadoop.mapreduce.v2.app.rm.RMContainerRequestor:
>>> > blacklistDisablePercent is 33
>>> > 2016-01-19 20:04:14,052 WARN [main]
>>> org.apache.hadoop.conf.Configuration:
>>> > job.xml:an attempt to override final parameter: dfs.datanode.data.dir;
>>> > Ignoring.
>>> > 2016-01-19 20:04:14,054 WARN [main]
>>> org.apache.hadoop.conf.Configuration:
>>> > job.xml:an attempt to override final parameter:
>>> > mapreduce.job.end-notification.max.retry.interval;  Ignoring.
>>> > 2016-01-19 20:04:14,057 WARN [main]
>>> org.apache.hadoop.conf.Configuration:
>>> > job.xml:an attempt to override final parameter: dfs.namenode.name.dir;
>>> > Ignoring.
>>> > 2016-01-19 20:04:14,059 WARN [main]
>>> org.apache.hadoop.conf.Configuration:
>>> > job.xml:an attempt to override final parameter:
>>> > mapreduce.job.end-notification.max.attempts;  Ignoring.
>>> > 2016-01-19 20:04:14,062 INFO [main]
>>> org.apache.hadoop.yarn.client.RMProxy:
>>> > Connecting to ResourceManager at hdnode01/192.168.0.10:8030
>>> > 2016-01-19 20:04:14,158 INFO [main]
>>> > org.apache.hadoop.mapreduce.v2.app.rm.RMContainerAllocator:
>>> > maxContainerCapability: 2000
>>> > 2016-01-19 20:04:14,158 INFO [main]
>>> > org.apache.hadoop.mapreduce.v2.app.rm.RMContainerAllocator: queue:
>>> default
>>> > 2016-01-19 20:04:14,162 INFO [main]
>>> > org.apache.hadoop.mapreduce.v2.app.launcher.ContainerLauncherImpl:
>>> Upper
>>> > limit on the thread pool size is 500
>>> > 2016-01-19 20:04:14,164 INFO [main]
>>> >
>>> org.apache.hadoop.yarn.client.api.impl.ContainerManagementProtocolProxy:
>>> > yarn.client.max-nodemanagers-proxies : 500
>>> > 2016-01-19 20:04:14,172 INFO [AsyncDispatcher event handler]
>>> > org.apache.hadoop.mapreduce.v2.app.job.impl.JobImpl:
>>> > job_1453244277886_0001Job Transitioned from INITED to SETUP
>>> > 2016-01-19 20:04:14,174 INFO [CommitterEvent Processor #0]
>>> > org.apache.hadoop.mapreduce.v2.app.commit.CommitterEventHandler:
>>> Processing
>>> > the event EventType: JOB_SETUP
>>> > 2016-01-19 20:04:14,210 INFO [AsyncDispatcher event handler]
>>> > org.apache.hadoop.mapreduce.v2.app.job.impl.JobImpl:
>>> > job_1453244277886_0001Job Transitioned from SETUP to RUNNING
>>> > 2016-01-19 20:04:14,227 INFO [AsyncDispatcher event handler]
>>> > org.apache.hadoop.yarn.util.RackResolver: Resolved localhost to
>>> > /default-rack
>>> > 2016-01-19 20:04:14,230 INFO [AsyncDispatcher event handler]
>>> > org.apache.hadoop.mapreduce.v2.app.job.impl.TaskImpl:
>>> > task_1453244277886_0001_m_000000 Task Transitioned from NEW to
>>> SCHEDULED
>>> > 2016-01-19 20:04:14,230 INFO [AsyncDispatcher event handler]
>>> > org.apache.hadoop.mapreduce.v2.app.job.impl.TaskImpl:
>>> > task_1453244277886_0001_r_000000 Task Transitioned from NEW to
>>> SCHEDULED
>>> > 2016-01-19 20:04:14,232 INFO [AsyncDispatcher event handler]
>>> > org.apache.hadoop.mapreduce.v2.app.job.impl.TaskAttemptImpl:
>>> > attempt_1453244277886_0001_m_000000_0 TaskAttempt Transitioned from
>>> NEW to
>>> > UNASSIGNED
>>> > 2016-01-19 20:04:14,232 INFO [AsyncDispatcher event handler]
>>> > org.apache.hadoop.mapreduce.v2.app.job.impl.TaskAttemptImpl:
>>> > attempt_1453244277886_0001_r_000000_0 TaskAttempt Transitioned from
>>> NEW to
>>> > UNASSIGNED
>>> > 2016-01-19 20:04:14,233 INFO [Thread-50]
>>> > org.apache.hadoop.mapreduce.v2.app.rm.RMContainerAllocator:
>>> > mapResourceReqt:512
>>> > 2016-01-19 20:04:14,245 INFO [Thread-50]
>>> > org.apache.hadoop.mapreduce.v2.app.rm.RMContainerAllocator:
>>> > reduceResourceReqt:512
>>> > 2016-01-19 20:04:14,324 INFO [eventHandlingThread]
>>> > org.apache.hadoop.mapreduce.jobhistory.JobHistoryEventHandler: Event
>>> Writer
>>> > setup for JobId: job_1453244277886_0001, File:
>>> >
>>> hdfs://hdnode01:54310/tmp/hadoop-yarn/staging/hduser/.staging/job_1453244277886_0001/job_1453244277886_0001_1.jhist
>>> > 2016-01-19 20:04:15,162 INFO [RMCommunicator Allocator]
>>> > org.apache.hadoop.mapreduce.v2.app.rm.RMContainerAllocator: Before
>>> > Scheduling: PendingReds:1 ScheduledMaps:1 ScheduledReds:0
>>> AssignedMaps:0
>>> > AssignedReds:0 CompletedMaps:0 CompletedReds:0 ContAlloc:0 ContRel:0
>>> > HostLocal:0 RackLocal:0
>>> > 2016-01-19 20:04:15,217 INFO [RMCommunicator Allocator]
>>> > org.apache.hadoop.mapreduce.v2.app.rm.RMContainerRequestor:
>>> getResources()
>>> > for application_1453244277886_0001: ask=3 release= 0 newContainers=0
>>> > finishedContainers=0 resourcelimit=<memory:1280, vCores:0> knownNMs=1
>>> > 2016-01-19 20:04:15,217 INFO [RMCommunicator Allocator]
>>> > org.apache.hadoop.mapreduce.v2.app.rm.RMContainerAllocator:
>>> Recalculating
>>> > schedule, headroom=1280
>>> > 2016-01-19 20:04:15,217 INFO [RMCommunicator Allocator]
>>> > org.apache.hadoop.mapreduce.v2.app.rm.RMContainerAllocator: Reduce slow
>>> > start threshold not met. completedMapsForReduceSlowstart 1
>>> > 2016-01-19 20:04:16,240 INFO [RMCommunicator Allocator]
>>> > org.apache.hadoop.mapreduce.v2.app.rm.RMContainerAllocator: Got
>>> allocated
>>> > containers 1
>>> > 2016-01-19 20:04:16,241 INFO [RMCommunicator Allocator]
>>> > org.apache.hadoop.mapreduce.v2.app.rm.RMContainerAllocator: Assigned
>>> > container container_1453244277886_0001_01_000002 to
>>> > attempt_1453244277886_0001_m_000000_0
>>> > 2016-01-19 20:04:16,243 INFO [RMCommunicator Allocator]
>>> > org.apache.hadoop.mapreduce.v2.app.rm.RMContainerAllocator:
>>> Recalculating
>>> > schedule, headroom=768
>>> > 2016-01-19 20:04:16,243 INFO [RMCommunicator Allocator]
>>> > org.apache.hadoop.mapreduce.v2.app.rm.RMContainerAllocator: Reduce slow
>>> > start threshold not met. completedMapsForReduceSlowstart 1
>>> > 2016-01-19 20:04:16,243 INFO [RMCommunicator Allocator]
>>> > org.apache.hadoop.mapreduce.v2.app.rm.RMContainerAllocator: After
>>> > Scheduling: PendingReds:1 ScheduledMaps:0 ScheduledReds:0
>>> AssignedMaps:1
>>> > AssignedReds:0 CompletedMaps:0 CompletedReds:0 ContAlloc:1 ContRel:0
>>> > HostLocal:1 RackLocal:0
>>> > 2016-01-19 20:04:16,291 INFO [AsyncDispatcher event handler]
>>> > org.apache.hadoop.yarn.util.RackResolver: Resolved localhost to
>>> > /default-rack
>>> > 2016-01-19 20:04:16,316 INFO [AsyncDispatcher event handler]
>>> > org.apache.hadoop.mapreduce.v2.app.job.impl.TaskAttemptImpl: The
>>> job-jar
>>> > file on the remote FS is
>>> >
>>> hdfs://hdnode01:54310/tmp/hadoop-yarn/staging/hduser/.staging/job_1453244277886_0001/job.jar
>>> > 2016-01-19 20:04:16,322 INFO [AsyncDispatcher event handler]
>>> > org.apache.hadoop.mapreduce.v2.app.job.impl.TaskAttemptImpl: The
>>> job-conf
>>> > file on the remote FS is
>>> > /tmp/hadoop-yarn/staging/hduser/.staging/job_1453244277886_0001/job.xml
>>> > 2016-01-19 20:04:16,325 INFO [AsyncDispatcher event handler]
>>> > org.apache.hadoop.mapreduce.v2.app.job.impl.TaskAttemptImpl: Adding #0
>>> > tokens and #1 secret keys for NM use for launching container
>>> > 2016-01-19 20:04:16,325 INFO [AsyncDispatcher event handler]
>>> > org.apache.hadoop.mapreduce.v2.app.job.impl.TaskAttemptImpl: Size of
>>> > containertokens_dob is 1
>>> > 2016-01-19 20:04:16,325 INFO [AsyncDispatcher event handler]
>>> > org.apache.hadoop.mapreduce.v2.app.job.impl.TaskAttemptImpl: Putting
>>> shuffle
>>> > token in serviceData
>>> > 2016-01-19 20:04:16,350 INFO [AsyncDispatcher event handler]
>>> > org.apache.hadoop.mapreduce.v2.app.job.impl.TaskAttemptImpl:
>>> > attempt_1453244277886_0001_m_000000_0 TaskAttempt Transitioned from
>>> > UNASSIGNED to ASSIGNED
>>> > 2016-01-19 20:04:16,354 INFO [ContainerLauncher #0]
>>> > org.apache.hadoop.mapreduce.v2.app.launcher.ContainerLauncherImpl:
>>> > Processing the event EventType: CONTAINER_REMOTE_LAUNCH for container
>>> > container_1453244277886_0001_01_000002 taskAttempt
>>> > attempt_1453244277886_0001_m_000000_0
>>> > 2016-01-19 20:04:16,356 INFO [ContainerLauncher #0]
>>> > org.apache.hadoop.mapreduce.v2.app.launcher.ContainerLauncherImpl:
>>> Launching
>>> > attempt_1453244277886_0001_m_000000_0
>>> > 2016-01-19 20:04:16,357 INFO [ContainerLauncher #0]
>>> >
>>> org.apache.hadoop.yarn.client.api.impl.ContainerManagementProtocolProxy:
>>> > Opening proxy : localhost:35711
>>> > 2016-01-19 20:04:16,411 INFO [ContainerLauncher #0]
>>> > org.apache.hadoop.mapreduce.v2.app.launcher.ContainerLauncherImpl:
>>> Shuffle
>>> > port returned by ContainerManager for
>>> attempt_1453244277886_0001_m_000000_0
>>> > : 13562
>>> > 2016-01-19 20:04:16,413 INFO [AsyncDispatcher event handler]
>>> > org.apache.hadoop.mapreduce.v2.app.job.impl.TaskAttemptImpl:
>>> TaskAttempt:
>>> > [attempt_1453244277886_0001_m_000000_0] using containerId:
>>> > [container_1453244277886_0001_01_000002 on NM: [localhost:35711]
>>> > 2016-01-19 20:04:16,418 INFO [AsyncDispatcher event handler]
>>> > org.apache.hadoop.mapreduce.v2.app.job.impl.TaskAttemptImpl:
>>> > attempt_1453244277886_0001_m_000000_0 TaskAttempt Transitioned from
>>> ASSIGNED
>>> > to RUNNING
>>> > 2016-01-19 20:04:16,419 INFO [AsyncDispatcher event handler]
>>> > org.apache.hadoop.mapreduce.v2.app.speculate.DefaultSpeculator:
>>> > ATTEMPT_START task_1453244277886_0001_m_000000
>>> > 2016-01-19 20:04:16,419 INFO [AsyncDispatcher event handler]
>>> > org.apache.hadoop.mapreduce.v2.app.job.impl.TaskImpl:
>>> > task_1453244277886_0001_m_000000 Task Transitioned from SCHEDULED to
>>> RUNNING
>>> > 2016-01-19 20:04:17,251 INFO [RMCommunicator Allocator]
>>> > org.apache.hadoop.mapreduce.v2.app.rm.RMContainerRequestor:
>>> getResources()
>>> > for application_1453244277886_0001: ask=3 release= 0 newContainers=0
>>> > finishedContainers=0 resourcelimit=<memory:768, vCores:-1> knownNMs=1
>>> > 2016-01-19 20:04:18,269 INFO [RMCommunicator Allocator]
>>> > org.apache.hadoop.mapreduce.v2.app.rm.RMContainerAllocator: Received
>>> > completed container container_1453244277886_0001_01_000002
>>> > 2016-01-19 20:04:18,269 INFO [RMCommunicator Allocator]
>>> > org.apache.hadoop.mapreduce.v2.app.rm.RMContainerAllocator:
>>> Recalculating
>>> > schedule, headroom=768
>>> > 2016-01-19 20:04:18,269 INFO [RMCommunicator Allocator]
>>> > org.apache.hadoop.mapreduce.v2.app.rm.RMContainerAllocator: Reduce slow
>>> > start threshold not met. completedMapsForReduceSlowstart 1
>>> > 2016-01-19 20:04:18,270 INFO [RMCommunicator Allocator]
>>> > org.apache.hadoop.mapreduce.v2.app.rm.RMContainerAllocator: After
>>> > Scheduling: PendingReds:1 ScheduledMaps:0 ScheduledReds:0
>>> AssignedMaps:0
>>> > AssignedReds:0 CompletedMaps:0 CompletedReds:0 ContAlloc:1 ContRel:0
>>> > HostLocal:1 RackLocal:0
>>> > 2016-01-19 20:04:18,280 INFO [AsyncDispatcher event handler]
>>> > org.apache.hadoop.mapreduce.v2.app.job.impl.TaskAttemptImpl:
>>> > attempt_1453244277886_0001_m_000000_0 TaskAttempt Transitioned from
>>> RUNNING
>>> > to FAIL_CONTAINER_CLEANUP
>>> > 2016-01-19 20:04:18,280 INFO [AsyncDispatcher event handler]
>>> > org.apache.hadoop.mapreduce.v2.app.job.impl.TaskAttemptImpl:
>>> Diagnostics
>>> > report from attempt_1453244277886_0001_m_000000_0: Exception from
>>> > container-launch: org.apache.hadoop.util.Shell$ExitCodeException:
>>> > org.apache.hadoop.util.Shell$ExitCodeException:
>>> >     at org.apache.hadoop.util.Shell.runCommand(Shell.java:505)
>>> >     at org.apache.hadoop.util.Shell.run(Shell.java:418)
>>> >     at
>>> >
>>> org.apache.hadoop.util.Shell$ShellCommandExecutor.execute(Shell.java:650)
>>> >     at
>>> >
>>> org.apache.hadoop.yarn.server.nodemanager.DefaultContainerExecutor.launchContainer(DefaultContainerExecutor.java:195)
>>> >     at
>>> >
>>> org.apache.hadoop.yarn.server.nodemanager.containermanager.launcher.ContainerLaunch.call(ContainerLaunch.java:300)
>>> >     at
>>> >
>>> org.apache.hadoop.yarn.server.nodemanager.containermanager.launcher.ContainerLaunch.call(ContainerLaunch.java:81)
>>> >     at java.util.concurrent.FutureTask.run(FutureTask.java:262)
>>> >     at
>>> >
>>> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
>>> >     at
>>> >
>>> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
>>> >     at java.lang.Thread.run(Thread.java:745)
>>> >
>>> >
>>> > Container exited with a non-zero exit code 1
>>> >
>>> > 2016-01-19 20:04:18,281 INFO [ContainerLauncher #1]
>>> > org.apache.hadoop.mapreduce.v2.app.launcher.ContainerLauncherImpl:
>>> > Processing the event EventType: CONTAINER_REMOTE_CLEANUP for container
>>> > container_1453244277886_0001_01_000002 taskAttempt
>>> > attempt_1453244277886_0001_m_000000_0
>>> > 2016-01-19 20:04:18,281 INFO [ContainerLauncher #1]
>>> > org.apache.hadoop.mapreduce.v2.app.launcher.ContainerLauncherImpl:
>>> KILLING
>>> > attempt_1453244277886_0001_m_000000_0
>>> > 2016-01-19 20:04:18,299 INFO [AsyncDispatcher event handler]
>>> > org.apache.hadoop.mapreduce.v2.app.job.impl.TaskAttemptImpl:
>>> > attempt_1453244277886_0001_m_000000_0 TaskAttempt Transitioned from
>>> > FAIL_CONTAINER_CLEANUP to FAIL_TASK_CLEANUP
>>> > 2016-01-19 20:04:18,300 INFO [CommitterEvent Processor #1]
>>> > org.apache.hadoop.mapreduce.v2.app.commit.CommitterEventHandler:
>>> Processing
>>> > the event EventType: TASK_ABORT
>>> > 2016-01-19 20:04:18,313 WARN [CommitterEvent Processor #1]
>>> > org.apache.hadoop.mapreduce.lib.output.FileOutputCommitter: Could not
>>> delete
>>> >
>>> hdfs://hdnode01:54310/user/hduser/output/wordcount/_temporary/1/_temporary/attempt_1453244277886_0001_m_000000_0
>>> > 2016-01-19 20:04:18,317 INFO [AsyncDispatcher event handler]
>>> > org.apache.hadoop.mapreduce.v2.app.job.impl.TaskAttemptImpl:
>>> > attempt_1453244277886_0001_m_000000_0 TaskAttempt Transitioned from
>>> > FAIL_TASK_CLEANUP to FAILED
>>> > 2016-01-19 20:04:18,326 INFO [AsyncDispatcher event handler]
>>> > org.apache.hadoop.yarn.util.RackResolver: Resolved localhost to
>>> > /default-rack
>>> > 2016-01-19 20:04:18,327 INFO [Thread-50]
>>> > org.apache.hadoop.mapreduce.v2.app.rm.RMContainerRequestor: 1 failures
>>> on
>>> > node localhost
>>> > 2016-01-19 20:04:18,329 INFO [AsyncDispatcher event handler]
>>> > org.apache.hadoop.mapreduce.v2.app.job.impl.TaskAttemptImpl:
>>> > attempt_1453244277886_0001_m_000000_1 TaskAttempt Transitioned from
>>> NEW to
>>> > UNASSIGNED
>>> > 2016-01-19 20:04:18,329 INFO [Thread-50]
>>> > org.apache.hadoop.mapreduce.v2.app.rm.RMContainerAllocator: Added
>>> > attempt_1453244277886_0001_m_000000_1 to list of failed maps
>>> > 2016-01-19 20:04:19,270 INFO [RMCommunicator Allocator]
>>> > org.apache.hadoop.mapreduce.v2.app.rm.RMContainerAllocator: Before
>>> > Scheduling: PendingReds:1 ScheduledMaps:1 ScheduledReds:0
>>> AssignedMaps:0
>>> > AssignedReds:0 CompletedMaps:0 CompletedReds:0 ContAlloc:1 ContRel:0
>>> > HostLocal:1 RackLocal:0
>>> > 2016-01-19 20:04:19,277 INFO [RMCommunicator Allocator]
>>> > org.apache.hadoop.mapreduce.v2.app.rm.RMContainerRequestor:
>>> getResources()
>>> > for application_1453244277886_0001: ask=1 release= 0 newContainers=0
>>> > finishedContainers=0 resourcelimit=<memory:768, vCores:-1> knownNMs=1
>>> > 2016-01-19 20:04:19,277 INFO [RMCommunicator Allocator]
>>> > org.apache.hadoop.mapreduce.v2.app.rm.RMContainerAllocator:
>>> Recalculating
>>> > schedule, headroom=768
>>> > 2016-01-19 20:04:19,278 INFO [RMCommunicator Allocator]
>>> > org.apache.hadoop.mapreduce.v2.app.rm.RMContainerAllocator: Reduce slow
>>> > start threshold not met. completedMapsForReduceSlowstart 1
>>> > 2016-01-19 20:04:20,285 INFO [RMCommunicator Allocator]
>>> > org.apache.hadoop.mapreduce.v2.app.rm.RMContainerAllocator: Got
>>> allocated
>>> > containers 1
>>> > 2016-01-19 20:04:20,285 INFO [RMCommunicator Allocator]
>>> > org.apache.hadoop.mapreduce.v2.app.rm.RMContainerAllocator: Assigning
>>> > container Container: [ContainerId:
>>> container_1453244277886_0001_01_000003,
>>> > NodeId: localhost:35711, NodeHttpAddress: localhost:8042, Resource:
>>> > <memory:512, vCores:1>, Priority: 5, Token: Token { kind:
>>> ContainerToken,
>>> > service: 127.0.0.1:35711 }, ] to fast fail map
>>> > 2016-01-19 20:04:20,286 INFO [RMCommunicator Allocator]
>>> > org.apache.hadoop.mapreduce.v2.app.rm.RMContainerAllocator: Assigned
>>> from
>>> > earlierFailedMaps
>>> > 2016-01-19 20:04:20,286 INFO [RMCommunicator Allocator]
>>> > org.apache.hadoop.mapreduce.v2.app.rm.RMContainerAllocator: Assigned
>>> > container container_1453244277886_0001_01_000003 to
>>> > attempt_1453244277886_0001_m_000000_1
>>> > 2016-01-19 20:04:20,287 INFO [RMCommunicator Allocator]
>>> > org.apache.hadoop.mapreduce.v2.app.rm.RMContainerAllocator:
>>> Recalculating
>>> > schedule, headroom=768
>>> > 2016-01-19 20:04:20,287 INFO [RMCommunicator Allocator]
>>> > org.apache.hadoop.mapreduce.v2.app.rm.RMContainerAllocator: Reduce slow
>>> > start threshold not met. completedMapsForReduceSlowstart 1
>>> > 2016-01-19 20:04:20,287 INFO [RMCommunicator Allocator]
>>> > org.apache.hadoop.mapreduce.v2.app.rm.RMContainerAllocator: After
>>> > Scheduling: PendingReds:1 ScheduledMaps:0 ScheduledReds:0
>>> AssignedMaps:1
>>> > AssignedReds:0 CompletedMaps:0 CompletedReds:0 ContAlloc:2 ContRel:0
>>> > HostLocal:1 RackLocal:0
>>> > 2016-01-19 20:04:20,287 INFO [AsyncDispatcher event handler]
>>> > org.apache.hadoop.yarn.util.RackResolver: Resolved localhost to
>>> > /default-rack
>>> > 2016-01-19 20:04:20,289 INFO [AsyncDispatcher event handler]
>>> > org.apache.hadoop.mapreduce.v2.app.job.impl.TaskAttemptImpl:
>>> > attempt_1453244277886_0001_m_000000_1 TaskAttempt Transitioned from
>>> > UNASSIGNED to ASSIGNED
>>> > 2016-01-19 20:04:20,292 INFO [ContainerLauncher #2]
>>> > org.apache.hadoop.mapreduce.v2.app.launcher.ContainerLauncherImpl:
>>> > Processing the event EventType: CONTAINER_REMOTE_LAUNCH for container
>>> > container_1453244277886_0001_01_000003 taskAttempt
>>> > attempt_1453244277886_0001_m_000000_1
>>> > 2016-01-19 20:04:20,292 INFO [ContainerLauncher #2]
>>> > org.apache.hadoop.mapreduce.v2.app.launcher.ContainerLauncherImpl:
>>> Launching
>>> > attempt_1453244277886_0001_m_000000_1
>>> > 2016-01-19 20:04:20,301 INFO [ContainerLauncher #2]
>>> > org.apache.hadoop.mapreduce.v2.app.launcher.ContainerLauncherImpl:
>>> Shuffle
>>> > port returned by ContainerManager for
>>> attempt_1453244277886_0001_m_000000_1
>>> > : 13562
>>> > 2016-01-19 20:04:20,302 INFO [AsyncDispatcher event handler]
>>> > org.apache.hadoop.mapreduce.v2.app.job.impl.TaskAttemptImpl:
>>> TaskAttempt:
>>> > [attempt_1453244277886_0001_m_000000_1] using containerId:
>>> > [container_1453244277886_0001_01_000003 on NM: [localhost:35711]
>>> > 2016-01-19 20:04:20,303 INFO [AsyncDispatcher event handler]
>>> > org.apache.hadoop.mapreduce.v2.app.job.impl.TaskAttemptImpl:
>>> > attempt_1453244277886_0001_m_000000_1 TaskAttempt Transitioned from
>>> ASSIGNED
>>> > to RUNNING
>>> > 2016-01-19 20:04:20,304 INFO [AsyncDispatcher event handler]
>>> > org.apache.hadoop.mapreduce.v2.app.speculate.DefaultSpeculator:
>>> > ATTEMPT_START task_1453244277886_0001_m_000000
>>> > 2016-01-19 20:04:21,295 INFO [RMCommunicator Allocator]
>>> > org.apache.hadoop.mapreduce.v2.app.rm.RMContainerRequestor:
>>> getResources()
>>> > for application_1453244277886_0001: ask=1 release= 0 newContainers=0
>>> > finishedContainers=1 resourcelimit=<memory:768, vCores:-1> knownNMs=1
>>> > 2016-01-19 20:04:21,296 INFO [RMCommunicator Allocator]
>>> > org.apache.hadoop.mapreduce.v2.app.rm.RMContainerAllocator: Received
>>> > completed container container_1453244277886_0001_01_000003
>>> > 2016-01-19 20:04:21,296 INFO [RMCommunicator Allocator]
>>> > org.apache.hadoop.mapreduce.v2.app.rm.RMContainerAllocator:
>>> Recalculating
>>> > schedule, headroom=768
>>> > 2016-01-19 20:04:21,296 INFO [RMCommunicator Allocator]
>>> > org.apache.hadoop.mapreduce.v2.app.rm.RMContainerAllocator: Reduce slow
>>> > start threshold not met. completedMapsForReduceSlowstart 1
>>> > 2016-01-19 20:04:21,297 INFO [RMCommunicator Allocator]
>>> > org.apache.hadoop.mapreduce.v2.app.rm.RMContainerAllocator: After
>>> > Scheduling: PendingReds:1 ScheduledMaps:0 ScheduledReds:0
>>> AssignedMaps:0
>>> > AssignedReds:0 CompletedMaps:0 CompletedReds:0 ContAlloc:2 ContRel:0
>>> > HostLocal:1 RackLocal:0
>>> > 2016-01-19 20:04:21,297 INFO [AsyncDispatcher event handler]
>>> > org.apache.hadoop.mapreduce.v2.app.job.impl.TaskAttemptImpl:
>>> > attempt_1453244277886_0001_m_000000_1 TaskAttempt Transitioned from
>>> RUNNING
>>> > to FAIL_CONTAINER_CLEANUP
>>> > 2016-01-19 20:04:21,298 INFO [AsyncDispatcher event handler]
>>> > org.apache.hadoop.mapreduce.v2.app.job.impl.TaskAttemptImpl:
>>> Diagnostics
>>> > report from attempt_1453244277886_0001_m_000000_1: Exception from
>>> > container-launch: org.apache.hadoop.util.Shell$ExitCodeException:
>>> > org.apache.hadoop.util.Shell$ExitCodeException:
>>> >     at org.apache.hadoop.util.Shell.runCommand(Shell.java:505)
>>> >     at org.apache.hadoop.util.Shell.run(Shell.java:418)
>>> >     at
>>> >
>>> org.apache.hadoop.util.Shell$ShellCommandExecutor.execute(Shell.java:650)
>>> >     at
>>> >
>>> org.apache.hadoop.yarn.server.nodemanager.DefaultContainerExecutor.launchContainer(DefaultContainerExecutor.java:195)
>>> >     at
>>> >
>>> org.apache.hadoop.yarn.server.nodemanager.containermanager.launcher.ContainerLaunch.call(ContainerLaunch.java:300)
>>> >     at
>>> >
>>> org.apache.hadoop.yarn.server.nodemanager.containermanager.launcher.ContainerLaunch.call(ContainerLaunch.java:81)
>>> >     at java.util.concurrent.FutureTask.run(FutureTask.java:262)
>>> >     at
>>> >
>>> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
>>> >     at
>>> >
>>> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
>>> >     at java.lang.Thread.run(Thread.java:745)
>>> >
>>> >
>>> > Container exited with a non-zero exit code 1
>>> >
>>> > 2016-01-19 20:04:21,300 INFO [ContainerLauncher #3]
>>> > org.apache.hadoop.mapreduce.v2.app.launcher.ContainerLauncherImpl:
>>> > Processing the event EventType: CONTAINER_REMOTE_CLEANUP for container
>>> > container_1453244277886_0001_01_000003 taskAttempt
>>> > attempt_1453244277886_0001_m_000000_1
>>> > 2016-01-19 20:04:21,301 INFO [ContainerLauncher #3]
>>> > org.apache.hadoop.mapreduce.v2.app.launcher.ContainerLauncherImpl:
>>> KILLING
>>> > attempt_1453244277886_0001_m_000000_1
>>> > 2016-01-19 20:04:21,307 INFO [AsyncDispatcher event handler]
>>> > org.apache.hadoop.mapreduce.v2.app.job.impl.TaskAttemptImpl:
>>> > attempt_1453244277886_0001_m_000000_1 TaskAttempt Transitioned from
>>> > FAIL_CONTAINER_CLEANUP to FAIL_TASK_CLEANUP
>>> > 2016-01-19 20:04:21,308 INFO [CommitterEvent Processor #2]
>>> > org.apache.hadoop.mapreduce.v2.app.commit.CommitterEventHandler:
>>> Processing
>>> > the event EventType: TASK_ABORT
>>> > 2016-01-19 20:04:21,312 WARN [CommitterEvent Processor #2]
>>> > org.apache.hadoop.mapreduce.lib.output.FileOutputCommitter: Could not
>>> delete
>>> >
>>> hdfs://hdnode01:54310/user/hduser/output/wordcount/_temporary/1/_temporary/attempt_1453244277886_0001_m_000000_1
>>> > 2016-01-19 20:04:21,312 INFO [AsyncDispatcher event handler]
>>> > org.apache.hadoop.mapreduce.v2.app.job.impl.TaskAttemptImpl:
>>> > attempt_1453244277886_0001_m_000000_1 TaskAttempt Transitioned from
>>> > FAIL_TASK_CLEANUP to FAILED
>>> > 2016-01-19 20:04:21,313 INFO [AsyncDispatcher event handler]
>>> > org.apache.hadoop.yarn.util.RackResolver: Resolved localhost to
>>> > /default-rack
>>> > 2016-01-19 20:04:21,313 INFO [Thread-50]
>>> > org.apache.hadoop.mapreduce.v2.app.rm.RMContainerRequestor: 2 failures
>>> on
>>> > node localhost
>>> > 2016-01-19 20:04:21,313 INFO [AsyncDispatcher event handler]
>>> > org.apache.hadoop.mapreduce.v2.app.job.impl.TaskAttemptImpl:
>>> > attempt_1453244277886_0001_m_000000_2 TaskAttempt Transitioned from
>>> NEW to
>>> > UNASSIGNED
>>> > 2016-01-19 20:04:21,314 INFO [Thread-50]
>>> > org.apache.hadoop.mapreduce.v2.app.rm.RMContainerAllocator: Added
>>> > attempt_1453244277886_0001_m_000000_2 to list of failed maps
>>> > 2016-01-19 20:04:22,297 INFO [RMCommunicator Allocator]
>>> > org.apache.hadoop.mapreduce.v2.app.rm.RMContainerAllocator: Before
>>> > Scheduling: PendingReds:1 ScheduledMaps:1 ScheduledReds:0
>>> AssignedMaps:0
>>> > AssignedReds:0 CompletedMaps:0 CompletedReds:0 ContAlloc:2 ContRel:0
>>> > HostLocal:1 RackLocal:0
>>> > 2016-01-19 20:04:22,304 INFO [RMCommunicator Allocator]
>>> > org.apache.hadoop.mapreduce.v2.app.rm.RMContainerRequestor:
>>> getResources()
>>> > for application_1453244277886_0001: ask=1 release= 0 newContainers=0
>>> > finishedContainers=0 resourcelimit=<memory:768, vCores:-1> knownNMs=1
>>> > 2016-01-19 20:04:22,305 INFO [RMCommunicator Allocator]
>>> > org.apache.hadoop.mapreduce.v2.app.rm.RMContainerAllocator:
>>> Recalculating
>>> > schedule, headroom=768
>>> > 2016-01-19 20:04:22,305 INFO [RMCommunicator Allocator]
>>> > org.apache.hadoop.mapreduce.v2.app.rm.RMContainerAllocator: Reduce slow
>>> > start threshold not met. completedMapsForReduceSlowstart 1
>>> > 2016-01-19 20:04:23,316 INFO [RMCommunicator Allocator]
>>> > org.apache.hadoop.mapreduce.v2.app.rm.RMContainerAllocator: Got
>>> allocated
>>> > containers 1
>>> > 2016-01-19 20:04:23,316 INFO [RMCommunicator Allocator]
>>> > org.apache.hadoop.mapreduce.v2.app.rm.RMContainerAllocator: Assigning
>>> > container Container: [ContainerId:
>>> container_1453244277886_0001_01_000004,
>>> > NodeId: localhost:35711, NodeHttpAddress: localhost:8042, Resource:
>>> > <memory:512, vCores:1>, Priority: 5, Token: Token { kind:
>>> ContainerToken,
>>> > service: 127.0.0.1:35711 }, ] to fast fail map
>>> > 2016-01-19 20:04:23,317 INFO [RMCommunicator Allocator]
>>> > org.apache.hadoop.mapreduce.v2.app.rm.RMContainerAllocator: Assigned
>>> from
>>> > earlierFailedMaps
>>> > 2016-01-19 20:04:23,317 INFO [RMCommunicator Allocator]
>>> > org.apache.hadoop.mapreduce.v2.app.rm.RMContainerAllocator: Assigned
>>> > container container_1453244277886_0001_01_000004 to
>>> > attempt_1453244277886_0001_m_000000_2
>>> > 2016-01-19 20:04:23,318 INFO [RMCommunicator Allocator]
>>> > org.apache.hadoop.mapreduce.v2.app.rm.RMContainerAllocator:
>>> Recalculating
>>> > schedule, headroom=768
>>> > 2016-01-19 20:04:23,318 INFO [RMCommunicator Allocator]
>>> > org.apache.hadoop.mapreduce.v2.app.rm.RMContainerAllocator: Reduce slow
>>> > start threshold not met. completedMapsForReduceSlowstart 1
>>> > 2016-01-19 20:04:23,318 INFO [RMCommunicator Allocator]
>>> > org.apache.hadoop.mapreduce.v2.app.rm.RMContainerAllocator: After
>>> > Scheduling: PendingReds:1 ScheduledMaps:0 ScheduledReds:0
>>> AssignedMaps:1
>>> > AssignedReds:0 CompletedMaps:0 CompletedReds:0 ContAlloc:3 ContRel:0
>>> > HostLocal:1 RackLocal:0
>>> > 2016-01-19 20:04:23,318 INFO [AsyncDispatcher event handler]
>>> > org.apache.hadoop.yarn.util.RackResolver: Resolved localhost to
>>> > /default-rack
>>> > 2016-01-19 20:04:23,320 INFO [AsyncDispatcher event handler]
>>> > org.apache.hadoop.mapreduce.v2.app.job.impl.TaskAttemptImpl:
>>> > attempt_1453244277886_0001_m_000000_2 TaskAttempt Transitioned from
>>> > UNASSIGNED to ASSIGNED
>>> > 2016-01-19 20:04:23,323 INFO [ContainerLauncher #4]
>>> > org.apache.hadoop.mapreduce.v2.app.launcher.ContainerLauncherImpl:
>>> > Processing the event EventType: CONTAINER_REMOTE_LAUNCH for container
>>> > container_1453244277886_0001_01_000004 taskAttempt
>>> > attempt_1453244277886_0001_m_000000_2
>>> > 2016-01-19 20:04:23,323 INFO [ContainerLauncher #4]
>>> > org.apache.hadoop.mapreduce.v2.app.launcher.ContainerLauncherImpl:
>>> Launching
>>> > attempt_1453244277886_0001_m_000000_2
>>> > 2016-01-19 20:04:23,334 INFO [ContainerLauncher #4]
>>> > org.apache.hadoop.mapreduce.v2.app.launcher.ContainerLauncherImpl:
>>> Shuffle
>>> > port returned by ContainerManager for
>>> attempt_1453244277886_0001_m_000000_2
>>> > : 13562
>>> > 2016-01-19 20:04:23,335 INFO [AsyncDispatcher event handler]
>>> > org.apache.hadoop.mapreduce.v2.app.job.impl.TaskAttemptImpl:
>>> TaskAttempt:
>>> > [attempt_1453244277886_0001_m_000000_2] using containerId:
>>> > [container_1453244277886_0001_01_000004 on NM: [localhost:35711]
>>> > 2016-01-19 20:04:23,335 INFO [AsyncDispatcher event handler]
>>> > org.apache.hadoop.mapreduce.v2.app.job.impl.TaskAttemptImpl:
>>> > attempt_1453244277886_0001_m_000000_2 TaskAttempt Transitioned from
>>> ASSIGNED
>>> > to RUNNING
>>> > 2016-01-19 20:04:23,335 INFO [AsyncDispatcher event handler]
>>> > org.apache.hadoop.mapreduce.v2.app.speculate.DefaultSpeculator:
>>> > ATTEMPT_START task_1453244277886_0001_m_000000
>>> > 2016-01-19 20:04:24,326 INFO [RMCommunicator Allocator]
>>> > org.apache.hadoop.mapreduce.v2.app.rm.RMContainerRequestor:
>>> getResources()
>>> > for application_1453244277886_0001: ask=1 release= 0 newContainers=0
>>> > finishedContainers=1 resourcelimit=<memory:768, vCores:-1> knownNMs=1
>>> > 2016-01-19 20:04:24,327 INFO [RMCommunicator Allocator]
>>> > org.apache.hadoop.mapreduce.v2.app.rm.RMContainerAllocator: Received
>>> > completed container container_1453244277886_0001_01_000004
>>> > 2016-01-19 20:04:24,327 INFO [RMCommunicator Allocator]
>>> > org.apache.hadoop.mapreduce.v2.app.rm.RMContainerAllocator:
>>> Recalculating
>>> > schedule, headroom=768
>>> > 2016-01-19 20:04:24,327 INFO [RMCommunicator Allocator]
>>> > org.apache.hadoop.mapreduce.v2.app.rm.RMContainerAllocator: Reduce slow
>>> > start threshold not met. completedMapsForReduceSlowstart 1
>>> > 2016-01-19 20:04:24,328 INFO [RMCommunicator Allocator]
>>> > org.apache.hadoop.mapreduce.v2.app.rm.RMContainerAllocator: After
>>> > Scheduling: PendingReds:1 ScheduledMaps:0 ScheduledReds:0
>>> AssignedMaps:0
>>> > AssignedReds:0 CompletedMaps:0 CompletedReds:0 ContAlloc:3 ContRel:0
>>> > HostLocal:1 RackLocal:0
>>> > 2016-01-19 20:04:24,328 INFO [AsyncDispatcher event handler]
>>> > org.apache.hadoop.mapreduce.v2.app.job.impl.TaskAttemptImpl:
>>> > attempt_1453244277886_0001_m_000000_2 TaskAttempt Transitioned from
>>> RUNNING
>>> > to FAIL_CONTAINER_CLEANUP
>>> > 2016-01-19 20:04:24,328 INFO [AsyncDispatcher event handler]
>>> > org.apache.hadoop.mapreduce.v2.app.job.impl.TaskAttemptImpl:
>>> Diagnostics
>>> > report from attempt_1453244277886_0001_m_000000_2: Exception from
>>> > container-launch: org.apache.hadoop.util.Shell$ExitCodeException:
>>> > org.apache.hadoop.util.Shell$ExitCodeException:
>>> >     at org.apache.hadoop.util.Shell.runCommand(Shell.java:505)
>>> >     at org.apache.hadoop.util.Shell.run(Shell.java:418)
>>> >     at
>>> >
>>> org.apache.hadoop.util.Shell$ShellCommandExecutor.execute(Shell.java:650)
>>> >     at
>>> >
>>> org.apache.hadoop.yarn.server.nodemanager.DefaultContainerExecutor.launchContainer(DefaultContainerExecutor.java:195)
>>> >     at
>>> >
>>> org.apache.hadoop.yarn.server.nodemanager.containermanager.launcher.ContainerLaunch.call(ContainerLaunch.java:300)
>>> >     at
>>> >
>>> org.apache.hadoop.yarn.server.nodemanager.containermanager.launcher.ContainerLaunch.call(ContainerLaunch.java:81)
>>> >     at java.util.concurrent.FutureTask.run(FutureTask.java:262)
>>> >     at
>>> >
>>> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
>>> >     at
>>> >
>>> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
>>> >     at java.lang.Thread.run(Thread.java:745)
>>> >
>>> >
>>> > Container exited with a non-zero exit code 1
>>> >
>>> > 2016-01-19 20:04:24,331 INFO [ContainerLauncher #5]
>>> > org.apache.hadoop.mapreduce.v2.app.launcher.ContainerLauncherImpl:
>>> > Processing the event EventType: CONTAINER_REMOTE_CLEANUP for container
>>> > container_1453244277886_0001_01_000004 taskAttempt
>>> > attempt_1453244277886_0001_m_000000_2
>>> > 2016-01-19 20:04:24,331 INFO [ContainerLauncher #5]
>>> > org.apache.hadoop.mapreduce.v2.app.launcher.ContainerLauncherImpl:
>>> KILLING
>>> > attempt_1453244277886_0001_m_000000_2
>>> > 2016-01-19 20:04:24,335 INFO [AsyncDispatcher event handler]
>>> > org.apache.hadoop.mapreduce.v2.app.job.impl.TaskAttemptImpl:
>>> > attempt_1453244277886_0001_m_000000_2 TaskAttempt Transitioned from
>>> > FAIL_CONTAINER_CLEANUP to FAIL_TASK_CLEANUP
>>> > 2016-01-19 20:04:24,336 INFO [CommitterEvent Processor #3]
>>> > org.apache.hadoop.mapreduce.v2.app.commit.CommitterEventHandler:
>>> Processing
>>> > the event EventType: TASK_ABORT
>>> > 2016-01-19 20:04:24,340 WARN [CommitterEvent Processor #3]
>>> > org.apache.hadoop.mapreduce.lib.output.FileOutputCommitter: Could not
>>> delete
>>> >
>>> hdfs://hdnode01:54310/user/hduser/output/wordcount/_temporary/1/_temporary/attempt_1453244277886_0001_m_000000_2
>>> > 2016-01-19 20:04:24,341 INFO [AsyncDispatcher event handler]
>>> > org.apache.hadoop.mapreduce.v2.app.job.impl.TaskAttemptImpl:
>>> > attempt_1453244277886_0001_m_000000_2 TaskAttempt Transitioned from
>>> > FAIL_TASK_CLEANUP to FAILED
>>> > 2016-01-19 20:04:24,341 INFO [AsyncDispatcher event handler]
>>> > org.apache.hadoop.yarn.util.RackResolver: Resolved localhost to
>>> > /default-rack
>>> > 2016-01-19 20:04:24,342 INFO [Thread-50]
>>> > org.apache.hadoop.mapreduce.v2.app.rm.RMContainerRequestor: 3 failures
>>> on
>>> > node localhost
>>> > 2016-01-19 20:04:24,342 INFO [Thread-50]
>>> > org.apache.hadoop.mapreduce.v2.app.rm.RMContainerRequestor:
>>> Blacklisted host
>>> > localhost
>>> > 2016-01-19 20:04:24,342 INFO [AsyncDispatcher event handler]
>>> > org.apache.hadoop.mapreduce.v2.app.job.impl.TaskAttemptImpl:
>>> > attempt_1453244277886_0001_m_000000_3 TaskAttempt Transitioned from
>>> NEW to
>>> > UNASSIGNED
>>> > 2016-01-19 20:04:24,343 INFO [Thread-50]
>>> > org.apache.hadoop.mapreduce.v2.app.rm.RMContainerAllocator: Added
>>> > attempt_1453244277886_0001_m_000000_3 to list of failed maps
>>> > 2016-01-19 20:04:25,328 INFO [RMCommunicator Allocator]
>>> > org.apache.hadoop.mapreduce.v2.app.rm.RMContainerAllocator: Before
>>> > Scheduling: PendingReds:1 ScheduledMaps:1 ScheduledReds:0
>>> AssignedMaps:0
>>> > AssignedReds:0 CompletedMaps:0 CompletedReds:0 ContAlloc:3 ContRel:0
>>> > HostLocal:1 RackLocal:0
>>> > 2016-01-19 20:04:25,336 INFO [RMCommunicator Allocator]
>>> > org.apache.hadoop.mapreduce.v2.app.rm.RMContainerRequestor:
>>> getResources()
>>> > for application_1453244277886_0001: ask=1 release= 0 newContainers=0
>>> > finishedContainers=0 resourcelimit=<memory:768, vCores:-1> knownNMs=1
>>> > 2016-01-19 20:04:25,336 INFO [RMCommunicator Allocator]
>>> > org.apache.hadoop.mapreduce.v2.app.rm.RMContainerRequestor: Update the
>>> > blacklist for application_1453244277886_0001: blacklistAdditions=1
>>> > blacklistRemovals=0
>>> > 2016-01-19 20:04:25,337 INFO [RMCommunicator Allocator]
>>> > org.apache.hadoop.mapreduce.v2.app.rm.RMContainerRequestor: Ignore
>>> > blacklisting set to true. Known: 1, Blacklisted: 1, 100%
>>> > 2016-01-19 20:04:25,337 INFO [RMCommunicator Allocator]
>>> > org.apache.hadoop.mapreduce.v2.app.rm.RMContainerAllocator:
>>> Recalculating
>>> > schedule, headroom=768
>>> > 2016-01-19 20:04:25,337 INFO [RMCommunicator Allocator]
>>> > org.apache.hadoop.mapreduce.v2.app.rm.RMContainerAllocator: Reduce slow
>>> > start threshold not met. completedMapsForReduceSlowstart 1
>>> > 2016-01-19 20:04:26,342 INFO [RMCommunicator Allocator]
>>> > org.apache.hadoop.mapreduce.v2.app.rm.RMContainerRequestor: Update the
>>> > blacklist for application_1453244277886_0001: blacklistAdditions=0
>>> > blacklistRemovals=1
>>> > 2016-01-19 20:04:26,342 INFO [RMCommunicator Allocator]
>>> > org.apache.hadoop.mapreduce.v2.app.rm.RMContainerAllocator:
>>> Recalculating
>>> > schedule, headroom=768
>>> > 2016-01-19 20:04:26,342 INFO [RMCommunicator Allocator]
>>> > org.apache.hadoop.mapreduce.v2.app.rm.RMContainerAllocator: Reduce slow
>>> > start threshold not met. completedMapsForReduceSlowstart 1
>>> > 2016-01-19 20:04:27,351 INFO [RMCommunicator Allocator]
>>> > org.apache.hadoop.mapreduce.v2.app.rm.RMContainerAllocator: Got
>>> allocated
>>> > containers 1
>>> > 2016-01-19 20:04:27,352 INFO [RMCommunicator Allocator]
>>> > org.apache.hadoop.mapreduce.v2.app.rm.RMContainerAllocator: Assigning
>>> > container Container: [ContainerId:
>>> container_1453244277886_0001_01_000005,
>>> > NodeId: localhost:35711, NodeHttpAddress: localhost:8042, Resource:
>>> > <memory:512, vCores:1>, Priority: 5, Token: Token { kind:
>>> ContainerToken,
>>> > service: 127.0.0.1:35711 }, ] to fast fail map
>>> > 2016-01-19 20:04:27,352 INFO [RMCommunicator Allocator]
>>> > org.apache.hadoop.mapreduce.v2.app.rm.RMContainerAllocator: Assigned
>>> from
>>> > earlierFailedMaps
>>> > 2016-01-19 20:04:27,352 INFO [RMCommunicator Allocator]
>>> > org.apache.hadoop.mapreduce.v2.app.rm.RMContainerAllocator: Assigned
>>> > container container_1453244277886_0001_01_000005 to
>>> > attempt_1453244277886_0001_m_000000_3
>>> > 2016-01-19 20:04:27,353 INFO [RMCommunicator Allocator]
>>> > org.apache.hadoop.mapreduce.v2.app.rm.RMContainerAllocator:
>>> Recalculating
>>> > schedule, headroom=768
>>> > 2016-01-19 20:04:27,353 INFO [RMCommunicator Allocator]
>>> > org.apache.hadoop.mapreduce.v2.app.rm.RMContainerAllocator: Reduce slow
>>> > start threshold not met. completedMapsForReduceSlowstart 1
>>> > 2016-01-19 20:04:27,353 INFO [AsyncDispatcher event handler]
>>> > org.apache.hadoop.yarn.util.RackResolver: Resolved localhost to
>>> > /default-rack
>>> > 2016-01-19 20:04:27,353 INFO [RMCommunicator Allocator]
>>> > org.apache.hadoop.mapreduce.v2.app.rm.RMContainerAllocator: After
>>> > Scheduling: PendingReds:1 ScheduledMaps:0 ScheduledReds:0
>>> AssignedMaps:1
>>> > AssignedReds:0 CompletedMaps:0 CompletedReds:0 ContAlloc:4 ContRel:0
>>> > HostLocal:1 RackLocal:0
>>> > 2016-01-19 20:04:27,354 INFO [AsyncDispatcher event handler]
>>> > org.apache.hadoop.mapreduce.v2.app.job.impl.TaskAttemptImpl:
>>> > attempt_1453244277886_0001_m_000000_3 TaskAttempt Transitioned from
>>> > UNASSIGNED to ASSIGNED
>>> > 2016-01-19 20:04:27,355 INFO [ContainerLauncher #6]
>>> > org.apache.hadoop.mapreduce.v2.app.launcher.ContainerLauncherImpl:
>>> > Processing the event EventType: CONTAINER_REMOTE_LAUNCH for container
>>> > container_1453244277886_0001_01_000005 taskAttempt
>>> > attempt_1453244277886_0001_m_000000_3
>>> > 2016-01-19 20:04:27,355 INFO [ContainerLauncher #6]
>>> > org.apache.hadoop.mapreduce.v2.app.launcher.ContainerLauncherImpl:
>>> Launching
>>> > attempt_1453244277886_0001_m_000000_3
>>> > 2016-01-19 20:04:27,365 INFO [ContainerLauncher #6]
>>> > org.apache.hadoop.mapreduce.v2.app.launcher.ContainerLauncherImpl:
>>> Shuffle
>>> > port returned by ContainerManager for
>>> attempt_1453244277886_0001_m_000000_3
>>> > : 13562
>>> > 2016-01-19 20:04:27,365 INFO [AsyncDispatcher event handler]
>>> > org.apache.hadoop.mapreduce.v2.app.job.impl.TaskAttemptImpl:
>>> TaskAttempt:
>>> > [attempt_1453244277886_0001_m_000000_3] using containerId:
>>> > [container_1453244277886_0001_01_000005 on NM: [localhost:35711]
>>> > 2016-01-19 20:04:27,366 INFO [AsyncDispatcher event handler]
>>> > org.apache.hadoop.mapreduce.v2.app.job.impl.TaskAttemptImpl:
>>> > attempt_1453244277886_0001_m_000000_3 TaskAttempt Transitioned from
>>> ASSIGNED
>>> > to RUNNING
>>> > 2016-01-19 20:04:27,366 INFO [AsyncDispatcher event handler]
>>> > org.apache.hadoop.mapreduce.v2.app.speculate.DefaultSpeculator:
>>> > ATTEMPT_START task_1453244277886_0001_m_000000
>>> > 2016-01-19 20:04:28,361 INFO [RMCommunicator Allocator]
>>> > org.apache.hadoop.mapreduce.v2.app.rm.RMContainerRequestor:
>>> getResources()
>>> > for application_1453244277886_0001: ask=1 release= 0 newContainers=0
>>> > finishedContainers=1 resourcelimit=<memory:768, vCores:-1> knownNMs=1
>>> > 2016-01-19 20:04:28,361 INFO [RMCommunicator Allocator]
>>> > org.apache.hadoop.mapreduce.v2.app.rm.RMContainerAllocator: Received
>>> > completed container container_1453244277886_0001_01_000005
>>> > 2016-01-19 20:04:28,361 INFO [RMCommunicator Allocator]
>>> > org.apache.hadoop.mapreduce.v2.app.rm.RMContainerAllocator:
>>> Recalculating
>>> > schedule, headroom=768
>>> > 2016-01-19 20:04:28,361 INFO [RMCommunicator Allocator]
>>> > org.apache.hadoop.mapreduce.v2.app.rm.RMContainerAllocator: Reduce slow
>>> > start threshold not met. completedMapsForReduceSlowstart 1
>>> > 2016-01-19 20:04:28,362 INFO [RMCommunicator Allocator]
>>> > org.apache.hadoop.mapreduce.v2.app.rm.RMContainerAllocator: After
>>> > Scheduling: PendingReds:1 ScheduledMaps:0 ScheduledReds:0
>>> AssignedMaps:0
>>> > AssignedReds:0 CompletedMaps:0 CompletedReds:0 ContAlloc:4 ContRel:0
>>> > HostLocal:1 RackLocal:0
>>> > 2016-01-19 20:04:28,362 INFO [AsyncDispatcher event handler]
>>> > org.apache.hadoop.mapreduce.v2.app.job.impl.TaskAttemptImpl:
>>> > attempt_1453244277886_0001_m_000000_3 TaskAttempt Transitioned from
>>> RUNNING
>>> > to FAIL_CONTAINER_CLEANUP
>>> > 2016-01-19 20:04:28,362 INFO [AsyncDispatcher event handler]
>>> > org.apache.hadoop.mapreduce.v2.app.job.impl.TaskAttemptImpl:
>>> Diagnostics
>>> > report from attempt_1453244277886_0001_m_000000_3: Exception from
>>> > container-launch: org.apache.hadoop.util.Shell$ExitCodeException:
>>> > org.apache.hadoop.util.Shell$ExitCodeException:
>>> >     at org.apache.hadoop.util.Shell.runCommand(Shell.java:505)
>>> >     at org.apache.hadoop.util.Shell.run(Shell.java:418)
>>> >     at
>>> >
>>> org.apache.hadoop.util.Shell$ShellCommandExecutor.execute(Shell.java:650)
>>> >     at
>>> >
>>> org.apache.hadoop.yarn.server.nodemanager.DefaultContainerExecutor.launchContainer(DefaultContainerExecutor.java:195)
>>> >     at
>>> >
>>> org.apache.hadoop.yarn.server.nodemanager.containermanager.launcher.ContainerLaunch.call(ContainerLaunch.java:300)
>>> >     at
>>> >
>>> org.apache.hadoop.yarn.server.nodemanager.containermanager.launcher.ContainerLaunch.call(ContainerLaunch.java:81)
>>> >     at java.util.concurrent.FutureTask.run(FutureTask.java:262)
>>> >     at
>>> >
>>> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
>>> >     at
>>> >
>>> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
>>> >     at java.lang.Thread.run(Thread.java:745)
>>> >
>>> >
>>> > Container exited with a non-zero exit code 1
>>> >
>>> > 2016-01-19 20:04:28,364 INFO [ContainerLauncher #7]
>>> > org.apache.hadoop.mapreduce.v2.app.launcher.ContainerLauncherImpl:
>>> > Processing the event EventType: CONTAINER_REMOTE_CLEANUP for container
>>> > container_1453244277886_0001_01_000005 taskAttempt
>>> > attempt_1453244277886_0001_m_000000_3
>>> > 2016-01-19 20:04:28,365 INFO [ContainerLauncher #7]
>>> > org.apache.hadoop.mapreduce.v2.app.launcher.ContainerLauncherImpl:
>>> KILLING
>>> > attempt_1453244277886_0001_m_000000_3
>>> > 2016-01-19 20:04:28,373 INFO [AsyncDispatcher event handler]
>>> > org.apache.hadoop.mapreduce.v2.app.job.impl.TaskAttemptImpl:
>>> > attempt_1453244277886_0001_m_000000_3 TaskAttempt Transitioned from
>>> > FAIL_CONTAINER_CLEANUP to FAIL_TASK_CLEANUP
>>> > 2016-01-19 20:04:28,374 INFO [CommitterEvent Processor #4]
>>> > org.apache.hadoop.mapreduce.v2.app.commit.CommitterEventHandler:
>>> Processing
>>> > the event EventType: TASK_ABORT
>>> > 2016-01-19 20:04:28,377 WARN [CommitterEvent Processor #4]
>>> > org.apache.hadoop.mapreduce.lib.output.FileOutputCommitter: Could not
>>> delete
>>> >
>>> hdfs://hdnode01:54310/user/hduser/output/wordcount/_temporary/1/_temporary/attempt_1453244277886_0001_m_000000_3
>>> > 2016-01-19 20:04:28,378 INFO [AsyncDispatcher event handler]
>>> > org.apache.hadoop.mapreduce.v2.app.job.impl.TaskAttemptImpl:
>>> > attempt_1453244277886_0001_m_000000_3 TaskAttempt Transitioned from
>>> > FAIL_TASK_CLEANUP to FAILED
>>> > 2016-01-19 20:04:28,380 INFO [AsyncDispatcher event handler]
>>> > org.apache.hadoop.mapreduce.v2.app.job.impl.TaskImpl:
>>> > task_1453244277886_0001_m_000000 Task Transitioned from RUNNING to
>>> FAILED
>>> > 2016-01-19 20:04:28,381 INFO [AsyncDispatcher event handler]
>>> > org.apache.hadoop.mapreduce.v2.app.job.impl.JobImpl: Num completed
>>> Tasks: 1
>>> > 2016-01-19 20:04:28,381 INFO [AsyncDispatcher event handler]
>>> > org.apache.hadoop.mapreduce.v2.app.job.impl.JobImpl: Job failed as
>>> tasks
>>> > failed. failedMaps:1 failedReduces:0
>>> > 2016-01-19 20:04:28,382 INFO [AsyncDispatcher event handler]
>>> > org.apache.hadoop.mapreduce.v2.app.job.impl.JobImpl:
>>> > job_1453244277886_0001Job Transitioned from RUNNING to FAIL_WAIT
>>> > 2016-01-19 20:04:28,383 INFO [AsyncDispatcher event handler]
>>> > org.apache.hadoop.mapreduce.v2.app.job.impl.TaskImpl:
>>> > task_1453244277886_0001_r_000000 Task Transitioned from SCHEDULED to
>>> > KILL_WAIT
>>> > 2016-01-19 20:04:28,383 INFO [AsyncDispatcher event handler]
>>> > org.apache.hadoop.mapreduce.v2.app.job.impl.TaskAttemptImpl:
>>> > attempt_1453244277886_0001_r_000000_0 TaskAttempt Transitioned from
>>> > UNASSIGNED to KILLED
>>> > 2016-01-19 20:04:28,383 INFO [Thread-50]
>>> > org.apache.hadoop.mapreduce.v2.app.rm.RMContainerAllocator: Processing
>>> the
>>> > event EventType: CONTAINER_DEALLOCATE
>>> > 2016-01-19 20:04:28,383 ERROR [Thread-50]
>>> > org.apache.hadoop.mapreduce.v2.app.rm.RMContainerAllocator: Could not
>>> > deallocate container for task attemptId
>>> > attempt_1453244277886_0001_r_000000_0
>>> > 2016-01-19 20:04:28,383 INFO [AsyncDispatcher event handler]
>>> > org.apache.hadoop.mapreduce.v2.app.job.impl.TaskImpl:
>>> > task_1453244277886_0001_r_000000 Task Transitioned from KILL_WAIT to
>>> KILLED
>>> > 2016-01-19 20:04:28,384 INFO [AsyncDispatcher event handler]
>>> > org.apache.hadoop.mapreduce.v2.app.job.impl.JobImpl:
>>> > job_1453244277886_0001Job Transitioned from FAIL_WAIT to FAIL_ABORT
>>> > 2016-01-19 20:04:28,390 INFO [CommitterEvent Processor #0]
>>> > org.apache.hadoop.mapreduce.v2.app.commit.CommitterEventHandler:
>>> Processing
>>> > the event EventType: JOB_ABORT
>>> > 2016-01-19 20:04:28,432 INFO [AsyncDispatcher event handler]
>>> > org.apache.hadoop.mapreduce.v2.app.job.impl.JobImpl:
>>> > job_1453244277886_0001Job Transitioned from FAIL_ABORT to FAILED
>>> > 2016-01-19 20:04:28,433 INFO [Thread-61]
>>> > org.apache.hadoop.mapreduce.v2.app.MRAppMaster: We are finishing
>>> cleanly so
>>> > this is the last retry
>>> > 2016-01-19 20:04:28,433 INFO [Thread-61]
>>> > org.apache.hadoop.mapreduce.v2.app.MRAppMaster: Notify RMCommunicator
>>> > isAMLastRetry: true
>>> > 2016-01-19 20:04:28,433 INFO [Thread-61]
>>> > org.apache.hadoop.mapreduce.v2.app.rm.RMContainerAllocator:
>>> RMCommunicator
>>> > notified that shouldUnregistered is: true
>>> > 2016-01-19 20:04:28,433 INFO [Thread-61]
>>> > org.apache.hadoop.mapreduce.v2.app.MRAppMaster: Notify JHEH
>>> isAMLastRetry:
>>> > true
>>> > 2016-01-19 20:04:28,433 INFO [Thread-61]
>>> > org.apache.hadoop.mapreduce.jobhistory.JobHistoryEventHandler:
>>> > JobHistoryEventHandler notified that forceJobCompletion is true
>>> > 2016-01-19 20:04:28,434 INFO [Thread-61]
>>> > org.apache.hadoop.mapreduce.v2.app.MRAppMaster: Calling stop for all
>>> the
>>> > services
>>> > 2016-01-19 20:04:28,435 INFO [Thread-61]
>>> > org.apache.hadoop.mapreduce.jobhistory.JobHistoryEventHandler: Stopping
>>> > JobHistoryEventHandler. Size of the outstanding queue size is 0
>>> > 2016-01-19 20:04:29,362 INFO [RMCommunicator Allocator]
>>> > org.apache.hadoop.mapreduce.v2.app.rm.RMContainerAllocator: Before
>>> > Scheduling: PendingReds:1 ScheduledMaps:0 ScheduledReds:0
>>> AssignedMaps:0
>>> > AssignedReds:0 CompletedMaps:1 CompletedReds:0 ContAlloc:4 ContRel:0
>>> > HostLocal:1 RackLocal:0
>>> > 2016-01-19 20:04:29,362 INFO [eventHandlingThread]
>>> > org.apache.hadoop.mapreduce.jobhistory.JobHistoryEventHandler: Copying
>>> >
>>> hdfs://hdnode01:54310/tmp/hadoop-yarn/staging/hduser/.staging/job_1453244277886_0001/job_1453244277886_0001_1.jhist
>>> > to
>>> >
>>> hdfs://hdnode01:54310/tmp/hadoop-yarn/staging/history/done_intermediate/hduser/job_1453244277886_0001-1453244648033-hduser-word+count-1453244668381-0-0-FAILED-default-1453244654166.jhist_tmp
>>> > 2016-01-19 20:04:29,366 INFO [RMCommunicator Allocator]
>>> > org.apache.hadoop.mapreduce.v2.app.rm.RMContainerAllocator:
>>> Recalculating
>>> > schedule, headroom=768
>>> > 2016-01-19 20:04:29,366 INFO [RMCommunicator Allocator]
>>> > org.apache.hadoop.mapreduce.v2.app.rm.RMContainerAllocator: Reduce slow
>>> > start threshold reached. Scheduling reduces.
>>> > 2016-01-19 20:04:29,366 INFO [RMCommunicator Allocator]
>>> > org.apache.hadoop.mapreduce.v2.app.rm.RMContainerAllocator: All maps
>>> > assigned. Ramping up all remaining reduces:1
>>> > 2016-01-19 20:04:29,366 INFO [RMCommunicator Allocator]
>>> > org.apache.hadoop.mapreduce.v2.app.rm.RMContainerAllocator: After
>>> > Scheduling: PendingReds:0 ScheduledMaps:0 ScheduledReds:1
>>> AssignedMaps:0
>>> > AssignedReds:0 CompletedMaps:1 CompletedReds:0 ContAlloc:4 ContRel:0
>>> > HostLocal:1 RackLocal:0
>>> > 2016-01-19 20:04:29,544 INFO [eventHandlingThread]
>>> > org.apache.hadoop.mapreduce.jobhistory.JobHistoryEventHandler: Copied
>>> to
>>> > done location:
>>> >
>>> hdfs://hdnode01:54310/tmp/hadoop-yarn/staging/history/done_intermediate/hduser/job_1453244277886_0001-1453244648033-hduser-word+count-1453244668381-0-0-FAILED-default-1453244654166.jhist_tmp
>>> > 2016-01-19 20:04:29,598 INFO [eventHandlingThread]
>>> > org.apache.hadoop.mapreduce.jobhistory.JobHistoryEventHandler: Copying
>>> >
>>> hdfs://hdnode01:54310/tmp/hadoop-yarn/staging/hduser/.staging/job_1453244277886_0001/job_1453244277886_0001_1_conf.xml
>>> > to
>>> >
>>> hdfs://hdnode01:54310/tmp/hadoop-yarn/staging/history/done_intermediate/hduser/job_1453244277886_0001_conf.xml_tmp
>>> > 2016-01-19 20:04:29,801 INFO [eventHandlingThread]
>>> > org.apache.hadoop.mapreduce.jobhistory.JobHistoryEventHandler: Copied
>>> to
>>> > done location:
>>> >
>>> hdfs://hdnode01:54310/tmp/hadoop-yarn/staging/history/done_intermediate/hduser/job_1453244277886_0001_conf.xml_tmp
>>> > 2016-01-19 20:04:29,907 INFO [eventHandlingThread]
>>> > org.apache.hadoop.mapreduce.jobhistory.JobHistoryEventHandler: Moved
>>> tmp to
>>> > done:
>>> >
>>> hdfs://hdnode01:54310/tmp/hadoop-yarn/staging/history/done_intermediate/hduser/job_1453244277886_0001.summary_tmp
>>> > to
>>> >
>>> hdfs://hdnode01:54310/tmp/hadoop-yarn/staging/history/done_intermediate/hduser/job_1453244277886_0001.summary
>>> > 2016-01-19 20:04:30,008 INFO [eventHandlingThread]
>>> > org.apache.hadoop.mapreduce.jobhistory.JobHistoryEventHandler: Moved
>>> tmp to
>>> > done:
>>> >
>>> hdfs://hdnode01:54310/tmp/hadoop-yarn/staging/history/done_intermediate/hduser/job_1453244277886_0001_conf.xml_tmp
>>> > to
>>> >
>>> hdfs://hdnode01:54310/tmp/hadoop-yarn/staging/history/done_intermediate/hduser/job_1453244277886_0001_conf.xml
>>> > 2016-01-19 20:04:30,070 INFO [eventHandlingThread]
>>> > org.apache.hadoop.mapreduce.jobhistory.JobHistoryEventHandler: Moved
>>> tmp to
>>> > done:
>>> >
>>> hdfs://hdnode01:54310/tmp/hadoop-yarn/staging/history/done_intermediate/hduser/job_1453244277886_0001-1453244648033-hduser-word+count-1453244668381-0-0-FAILED-default-1453244654166.jhist_tmp
>>> > to
>>> >
>>> hdfs://hdnode01:54310/tmp/hadoop-yarn/staging/history/done_intermediate/hduser/job_1453244277886_0001-1453244648033-hduser-word+count-1453244668381-0-0-FAILED-default-1453244654166.jhist
>>> > 2016-01-19 20:04:30,071 INFO [Thread-61]
>>> > org.apache.hadoop.mapreduce.jobhistory.JobHistoryEventHandler: Stopped
>>> > JobHistoryEventHandler. super.stop()
>>> > 2016-01-19 20:04:30,078 INFO [Thread-61]
>>> > org.apache.hadoop.mapreduce.v2.app.rm.RMContainerAllocator: Setting job
>>> > diagnostics to Task failed task_1453244277886_0001_m_000000
>>> > Job failed as tasks failed. failedMaps:1 failedReduces:0
>>> >
>>> > 2016-01-19 20:04:30,080 INFO [Thread-61]
>>> > org.apache.hadoop.mapreduce.v2.app.rm.RMContainerAllocator: History
>>> url is
>>> > http://localhost:19888/jobhistory/job/job_1453244277886_0001
>>> > 2016-01-19 20:04:30,094 INFO [Thread-61]
>>> > org.apache.hadoop.mapreduce.v2.app.rm.RMContainerAllocator: Waiting for
>>> > application to be successfully unregistered.
>>> > 2016-01-19 20:04:31,099 INFO [Thread-61]
>>> > org.apache.hadoop.mapreduce.v2.app.rm.RMContainerAllocator: Final
>>> Stats:
>>> > PendingReds:0 ScheduledMaps:0 ScheduledReds:1 AssignedMaps:0
>>> AssignedReds:0
>>> > CompletedMaps:1 CompletedReds:0 ContAlloc:4 ContRel:0 HostLocal:1
>>> > RackLocal:0
>>> > 2016-01-19 20:04:31,104 INFO [Thread-61]
>>> > org.apache.hadoop.mapreduce.v2.app.MRAppMaster: Deleting staging
>>> directory
>>> > hdfs://hdnode01:54310
>>> > /tmp/hadoop-yarn/staging/hduser/.staging/job_1453244277886_0001
>>> > 2016-01-19 20:04:31,133 INFO [Thread-61] org.apache.hadoop.ipc.Server:
>>> > Stopping server on 45584
>>> > 2016-01-19 20:04:31,135 INFO [IPC Server listener on 45584]
>>> > org.apache.hadoop.ipc.Server: Stopping IPC Server listener on 45584
>>> > 2016-01-19 20:04:31,135 INFO [IPC Server Responder]
>>> > org.apache.hadoop.ipc.Server: Stopping IPC Server Responder
>>> > 2016-01-19 20:04:31,136 INFO [TaskHeartbeatHandler PingChecker]
>>> > org.apache.hadoop.mapreduce.v2.app.TaskHeartbeatHandler:
>>> > TaskHeartbeatHandler thread interrupted
>>> >
>>> >
>>> > Jps results; I believe everything is OK, right?:
>>> > 21267 DataNode
>>> > 21609 ResourceManager
>>> > 21974 JobHistoryServer
>>> > 21735 NodeManager
>>> > 24546 Jps
>>> > 16532 org.eclipse.equinox.launcher_1.3.0.v20140415-2008.jar
>>> > 21121 NameNode
>>> > 22098 QuorumPeerMain
>>> > 21456 SecondaryNameNode
>>> >
>>> >
>>>
>>
>>
>

Re: Can't run hadoop examples with YARN Single node cluster

Posted by Namikaze Minato <ll...@gmail.com>.
Heh, good job, and thank you very much for posting the solution here; not
many people do that :)

I don't have the feeling I helped much, but finding a solution is what
*counts*, not only *words* :D

Regards,
LLoyd

On 7 March 2016 at 22:50, José Luis Larroque <la...@gmail.com> wrote:

> Hi again guys, I finally found what the issue was!!!
>
> This is my mapred-site.xml; here is the problem:
> <?xml version="1.0"?>
> <?xml-stylesheet type="text/xsl" href="configuration.xsl"?>
>
> <configuration>
> <property>
> <name>mapred.job.tracker</name>
> <!--
> <value>local</value> For debugging
> <value>hdnode01:54311</value> For the real runs
> -->
> <value>hdnode01:54311</value>
> </property>
>
> <property>
> <name>mapred.tasktracker.map.tasks.maximum</name>
> <value>4</value>
> </property>
>
> <property>
> <name>mapreduce.job.maps</name>
> <value>4</value>
> </property>
>
> <property>
> <name>mapreduce.framework.name</name>
> <value>yarn</value>
> </property>
>
> <property>
> <name>mapreduce.map.memory.mb</name>
> <value>512</value>
> </property>
>
> <property>
> <name>mapreduce.reduce.memory.mb</name>
> <value>512</value>
> </property>
>
> <property>
> <name>mapreduce.map.java.opts</name>
> <value>256</value>
> </property>
>
> <property>
> <name>mapreduce.reduce.java.opts</name>
> <value>256</value>
> </property>
> </configuration>
>
> If I remove the last two properties (mapreduce.map.java.opts,
> mapreduce.reduce.java.opts), wordcount works!
>
> I remember adding those last two properties for some kind of memory issue,
> but maybe for some reason they clash with the other two
> (mapreduce.map.memory.mb, mapreduce.reduce.memory.mb)?
>
> It would be great if someone could give me a short explanation so I can
> better understand how memory management works in a YARN cluster.
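>
> My best guess so far (just a sketch, not verified): mapreduce.map.java.opts and
> mapreduce.reduce.java.opts expect raw JVM arguments, not a plain number of
> megabytes, so the bare value "256" ends up on the java command line and is read
> as a main class name, which would match the "Could not find or load main class
> 256" error in the container logs. If that is right, the two properties should
> hold something like this instead, with the heap kept below the 512 MB container
> sizes configured above:
>
> <property>
> <name>mapreduce.map.java.opts</name>
> <value>-Xmx256m</value>
> </property>
>
> <property>
> <name>mapreduce.reduce.java.opts</name>
> <value>-Xmx256m</value>
> </property>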
>
>
> PS: Thanks again to Namikaze and Gaurav for your help!!
>
> Bye!
> Jose
>
> 2016-01-25 21:19 GMT-03:00 José Luis Larroque <la...@gmail.com>:
>
>> Thanks Namikaze for keeping at it, don't give up!! :D
>>
>> - I have these lines in *$HOME/.bashrc*
>>
>>
>> export HADOOP_PREFIX=/usr/local/hadoop
>>
>> # Other variables
>>
>> export HADOOP_COMMON_HOME=${HADOOP_PREFIX}
>>
>> export HADOOP_MAPRED_HOME=${HADOOP_PREFIX}
>>
>> export HADOOP_HDFS_HOME=${HADOOP_PREFIX}
>>
>> export HADOOP_YARN_HOME=${HADOOP_PREFIX}
>>
>>
>>   - In *hadoop-env.sh* I have:
>>
>> export HADOOP_CONF_DIR=${HADOOP_CONF_DIR:-"/usr/local/hadoop/etc/hadoop"}
>>
>>
>>   - I read that SO question and all the answers to it. The only useful answer,
>> in my opinion, was to check the yarn classpath. The following line appears
>> three times:
>>
>> /usr/local/hadoop/etc/hadoop:
>>
>>
>> I put yarn.application.classpath in yarn-site.xml, because I don't know
>> any other way to fix it, using the default value recommended in this
>> <https://hadoop.apache.org/docs/r2.4.1/hadoop-yarn/hadoop-yarn-common/yarn-default.xml>
>> (look for yarn.application.classpath):
>>
>>
>> $HADOOP_CONF_DIR, $HADOOP_COMMON_HOME/share/hadoop/common/*,
>> $HADOOP_COMMON_HOME/share/hadoop/common/lib/*, $HADOOP_HDFS_HOME/share/hadoop/hdfs/*,
>> $HADOOP_HDFS_HOME/share/hadoop/hdfs/lib/*, $HADOOP_YARN_HOME/share/hadoop/yarn/*,
>> $HADOOP_YARN_HOME/share/hadoop/yarn/lib/*
>>
>>
>> But the classpath remains the same, and I can't find any other way to fix
>> it. Maybe this is the problem?
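>>
>> For reference, this is roughly how that property ends up in my yarn-site.xml
>> (a sketch, using the default value quoted above):
>>
>> <property>
>> <name>yarn.application.classpath</name>
>> <value>$HADOOP_CONF_DIR,$HADOOP_COMMON_HOME/share/hadoop/common/*,
>> $HADOOP_COMMON_HOME/share/hadoop/common/lib/*,$HADOOP_HDFS_HOME/share/hadoop/hdfs/*,
>> $HADOOP_HDFS_HOME/share/hadoop/hdfs/lib/*,$HADOOP_YARN_HOME/share/hadoop/yarn/*,
>> $HADOOP_YARN_HOME/share/hadoop/yarn/lib/*</value>
>> </property>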
>>
>>
>>  - yarn.log-aggregation-enable was always set to true. I couldn't find
>> anything in the *datanode logs*; here they are:
>>
>> 2016-01-25 21:13:07,006 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: STARTUP_MSG:
>> /************************************************************
>> STARTUP_MSG: Starting DataNode
>> STARTUP_MSG:   host = jose-ubuntu/127.0.0.1
>> STARTUP_MSG:   args = []
>> STARTUP_MSG:   version = 2.4.0
>> STARTUP_MSG:   classpath = /usr/local/hadoop/etc/hadoop:/usr/local/hadoop/share/hadoop/common/lib/jackson-jaxrs-1.8.8.jar:/usr/local/hadoop/share/hadoop/common/lib/jersey-json-1.9.jar:/usr/local/hadoop/share/hadoop/common/lib/commons-compress-1.4.1.jar:/usr/local/hadoop/share/hadoop/common/lib/httpcore-4.2.5.jar:/usr/local/hadoop/share/hadoop/common/lib/httpclient-4.2.5.jar:/usr/local/hadoop/share/hadoop/common/lib/commons-beanutils-1.7.0.jar:/usr/local/hadoop/share/hadoop/common/lib/jettison-1.1.jar:/usr/local/hadoop/share/hadoop/common/lib/jaxb-impl-2.2.3-1.jar:/usr/local/hadoop/share/hadoop/common/lib/activation-1.1.jar:/usr/local/hadoop/share/hadoop/common/lib/jackson-core-asl-1.8.8.jar:/usr/local/hadoop/share/hadoop/common/lib/commons-net-3.1.jar:/usr/local/hadoop/share/hadoop/common/lib/commons-cli-1.2.jar:/usr/local/hadoop/share/hadoop/common/lib/commons-codec-1.4.jar:/usr/local/hadoop/share/hadoop/common/lib/junit-4.8.2.jar:/usr/local/hadoop/share/hadoop/common/lib/paranamer-2.3.jar:/usr/local/hadoop/share/hadoop/common/lib/jasper-compiler-5.5.23.jar:/usr/local/hadoop/share/hadoop/common/lib/protobuf-java-2.5.0.jar:/usr/local/hadoop/share/hadoop/common/lib/commons-digester-1.8.jar:/usr/local/hadoop/share/hadoop/common/lib/jets3t-0.9.0.jar:/usr/local/hadoop/share/hadoop/common/lib/jackson-xc-1.8.8.jar:/usr/local/hadoop/share/hadoop/common/lib/jersey-server-1.9.jar:/usr/local/hadoop/share/hadoop/common/lib/jetty-util-6.1.26.jar:/usr/local/hadoop/share/hadoop/common/lib/mockito-all-1.8.5.jar:/usr/local/hadoop/share/hadoop/common/lib/xmlenc-0.52.jar:/usr/local/hadoop/share/hadoop/common/lib/slf4j-api-1.7.5.jar:/usr/local/hadoop/share/hadoop/common/lib/stax-api-1.0-2.jar:/usr/local/hadoop/share/hadoop/common/lib/avro-1.7.4.jar:/usr/local/hadoop/share/hadoop/common/lib/hadoop-annotations-2.4.0.jar:/usr/local/hadoop/share/hadoop/common/lib/hadoop-auth-2.4.0.jar:/usr/local/hadoop/share/hadoop/common/lib/netty-3.6.2.Final.jar:/usr/local/hadoop/share/hadoop/common/lib/jasper-runtime-5.5.23.jar:/usr/local/hadoop/share/hadoop/common/lib/slf4j-log4j12-1.7.5.jar:/usr/local/hadoop/share/hadoop/common/lib/commons-logging-1.1.3.jar:/usr/local/hadoop/share/hadoop/common/lib/guava-11.0.2.jar:/usr/local/hadoop/share/hadoop/common/lib/jackson-mapper-asl-1.8.8.jar:/usr/local/hadoop/share/hadoop/common/lib/jsp-api-2.1.jar:/usr/local/hadoop/share/hadoop/common/lib/commons-configuration-1.6.jar:/usr/local/hadoop/share/hadoop/common/lib/snappy-java-1.0.4.1.jar:/usr/local/hadoop/share/hadoop/common/lib/log4j-1.2.17.jar:/usr/local/hadoop/share/hadoop/common/lib/jsch-0.1.42.jar:/usr/local/hadoop/share/hadoop/common/lib/commons-io-2.4.jar:/usr/local/hadoop/share/hadoop/common/lib/commons-beanutils-core-1.8.0.jar:/usr/local/hadoop/share/hadoop/common/lib/jsr305-1.3.9.jar:/usr/local/hadoop/share/hadoop/common/lib/jersey-core-1.9.jar:/usr/local/hadoop/share/hadoop/common/lib/asm-3.2.jar:/usr/local/hadoop/share/hadoop/common/lib/servlet-api-2.5.jar:/usr/local/hadoop/share/hadoop/common/lib/jaxb-api-2.2.2.jar:/usr/local/hadoop/share/hadoop/common/lib/jetty-6.1.26.jar:/usr/local/hadoop/share/hadoop/common/lib/commons-lang-2.6.jar:/usr/local/hadoop/share/hadoop/common/lib/java-xmlbuilder-0.4.jar:/usr/local/hadoop/share/hadoop/common/lib/commons-collections-3.2.1.jar:/usr/local/hadoop/share/hadoop/common/lib/commons-el-1.0.jar:/usr/local/hadoop/share/hadoop/common/lib/zookeeper-3.4.5.jar:/usr/local/hadoop/share/hadoop/common/lib/commons-httpclient-3.1.jar:/usr/local/hadoop/share/hadoop/common/lib/xz-1.0.jar:/usr/loca
l/hadoop/share/hadoop/common/lib/commons-math3-3.1.1.jar:/usr/local/hadoop/share/hadoop/common/hadoop-common-2.4.0-tests.jar:/usr/local/hadoop/share/hadoop/common/hadoop-nfs-2.4.0.jar:/usr/local/hadoop/share/hadoop/common/hadoop-common-2.4.0.jar:/usr/local/hadoop/share/hadoop/hdfs:/usr/local/hadoop/share/hadoop/hdfs/lib/jackson-core-asl-1.8.8.jar:/usr/local/hadoop/share/hadoop/hdfs/lib/commons-cli-1.2.jar:/usr/local/hadoop/share/hadoop/hdfs/lib/commons-codec-1.4.jar:/usr/local/hadoop/share/hadoop/hdfs/lib/protobuf-java-2.5.0.jar:/usr/local/hadoop/share/hadoop/hdfs/lib/jersey-server-1.9.jar:/usr/local/hadoop/share/hadoop/hdfs/lib/jetty-util-6.1.26.jar:/usr/local/hadoop/share/hadoop/hdfs/lib/xmlenc-0.52.jar:/usr/local/hadoop/share/hadoop/hdfs/lib/commons-daemon-1.0.13.jar:/usr/local/hadoop/share/hadoop/hdfs/lib/netty-3.6.2.Final.jar:/usr/local/hadoop/share/hadoop/hdfs/lib/jasper-runtime-5.5.23.jar:/usr/local/hadoop/share/hadoop/hdfs/lib/commons-logging-1.1.3.jar:/usr/local/hadoop/share/hadoop/hdfs/lib/guava-11.0.2.jar:/usr/local/hadoop/share/hadoop/hdfs/lib/jackson-mapper-asl-1.8.8.jar:/usr/local/hadoop/share/hadoop/hdfs/lib/jsp-api-2.1.jar:/usr/local/hadoop/share/hadoop/hdfs/lib/log4j-1.2.17.jar:/usr/local/hadoop/share/hadoop/hdfs/lib/commons-io-2.4.jar:/usr/local/hadoop/share/hadoop/hdfs/lib/jsr305-1.3.9.jar:/usr/local/hadoop/share/hadoop/hdfs/lib/jersey-core-1.9.jar:/usr/local/hadoop/share/hadoop/hdfs/lib/asm-3.2.jar:/usr/local/hadoop/share/hadoop/hdfs/lib/servlet-api-2.5.jar:/usr/local/hadoop/share/hadoop/hdfs/lib/jetty-6.1.26.jar:/usr/local/hadoop/share/hadoop/hdfs/lib/commons-lang-2.6.jar:/usr/local/hadoop/share/hadoop/hdfs/lib/commons-el-1.0.jar:/usr/local/hadoop/share/hadoop/hdfs/hadoop-hdfs-2.4.0-tests.jar:/usr/local/hadoop/share/hadoop/hdfs/hadoop-hdfs-2.4.0.jar:/usr/local/hadoop/share/hadoop/hdfs/hadoop-hdfs-nfs-2.4.0.jar:/usr/local/hadoop/share/hadoop/yarn/lib/jackson-jaxrs-1.8.8.jar:/usr/local/hadoop/share/hadoop/yarn/lib/jersey-json-1.9.jar:/usr/local/hadoop/share/hadoop/yarn/lib/commons-compress-1.4.1.jar:/usr/local/hadoop/share/hadoop/yarn/lib/jersey-client-1.9.jar:/usr/local/hadoop/share/hadoop/yarn/lib/guice-servlet-3.0.jar:/usr/local/hadoop/share/hadoop/yarn/lib/giraph-examples-1.1.0-for-hadoop-2.4.0-jar-with-dependencies.jar:/usr/local/hadoop/share/hadoop/yarn/lib/hadoop-mapreduce-examples-2.4.0.jar:/usr/local/hadoop/share/hadoop/yarn/lib/jettison-1.1.jar:/usr/local/hadoop/share/hadoop/yarn/lib/jaxb-impl-2.2.3-1.jar:/usr/local/hadoop/share/hadoop/yarn/lib/activation-1.1.jar:/usr/local/hadoop/share/hadoop/yarn/lib/jackson-core-asl-1.8.8.jar:/usr/local/hadoop/share/hadoop/yarn/lib/commons-cli-1.2.jar:/usr/local/hadoop/share/hadoop/yarn/lib/commons-codec-1.4.jar:/usr/local/hadoop/share/hadoop/yarn/lib/aopalliance-1.0.jar:/usr/local/hadoop/share/hadoop/yarn/lib/leveldbjni-all-1.8.jar:/usr/local/hadoop/share/hadoop/yarn/lib/protobuf-java-2.5.0.jar:/usr/local/hadoop/share/hadoop/yarn/lib/jackson-xc-1.8.8.jar:/usr/local/hadoop/share/hadoop/yarn/lib/jersey-server-1.9.jar:/usr/local/hadoop/share/hadoop/yarn/lib/jetty-util-6.1.26.jar:/usr/local/hadoop/share/hadoop/yarn/lib/jline-0.9.94.jar:/usr/local/hadoop/share/hadoop/yarn/lib/stax-api-1.0-2.jar:/usr/local/hadoop/share/hadoop/yarn/lib/guice-3.0.jar:/usr/local/hadoop/share/hadoop/yarn/lib/commons-logging-1.1.3.jar:/usr/local/hadoop/share/hadoop/yarn/lib/guava-11.0.2.jar:/usr/local/hadoop/share/hadoop/yarn/lib/jackson-mapper-asl-1.8.8.jar:/usr/local/hadoop/share/hadoop/yarn/lib/log4j-1.2.17.jar:/usr/local/hadoop/share/hadoop/yarn/li
b/commons-io-2.4.jar:/usr/local/hadoop/share/hadoop/yarn/lib/jsr305-1.3.9.jar:/usr/local/hadoop/share/hadoop/yarn/lib/jersey-core-1.9.jar:/usr/local/hadoop/share/hadoop/yarn/lib/asm-3.2.jar:/usr/local/hadoop/share/hadoop/yarn/lib/servlet-api-2.5.jar:/usr/local/hadoop/share/hadoop/yarn/lib/jaxb-api-2.2.2.jar:/usr/local/hadoop/share/hadoop/yarn/lib/jetty-6.1.26.jar:/usr/local/hadoop/share/hadoop/yarn/lib/commons-lang-2.6.jar:/usr/local/hadoop/share/hadoop/yarn/lib/commons-collections-3.2.1.jar:/usr/local/hadoop/share/hadoop/yarn/lib/jersey-guice-1.9.jar:/usr/local/hadoop/share/hadoop/yarn/lib/zookeeper-3.4.5.jar:/usr/local/hadoop/share/hadoop/yarn/lib/commons-httpclient-3.1.jar:/usr/local/hadoop/share/hadoop/yarn/lib/javax.inject-1.jar:/usr/local/hadoop/share/hadoop/yarn/lib/xz-1.0.jar:/usr/local/hadoop/share/hadoop/yarn/hadoop-yarn-server-common-2.4.0.jar:/usr/local/hadoop/share/hadoop/yarn/hadoop-yarn-client-2.4.0.jar:/usr/local/hadoop/share/hadoop/yarn/hadoop-yarn-api-2.4.0.jar:/usr/local/hadoop/share/hadoop/yarn/hadoop-yarn-applications-unmanaged-am-launcher-2.4.0.jar:/usr/local/hadoop/share/hadoop/yarn/hadoop-yarn-server-nodemanager-2.4.0.jar:/usr/local/hadoop/share/hadoop/yarn/hadoop-yarn-server-tests-2.4.0.jar:/usr/local/hadoop/share/hadoop/yarn/hadoop-yarn-server-resourcemanager-2.4.0.jar:/usr/local/hadoop/share/hadoop/yarn/hadoop-yarn-server-web-proxy-2.4.0.jar:/usr/local/hadoop/share/hadoop/yarn/hadoop-yarn-applications-distributedshell-2.4.0.jar:/usr/local/hadoop/share/hadoop/yarn/hadoop-yarn-server-applicationhistoryservice-2.4.0.jar:/usr/local/hadoop/share/hadoop/yarn/hadoop-yarn-common-2.4.0.jar:/usr/local/hadoop/share/hadoop/mapreduce/lib/commons-compress-1.4.1.jar:/usr/local/hadoop/share/hadoop/mapreduce/lib/guice-servlet-3.0.jar:/usr/local/hadoop/share/hadoop/mapreduce/lib/jackson-core-asl-1.8.8.jar:/usr/local/hadoop/share/hadoop/mapreduce/lib/hamcrest-core-1.1.jar:/usr/local/hadoop/share/hadoop/mapreduce/lib/paranamer-2.3.jar:/usr/local/hadoop/share/hadoop/mapreduce/lib/aopalliance-1.0.jar:/usr/local/hadoop/share/hadoop/mapreduce/lib/protobuf-java-2.5.0.jar:/usr/local/hadoop/share/hadoop/mapreduce/lib/jersey-server-1.9.jar:/usr/local/hadoop/share/hadoop/mapreduce/lib/avro-1.7.4.jar:/usr/local/hadoop/share/hadoop/mapreduce/lib/hadoop-annotations-2.4.0.jar:/usr/local/hadoop/share/hadoop/mapreduce/lib/guice-3.0.jar:/usr/local/hadoop/share/hadoop/mapreduce/lib/junit-4.10.jar:/usr/local/hadoop/share/hadoop/mapreduce/lib/netty-3.6.2.Final.jar:/usr/local/hadoop/share/hadoop/mapreduce/lib/jackson-mapper-asl-1.8.8.jar:/usr/local/hadoop/share/hadoop/mapreduce/lib/snappy-java-1.0.4.1.jar:/usr/local/hadoop/share/hadoop/mapreduce/lib/log4j-1.2.17.jar:/usr/local/hadoop/share/hadoop/mapreduce/lib/commons-io-2.4.jar:/usr/local/hadoop/share/hadoop/mapreduce/lib/jersey-core-1.9.jar:/usr/local/hadoop/share/hadoop/mapreduce/lib/asm-3.2.jar:/usr/local/hadoop/share/hadoop/mapreduce/lib/jersey-guice-1.9.jar:/usr/local/hadoop/share/hadoop/mapreduce/lib/javax.inject-1.jar:/usr/local/hadoop/share/hadoop/mapreduce/lib/xz-1.0.jar:/usr/local/hadoop/share/hadoop/mapreduce/giraph-examples-1.1.0-for-hadoop-2.4.0-jar-with-dependencies.jar:/usr/local/hadoop/share/hadoop/mapreduce/hadoop-mapreduce-examples-2.4.0.jar:/usr/local/hadoop/share/hadoop/mapreduce/procesamiento_salida_grafo_caminos_navegacionales-0.0.1.jar:/usr/local/hadoop/share/hadoop/mapreduce/zookeeper-3.4.6.jar:/usr/local/hadoop/share/hadoop/mapreduce/hadoop-mapreduce-client-shuffle-2.4.0.jar:/usr/local/hadoop/share/hadoop/mapreduce/procesamient
o_grafo_wikiquote-0.0.1.jar:/usr/local/hadoop/share/hadoop/mapreduce/hadoop-mapreduce-client-app-2.4.0.jar:/usr/local/hadoop/share/hadoop/mapreduce/hadoop-mapreduce-client-common-2.4.0.jar:/usr/local/hadoop/share/hadoop/mapreduce/hadoop-mapreduce-client-core-2.4.0.jar:/usr/local/hadoop/share/hadoop/mapreduce/generacion_grafo_wikiquote-0.0.1-SNAPSHOT.jar:/usr/local/hadoop/share/hadoop/mapreduce/hadoop-mapreduce-client-hs-plugins-2.4.0.jar:/usr/local/hadoop/share/hadoop/mapreduce/hadoop-mapreduce-client-jobclient-2.4.0.jar:/usr/local/hadoop/share/hadoop/mapreduce/hadoop-mapreduce-client-hs-2.4.0.jar:/usr/local/hadoop/share/hadoop/mapreduce/hadoop-mapreduce-client-jobclient-2.4.0-tests.jar:/usr/local/hadoop/contrib/capacity-scheduler/*.jar:/usr/local/hadoop/contrib/capacity-scheduler/*.jar:/usr/local/hadoop/contrib/capacity-scheduler/*.jar
>> STARTUP_MSG:   build = http://svn.apache.org/repos/asf/hadoop/common -r 1583262; compiled by 'jenkins' on 2014-03-31T08:29Z
>> STARTUP_MSG:   java = 1.7.0_79
>> ************************************************************/
>> 2016-01-25 21:13:07,015 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: registered UNIX signal handlers for [TERM, HUP, INT]
>> 2016-01-25 21:13:07,188 WARN org.apache.hadoop.hdfs.server.common.Util: Path /usr/local/hadoop/dfs/name/data should be specified as a URI in configuration files. Please update hdfs configuration.
>> 2016-01-25 21:13:07,356 WARN org.apache.hadoop.util.NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
>> 2016-01-25 21:13:07,648 INFO org.apache.hadoop.metrics2.impl.MetricsConfig: loaded properties from hadoop-metrics2.properties
>> 2016-01-25 21:13:07,723 INFO org.apache.hadoop.metrics2.impl.MetricsSystemImpl: Scheduled snapshot period at 10 second(s).
>> 2016-01-25 21:13:07,723 INFO org.apache.hadoop.metrics2.impl.MetricsSystemImpl: DataNode metrics system started
>> 2016-01-25 21:13:07,727 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: Configured hostname is localhost
>> 2016-01-25 21:13:07,728 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: Starting DataNode with maxLockedMemory = 0
>> 2016-01-25 21:13:07,757 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: Opened streaming server at /0.0.0.0:50010
>> 2016-01-25 21:13:07,760 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: Balancing bandwith is 1048576 bytes/s
>> 2016-01-25 21:13:07,839 INFO org.mortbay.log: Logging to org.slf4j.impl.Log4jLoggerAdapter(org.mortbay.log) via org.mortbay.log.Slf4jLog
>> 2016-01-25 21:13:07,843 INFO org.apache.hadoop.http.HttpRequestLog: Http request log for http.requests.datanode is not defined
>> 2016-01-25 21:13:07,853 INFO org.apache.hadoop.http.HttpServer2: Added global filter 'safety' (class=org.apache.hadoop.http.HttpServer2$QuotingInputFilter)
>> 2016-01-25 21:13:07,856 INFO org.apache.hadoop.http.HttpServer2: Added filter static_user_filter (class=org.apache.hadoop.http.lib.StaticUserWebFilter$StaticUserFilter) to context datanode
>> 2016-01-25 21:13:07,856 INFO org.apache.hadoop.http.HttpServer2: Added filter static_user_filter (class=org.apache.hadoop.http.lib.StaticUserWebFilter$StaticUserFilter) to context logs
>> 2016-01-25 21:13:07,856 INFO org.apache.hadoop.http.HttpServer2: Added filter static_user_filter (class=org.apache.hadoop.http.lib.StaticUserWebFilter$StaticUserFilter) to context static
>> 2016-01-25 21:13:07,872 INFO org.apache.hadoop.http.HttpServer2: addJerseyResourcePackage: packageName=org.apache.hadoop.hdfs.server.datanode.web.resources;org.apache.hadoop.hdfs.web.resources, pathSpec=/webhdfs/v1/*
>> 2016-01-25 21:13:07,875 INFO org.apache.hadoop.http.HttpServer2: Jetty bound to port 50075
>> 2016-01-25 21:13:07,875 INFO org.mortbay.log: jetty-6.1.26
>> 2016-01-25 21:13:08,137 INFO org.mortbay.log: Started SelectChannelConnector@0.0.0.0:50075
>> 2016-01-25 21:13:08,225 INFO org.apache.hadoop.ipc.CallQueueManager: Using callQueue class java.util.concurrent.LinkedBlockingQueue
>> 2016-01-25 21:13:08,239 INFO org.apache.hadoop.ipc.Server: Starting Socket Reader #1 for port 50020
>> 2016-01-25 21:13:08,288 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: Opened IPC server at /0.0.0.0:50020
>> 2016-01-25 21:13:08,300 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: Refresh request received for nameservices: null
>> 2016-01-25 21:13:08,316 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: Starting BPOfferServices for nameservices: <default>
>> 2016-01-25 21:13:08,321 WARN org.apache.hadoop.hdfs.server.common.Util: Path /usr/local/hadoop/dfs/name/data should be specified as a URI in configuration files. Please update hdfs configuration.
>> 2016-01-25 21:13:08,325 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: Block pool <registering> (Datanode Uuid unassigned) service to hdnode01/192.168.0.10:54310 starting to offer service
>> 2016-01-25 21:13:08,326 INFO org.apache.hadoop.ipc.Server: IPC Server Responder: starting
>> 2016-01-25 21:13:08,326 INFO org.apache.hadoop.ipc.Server: IPC Server listener on 50020: starting
>> 2016-01-25 21:13:08,719 INFO org.apache.hadoop.hdfs.server.common.Storage: Data-node version: -55 and name-node layout version: -56
>> 2016-01-25 21:13:08,828 INFO org.apache.hadoop.hdfs.server.common.Storage: Lock on /usr/local/hadoop/dfs/name/data/in_use.lock acquired by nodename 10365@jose-ubuntu
>> 2016-01-25 21:13:08,833 INFO org.apache.hadoop.hdfs.server.common.Storage: Storage directory /usr/local/hadoop/dfs/name/data is not formatted
>> 2016-01-25 21:13:08,833 INFO org.apache.hadoop.hdfs.server.common.Storage: Formatting ...
>> 2016-01-25 21:13:09,017 INFO org.apache.hadoop.hdfs.server.common.Storage: Analyzing storage directories for bpid BP-216406264-127.0.0.1-1453767164845
>> 2016-01-25 21:13:09,017 INFO org.apache.hadoop.hdfs.server.common.Storage: Locking is disabled
>> 2016-01-25 21:13:09,018 INFO org.apache.hadoop.hdfs.server.common.Storage: Storage directory /usr/local/hadoop/dfs/name/data/current/BP-216406264-127.0.0.1-1453767164845 is not formatted.
>> 2016-01-25 21:13:09,018 INFO org.apache.hadoop.hdfs.server.common.Storage: Formatting ...
>> 2016-01-25 21:13:09,018 INFO org.apache.hadoop.hdfs.server.common.Storage: Formatting block pool BP-216406264-127.0.0.1-1453767164845 directory /usr/local/hadoop/dfs/name/data/current/BP-216406264-127.0.0.1-1453767164845/current
>> 2016-01-25 21:13:09,072 INFO org.apache.hadoop.hdfs.server.common.Storage: Restored 0 block files from trash.
>> 2016-01-25 21:13:09,198 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: Setting up storage: nsid=1479061672;bpid=BP-216406264-127.0.0.1-1453767164845;lv=-55;nsInfo=lv=-56;cid=CID-8fa0e75b-6942-452a-a4e6-8cd0c24de652;nsid=1479061672;c=0;bpid=BP-216406264-127.0.0.1-1453767164845;dnuuid=null
>> 2016-01-25 21:13:09,248 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: Generated and persisted new Datanode UUID 6b4236c8-2183-49ba-84d7-a273298ba37a
>> 2016-01-25 21:13:09,268 WARN org.apache.hadoop.hdfs.server.common.Util: Path /usr/local/hadoop/dfs/name/data should be specified as a URI in configuration files. Please update hdfs configuration.
>> 2016-01-25 21:13:09,270 INFO org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsDatasetImpl: Added volume - /usr/local/hadoop/dfs/name/data/current, StorageType: DISK
>> 2016-01-25 21:13:09,279 INFO org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsDatasetImpl: Registered FSDatasetState MBean
>> 2016-01-25 21:13:09,282 INFO org.apache.hadoop.hdfs.server.datanode.DirectoryScanner: Periodic Directory Tree Verification scan starting at 1453784080282 with interval 21600000
>> 2016-01-25 21:13:09,283 INFO org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsDatasetImpl: Adding block pool BP-216406264-127.0.0.1-1453767164845
>> 2016-01-25 21:13:09,284 INFO org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsDatasetImpl: Scanning block pool BP-216406264-127.0.0.1-1453767164845 on volume /usr/local/hadoop/dfs/name/data/current...
>> 2016-01-25 21:13:09,299 INFO org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsDatasetImpl: Time taken to scan block pool BP-216406264-127.0.0.1-1453767164845 on /usr/local/hadoop/dfs/name/data/current: 15ms
>> 2016-01-25 21:13:09,300 INFO org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsDatasetImpl: Total time to scan all replicas for block pool BP-216406264-127.0.0.1-1453767164845: 17ms
>> 2016-01-25 21:13:09,301 INFO org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsDatasetImpl: Adding replicas to map for block pool BP-216406264-127.0.0.1-1453767164845 on volume /usr/local/hadoop/dfs/name/data/current...
>> 2016-01-25 21:13:09,301 INFO org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsDatasetImpl: Time to add replicas to map for block pool BP-216406264-127.0.0.1-1453767164845 on volume /usr/local/hadoop/dfs/name/data/current: 0ms
>> 2016-01-25 21:13:09,301 INFO org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsDatasetImpl: Total time to add all replicas to map: 1ms
>> 2016-01-25 21:13:09,305 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: Block pool BP-216406264-127.0.0.1-1453767164845 (Datanode Uuid null) service to hdnode01/192.168.0.10:54310 beginning handshake with NN
>> 2016-01-25 21:13:09,355 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: Block pool Block pool BP-216406264-127.0.0.1-1453767164845 (Datanode Uuid null) service to hdnode01/192.168.0.10:54310 successfully registered with NN
>> 2016-01-25 21:13:09,356 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: For namenode hdnode01/192.168.0.10:54310 using DELETEREPORT_INTERVAL of 300000 msec  BLOCKREPORT_INTERVAL of 21600000msec CACHEREPORT_INTERVAL of 10000msec Initial delay: 0msec; heartBeatInterval=3000
>> 2016-01-25 21:13:09,444 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: Namenode Block pool BP-216406264-127.0.0.1-1453767164845 (Datanode Uuid 6b4236c8-2183-49ba-84d7-a273298ba37a) service to hdnode01/192.168.0.10:54310 trying to claim ACTIVE state with txid=1
>> 2016-01-25 21:13:09,444 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: Acknowledging ACTIVE Namenode Block pool BP-216406264-127.0.0.1-1453767164845 (Datanode Uuid 6b4236c8-2183-49ba-84d7-a273298ba37a) service to hdnode01/192.168.0.10:54310
>> 2016-01-25 21:13:09,487 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: Sent 1 blockreports 0 blocks total. Took 1 msec to generate and 42 msecs for RPC and NN processing.  Got back commands none
>> 2016-01-25 21:13:09,492 INFO org.apache.hadoop.util.GSet: Computing capacity for map BlockMap
>> 2016-01-25 21:13:09,493 INFO org.apache.hadoop.util.GSet: VM type       = 64-bit
>> 2016-01-25 21:13:09,494 INFO org.apache.hadoop.util.GSet: 0.5% max memory 1.8 GB = 9.1 MB
>> 2016-01-25 21:13:09,494 INFO org.apache.hadoop.util.GSet: capacity      = 2^20 = 1048576 entries
>> 2016-01-25 21:13:09,495 INFO org.apache.hadoop.hdfs.server.datanode.BlockPoolSliceScanner: Periodic Block Verification Scanner initialized with interval 504 hours for block pool BP-216406264-127.0.0.1-1453767164845
>> 2016-01-25 21:13:09,499 INFO org.apache.hadoop.hdfs.server.datanode.DataBlockScanner: Added bpid=BP-216406264-127.0.0.1-1453767164845 to blockPoolScannerMap, new size=1
>> 2016-01-25 21:13:32,355 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: Receiving BP-216406264-127.0.0.1-1453767164845:blk_1073741825_1001 src: /192.168.0.10:58649 dest: /192.168.0.10:50010
>> 2016-01-25 21:13:32,482 INFO org.apache.hadoop.hdfs.server.datanode.DataNode.clienttrace: src: /192.168.0.10:58649, dest: /192.168.0.10:50010, bytes: 343691, op: HDFS_WRITE, cliID: DFSClient_NONMAPREDUCE_538002429_1, offset: 0, srvID: 6b4236c8-2183-49ba-84d7-a273298ba37a, blockid: BP-216406264-127.0.0.1-1453767164845:blk_1073741825_1001, duration: 98632367
>> 2016-01-25 21:13:32,482 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: PacketResponder: BP-216406264-127.0.0.1-1453767164845:blk_1073741825_1001, type=LAST_IN_PIPELINE, downstreams=0:[] terminating
>> 2016-01-25 21:13:34,291 INFO org.apache.hadoop.hdfs.server.datanode.BlockPoolSliceScanner: Verification succeeded for BP-216406264-127.0.0.1-1453767164845:blk_1073741825_1001
>> 2016-01-25 21:14:10,176 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: Receiving BP-216406264-127.0.0.1-1453767164845:blk_1073741826_1002 src: /192.168.0.10:58663 dest: /192.168.0.10:50010
>> 2016-01-25 21:14:10,220 INFO org.apache.hadoop.hdfs.server.datanode.DataNode.clienttrace: src: /192.168.0.10:58663, dest: /192.168.0.10:50010, bytes: 270263, op: HDFS_WRITE, cliID: DFSClient_NONMAPREDUCE_342504113_1, offset: 0, srvID: 6b4236c8-2183-49ba-84d7-a273298ba37a, blockid: BP-216406264-127.0.0.1-1453767164845:blk_1073741826_1002, duration: 42378742
>> 2016-01-25 21:14:10,221 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: PacketResponder: BP-216406264-127.0.0.1-1453767164845:blk_1073741826_1002, type=LAST_IN_PIPELINE, downstreams=0:[] terminating
>> 2016-01-25 21:14:10,714 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: Receiving BP-216406264-127.0.0.1-1453767164845:blk_1073741827_1003 src: /192.168.0.10:58664 dest: /192.168.0.10:50010
>> 2016-01-25 21:14:10,721 INFO org.apache.hadoop.hdfs.server.datanode.DataNode.clienttrace: src: /192.168.0.10:58664, dest: /192.168.0.10:50010, bytes: 121, op: HDFS_WRITE, cliID: DFSClient_NONMAPREDUCE_342504113_1, offset: 0, srvID: 6b4236c8-2183-49ba-84d7-a273298ba37a, blockid: BP-216406264-127.0.0.1-1453767164845:blk_1073741827_1003, duration: 2656758
>> 2016-01-25 21:14:10,721 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: PacketResponder: BP-216406264-127.0.0.1-1453767164845:blk_1073741827_1003, type=LAST_IN_PIPELINE, downstreams=0:[] terminating
>> 2016-01-25 21:14:10,853 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: Receiving BP-216406264-127.0.0.1-1453767164845:blk_1073741828_1004 src: /192.168.0.10:58665 dest: /192.168.0.10:50010
>> 2016-01-25 21:14:10,860 INFO org.apache.hadoop.hdfs.server.datanode.DataNode.clienttrace: src: /192.168.0.10:58665, dest: /192.168.0.10:50010, bytes: 26, op: HDFS_WRITE, cliID: DFSClient_NONMAPREDUCE_342504113_1, offset: 0, srvID: 6b4236c8-2183-49ba-84d7-a273298ba37a, blockid: BP-216406264-127.0.0.1-1453767164845:blk_1073741828_1004, duration: 3257396
>> 2016-01-25 21:14:10,861 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: PacketResponder: BP-216406264-127.0.0.1-1453767164845:blk_1073741828_1004, type=LAST_IN_PIPELINE, downstreams=0:[] terminating
>> 2016-01-25 21:14:11,717 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: Receiving BP-216406264-127.0.0.1-1453767164845:blk_1073741829_1005 src: /192.168.0.10:58666 dest: /192.168.0.10:50010
>> 2016-01-25 21:14:11,726 INFO org.apache.hadoop.hdfs.server.datanode.DataNode.clienttrace: src: /192.168.0.10:58666, dest: /192.168.0.10:50010, bytes: 77957, op: HDFS_WRITE, cliID: DFSClient_NONMAPREDUCE_342504113_1, offset: 0, srvID: 6b4236c8-2183-49ba-84d7-a273298ba37a, blockid: BP-216406264-127.0.0.1-1453767164845:blk_1073741829_1005, duration: 6180229
>> 2016-01-25 21:14:11,727 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: PacketResponder: BP-216406264-127.0.0.1-1453767164845:blk_1073741829_1005, type=LAST_IN_PIPELINE, downstreams=0:[] terminating
>> 2016-01-25 21:14:14,298 INFO org.apache.hadoop.hdfs.server.datanode.BlockPoolSliceScanner: Verification succeeded for BP-216406264-127.0.0.1-1453767164845:blk_1073741829_1005
>> 2016-01-25 21:14:14,299 INFO org.apache.hadoop.hdfs.server.datanode.BlockPoolSliceScanner: Verification succeeded for BP-216406264-127.0.0.1-1453767164845:blk_1073741827_1003
>> 2016-01-25 21:14:14,305 INFO org.apache.hadoop.hdfs.server.datanode.BlockPoolSliceScanner: Verification succeeded for BP-216406264-127.0.0.1-1453767164845:blk_1073741826_1002
>> 2016-01-25 21:14:14,305 INFO org.apache.hadoop.hdfs.server.datanode.BlockPoolSliceScanner: Verification succeeded for BP-216406264-127.0.0.1-1453767164845:blk_1073741828_1004
>> 2016-01-25 21:14:16,099 INFO org.apache.hadoop.hdfs.server.datanode.DataNode.clienttrace: src: /192.168.0.10:50010, dest: /192.168.0.10:58674, bytes: 272375, op: HDFS_READ, cliID: DFSClient_NONMAPREDUCE_76231625_102, offset: 0, srvID: 6b4236c8-2183-49ba-84d7-a273298ba37a, blockid: BP-216406264-127.0.0.1-1453767164845:blk_1073741826_1002, duration: 2878920
>> 2016-01-25 21:14:16,253 INFO org.apache.hadoop.hdfs.server.datanode.DataNode.clienttrace: src: /192.168.0.10:50010, dest: /192.168.0.10:58674, bytes: 30, op: HDFS_READ, cliID: DFSClient_NONMAPREDUCE_76231625_102, offset: 0, srvID: 6b4236c8-2183-49ba-84d7-a273298ba37a, blockid: BP-216406264-127.0.0.1-1453767164845:blk_1073741828_1004, duration: 236423
>> 2016-01-25 21:14:16,312 INFO org.apache.hadoop.hdfs.server.datanode.DataNode.clienttrace: src: /192.168.0.10:50010, dest: /192.168.0.10:58674, bytes: 125, op: HDFS_READ, cliID: DFSClient_NONMAPREDUCE_76231625_102, offset: 0, srvID: 6b4236c8-2183-49ba-84d7-a273298ba37a, blockid: BP-216406264-127.0.0.1-1453767164845:blk_1073741827_1003, duration: 909236
>> 2016-01-25 21:14:16,364 INFO org.apache.hadoop.hdfs.server.datanode.DataNode.clienttrace: src: /192.168.0.10:50010, dest: /192.168.0.10:58674, bytes: 78569, op: HDFS_READ, cliID: DFSClient_NONMAPREDUCE_76231625_102, offset: 0, srvID: 6b4236c8-2183-49ba-84d7-a273298ba37a, blockid: BP-216406264-127.0.0.1-1453767164845:blk_1073741829_1005, duration: 1489437
>> 2016-01-25 21:14:20,174 INFO org.apache.hadoop.hdfs.server.datanode.DataNode.clienttrace: src: /192.168.0.10:50010, dest: /192.168.0.10:58676, bytes: 30, op: HDFS_READ, cliID: DFSClient_NONMAPREDUCE_694066886_1, offset: 0, srvID: 6b4236c8-2183-49ba-84d7-a273298ba37a, blockid: BP-216406264-127.0.0.1-1453767164845:blk_1073741828_1004, duration: 899980
>> 2016-01-25 21:14:22,692 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: Receiving BP-216406264-127.0.0.1-1453767164845:blk_1073741830_1006 src: /192.168.0.10:58679 dest: /192.168.0.10:50010
>> 2016-01-25 21:14:22,754 INFO org.apache.hadoop.hdfs.server.datanode.DataNode.clienttrace: src: /192.168.0.10:58679, dest: /192.168.0.10:50010, bytes: 92684, op: HDFS_WRITE, cliID: DFSClient_NONMAPREDUCE_694066886_1, offset: 0, srvID: 6b4236c8-2183-49ba-84d7-a273298ba37a, blockid: BP-216406264-127.0.0.1-1453767164845:blk_1073741830_1006, duration: 60114851
>> 2016-01-25 21:14:22,754 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: PacketResponder: BP-216406264-127.0.0.1-1453767164845:blk_1073741830_1006, type=LAST_IN_PIPELINE, downstreams=0:[] terminating
>> 2016-01-25 21:14:24,319 INFO org.apache.hadoop.hdfs.server.datanode.BlockPoolSliceScanner: Verification succeeded for BP-216406264-127.0.0.1-1453767164845:blk_1073741830_1006
>> 2016-01-25 21:14:25,808 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: Receiving BP-216406264-127.0.0.1-1453767164845:blk_1073741831_1007 src: /192.168.0.10:58681 dest: /192.168.0.10:50010
>> 2016-01-25 21:14:35,846 INFO org.apache.hadoop.hdfs.server.datanode.DataNode.clienttrace: src: /192.168.0.10:58681, dest: /192.168.0.10:50010, bytes: 21176, op: HDFS_WRITE, cliID: DFSClient_NONMAPREDUCE_694066886_1, offset: 0, srvID: 6b4236c8-2183-49ba-84d7-a273298ba37a, blockid: BP-216406264-127.0.0.1-1453767164845:blk_1073741831_1007, duration: 9975409048
>> 2016-01-25 21:14:35,846 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: PacketResponder: BP-216406264-127.0.0.1-1453767164845:blk_1073741831_1007, type=LAST_IN_PIPELINE, downstreams=0:[] terminating
>> 2016-01-25 21:14:36,066 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: Receiving BP-216406264-127.0.0.1-1453767164845:blk_1073741832_1008 src: /192.168.0.10:58682 dest: /192.168.0.10:50010
>> 2016-01-25 21:14:36,075 INFO org.apache.hadoop.hdfs.server.datanode.DataNode.clienttrace: src: /192.168.0.10:58682, dest: /192.168.0.10:50010, bytes: 332, op: HDFS_WRITE, cliID: DFSClient_NONMAPREDUCE_694066886_1, offset: 0, srvID: 6b4236c8-2183-49ba-84d7-a273298ba37a, blockid: BP-216406264-127.0.0.1-1453767164845:blk_1073741832_1008, duration: 4992595
>> 2016-01-25 21:14:36,075 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: PacketResponder: BP-216406264-127.0.0.1-1453767164845:blk_1073741832_1008, type=LAST_IN_PIPELINE, downstreams=0:[] terminating
>> 2016-01-25 21:14:36,548 INFO org.apache.hadoop.hdfs.server.datanode.DataNode.clienttrace: src: /192.168.0.10:50010, dest: /192.168.0.10:58683, bytes: 21344, op: HDFS_READ, cliID: DFSClient_NONMAPREDUCE_694066886_1, offset: 0, srvID: 6b4236c8-2183-49ba-84d7-a273298ba37a, blockid: BP-216406264-127.0.0.1-1453767164845:blk_1073741831_1007, duration: 497225
>> 2016-01-25 21:14:36,564 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: Receiving BP-216406264-127.0.0.1-1453767164845:blk_1073741833_1009 src: /192.168.0.10:58684 dest: /192.168.0.10:50010
>> 2016-01-25 21:14:36,572 INFO org.apache.hadoop.hdfs.server.datanode.DataNode.clienttrace: src: /192.168.0.10:58684, dest: /192.168.0.10:50010, bytes: 21176, op: HDFS_WRITE, cliID: DFSClient_NONMAPREDUCE_694066886_1, offset: 0, srvID: 6b4236c8-2183-49ba-84d7-a273298ba37a, blockid: BP-216406264-127.0.0.1-1453767164845:blk_1073741833_1009, duration: 2649337
>> 2016-01-25 21:14:36,573 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: PacketResponder: BP-216406264-127.0.0.1-1453767164845:blk_1073741833_1009, type=LAST_IN_PIPELINE, downstreams=0:[] terminating
>> 2016-01-25 21:14:36,622 INFO org.apache.hadoop.hdfs.server.datanode.DataNode.clienttrace: src: /192.168.0.10:50010, dest: /192.168.0.10:58683, bytes: 93412, op: HDFS_READ, cliID: DFSClient_NONMAPREDUCE_694066886_1, offset: 0, srvID: 6b4236c8-2183-49ba-84d7-a273298ba37a, blockid: BP-216406264-127.0.0.1-1453767164845:blk_1073741830_1006, duration: 379439
>> 2016-01-25 21:14:36,638 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: Receiving BP-216406264-127.0.0.1-1453767164845:blk_1073741834_1010 src: /192.168.0.10:58685 dest: /192.168.0.10:50010
>> 2016-01-25 21:14:36,646 INFO org.apache.hadoop.hdfs.server.datanode.DataNode.clienttrace: src: /192.168.0.10:58685, dest: /192.168.0.10:50010, bytes: 92684, op: HDFS_WRITE, cliID: DFSClient_NONMAPREDUCE_694066886_1, offset: 0, srvID: 6b4236c8-2183-49ba-84d7-a273298ba37a, blockid: BP-216406264-127.0.0.1-1453767164845:blk_1073741834_1010, duration: 3135698
>> 2016-01-25 21:14:36,646 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: PacketResponder: BP-216406264-127.0.0.1-1453767164845:blk_1073741834_1010, type=LAST_IN_PIPELINE, downstreams=0:[] terminating
>> 2016-01-25 21:14:39,335 INFO org.apache.hadoop.hdfs.server.datanode.BlockPoolSliceScanner: Verification succeeded for BP-216406264-127.0.0.1-1453767164845:blk_1073741833_1009
>> 2016-01-25 21:14:39,336 INFO org.apache.hadoop.hdfs.server.datanode.BlockPoolSliceScanner: Verification succeeded for BP-216406264-127.0.0.1-1453767164845:blk_1073741831_1007
>> 2016-01-25 21:14:39,337 INFO org.apache.hadoop.hdfs.server.datanode.BlockPoolSliceScanner: Verification succeeded for BP-216406264-127.0.0.1-1453767164845:blk_1073741832_1008
>> 2016-01-25 21:14:39,338 INFO org.apache.hadoop.hdfs.server.datanode.BlockPoolSliceScanner: Verification succeeded for BP-216406264-127.0.0.1-1453767164845:blk_1073741834_1010
>> 2016-01-25 21:14:39,376 INFO org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsDatasetAsyncDiskService: Scheduling blk_1073741826_1002 file /usr/local/hadoop/dfs/name/data/current/BP-216406264-127.0.0.1-1453767164845/current/finalized/blk_1073741826 for deletion
>> 2016-01-25 21:14:39,379 INFO org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsDatasetAsyncDiskService: Scheduling blk_1073741827_1003 file /usr/local/hadoop/dfs/name/data/current/BP-216406264-127.0.0.1-1453767164845/current/finalized/blk_1073741827 for deletion
>> 2016-01-25 21:14:39,379 INFO org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsDatasetAsyncDiskService: Deleted BP-216406264-127.0.0.1-1453767164845 blk_1073741826_1002 file /usr/local/hadoop/dfs/name/data/current/BP-216406264-127.0.0.1-1453767164845/current/finalized/blk_1073741826
>> 2016-01-25 21:14:39,379 INFO org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsDatasetAsyncDiskService: Scheduling blk_1073741828_1004 file /usr/local/hadoop/dfs/name/data/current/BP-216406264-127.0.0.1-1453767164845/current/finalized/blk_1073741828 for deletion
>> 2016-01-25 21:14:39,380 INFO org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsDatasetAsyncDiskService: Deleted BP-216406264-127.0.0.1-1453767164845 blk_1073741827_1003 file /usr/local/hadoop/dfs/name/data/current/BP-216406264-127.0.0.1-1453767164845/current/finalized/blk_1073741827
>> 2016-01-25 21:14:39,380 INFO org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsDatasetAsyncDiskService: Scheduling blk_1073741829_1005 file /usr/local/hadoop/dfs/name/data/current/BP-216406264-127.0.0.1-1453767164845/current/finalized/blk_1073741829 for deletion
>> 2016-01-25 21:14:39,380 INFO org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsDatasetAsyncDiskService: Deleted BP-216406264-127.0.0.1-1453767164845 blk_1073741828_1004 file /usr/local/hadoop/dfs/name/data/current/BP-216406264-127.0.0.1-1453767164845/current/finalized/blk_1073741828
>> 2016-01-25 21:14:39,380 INFO org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsDatasetAsyncDiskService: Scheduling blk_1073741830_1006 file /usr/local/hadoop/dfs/name/data/current/BP-216406264-127.0.0.1-1453767164845/current/finalized/blk_1073741830 for deletion
>> 2016-01-25 21:14:39,381 INFO org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsDatasetAsyncDiskService: Deleted BP-216406264-127.0.0.1-1453767164845 blk_1073741829_1005 file /usr/local/hadoop/dfs/name/data/current/BP-216406264-127.0.0.1-1453767164845/current/finalized/blk_1073741829
>> 2016-01-25 21:14:39,381 INFO org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsDatasetAsyncDiskService: Scheduling blk_1073741831_1007 file /usr/local/hadoop/dfs/name/data/current/BP-216406264-127.0.0.1-1453767164845/current/finalized/blk_1073741831 for deletion
>> 2016-01-25 21:14:39,382 INFO org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsDatasetAsyncDiskService: Deleted BP-216406264-127.0.0.1-1453767164845 blk_1073741830_1006 file /usr/local/hadoop/dfs/name/data/current/BP-216406264-127.0.0.1-1453767164845/current/finalized/blk_1073741830
>> 2016-01-25 21:14:39,382 INFO org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsDatasetAsyncDiskService: Deleted BP-216406264-127.0.0.1-1453767164845 blk_1073741831_1007 file /usr/local/hadoop/dfs/name/data/current/BP-216406264-127.0.0.1-1453767164845/current/finalized/blk_1073741831
>> 2016-01-25 21:14:44,797 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: Receiving BP-216406264-127.0.0.1-1453767164845:blk_1073741835_1011 src: /192.168.0.10:58688 dest: /192.168.0.10:50010
>> 2016-01-25 21:14:44,834 INFO org.apache.hadoop.hdfs.server.datanode.DataNode.clienttrace: src: /192.168.0.10:58688, dest: /192.168.0.10:50010, bytes: 57450, op: HDFS_WRITE, cliID: DFSClient_NONMAPREDUCE_-433405715_88, offset: 0, srvID: 6b4236c8-2183-49ba-84d7-a273298ba37a, blockid: BP-216406264-127.0.0.1-1453767164845:blk_1073741835_1011, duration: 34522284
>> 2016-01-25 21:14:44,834 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: PacketResponder: BP-216406264-127.0.0.1-1453767164845:blk_1073741835_1011, type=LAST_IN_PIPELINE, downstreams=0:[] terminating
>> 2016-01-25 21:14:49,343 INFO org.apache.hadoop.hdfs.server.datanode.BlockPoolSliceScanner: Verification succeeded for BP-216406264-127.0.0.1-1453767164845:blk_1073741835_1011
>> 2016-01-25 21:16:33,785 INFO org.apache.hadoop.hdfs.server.datanode.DataNode.clienttrace: src: /192.168.0.10:50010, dest: /192.168.0.10:58694, bytes: 336, op: HDFS_READ, cliID: DFSClient_NONMAPREDUCE_-1832227986_1, offset: 0, srvID: 6b4236c8-2183-49ba-84d7-a273298ba37a, blockid: BP-216406264-127.0.0.1-1453767164845:blk_1073741832_1008, duration: 284719
>> 2016-01-25 21:16:36,371 INFO org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsDatasetAsyncDiskService: Scheduling blk_1073741832_1008 file /usr/local/hadoop/dfs/name/data/current/BP-216406264-127.0.0.1-1453767164845/current/finalized/blk_1073741832 for deletion
>> 2016-01-25 21:16:36,372 INFO org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsDatasetAsyncDiskService: Deleted BP-216406264-127.0.0.1-1453767164845 blk_1073741832_1008 file /usr/local/hadoop/dfs/name/data/current/BP-216406264-127.0.0.1-1453767164845/current/finalized/blk_1073741832
>>
>>
>>
>>
>> 2016-01-21 18:52 GMT-03:00 Namikaze Minato <ll...@gmail.com>:
>>
>>> It could be a classpath issue (see
>>> http://stackoverflow.com/a/25090151/4486184); I strongly suspect
>>> this is the case.
>>>
>>> You could drill down to the exact root cause by looking at the
>>> datanode logs (see
>>>
>>> http://mail-archives.apache.org/mod_mbox/hadoop-user/201410.mbox/%3CCAEMetGubzq12LXbLRk6N4ejOoKse9dLEWMW8_WE6aRj=+RQtVw@mail.gmail.com%3E
>>> )
>>> But I'm not sure we would get a different error from the one we already have...
>>>
>>> Check whether your application has the correct values for the following
>>> variables (a quick check sketch follows the list):
>>> HADOOP_CONF_DIR
>>> HADOOP_COMMON_HOME
>>> HADOOP_HDFS_HOME
>>> HADOOP_MAPRED_HOME
>>> HADOOP_YARN_HOME
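>>>
>>> A quick way to check them (just a sketch, assuming a standard bash shell on
>>> the node that runs the job):
>>>
>>> echo "$HADOOP_CONF_DIR $HADOOP_COMMON_HOME $HADOOP_HDFS_HOME"
>>> echo "$HADOOP_MAPRED_HOME $HADOOP_YARN_HOME"
>>> yarn classpath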
>>>
>>> I'm afraid I can't help you much more than this myself, sorry...
>>>
>>> LLoyd
>>>
>>> On 20 January 2016 at 02:08, José Luis Larroque <la...@gmail.com>
>>> wrote:
>>> > Hi guys, thanks for your answers.
>>> >
>>> > Wordcount logs:
>>> >
>>> > 16/01/19 21:58:32 INFO client.RMProxy: Connecting to ResourceManager at
>>> > hdnode01/192.168.0.10:8050
>>> > SLF4J: Class path contains multiple SLF4J bindings.
>>> > SLF4J: Found binding in
>>> >
>>> [jar:file:/usr/local/hadoop/share/hadoop/common/lib/slf4j-log4j12-1.7.5.jar!/org/slf4j/impl/StaticLoggerBinder.class]
>>> > SLF4J: Found binding in
>>> >
>>> [jar:file:/usr/local/hadoop/share/hadoop/yarn/lib/giraph-examples-1.1.0-for-hadoop-2.4.0-jar-with-dependencies.jar!/org/slf4j/impl/StaticLoggerBinder.class]
>>> > SLF4J: Found binding in
>>> >
>>> [jar:file:/usr/local/hadoop/share/hadoop/mapreduce/giraph-examples-1.1.0-for-hadoop-2.4.0-jar-with-dependencies.jar!/org/slf4j/impl/StaticLoggerBinder.class]
>>> > SLF4J: See http://www.slf4j.org/codes.html#multiple_bindings for an
>>> > explanation.
>>> > SLF4J: Actual binding is of type [org.slf4j.impl.Log4jLoggerFactory]
>>> > 16/01/19 21:58:32 WARN util.NativeCodeLoader: Unable to load
>>> native-hadoop
>>> > library for your platform... using builtin-java classes where
>>> applicable
>>> > hduser@jose-ubuntu:/usr/local/hadoop$ nano
>>> >
>>> /home/hduser/Desktop/Tesina/casos_de_prueba/resultados/resultado_cluster_modo_yarn_wordcount
>>> > hduser@jose-ubuntu:/usr/local/hadoop$ nano
>>> >
>>> /home/hduser/Desktop/Tesina/casos_de_prueba/resultados/resultado_cluster_modo_yarn_wordcount
>>> > hduser@jose-ubuntu:/usr/local/hadoop$ cat
>>> >
>>> /home/hduser/Desktop/Tesina/casos_de_prueba/resultados/resultado_cluster_modo_yarn_wordcount
>>> >
>>> >
>>> > Container: container_1453244277886_0001_01_000002 on localhost_35711
>>> > ======================================================================
>>> > LogType: stderr
>>> > LogLength: 45
>>> > Log Contents:
>>> > Error: Could not find or load main class 256
>>> >
>>> > LogType: stdout
>>> > LogLength: 0
>>> > Log Contents:
>>> >
>>> >
>>> >
>>> > Container: container_1453244277886_0001_01_000003 on localhost_35711
>>> > ======================================================================
>>> > LogType: stderr
>>> > LogLength: 45
>>> > Log Contents:
>>> > Error: Could not find or load main class 256
>>> >
>>> > LogType: stdout
>>> > LogLength: 0
>>> > Log Contents:
>>> >
>>> >
>>> >
>>> > Container: container_1453244277886_0001_01_000004 on localhost_35711
>>> > ======================================================================
>>> > LogType: stderr
>>> > LogLength: 45
>>> > Log Contents:
>>> > Error: Could not find or load main class 256
>>> >
>>> > LogType: stdout
>>> > LogLength: 0
>>> > Log Contents:
>>> >
>>> >
>>> >
>>> > Container: container_1453244277886_0001_01_000005 on localhost_35711
>>> > ======================================================================
>>> > LogType: stderr
>>> > LogLength: 45
>>> > Log Contents:
>>> > Error: Could not find or load main class 256
>>> >
>>> > LogType: stdout
>>> > LogLength: 0
>>> > Log Contents:
>>> >
>>> >
>>> >
>>> > Container: container_1453244277886_0001_01_000001 on localhost_35711
>>> > ======================================================================
>>> > LogType: stderr
>>> > LogLength: 929
>>> > Log Contents:
>>> > SLF4J: Class path contains multiple SLF4J bindings.
>>> > SLF4J: Found binding in
>>> >
>>> [jar:file:/usr/local/hadoop/share/hadoop/common/lib/slf4j-log4j12-1.7.5.jar!/org/slf4j/impl/StaticLoggerBinder.class]
>>> > SLF4J: Found binding in
>>> >
>>> [jar:file:/usr/local/hadoop/share/hadoop/yarn/lib/giraph-examples-1.1.0-for-hadoop-2.4.0-jar-with-dependencies.jar!/org/slf4j/impl/StaticLoggerBinder.class]
>>> > SLF4J: Found binding in
>>> >
>>> [jar:file:/usr/local/hadoop/share/hadoop/mapreduce/giraph-examples-1.1.0-for-hadoop-2.4.0-jar-with-dependencies.jar!/org/slf4j/impl/StaticLoggerBinder.class]
>>> > SLF4J: See http://www.slf4j.org/codes.html#multiple_bindings for an
>>> > explanation.
>>> > SLF4J: Actual binding is of type [org.slf4j.impl.Log4jLoggerFactory]
>>> > log4j:WARN No appenders could be found for logger
>>> > (org.apache.hadoop.ipc.Server).
>>> > log4j:WARN Please initialize the log4j system properly.
>>> > log4j:WARN See http://logging.apache.org/log4j/1.2/faq.html#noconfig
>>> for
>>> > more info.
>>> >
>>> > LogType: stdout
>>> > LogLength: 0
>>> > Log Contents:
>>> >
>>> > LogType: syslog
>>> > LogLength: 56780
>>> > Log Contents:
>>> > 2016-01-19 20:04:11,329 INFO [main]
>>> > org.apache.hadoop.mapreduce.v2.app.MRAppMaster: Created MRAppMaster for
>>> > application appattempt_1453244277886_0001_000001
>>> > 2016-01-19 20:04:11,657 WARN [main]
>>> org.apache.hadoop.conf.Configuration:
>>> > job.xml:an attempt to override final parameter:
>>> > mapreduce.job.end-notification.max.retry.interval;  Ignoring.
>>> > 2016-01-19 20:04:11,674 WARN [main]
>>> org.apache.hadoop.conf.Configuration:
>>> > job.xml:an attempt to override final parameter:
>>> > mapreduce.job.end-notification.max.attempts;  Ignoring.
>>> > 2016-01-19 20:04:11,765 WARN [main]
>>> org.apache.hadoop.util.NativeCodeLoader:
>>> > Unable to load native-hadoop library for your platform... using
>>> builtin-java
>>> > classes where applicable
>>> > 2016-01-19 20:04:11,776 INFO [main]
>>> > org.apache.hadoop.mapreduce.v2.app.MRAppMaster: Executing with tokens:
>>> > 2016-01-19 20:04:11,776 INFO [main]
>>> > org.apache.hadoop.mapreduce.v2.app.MRAppMaster: Kind: YARN_AM_RM_TOKEN,
>>> > Service: , Ident:
>>> > (org.apache.hadoop.yarn.security.AMRMTokenIdentifier@73e8f4b9)
>>> > 2016-01-19 20:04:11,801 INFO [main]
>>> > org.apache.hadoop.mapreduce.v2.app.MRAppMaster: The specific max
>>> attempts: 2
>>> > for application: 1. Attempt num: 1 is last retry: false
>>> > 2016-01-19 20:04:11,806 INFO [main]
>>> > org.apache.hadoop.mapreduce.v2.app.MRAppMaster: Using mapred
>>> > newApiCommitter.
>>> > 2016-01-19 20:04:11,934 WARN [main]
>>> org.apache.hadoop.conf.Configuration:
>>> > job.xml:an attempt to override final parameter: dfs.datanode.data.dir;
>>> > Ignoring.
>>> > 2016-01-19 20:04:11,939 WARN [main]
>>> org.apache.hadoop.conf.Configuration:
>>> > job.xml:an attempt to override final parameter:
>>> > mapreduce.job.end-notification.max.retry.interval;  Ignoring.
>>> > 2016-01-19 20:04:11,948 WARN [main]
>>> org.apache.hadoop.conf.Configuration:
>>> > job.xml:an attempt to override final parameter: dfs.namenode.name.dir;
>>> > Ignoring.
>>> > 2016-01-19 20:04:11,953 WARN [main]
>>> org.apache.hadoop.conf.Configuration:
>>> > job.xml:an attempt to override final parameter:
>>> > mapreduce.job.end-notification.max.attempts;  Ignoring.
>>> > 2016-01-19 20:04:12,464 INFO [main]
>>> > org.apache.hadoop.mapreduce.v2.app.MRAppMaster: OutputCommitter set in
>>> > config null
>>> > 2016-01-19 20:04:12,526 INFO [main]
>>> > org.apache.hadoop.mapreduce.v2.app.MRAppMaster: OutputCommitter is
>>> > org.apache.hadoop.mapreduce.lib.output.FileOutputCommitter
>>> > 2016-01-19 20:04:12,548 INFO [main]
>>> > org.apache.hadoop.yarn.event.AsyncDispatcher: Registering class
>>> > org.apache.hadoop.mapreduce.jobhistory.EventType for class
>>> > org.apache.hadoop.mapreduce.jobhistory.JobHistoryEventHandler
>>> > 2016-01-19 20:04:12,549 INFO [main]
>>> > org.apache.hadoop.yarn.event.AsyncDispatcher: Registering class
>>> > org.apache.hadoop.mapreduce.v2.app.job.event.JobEventType for class
>>> > org.apache.hadoop.mapreduce.v2.app.MRAppMaster$JobEventDispatcher
>>> > 2016-01-19 20:04:12,550 INFO [main]
>>> > org.apache.hadoop.yarn.event.AsyncDispatcher: Registering class
>>> > org.apache.hadoop.mapreduce.v2.app.job.event.TaskEventType for class
>>> > org.apache.hadoop.mapreduce.v2.app.MRAppMaster$TaskEventDispatcher
>>> > 2016-01-19 20:04:12,551 INFO [main]
>>> > org.apache.hadoop.yarn.event.AsyncDispatcher: Registering class
>>> > org.apache.hadoop.mapreduce.v2.app.job.event.TaskAttemptEventType for
>>> class
>>> >
>>> org.apache.hadoop.mapreduce.v2.app.MRAppMaster$TaskAttemptEventDispatcher
>>> > 2016-01-19 20:04:12,552 INFO [main]
>>> > org.apache.hadoop.yarn.event.AsyncDispatcher: Registering class
>>> > org.apache.hadoop.mapreduce.v2.app.commit.CommitterEventType for class
>>> > org.apache.hadoop.mapreduce.v2.app.commit.CommitterEventHandler
>>> > 2016-01-19 20:04:12,557 INFO [main]
>>> > org.apache.hadoop.yarn.event.AsyncDispatcher: Registering class
>>> > org.apache.hadoop.mapreduce.v2.app.speculate.Speculator$EventType for
>>> class
>>> >
>>> org.apache.hadoop.mapreduce.v2.app.MRAppMaster$SpeculatorEventDispatcher
>>> > 2016-01-19 20:04:12,558 INFO [main]
>>> > org.apache.hadoop.yarn.event.AsyncDispatcher: Registering class
>>> > org.apache.hadoop.mapreduce.v2.app.rm.ContainerAllocator$EventType for
>>> class
>>> > org.apache.hadoop.mapreduce.v2.app.MRAppMaster$ContainerAllocatorRouter
>>> > 2016-01-19 20:04:12,559 INFO [main]
>>> > org.apache.hadoop.yarn.event.AsyncDispatcher: Registering class
>>> >
>>> org.apache.hadoop.mapreduce.v2.app.launcher.ContainerLauncher$EventType for
>>> > class
>>> org.apache.hadoop.mapreduce.v2.app.MRAppMaster$ContainerLauncherRouter
>>> > 2016-01-19 20:04:12,615 INFO [main]
>>> > org.apache.hadoop.mapreduce.jobhistory.JobHistoryEventHandler: Perms
>>> after
>>> > creating 488, Expected: 504
>>> > 2016-01-19 20:04:12,615 INFO [main]
>>> > org.apache.hadoop.mapreduce.jobhistory.JobHistoryEventHandler:
>>> Explicitly
>>> > setting permissions to : 504, rwxrwx---
>>> > 2016-01-19 20:04:12,731 INFO [main]
>>> > org.apache.hadoop.yarn.event.AsyncDispatcher: Registering class
>>> > org.apache.hadoop.mapreduce.v2.app.job.event.JobFinishEvent$Type for
>>> class
>>> > org.apache.hadoop.mapreduce.v2.app.MRAppMaster$JobFinishEventHandler
>>> > 2016-01-19 20:04:12,956 INFO [main]
>>> > org.apache.hadoop.metrics2.impl.MetricsConfig: loaded properties from
>>> > hadoop-metrics2.properties
>>> > 2016-01-19 20:04:13,018 INFO [main]
>>> > org.apache.hadoop.metrics2.impl.MetricsSystemImpl: Scheduled snapshot
>>> period
>>> > at 10 second(s).
>>> > 2016-01-19 20:04:13,018 INFO [main]
>>> > org.apache.hadoop.metrics2.impl.MetricsSystemImpl: MRAppMaster metrics
>>> > system started
>>> > 2016-01-19 20:04:13,026 INFO [main]
>>> > org.apache.hadoop.mapreduce.v2.app.job.impl.JobImpl: Adding job token
>>> for
>>> > job_1453244277886_0001 to jobTokenSecretManager
>>> > 2016-01-19 20:04:13,139 INFO [main]
>>> > org.apache.hadoop.mapreduce.v2.app.job.impl.JobImpl: Not uberizing
>>> > job_1453244277886_0001 because: not enabled;
>>> > 2016-01-19 20:04:13,154 INFO [main]
>>> > org.apache.hadoop.mapreduce.v2.app.job.impl.JobImpl: Input size for job
>>> > job_1453244277886_0001 = 343691. Number of splits = 1
>>> > 2016-01-19 20:04:13,156 INFO [main]
>>> > org.apache.hadoop.mapreduce.v2.app.job.impl.JobImpl: Number of reduces
>>> for
>>> > job job_1453244277886_0001 = 1
>>> > 2016-01-19 20:04:13,156 INFO [main]
>>> > org.apache.hadoop.mapreduce.v2.app.job.impl.JobImpl:
>>> > job_1453244277886_0001Job Transitioned from NEW to INITED
>>> > 2016-01-19 20:04:13,157 INFO [main]
>>> > org.apache.hadoop.mapreduce.v2.app.MRAppMaster: MRAppMaster launching
>>> > normal, non-uberized, multi-container job job_1453244277886_0001.
>>> > 2016-01-19 20:04:13,186 INFO [main]
>>> org.apache.hadoop.ipc.CallQueueManager:
>>> > Using callQueue class java.util.concurrent.LinkedBlockingQueue
>>> > 2016-01-19 20:04:13,195 INFO [Socket Reader #1 for port 56461]
>>> > org.apache.hadoop.ipc.Server: Starting Socket Reader #1 for port 56461
>>> > 2016-01-19 20:04:13,237 INFO [main]
>>> > org.apache.hadoop.yarn.factories.impl.pb.RpcServerFactoryPBImpl: Adding
>>> > protocol org.apache.hadoop.mapreduce.v2.api.MRClientProtocolPB to the
>>> server
>>> > 2016-01-19 20:04:13,238 INFO [IPC Server Responder]
>>> > org.apache.hadoop.ipc.Server: IPC Server Responder: starting
>>> > 2016-01-19 20:04:13,238 INFO [IPC Server listener on 56461]
>>> > org.apache.hadoop.ipc.Server: IPC Server listener on 56461: starting
>>> > 2016-01-19 20:04:13,239 INFO [main]
>>> > org.apache.hadoop.mapreduce.v2.app.client.MRClientService: Instantiated
>>> > MRClientService at jose-ubuntu/127.0.0.1:56461
>>> > 2016-01-19 20:04:13,300 INFO [main] org.mortbay.log: Logging to
>>> > org.slf4j.impl.Log4jLoggerAdapter(org.mortbay.log) via
>>> > org.mortbay.log.Slf4jLog
>>> > 2016-01-19 20:04:13,304 INFO [main]
>>> org.apache.hadoop.http.HttpRequestLog:
>>> > Http request log for http.requests.mapreduce is not defined
>>> > 2016-01-19 20:04:13,315 INFO [main] org.apache.hadoop.http.HttpServer2:
>>> > Added global filter 'safety'
>>> > (class=org.apache.hadoop.http.HttpServer2$QuotingInputFilter)
>>> > 2016-01-19 20:04:13,320 INFO [main] org.apache.hadoop.http.HttpServer2:
>>> > Added filter AM_PROXY_FILTER
>>> > (class=org.apache.hadoop.yarn.server.webproxy.amfilter.AmIpFilter) to
>>> > context mapreduce
>>> > 2016-01-19 20:04:13,321 INFO [main] org.apache.hadoop.http.HttpServer2:
>>> > Added filter AM_PROXY_FILTER
>>> > (class=org.apache.hadoop.yarn.server.webproxy.amfilter.AmIpFilter) to
>>> > context static
>>> > 2016-01-19 20:04:13,324 INFO [main] org.apache.hadoop.http.HttpServer2:
>>> > adding path spec: /mapreduce/*
>>> > 2016-01-19 20:04:13,324 INFO [main] org.apache.hadoop.http.HttpServer2:
>>> > adding path spec: /ws/*
>>> > 2016-01-19 20:04:13,335 INFO [main] org.apache.hadoop.http.HttpServer2:
>>> > Jetty bound to port 44070
>>> > 2016-01-19 20:04:13,335 INFO [main] org.mortbay.log: jetty-6.1.26
>>> > 2016-01-19 20:04:13,370 INFO [main] org.mortbay.log: Extract
>>> >
>>> jar:file:/usr/local/hadoop/share/hadoop/yarn/hadoop-yarn-common-2.4.0.jar!/webapps/mapreduce
>>> > to /tmp/Jetty_0_0_0_0_44070_mapreduce____rdpvio/webapp
>>> > 2016-01-19 20:04:13,647 INFO [main] org.mortbay.log: Started
>>> > SelectChannelConnector@0.0.0.0:44070
>>> > 2016-01-19 20:04:13,647 INFO [main]
>>> org.apache.hadoop.yarn.webapp.WebApps:
>>> > Web app /mapreduce started at 44070
>>> > 2016-01-19 20:04:13,956 INFO [main]
>>> org.apache.hadoop.yarn.webapp.WebApps:
>>> > Registered webapp guice modules
>>> > 2016-01-19 20:04:13,960 INFO [AsyncDispatcher event handler]
>>> > org.apache.hadoop.mapreduce.v2.app.speculate.DefaultSpeculator:
>>> JOB_CREATE
>>> > job_1453244277886_0001
>>> > 2016-01-19 20:04:13,961 INFO [main]
>>> org.apache.hadoop.ipc.CallQueueManager:
>>> > Using callQueue class java.util.concurrent.LinkedBlockingQueue
>>> > 2016-01-19 20:04:13,961 INFO [Socket Reader #1 for port 45584]
>>> > org.apache.hadoop.ipc.Server: Starting Socket Reader #1 for port 45584
>>> > 2016-01-19 20:04:13,966 INFO [IPC Server Responder]
>>> > org.apache.hadoop.ipc.Server: IPC Server Responder: starting
>>> > 2016-01-19 20:04:13,966 INFO [IPC Server listener on 45584]
>>> > org.apache.hadoop.ipc.Server: IPC Server listener on 45584: starting
>>> > 2016-01-19 20:04:13,987 INFO [main]
>>> > org.apache.hadoop.mapreduce.v2.app.rm.RMContainerRequestor:
>>> > nodeBlacklistingEnabled:true
>>> > 2016-01-19 20:04:13,987 INFO [main]
>>> > org.apache.hadoop.mapreduce.v2.app.rm.RMContainerRequestor:
>>> > maxTaskFailuresPerNode is 3
>>> > 2016-01-19 20:04:13,988 INFO [main]
>>> > org.apache.hadoop.mapreduce.v2.app.rm.RMContainerRequestor:
>>> > blacklistDisablePercent is 33
>>> > 2016-01-19 20:04:14,052 WARN [main]
>>> org.apache.hadoop.conf.Configuration:
>>> > job.xml:an attempt to override final parameter: dfs.datanode.data.dir;
>>> > Ignoring.
>>> > 2016-01-19 20:04:14,054 WARN [main]
>>> org.apache.hadoop.conf.Configuration:
>>> > job.xml:an attempt to override final parameter:
>>> > mapreduce.job.end-notification.max.retry.interval;  Ignoring.
>>> > 2016-01-19 20:04:14,057 WARN [main]
>>> org.apache.hadoop.conf.Configuration:
>>> > job.xml:an attempt to override final parameter: dfs.namenode.name.dir;
>>> > Ignoring.
>>> > 2016-01-19 20:04:14,059 WARN [main]
>>> org.apache.hadoop.conf.Configuration:
>>> > job.xml:an attempt to override final parameter:
>>> > mapreduce.job.end-notification.max.attempts;  Ignoring.
>>> > 2016-01-19 20:04:14,062 INFO [main]
>>> org.apache.hadoop.yarn.client.RMProxy:
>>> > Connecting to ResourceManager at hdnode01/192.168.0.10:8030
>>> > 2016-01-19 20:04:14,158 INFO [main]
>>> > org.apache.hadoop.mapreduce.v2.app.rm.RMContainerAllocator:
>>> > maxContainerCapability: 2000
>>> > 2016-01-19 20:04:14,158 INFO [main]
>>> > org.apache.hadoop.mapreduce.v2.app.rm.RMContainerAllocator: queue:
>>> default
>>> > 2016-01-19 20:04:14,162 INFO [main]
>>> > org.apache.hadoop.mapreduce.v2.app.launcher.ContainerLauncherImpl:
>>> Upper
>>> > limit on the thread pool size is 500
>>> > 2016-01-19 20:04:14,164 INFO [main]
>>> >
>>> org.apache.hadoop.yarn.client.api.impl.ContainerManagementProtocolProxy:
>>> > yarn.client.max-nodemanagers-proxies : 500
>>> > 2016-01-19 20:04:14,172 INFO [AsyncDispatcher event handler]
>>> > org.apache.hadoop.mapreduce.v2.app.job.impl.JobImpl:
>>> > job_1453244277886_0001Job Transitioned from INITED to SETUP
>>> > 2016-01-19 20:04:14,174 INFO [CommitterEvent Processor #0]
>>> > org.apache.hadoop.mapreduce.v2.app.commit.CommitterEventHandler:
>>> Processing
>>> > the event EventType: JOB_SETUP
>>> > 2016-01-19 20:04:14,210 INFO [AsyncDispatcher event handler]
>>> > org.apache.hadoop.mapreduce.v2.app.job.impl.JobImpl:
>>> > job_1453244277886_0001Job Transitioned from SETUP to RUNNING
>>> > 2016-01-19 20:04:14,227 INFO [AsyncDispatcher event handler]
>>> > org.apache.hadoop.yarn.util.RackResolver: Resolved localhost to
>>> > /default-rack
>>> > 2016-01-19 20:04:14,230 INFO [AsyncDispatcher event handler]
>>> > org.apache.hadoop.mapreduce.v2.app.job.impl.TaskImpl:
>>> > task_1453244277886_0001_m_000000 Task Transitioned from NEW to
>>> SCHEDULED
>>> > 2016-01-19 20:04:14,230 INFO [AsyncDispatcher event handler]
>>> > org.apache.hadoop.mapreduce.v2.app.job.impl.TaskImpl:
>>> > task_1453244277886_0001_r_000000 Task Transitioned from NEW to
>>> SCHEDULED
>>> > 2016-01-19 20:04:14,232 INFO [AsyncDispatcher event handler]
>>> > org.apache.hadoop.mapreduce.v2.app.job.impl.TaskAttemptImpl:
>>> > attempt_1453244277886_0001_m_000000_0 TaskAttempt Transitioned from
>>> NEW to
>>> > UNASSIGNED
>>> > 2016-01-19 20:04:14,232 INFO [AsyncDispatcher event handler]
>>> > org.apache.hadoop.mapreduce.v2.app.job.impl.TaskAttemptImpl:
>>> > attempt_1453244277886_0001_r_000000_0 TaskAttempt Transitioned from
>>> NEW to
>>> > UNASSIGNED
>>> > 2016-01-19 20:04:14,233 INFO [Thread-50]
>>> > org.apache.hadoop.mapreduce.v2.app.rm.RMContainerAllocator:
>>> > mapResourceReqt:512
>>> > 2016-01-19 20:04:14,245 INFO [Thread-50]
>>> > org.apache.hadoop.mapreduce.v2.app.rm.RMContainerAllocator:
>>> > reduceResourceReqt:512
>>> > 2016-01-19 20:04:14,324 INFO [eventHandlingThread]
>>> > org.apache.hadoop.mapreduce.jobhistory.JobHistoryEventHandler: Event
>>> Writer
>>> > setup for JobId: job_1453244277886_0001, File:
>>> >
>>> hdfs://hdnode01:54310/tmp/hadoop-yarn/staging/hduser/.staging/job_1453244277886_0001/job_1453244277886_0001_1.jhist
>>> > 2016-01-19 20:04:15,162 INFO [RMCommunicator Allocator]
>>> > org.apache.hadoop.mapreduce.v2.app.rm.RMContainerAllocator: Before
>>> > Scheduling: PendingReds:1 ScheduledMaps:1 ScheduledReds:0
>>> AssignedMaps:0
>>> > AssignedReds:0 CompletedMaps:0 CompletedReds:0 ContAlloc:0 ContRel:0
>>> > HostLocal:0 RackLocal:0
>>> > 2016-01-19 20:04:15,217 INFO [RMCommunicator Allocator]
>>> > org.apache.hadoop.mapreduce.v2.app.rm.RMContainerRequestor:
>>> getResources()
>>> > for application_1453244277886_0001: ask=3 release= 0 newContainers=0
>>> > finishedContainers=0 resourcelimit=<memory:1280, vCores:0> knownNMs=1
>>> > 2016-01-19 20:04:15,217 INFO [RMCommunicator Allocator]
>>> > org.apache.hadoop.mapreduce.v2.app.rm.RMContainerAllocator:
>>> Recalculating
>>> > schedule, headroom=1280
>>> > 2016-01-19 20:04:15,217 INFO [RMCommunicator Allocator]
>>> > org.apache.hadoop.mapreduce.v2.app.rm.RMContainerAllocator: Reduce slow
>>> > start threshold not met. completedMapsForReduceSlowstart 1
>>> > 2016-01-19 20:04:16,240 INFO [RMCommunicator Allocator]
>>> > org.apache.hadoop.mapreduce.v2.app.rm.RMContainerAllocator: Got
>>> allocated
>>> > containers 1
>>> > 2016-01-19 20:04:16,241 INFO [RMCommunicator Allocator]
>>> > org.apache.hadoop.mapreduce.v2.app.rm.RMContainerAllocator: Assigned
>>> > container container_1453244277886_0001_01_000002 to
>>> > attempt_1453244277886_0001_m_000000_0
>>> > 2016-01-19 20:04:16,243 INFO [RMCommunicator Allocator]
>>> > org.apache.hadoop.mapreduce.v2.app.rm.RMContainerAllocator:
>>> Recalculating
>>> > schedule, headroom=768
>>> > 2016-01-19 20:04:16,243 INFO [RMCommunicator Allocator]
>>> > org.apache.hadoop.mapreduce.v2.app.rm.RMContainerAllocator: Reduce slow
>>> > start threshold not met. completedMapsForReduceSlowstart 1
>>> > 2016-01-19 20:04:16,243 INFO [RMCommunicator Allocator]
>>> > org.apache.hadoop.mapreduce.v2.app.rm.RMContainerAllocator: After
>>> > Scheduling: PendingReds:1 ScheduledMaps:0 ScheduledReds:0
>>> AssignedMaps:1
>>> > AssignedReds:0 CompletedMaps:0 CompletedReds:0 ContAlloc:1 ContRel:0
>>> > HostLocal:1 RackLocal:0
>>> > 2016-01-19 20:04:16,291 INFO [AsyncDispatcher event handler]
>>> > org.apache.hadoop.yarn.util.RackResolver: Resolved localhost to
>>> > /default-rack
>>> > 2016-01-19 20:04:16,316 INFO [AsyncDispatcher event handler]
>>> > org.apache.hadoop.mapreduce.v2.app.job.impl.TaskAttemptImpl: The
>>> job-jar
>>> > file on the remote FS is
>>> >
>>> hdfs://hdnode01:54310/tmp/hadoop-yarn/staging/hduser/.staging/job_1453244277886_0001/job.jar
>>> > 2016-01-19 20:04:16,322 INFO [AsyncDispatcher event handler]
>>> > org.apache.hadoop.mapreduce.v2.app.job.impl.TaskAttemptImpl: The
>>> job-conf
>>> > file on the remote FS is
>>> > /tmp/hadoop-yarn/staging/hduser/.staging/job_1453244277886_0001/job.xml
>>> > 2016-01-19 20:04:16,325 INFO [AsyncDispatcher event handler]
>>> > org.apache.hadoop.mapreduce.v2.app.job.impl.TaskAttemptImpl: Adding #0
>>> > tokens and #1 secret keys for NM use for launching container
>>> > 2016-01-19 20:04:16,325 INFO [AsyncDispatcher event handler]
>>> > org.apache.hadoop.mapreduce.v2.app.job.impl.TaskAttemptImpl: Size of
>>> > containertokens_dob is 1
>>> > 2016-01-19 20:04:16,325 INFO [AsyncDispatcher event handler]
>>> > org.apache.hadoop.mapreduce.v2.app.job.impl.TaskAttemptImpl: Putting
>>> shuffle
>>> > token in serviceData
>>> > 2016-01-19 20:04:16,350 INFO [AsyncDispatcher event handler]
>>> > org.apache.hadoop.mapreduce.v2.app.job.impl.TaskAttemptImpl:
>>> > attempt_1453244277886_0001_m_000000_0 TaskAttempt Transitioned from
>>> > UNASSIGNED to ASSIGNED
>>> > 2016-01-19 20:04:16,354 INFO [ContainerLauncher #0]
>>> > org.apache.hadoop.mapreduce.v2.app.launcher.ContainerLauncherImpl:
>>> > Processing the event EventType: CONTAINER_REMOTE_LAUNCH for container
>>> > container_1453244277886_0001_01_000002 taskAttempt
>>> > attempt_1453244277886_0001_m_000000_0
>>> > 2016-01-19 20:04:16,356 INFO [ContainerLauncher #0]
>>> > org.apache.hadoop.mapreduce.v2.app.launcher.ContainerLauncherImpl:
>>> Launching
>>> > attempt_1453244277886_0001_m_000000_0
>>> > 2016-01-19 20:04:16,357 INFO [ContainerLauncher #0]
>>> >
>>> org.apache.hadoop.yarn.client.api.impl.ContainerManagementProtocolProxy:
>>> > Opening proxy : localhost:35711
>>> > 2016-01-19 20:04:16,411 INFO [ContainerLauncher #0]
>>> > org.apache.hadoop.mapreduce.v2.app.launcher.ContainerLauncherImpl:
>>> Shuffle
>>> > port returned by ContainerManager for
>>> attempt_1453244277886_0001_m_000000_0
>>> > : 13562
>>> > 2016-01-19 20:04:16,413 INFO [AsyncDispatcher event handler]
>>> > org.apache.hadoop.mapreduce.v2.app.job.impl.TaskAttemptImpl:
>>> TaskAttempt:
>>> > [attempt_1453244277886_0001_m_000000_0] using containerId:
>>> > [container_1453244277886_0001_01_000002 on NM: [localhost:35711]
>>> > 2016-01-19 20:04:16,418 INFO [AsyncDispatcher event handler]
>>> > org.apache.hadoop.mapreduce.v2.app.job.impl.TaskAttemptImpl:
>>> > attempt_1453244277886_0001_m_000000_0 TaskAttempt Transitioned from
>>> ASSIGNED
>>> > to RUNNING
>>> > 2016-01-19 20:04:16,419 INFO [AsyncDispatcher event handler]
>>> > org.apache.hadoop.mapreduce.v2.app.speculate.DefaultSpeculator:
>>> > ATTEMPT_START task_1453244277886_0001_m_000000
>>> > 2016-01-19 20:04:16,419 INFO [AsyncDispatcher event handler]
>>> > org.apache.hadoop.mapreduce.v2.app.job.impl.TaskImpl:
>>> > task_1453244277886_0001_m_000000 Task Transitioned from SCHEDULED to
>>> RUNNING
>>> > 2016-01-19 20:04:17,251 INFO [RMCommunicator Allocator]
>>> > org.apache.hadoop.mapreduce.v2.app.rm.RMContainerRequestor:
>>> getResources()
>>> > for application_1453244277886_0001: ask=3 release= 0 newContainers=0
>>> > finishedContainers=0 resourcelimit=<memory:768, vCores:-1> knownNMs=1
>>> > 2016-01-19 20:04:18,269 INFO [RMCommunicator Allocator]
>>> > org.apache.hadoop.mapreduce.v2.app.rm.RMContainerAllocator: Received
>>> > completed container container_1453244277886_0001_01_000002
>>> > 2016-01-19 20:04:18,269 INFO [RMCommunicator Allocator]
>>> > org.apache.hadoop.mapreduce.v2.app.rm.RMContainerAllocator:
>>> Recalculating
>>> > schedule, headroom=768
>>> > 2016-01-19 20:04:18,269 INFO [RMCommunicator Allocator]
>>> > org.apache.hadoop.mapreduce.v2.app.rm.RMContainerAllocator: Reduce slow
>>> > start threshold not met. completedMapsForReduceSlowstart 1
>>> > 2016-01-19 20:04:18,270 INFO [RMCommunicator Allocator]
>>> > org.apache.hadoop.mapreduce.v2.app.rm.RMContainerAllocator: After
>>> > Scheduling: PendingReds:1 ScheduledMaps:0 ScheduledReds:0
>>> AssignedMaps:0
>>> > AssignedReds:0 CompletedMaps:0 CompletedReds:0 ContAlloc:1 ContRel:0
>>> > HostLocal:1 RackLocal:0
>>> > 2016-01-19 20:04:18,280 INFO [AsyncDispatcher event handler]
>>> > org.apache.hadoop.mapreduce.v2.app.job.impl.TaskAttemptImpl:
>>> > attempt_1453244277886_0001_m_000000_0 TaskAttempt Transitioned from
>>> RUNNING
>>> > to FAIL_CONTAINER_CLEANUP
>>> > 2016-01-19 20:04:18,280 INFO [AsyncDispatcher event handler]
>>> > org.apache.hadoop.mapreduce.v2.app.job.impl.TaskAttemptImpl:
>>> Diagnostics
>>> > report from attempt_1453244277886_0001_m_000000_0: Exception from
>>> > container-launch: org.apache.hadoop.util.Shell$ExitCodeException:
>>> > org.apache.hadoop.util.Shell$ExitCodeException:
>>> >     at org.apache.hadoop.util.Shell.runCommand(Shell.java:505)
>>> >     at org.apache.hadoop.util.Shell.run(Shell.java:418)
>>> >     at
>>> >
>>> org.apache.hadoop.util.Shell$ShellCommandExecutor.execute(Shell.java:650)
>>> >     at
>>> >
>>> org.apache.hadoop.yarn.server.nodemanager.DefaultContainerExecutor.launchContainer(DefaultContainerExecutor.java:195)
>>> >     at
>>> >
>>> org.apache.hadoop.yarn.server.nodemanager.containermanager.launcher.ContainerLaunch.call(ContainerLaunch.java:300)
>>> >     at
>>> >
>>> org.apache.hadoop.yarn.server.nodemanager.containermanager.launcher.ContainerLaunch.call(ContainerLaunch.java:81)
>>> >     at java.util.concurrent.FutureTask.run(FutureTask.java:262)
>>> >     at
>>> >
>>> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
>>> >     at
>>> >
>>> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
>>> >     at java.lang.Thread.run(Thread.java:745)
>>> >
>>> >
>>> > Container exited with a non-zero exit code 1
>>> >
>>> > 2016-01-19 20:04:18,281 INFO [ContainerLauncher #1]
>>> > org.apache.hadoop.mapreduce.v2.app.launcher.ContainerLauncherImpl:
>>> > Processing the event EventType: CONTAINER_REMOTE_CLEANUP for container
>>> > container_1453244277886_0001_01_000002 taskAttempt
>>> > attempt_1453244277886_0001_m_000000_0
>>> > 2016-01-19 20:04:18,281 INFO [ContainerLauncher #1]
>>> > org.apache.hadoop.mapreduce.v2.app.launcher.ContainerLauncherImpl:
>>> KILLING
>>> > attempt_1453244277886_0001_m_000000_0
>>> > 2016-01-19 20:04:18,299 INFO [AsyncDispatcher event handler]
>>> > org.apache.hadoop.mapreduce.v2.app.job.impl.TaskAttemptImpl:
>>> > attempt_1453244277886_0001_m_000000_0 TaskAttempt Transitioned from
>>> > FAIL_CONTAINER_CLEANUP to FAIL_TASK_CLEANUP
>>> > 2016-01-19 20:04:18,300 INFO [CommitterEvent Processor #1]
>>> > org.apache.hadoop.mapreduce.v2.app.commit.CommitterEventHandler:
>>> Processing
>>> > the event EventType: TASK_ABORT
>>> > 2016-01-19 20:04:18,313 WARN [CommitterEvent Processor #1]
>>> > org.apache.hadoop.mapreduce.lib.output.FileOutputCommitter: Could not
>>> delete
>>> >
>>> hdfs://hdnode01:54310/user/hduser/output/wordcount/_temporary/1/_temporary/attempt_1453244277886_0001_m_000000_0
>>> > 2016-01-19 20:04:18,317 INFO [AsyncDispatcher event handler]
>>> > org.apache.hadoop.mapreduce.v2.app.job.impl.TaskAttemptImpl:
>>> > attempt_1453244277886_0001_m_000000_0 TaskAttempt Transitioned from
>>> > FAIL_TASK_CLEANUP to FAILED
>>> > 2016-01-19 20:04:18,326 INFO [AsyncDispatcher event handler]
>>> > org.apache.hadoop.yarn.util.RackResolver: Resolved localhost to
>>> > /default-rack
>>> > 2016-01-19 20:04:18,327 INFO [Thread-50]
>>> > org.apache.hadoop.mapreduce.v2.app.rm.RMContainerRequestor: 1 failures
>>> on
>>> > node localhost
>>> > 2016-01-19 20:04:18,329 INFO [AsyncDispatcher event handler]
>>> > org.apache.hadoop.mapreduce.v2.app.job.impl.TaskAttemptImpl:
>>> > attempt_1453244277886_0001_m_000000_1 TaskAttempt Transitioned from
>>> NEW to
>>> > UNASSIGNED
>>> > 2016-01-19 20:04:18,329 INFO [Thread-50]
>>> > org.apache.hadoop.mapreduce.v2.app.rm.RMContainerAllocator: Added
>>> > attempt_1453244277886_0001_m_000000_1 to list of failed maps
>>> > 2016-01-19 20:04:19,270 INFO [RMCommunicator Allocator]
>>> > org.apache.hadoop.mapreduce.v2.app.rm.RMContainerAllocator: Before
>>> > Scheduling: PendingReds:1 ScheduledMaps:1 ScheduledReds:0
>>> AssignedMaps:0
>>> > AssignedReds:0 CompletedMaps:0 CompletedReds:0 ContAlloc:1 ContRel:0
>>> > HostLocal:1 RackLocal:0
>>> > 2016-01-19 20:04:19,277 INFO [RMCommunicator Allocator]
>>> > org.apache.hadoop.mapreduce.v2.app.rm.RMContainerRequestor:
>>> getResources()
>>> > for application_1453244277886_0001: ask=1 release= 0 newContainers=0
>>> > finishedContainers=0 resourcelimit=<memory:768, vCores:-1> knownNMs=1
>>> > 2016-01-19 20:04:19,277 INFO [RMCommunicator Allocator]
>>> > org.apache.hadoop.mapreduce.v2.app.rm.RMContainerAllocator:
>>> Recalculating
>>> > schedule, headroom=768
>>> > 2016-01-19 20:04:19,278 INFO [RMCommunicator Allocator]
>>> > org.apache.hadoop.mapreduce.v2.app.rm.RMContainerAllocator: Reduce slow
>>> > start threshold not met. completedMapsForReduceSlowstart 1
>>> > 2016-01-19 20:04:20,285 INFO [RMCommunicator Allocator]
>>> > org.apache.hadoop.mapreduce.v2.app.rm.RMContainerAllocator: Got
>>> allocated
>>> > containers 1
>>> > 2016-01-19 20:04:20,285 INFO [RMCommunicator Allocator]
>>> > org.apache.hadoop.mapreduce.v2.app.rm.RMContainerAllocator: Assigning
>>> > container Container: [ContainerId:
>>> container_1453244277886_0001_01_000003,
>>> > NodeId: localhost:35711, NodeHttpAddress: localhost:8042, Resource:
>>> > <memory:512, vCores:1>, Priority: 5, Token: Token { kind:
>>> ContainerToken,
>>> > service: 127.0.0.1:35711 }, ] to fast fail map
>>> > 2016-01-19 20:04:20,286 INFO [RMCommunicator Allocator]
>>> > org.apache.hadoop.mapreduce.v2.app.rm.RMContainerAllocator: Assigned
>>> from
>>> > earlierFailedMaps
>>> > 2016-01-19 20:04:20,286 INFO [RMCommunicator Allocator]
>>> > org.apache.hadoop.mapreduce.v2.app.rm.RMContainerAllocator: Assigned
>>> > container container_1453244277886_0001_01_000003 to
>>> > attempt_1453244277886_0001_m_000000_1
>>> > 2016-01-19 20:04:20,287 INFO [RMCommunicator Allocator]
>>> > org.apache.hadoop.mapreduce.v2.app.rm.RMContainerAllocator:
>>> Recalculating
>>> > schedule, headroom=768
>>> > 2016-01-19 20:04:20,287 INFO [RMCommunicator Allocator]
>>> > org.apache.hadoop.mapreduce.v2.app.rm.RMContainerAllocator: Reduce slow
>>> > start threshold not met. completedMapsForReduceSlowstart 1
>>> > 2016-01-19 20:04:20,287 INFO [RMCommunicator Allocator]
>>> > org.apache.hadoop.mapreduce.v2.app.rm.RMContainerAllocator: After
>>> > Scheduling: PendingReds:1 ScheduledMaps:0 ScheduledReds:0
>>> AssignedMaps:1
>>> > AssignedReds:0 CompletedMaps:0 CompletedReds:0 ContAlloc:2 ContRel:0
>>> > HostLocal:1 RackLocal:0
>>> > 2016-01-19 20:04:20,287 INFO [AsyncDispatcher event handler]
>>> > org.apache.hadoop.yarn.util.RackResolver: Resolved localhost to
>>> > /default-rack
>>> > 2016-01-19 20:04:20,289 INFO [AsyncDispatcher event handler]
>>> > org.apache.hadoop.mapreduce.v2.app.job.impl.TaskAttemptImpl:
>>> > attempt_1453244277886_0001_m_000000_1 TaskAttempt Transitioned from
>>> > UNASSIGNED to ASSIGNED
>>> > 2016-01-19 20:04:20,292 INFO [ContainerLauncher #2]
>>> > org.apache.hadoop.mapreduce.v2.app.launcher.ContainerLauncherImpl:
>>> > Processing the event EventType: CONTAINER_REMOTE_LAUNCH for container
>>> > container_1453244277886_0001_01_000003 taskAttempt
>>> > attempt_1453244277886_0001_m_000000_1
>>> > 2016-01-19 20:04:20,292 INFO [ContainerLauncher #2]
>>> > org.apache.hadoop.mapreduce.v2.app.launcher.ContainerLauncherImpl:
>>> Launching
>>> > attempt_1453244277886_0001_m_000000_1
>>> > 2016-01-19 20:04:20,301 INFO [ContainerLauncher #2]
>>> > org.apache.hadoop.mapreduce.v2.app.launcher.ContainerLauncherImpl:
>>> Shuffle
>>> > port returned by ContainerManager for
>>> attempt_1453244277886_0001_m_000000_1
>>> > : 13562
>>> > 2016-01-19 20:04:20,302 INFO [AsyncDispatcher event handler]
>>> > org.apache.hadoop.mapreduce.v2.app.job.impl.TaskAttemptImpl:
>>> TaskAttempt:
>>> > [attempt_1453244277886_0001_m_000000_1] using containerId:
>>> > [container_1453244277886_0001_01_000003 on NM: [localhost:35711]
>>> > 2016-01-19 20:04:20,303 INFO [AsyncDispatcher event handler]
>>> > org.apache.hadoop.mapreduce.v2.app.job.impl.TaskAttemptImpl:
>>> > attempt_1453244277886_0001_m_000000_1 TaskAttempt Transitioned from
>>> ASSIGNED
>>> > to RUNNING
>>> > 2016-01-19 20:04:20,304 INFO [AsyncDispatcher event handler]
>>> > org.apache.hadoop.mapreduce.v2.app.speculate.DefaultSpeculator:
>>> > ATTEMPT_START task_1453244277886_0001_m_000000
>>> > 2016-01-19 20:04:21,295 INFO [RMCommunicator Allocator]
>>> > org.apache.hadoop.mapreduce.v2.app.rm.RMContainerRequestor:
>>> getResources()
>>> > for application_1453244277886_0001: ask=1 release= 0 newContainers=0
>>> > finishedContainers=1 resourcelimit=<memory:768, vCores:-1> knownNMs=1
>>> > 2016-01-19 20:04:21,296 INFO [RMCommunicator Allocator]
>>> > org.apache.hadoop.mapreduce.v2.app.rm.RMContainerAllocator: Received
>>> > completed container container_1453244277886_0001_01_000003
>>> > 2016-01-19 20:04:21,296 INFO [RMCommunicator Allocator]
>>> > org.apache.hadoop.mapreduce.v2.app.rm.RMContainerAllocator:
>>> Recalculating
>>> > schedule, headroom=768
>>> > 2016-01-19 20:04:21,296 INFO [RMCommunicator Allocator]
>>> > org.apache.hadoop.mapreduce.v2.app.rm.RMContainerAllocator: Reduce slow
>>> > start threshold not met. completedMapsForReduceSlowstart 1
>>> > 2016-01-19 20:04:21,297 INFO [RMCommunicator Allocator]
>>> > org.apache.hadoop.mapreduce.v2.app.rm.RMContainerAllocator: After
>>> > Scheduling: PendingReds:1 ScheduledMaps:0 ScheduledReds:0
>>> AssignedMaps:0
>>> > AssignedReds:0 CompletedMaps:0 CompletedReds:0 ContAlloc:2 ContRel:0
>>> > HostLocal:1 RackLocal:0
>>> > 2016-01-19 20:04:21,297 INFO [AsyncDispatcher event handler]
>>> > org.apache.hadoop.mapreduce.v2.app.job.impl.TaskAttemptImpl:
>>> > attempt_1453244277886_0001_m_000000_1 TaskAttempt Transitioned from
>>> RUNNING
>>> > to FAIL_CONTAINER_CLEANUP
>>> > 2016-01-19 20:04:21,298 INFO [AsyncDispatcher event handler]
>>> > org.apache.hadoop.mapreduce.v2.app.job.impl.TaskAttemptImpl:
>>> Diagnostics
>>> > report from attempt_1453244277886_0001_m_000000_1: Exception from
>>> > container-launch: org.apache.hadoop.util.Shell$ExitCodeException:
>>> > org.apache.hadoop.util.Shell$ExitCodeException:
>>> >     at org.apache.hadoop.util.Shell.runCommand(Shell.java:505)
>>> >     at org.apache.hadoop.util.Shell.run(Shell.java:418)
>>> >     at
>>> >
>>> org.apache.hadoop.util.Shell$ShellCommandExecutor.execute(Shell.java:650)
>>> >     at
>>> >
>>> org.apache.hadoop.yarn.server.nodemanager.DefaultContainerExecutor.launchContainer(DefaultContainerExecutor.java:195)
>>> >     at
>>> >
>>> org.apache.hadoop.yarn.server.nodemanager.containermanager.launcher.ContainerLaunch.call(ContainerLaunch.java:300)
>>> >     at
>>> >
>>> org.apache.hadoop.yarn.server.nodemanager.containermanager.launcher.ContainerLaunch.call(ContainerLaunch.java:81)
>>> >     at java.util.concurrent.FutureTask.run(FutureTask.java:262)
>>> >     at
>>> >
>>> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
>>> >     at
>>> >
>>> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
>>> >     at java.lang.Thread.run(Thread.java:745)
>>> >
>>> >
>>> > Container exited with a non-zero exit code 1
>>> >
>>> > 2016-01-19 20:04:21,300 INFO [ContainerLauncher #3]
>>> > org.apache.hadoop.mapreduce.v2.app.launcher.ContainerLauncherImpl:
>>> > Processing the event EventType: CONTAINER_REMOTE_CLEANUP for container
>>> > container_1453244277886_0001_01_000003 taskAttempt
>>> > attempt_1453244277886_0001_m_000000_1
>>> > 2016-01-19 20:04:21,301 INFO [ContainerLauncher #3]
>>> > org.apache.hadoop.mapreduce.v2.app.launcher.ContainerLauncherImpl:
>>> KILLING
>>> > attempt_1453244277886_0001_m_000000_1
>>> > 2016-01-19 20:04:21,307 INFO [AsyncDispatcher event handler]
>>> > org.apache.hadoop.mapreduce.v2.app.job.impl.TaskAttemptImpl:
>>> > attempt_1453244277886_0001_m_000000_1 TaskAttempt Transitioned from
>>> > FAIL_CONTAINER_CLEANUP to FAIL_TASK_CLEANUP
>>> > 2016-01-19 20:04:21,308 INFO [CommitterEvent Processor #2]
>>> > org.apache.hadoop.mapreduce.v2.app.commit.CommitterEventHandler:
>>> Processing
>>> > the event EventType: TASK_ABORT
>>> > 2016-01-19 20:04:21,312 WARN [CommitterEvent Processor #2]
>>> > org.apache.hadoop.mapreduce.lib.output.FileOutputCommitter: Could not
>>> delete
>>> >
>>> hdfs://hdnode01:54310/user/hduser/output/wordcount/_temporary/1/_temporary/attempt_1453244277886_0001_m_000000_1
>>> > 2016-01-19 20:04:21,312 INFO [AsyncDispatcher event handler]
>>> > org.apache.hadoop.mapreduce.v2.app.job.impl.TaskAttemptImpl:
>>> > attempt_1453244277886_0001_m_000000_1 TaskAttempt Transitioned from
>>> > FAIL_TASK_CLEANUP to FAILED
>>> > 2016-01-19 20:04:21,313 INFO [AsyncDispatcher event handler]
>>> > org.apache.hadoop.yarn.util.RackResolver: Resolved localhost to
>>> > /default-rack
>>> > 2016-01-19 20:04:21,313 INFO [Thread-50]
>>> > org.apache.hadoop.mapreduce.v2.app.rm.RMContainerRequestor: 2 failures
>>> on
>>> > node localhost
>>> > 2016-01-19 20:04:21,313 INFO [AsyncDispatcher event handler]
>>> > org.apache.hadoop.mapreduce.v2.app.job.impl.TaskAttemptImpl:
>>> > attempt_1453244277886_0001_m_000000_2 TaskAttempt Transitioned from
>>> NEW to
>>> > UNASSIGNED
>>> > 2016-01-19 20:04:21,314 INFO [Thread-50]
>>> > org.apache.hadoop.mapreduce.v2.app.rm.RMContainerAllocator: Added
>>> > attempt_1453244277886_0001_m_000000_2 to list of failed maps
>>> > 2016-01-19 20:04:22,297 INFO [RMCommunicator Allocator]
>>> > org.apache.hadoop.mapreduce.v2.app.rm.RMContainerAllocator: Before
>>> > Scheduling: PendingReds:1 ScheduledMaps:1 ScheduledReds:0
>>> AssignedMaps:0
>>> > AssignedReds:0 CompletedMaps:0 CompletedReds:0 ContAlloc:2 ContRel:0
>>> > HostLocal:1 RackLocal:0
>>> > 2016-01-19 20:04:22,304 INFO [RMCommunicator Allocator]
>>> > org.apache.hadoop.mapreduce.v2.app.rm.RMContainerRequestor:
>>> getResources()
>>> > for application_1453244277886_0001: ask=1 release= 0 newContainers=0
>>> > finishedContainers=0 resourcelimit=<memory:768, vCores:-1> knownNMs=1
>>> > 2016-01-19 20:04:22,305 INFO [RMCommunicator Allocator]
>>> > org.apache.hadoop.mapreduce.v2.app.rm.RMContainerAllocator:
>>> Recalculating
>>> > schedule, headroom=768
>>> > 2016-01-19 20:04:22,305 INFO [RMCommunicator Allocator]
>>> > org.apache.hadoop.mapreduce.v2.app.rm.RMContainerAllocator: Reduce slow
>>> > start threshold not met. completedMapsForReduceSlowstart 1
>>> > 2016-01-19 20:04:23,316 INFO [RMCommunicator Allocator]
>>> > org.apache.hadoop.mapreduce.v2.app.rm.RMContainerAllocator: Got
>>> allocated
>>> > containers 1
>>> > 2016-01-19 20:04:23,316 INFO [RMCommunicator Allocator]
>>> > org.apache.hadoop.mapreduce.v2.app.rm.RMContainerAllocator: Assigning
>>> > container Container: [ContainerId:
>>> container_1453244277886_0001_01_000004,
>>> > NodeId: localhost:35711, NodeHttpAddress: localhost:8042, Resource:
>>> > <memory:512, vCores:1>, Priority: 5, Token: Token { kind:
>>> ContainerToken,
>>> > service: 127.0.0.1:35711 }, ] to fast fail map
>>> > 2016-01-19 20:04:23,317 INFO [RMCommunicator Allocator]
>>> > org.apache.hadoop.mapreduce.v2.app.rm.RMContainerAllocator: Assigned
>>> from
>>> > earlierFailedMaps
>>> > 2016-01-19 20:04:23,317 INFO [RMCommunicator Allocator]
>>> > org.apache.hadoop.mapreduce.v2.app.rm.RMContainerAllocator: Assigned
>>> > container container_1453244277886_0001_01_000004 to
>>> > attempt_1453244277886_0001_m_000000_2
>>> > 2016-01-19 20:04:23,318 INFO [RMCommunicator Allocator]
>>> > org.apache.hadoop.mapreduce.v2.app.rm.RMContainerAllocator:
>>> Recalculating
>>> > schedule, headroom=768
>>> > 2016-01-19 20:04:23,318 INFO [RMCommunicator Allocator]
>>> > org.apache.hadoop.mapreduce.v2.app.rm.RMContainerAllocator: Reduce slow
>>> > start threshold not met. completedMapsForReduceSlowstart 1
>>> > 2016-01-19 20:04:23,318 INFO [RMCommunicator Allocator]
>>> > org.apache.hadoop.mapreduce.v2.app.rm.RMContainerAllocator: After
>>> > Scheduling: PendingReds:1 ScheduledMaps:0 ScheduledReds:0
>>> AssignedMaps:1
>>> > AssignedReds:0 CompletedMaps:0 CompletedReds:0 ContAlloc:3 ContRel:0
>>> > HostLocal:1 RackLocal:0
>>> > 2016-01-19 20:04:23,318 INFO [AsyncDispatcher event handler]
>>> > org.apache.hadoop.yarn.util.RackResolver: Resolved localhost to
>>> > /default-rack
>>> > 2016-01-19 20:04:23,320 INFO [AsyncDispatcher event handler]
>>> > org.apache.hadoop.mapreduce.v2.app.job.impl.TaskAttemptImpl:
>>> > attempt_1453244277886_0001_m_000000_2 TaskAttempt Transitioned from
>>> > UNASSIGNED to ASSIGNED
>>> > 2016-01-19 20:04:23,323 INFO [ContainerLauncher #4]
>>> > org.apache.hadoop.mapreduce.v2.app.launcher.ContainerLauncherImpl:
>>> > Processing the event EventType: CONTAINER_REMOTE_LAUNCH for container
>>> > container_1453244277886_0001_01_000004 taskAttempt
>>> > attempt_1453244277886_0001_m_000000_2
>>> > 2016-01-19 20:04:23,323 INFO [ContainerLauncher #4]
>>> > org.apache.hadoop.mapreduce.v2.app.launcher.ContainerLauncherImpl:
>>> Launching
>>> > attempt_1453244277886_0001_m_000000_2
>>> > 2016-01-19 20:04:23,334 INFO [ContainerLauncher #4]
>>> > org.apache.hadoop.mapreduce.v2.app.launcher.ContainerLauncherImpl:
>>> Shuffle
>>> > port returned by ContainerManager for
>>> attempt_1453244277886_0001_m_000000_2
>>> > : 13562
>>> > 2016-01-19 20:04:23,335 INFO [AsyncDispatcher event handler]
>>> > org.apache.hadoop.mapreduce.v2.app.job.impl.TaskAttemptImpl:
>>> TaskAttempt:
>>> > [attempt_1453244277886_0001_m_000000_2] using containerId:
>>> > [container_1453244277886_0001_01_000004 on NM: [localhost:35711]
>>> > 2016-01-19 20:04:23,335 INFO [AsyncDispatcher event handler]
>>> > org.apache.hadoop.mapreduce.v2.app.job.impl.TaskAttemptImpl:
>>> > attempt_1453244277886_0001_m_000000_2 TaskAttempt Transitioned from
>>> ASSIGNED
>>> > to RUNNING
>>> > 2016-01-19 20:04:23,335 INFO [AsyncDispatcher event handler]
>>> > org.apache.hadoop.mapreduce.v2.app.speculate.DefaultSpeculator:
>>> > ATTEMPT_START task_1453244277886_0001_m_000000
>>> > 2016-01-19 20:04:24,326 INFO [RMCommunicator Allocator]
>>> > org.apache.hadoop.mapreduce.v2.app.rm.RMContainerRequestor:
>>> getResources()
>>> > for application_1453244277886_0001: ask=1 release= 0 newContainers=0
>>> > finishedContainers=1 resourcelimit=<memory:768, vCores:-1> knownNMs=1
>>> > 2016-01-19 20:04:24,327 INFO [RMCommunicator Allocator]
>>> > org.apache.hadoop.mapreduce.v2.app.rm.RMContainerAllocator: Received
>>> > completed container container_1453244277886_0001_01_000004
>>> > 2016-01-19 20:04:24,327 INFO [RMCommunicator Allocator]
>>> > org.apache.hadoop.mapreduce.v2.app.rm.RMContainerAllocator:
>>> Recalculating
>>> > schedule, headroom=768
>>> > 2016-01-19 20:04:24,327 INFO [RMCommunicator Allocator]
>>> > org.apache.hadoop.mapreduce.v2.app.rm.RMContainerAllocator: Reduce slow
>>> > start threshold not met. completedMapsForReduceSlowstart 1
>>> > 2016-01-19 20:04:24,328 INFO [RMCommunicator Allocator]
>>> > org.apache.hadoop.mapreduce.v2.app.rm.RMContainerAllocator: After
>>> > Scheduling: PendingReds:1 ScheduledMaps:0 ScheduledReds:0
>>> AssignedMaps:0
>>> > AssignedReds:0 CompletedMaps:0 CompletedReds:0 ContAlloc:3 ContRel:0
>>> > HostLocal:1 RackLocal:0
>>> > 2016-01-19 20:04:24,328 INFO [AsyncDispatcher event handler]
>>> > org.apache.hadoop.mapreduce.v2.app.job.impl.TaskAttemptImpl:
>>> > attempt_1453244277886_0001_m_000000_2 TaskAttempt Transitioned from
>>> RUNNING
>>> > to FAIL_CONTAINER_CLEANUP
>>> > 2016-01-19 20:04:24,328 INFO [AsyncDispatcher event handler]
>>> > org.apache.hadoop.mapreduce.v2.app.job.impl.TaskAttemptImpl:
>>> Diagnostics
>>> > report from attempt_1453244277886_0001_m_000000_2: Exception from
>>> > container-launch: org.apache.hadoop.util.Shell$ExitCodeException:
>>> > org.apache.hadoop.util.Shell$ExitCodeException:
>>> >     at org.apache.hadoop.util.Shell.runCommand(Shell.java:505)
>>> >     at org.apache.hadoop.util.Shell.run(Shell.java:418)
>>> >     at
>>> >
>>> org.apache.hadoop.util.Shell$ShellCommandExecutor.execute(Shell.java:650)
>>> >     at
>>> >
>>> org.apache.hadoop.yarn.server.nodemanager.DefaultContainerExecutor.launchContainer(DefaultContainerExecutor.java:195)
>>> >     at
>>> >
>>> org.apache.hadoop.yarn.server.nodemanager.containermanager.launcher.ContainerLaunch.call(ContainerLaunch.java:300)
>>> >     at
>>> >
>>> org.apache.hadoop.yarn.server.nodemanager.containermanager.launcher.ContainerLaunch.call(ContainerLaunch.java:81)
>>> >     at java.util.concurrent.FutureTask.run(FutureTask.java:262)
>>> >     at
>>> >
>>> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
>>> >     at
>>> >
>>> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
>>> >     at java.lang.Thread.run(Thread.java:745)
>>> >
>>> >
>>> > Container exited with a non-zero exit code 1
>>> >
>>> > 2016-01-19 20:04:24,331 INFO [ContainerLauncher #5]
>>> > org.apache.hadoop.mapreduce.v2.app.launcher.ContainerLauncherImpl:
>>> > Processing the event EventType: CONTAINER_REMOTE_CLEANUP for container
>>> > container_1453244277886_0001_01_000004 taskAttempt
>>> > attempt_1453244277886_0001_m_000000_2
>>> > 2016-01-19 20:04:24,331 INFO [ContainerLauncher #5]
>>> > org.apache.hadoop.mapreduce.v2.app.launcher.ContainerLauncherImpl:
>>> KILLING
>>> > attempt_1453244277886_0001_m_000000_2
>>> > 2016-01-19 20:04:24,335 INFO [AsyncDispatcher event handler]
>>> > org.apache.hadoop.mapreduce.v2.app.job.impl.TaskAttemptImpl:
>>> > attempt_1453244277886_0001_m_000000_2 TaskAttempt Transitioned from
>>> > FAIL_CONTAINER_CLEANUP to FAIL_TASK_CLEANUP
>>> > 2016-01-19 20:04:24,336 INFO [CommitterEvent Processor #3]
>>> > org.apache.hadoop.mapreduce.v2.app.commit.CommitterEventHandler:
>>> Processing
>>> > the event EventType: TASK_ABORT
>>> > 2016-01-19 20:04:24,340 WARN [CommitterEvent Processor #3]
>>> > org.apache.hadoop.mapreduce.lib.output.FileOutputCommitter: Could not
>>> delete
>>> >
>>> hdfs://hdnode01:54310/user/hduser/output/wordcount/_temporary/1/_temporary/attempt_1453244277886_0001_m_000000_2
>>> > 2016-01-19 20:04:24,341 INFO [AsyncDispatcher event handler]
>>> > org.apache.hadoop.mapreduce.v2.app.job.impl.TaskAttemptImpl:
>>> > attempt_1453244277886_0001_m_000000_2 TaskAttempt Transitioned from
>>> > FAIL_TASK_CLEANUP to FAILED
>>> > 2016-01-19 20:04:24,341 INFO [AsyncDispatcher event handler]
>>> > org.apache.hadoop.yarn.util.RackResolver: Resolved localhost to
>>> > /default-rack
>>> > 2016-01-19 20:04:24,342 INFO [Thread-50]
>>> > org.apache.hadoop.mapreduce.v2.app.rm.RMContainerRequestor: 3 failures
>>> on
>>> > node localhost
>>> > 2016-01-19 20:04:24,342 INFO [Thread-50]
>>> > org.apache.hadoop.mapreduce.v2.app.rm.RMContainerRequestor:
>>> Blacklisted host
>>> > localhost
>>> > 2016-01-19 20:04:24,342 INFO [AsyncDispatcher event handler]
>>> > org.apache.hadoop.mapreduce.v2.app.job.impl.TaskAttemptImpl:
>>> > attempt_1453244277886_0001_m_000000_3 TaskAttempt Transitioned from
>>> NEW to
>>> > UNASSIGNED
>>> > 2016-01-19 20:04:24,343 INFO [Thread-50]
>>> > org.apache.hadoop.mapreduce.v2.app.rm.RMContainerAllocator: Added
>>> > attempt_1453244277886_0001_m_000000_3 to list of failed maps
>>> > 2016-01-19 20:04:25,328 INFO [RMCommunicator Allocator]
>>> > org.apache.hadoop.mapreduce.v2.app.rm.RMContainerAllocator: Before
>>> > Scheduling: PendingReds:1 ScheduledMaps:1 ScheduledReds:0
>>> AssignedMaps:0
>>> > AssignedReds:0 CompletedMaps:0 CompletedReds:0 ContAlloc:3 ContRel:0
>>> > HostLocal:1 RackLocal:0
>>> > 2016-01-19 20:04:25,336 INFO [RMCommunicator Allocator]
>>> > org.apache.hadoop.mapreduce.v2.app.rm.RMContainerRequestor:
>>> getResources()
>>> > for application_1453244277886_0001: ask=1 release= 0 newContainers=0
>>> > finishedContainers=0 resourcelimit=<memory:768, vCores:-1> knownNMs=1
>>> > 2016-01-19 20:04:25,336 INFO [RMCommunicator Allocator]
>>> > org.apache.hadoop.mapreduce.v2.app.rm.RMContainerRequestor: Update the
>>> > blacklist for application_1453244277886_0001: blacklistAdditions=1
>>> > blacklistRemovals=0
>>> > 2016-01-19 20:04:25,337 INFO [RMCommunicator Allocator]
>>> > org.apache.hadoop.mapreduce.v2.app.rm.RMContainerRequestor: Ignore
>>> > blacklisting set to true. Known: 1, Blacklisted: 1, 100%
>>> > 2016-01-19 20:04:25,337 INFO [RMCommunicator Allocator]
>>> > org.apache.hadoop.mapreduce.v2.app.rm.RMContainerAllocator:
>>> Recalculating
>>> > schedule, headroom=768
>>> > 2016-01-19 20:04:25,337 INFO [RMCommunicator Allocator]
>>> > org.apache.hadoop.mapreduce.v2.app.rm.RMContainerAllocator: Reduce slow
>>> > start threshold not met. completedMapsForReduceSlowstart 1
>>> > 2016-01-19 20:04:26,342 INFO [RMCommunicator Allocator]
>>> > org.apache.hadoop.mapreduce.v2.app.rm.RMContainerRequestor: Update the
>>> > blacklist for application_1453244277886_0001: blacklistAdditions=0
>>> > blacklistRemovals=1
>>> > 2016-01-19 20:04:26,342 INFO [RMCommunicator Allocator]
>>> > org.apache.hadoop.mapreduce.v2.app.rm.RMContainerAllocator:
>>> Recalculating
>>> > schedule, headroom=768
>>> > 2016-01-19 20:04:26,342 INFO [RMCommunicator Allocator]
>>> > org.apache.hadoop.mapreduce.v2.app.rm.RMContainerAllocator: Reduce slow
>>> > start threshold not met. completedMapsForReduceSlowstart 1
>>> > 2016-01-19 20:04:27,351 INFO [RMCommunicator Allocator]
>>> > org.apache.hadoop.mapreduce.v2.app.rm.RMContainerAllocator: Got
>>> allocated
>>> > containers 1
>>> > 2016-01-19 20:04:27,352 INFO [RMCommunicator Allocator]
>>> > org.apache.hadoop.mapreduce.v2.app.rm.RMContainerAllocator: Assigning
>>> > container Container: [ContainerId:
>>> container_1453244277886_0001_01_000005,
>>> > NodeId: localhost:35711, NodeHttpAddress: localhost:8042, Resource:
>>> > <memory:512, vCores:1>, Priority: 5, Token: Token { kind:
>>> ContainerToken,
>>> > service: 127.0.0.1:35711 }, ] to fast fail map
>>> > 2016-01-19 20:04:27,352 INFO [RMCommunicator Allocator]
>>> > org.apache.hadoop.mapreduce.v2.app.rm.RMContainerAllocator: Assigned
>>> from
>>> > earlierFailedMaps
>>> > 2016-01-19 20:04:27,352 INFO [RMCommunicator Allocator]
>>> > org.apache.hadoop.mapreduce.v2.app.rm.RMContainerAllocator: Assigned
>>> > container container_1453244277886_0001_01_000005 to
>>> > attempt_1453244277886_0001_m_000000_3
>>> > 2016-01-19 20:04:27,353 INFO [RMCommunicator Allocator]
>>> > org.apache.hadoop.mapreduce.v2.app.rm.RMContainerAllocator:
>>> Recalculating
>>> > schedule, headroom=768
>>> > 2016-01-19 20:04:27,353 INFO [RMCommunicator Allocator]
>>> > org.apache.hadoop.mapreduce.v2.app.rm.RMContainerAllocator: Reduce slow
>>> > start threshold not met. completedMapsForReduceSlowstart 1
>>> > 2016-01-19 20:04:27,353 INFO [AsyncDispatcher event handler]
>>> > org.apache.hadoop.yarn.util.RackResolver: Resolved localhost to
>>> > /default-rack
>>> > 2016-01-19 20:04:27,353 INFO [RMCommunicator Allocator]
>>> > org.apache.hadoop.mapreduce.v2.app.rm.RMContainerAllocator: After
>>> > Scheduling: PendingReds:1 ScheduledMaps:0 ScheduledReds:0
>>> AssignedMaps:1
>>> > AssignedReds:0 CompletedMaps:0 CompletedReds:0 ContAlloc:4 ContRel:0
>>> > HostLocal:1 RackLocal:0
>>> > 2016-01-19 20:04:27,354 INFO [AsyncDispatcher event handler]
>>> > org.apache.hadoop.mapreduce.v2.app.job.impl.TaskAttemptImpl:
>>> > attempt_1453244277886_0001_m_000000_3 TaskAttempt Transitioned from
>>> > UNASSIGNED to ASSIGNED
>>> > 2016-01-19 20:04:27,355 INFO [ContainerLauncher #6]
>>> > org.apache.hadoop.mapreduce.v2.app.launcher.ContainerLauncherImpl:
>>> > Processing the event EventType: CONTAINER_REMOTE_LAUNCH for container
>>> > container_1453244277886_0001_01_000005 taskAttempt
>>> > attempt_1453244277886_0001_m_000000_3
>>> > 2016-01-19 20:04:27,355 INFO [ContainerLauncher #6]
>>> > org.apache.hadoop.mapreduce.v2.app.launcher.ContainerLauncherImpl:
>>> Launching
>>> > attempt_1453244277886_0001_m_000000_3
>>> > 2016-01-19 20:04:27,365 INFO [ContainerLauncher #6]
>>> > org.apache.hadoop.mapreduce.v2.app.launcher.ContainerLauncherImpl:
>>> Shuffle
>>> > port returned by ContainerManager for
>>> attempt_1453244277886_0001_m_000000_3
>>> > : 13562
>>> > 2016-01-19 20:04:27,365 INFO [AsyncDispatcher event handler]
>>> > org.apache.hadoop.mapreduce.v2.app.job.impl.TaskAttemptImpl:
>>> TaskAttempt:
>>> > [attempt_1453244277886_0001_m_000000_3] using containerId:
>>> > [container_1453244277886_0001_01_000005 on NM: [localhost:35711]
>>> > 2016-01-19 20:04:27,366 INFO [AsyncDispatcher event handler]
>>> > org.apache.hadoop.mapreduce.v2.app.job.impl.TaskAttemptImpl:
>>> > attempt_1453244277886_0001_m_000000_3 TaskAttempt Transitioned from
>>> ASSIGNED
>>> > to RUNNING
>>> > 2016-01-19 20:04:27,366 INFO [AsyncDispatcher event handler]
>>> > org.apache.hadoop.mapreduce.v2.app.speculate.DefaultSpeculator:
>>> > ATTEMPT_START task_1453244277886_0001_m_000000
>>> > 2016-01-19 20:04:28,361 INFO [RMCommunicator Allocator]
>>> > org.apache.hadoop.mapreduce.v2.app.rm.RMContainerRequestor:
>>> getResources()
>>> > for application_1453244277886_0001: ask=1 release= 0 newContainers=0
>>> > finishedContainers=1 resourcelimit=<memory:768, vCores:-1> knownNMs=1
>>> > 2016-01-19 20:04:28,361 INFO [RMCommunicator Allocator]
>>> > org.apache.hadoop.mapreduce.v2.app.rm.RMContainerAllocator: Received
>>> > completed container container_1453244277886_0001_01_000005
>>> > 2016-01-19 20:04:28,361 INFO [RMCommunicator Allocator]
>>> > org.apache.hadoop.mapreduce.v2.app.rm.RMContainerAllocator:
>>> Recalculating
>>> > schedule, headroom=768
>>> > 2016-01-19 20:04:28,361 INFO [RMCommunicator Allocator]
>>> > org.apache.hadoop.mapreduce.v2.app.rm.RMContainerAllocator: Reduce slow
>>> > start threshold not met. completedMapsForReduceSlowstart 1
>>> > 2016-01-19 20:04:28,362 INFO [RMCommunicator Allocator]
>>> > org.apache.hadoop.mapreduce.v2.app.rm.RMContainerAllocator: After
>>> > Scheduling: PendingReds:1 ScheduledMaps:0 ScheduledReds:0
>>> AssignedMaps:0
>>> > AssignedReds:0 CompletedMaps:0 CompletedReds:0 ContAlloc:4 ContRel:0
>>> > HostLocal:1 RackLocal:0
>>> > 2016-01-19 20:04:28,362 INFO [AsyncDispatcher event handler]
>>> > org.apache.hadoop.mapreduce.v2.app.job.impl.TaskAttemptImpl:
>>> > attempt_1453244277886_0001_m_000000_3 TaskAttempt Transitioned from
>>> RUNNING
>>> > to FAIL_CONTAINER_CLEANUP
>>> > 2016-01-19 20:04:28,362 INFO [AsyncDispatcher event handler]
>>> > org.apache.hadoop.mapreduce.v2.app.job.impl.TaskAttemptImpl:
>>> Diagnostics
>>> > report from attempt_1453244277886_0001_m_000000_3: Exception from
>>> > container-launch: org.apache.hadoop.util.Shell$ExitCodeException:
>>> > org.apache.hadoop.util.Shell$ExitCodeException:
>>> >     at org.apache.hadoop.util.Shell.runCommand(Shell.java:505)
>>> >     at org.apache.hadoop.util.Shell.run(Shell.java:418)
>>> >     at
>>> >
>>> org.apache.hadoop.util.Shell$ShellCommandExecutor.execute(Shell.java:650)
>>> >     at
>>> >
>>> org.apache.hadoop.yarn.server.nodemanager.DefaultContainerExecutor.launchContainer(DefaultContainerExecutor.java:195)
>>> >     at
>>> >
>>> org.apache.hadoop.yarn.server.nodemanager.containermanager.launcher.ContainerLaunch.call(ContainerLaunch.java:300)
>>> >     at
>>> >
>>> org.apache.hadoop.yarn.server.nodemanager.containermanager.launcher.ContainerLaunch.call(ContainerLaunch.java:81)
>>> >     at java.util.concurrent.FutureTask.run(FutureTask.java:262)
>>> >     at
>>> >
>>> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
>>> >     at
>>> >
>>> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
>>> >     at java.lang.Thread.run(Thread.java:745)
>>> >
>>> >
>>> > Container exited with a non-zero exit code 1
>>> >
>>> > 2016-01-19 20:04:28,364 INFO [ContainerLauncher #7]
>>> > org.apache.hadoop.mapreduce.v2.app.launcher.ContainerLauncherImpl:
>>> > Processing the event EventType: CONTAINER_REMOTE_CLEANUP for container
>>> > container_1453244277886_0001_01_000005 taskAttempt
>>> > attempt_1453244277886_0001_m_000000_3
>>> > 2016-01-19 20:04:28,365 INFO [ContainerLauncher #7]
>>> > org.apache.hadoop.mapreduce.v2.app.launcher.ContainerLauncherImpl:
>>> KILLING
>>> > attempt_1453244277886_0001_m_000000_3
>>> > 2016-01-19 20:04:28,373 INFO [AsyncDispatcher event handler]
>>> > org.apache.hadoop.mapreduce.v2.app.job.impl.TaskAttemptImpl:
>>> > attempt_1453244277886_0001_m_000000_3 TaskAttempt Transitioned from
>>> > FAIL_CONTAINER_CLEANUP to FAIL_TASK_CLEANUP
>>> > 2016-01-19 20:04:28,374 INFO [CommitterEvent Processor #4]
>>> > org.apache.hadoop.mapreduce.v2.app.commit.CommitterEventHandler:
>>> Processing
>>> > the event EventType: TASK_ABORT
>>> > 2016-01-19 20:04:28,377 WARN [CommitterEvent Processor #4]
>>> > org.apache.hadoop.mapreduce.lib.output.FileOutputCommitter: Could not
>>> delete
>>> >
>>> hdfs://hdnode01:54310/user/hduser/output/wordcount/_temporary/1/_temporary/attempt_1453244277886_0001_m_000000_3
>>> > 2016-01-19 20:04:28,378 INFO [AsyncDispatcher event handler]
>>> > org.apache.hadoop.mapreduce.v2.app.job.impl.TaskAttemptImpl:
>>> > attempt_1453244277886_0001_m_000000_3 TaskAttempt Transitioned from
>>> > FAIL_TASK_CLEANUP to FAILED
>>> > 2016-01-19 20:04:28,380 INFO [AsyncDispatcher event handler]
>>> > org.apache.hadoop.mapreduce.v2.app.job.impl.TaskImpl:
>>> > task_1453244277886_0001_m_000000 Task Transitioned from RUNNING to
>>> FAILED
>>> > 2016-01-19 20:04:28,381 INFO [AsyncDispatcher event handler]
>>> > org.apache.hadoop.mapreduce.v2.app.job.impl.JobImpl: Num completed
>>> Tasks: 1
>>> > 2016-01-19 20:04:28,381 INFO [AsyncDispatcher event handler]
>>> > org.apache.hadoop.mapreduce.v2.app.job.impl.JobImpl: Job failed as
>>> tasks
>>> > failed. failedMaps:1 failedReduces:0
>>> > 2016-01-19 20:04:28,382 INFO [AsyncDispatcher event handler]
>>> > org.apache.hadoop.mapreduce.v2.app.job.impl.JobImpl:
>>> > job_1453244277886_0001Job Transitioned from RUNNING to FAIL_WAIT
>>> > 2016-01-19 20:04:28,383 INFO [AsyncDispatcher event handler]
>>> > org.apache.hadoop.mapreduce.v2.app.job.impl.TaskImpl:
>>> > task_1453244277886_0001_r_000000 Task Transitioned from SCHEDULED to
>>> > KILL_WAIT
>>> > 2016-01-19 20:04:28,383 INFO [AsyncDispatcher event handler]
>>> > org.apache.hadoop.mapreduce.v2.app.job.impl.TaskAttemptImpl:
>>> > attempt_1453244277886_0001_r_000000_0 TaskAttempt Transitioned from
>>> > UNASSIGNED to KILLED
>>> > 2016-01-19 20:04:28,383 INFO [Thread-50]
>>> > org.apache.hadoop.mapreduce.v2.app.rm.RMContainerAllocator: Processing
>>> the
>>> > event EventType: CONTAINER_DEALLOCATE
>>> > 2016-01-19 20:04:28,383 ERROR [Thread-50]
>>> > org.apache.hadoop.mapreduce.v2.app.rm.RMContainerAllocator: Could not
>>> > deallocate container for task attemptId
>>> > attempt_1453244277886_0001_r_000000_0
>>> > 2016-01-19 20:04:28,383 INFO [AsyncDispatcher event handler]
>>> > org.apache.hadoop.mapreduce.v2.app.job.impl.TaskImpl:
>>> > task_1453244277886_0001_r_000000 Task Transitioned from KILL_WAIT to
>>> KILLED
>>> > 2016-01-19 20:04:28,384 INFO [AsyncDispatcher event handler]
>>> > org.apache.hadoop.mapreduce.v2.app.job.impl.JobImpl:
>>> > job_1453244277886_0001Job Transitioned from FAIL_WAIT to FAIL_ABORT
>>> > 2016-01-19 20:04:28,390 INFO [CommitterEvent Processor #0]
>>> > org.apache.hadoop.mapreduce.v2.app.commit.CommitterEventHandler:
>>> Processing
>>> > the event EventType: JOB_ABORT
>>> > 2016-01-19 20:04:28,432 INFO [AsyncDispatcher event handler]
>>> > org.apache.hadoop.mapreduce.v2.app.job.impl.JobImpl:
>>> > job_1453244277886_0001Job Transitioned from FAIL_ABORT to FAILED
>>> > 2016-01-19 20:04:28,433 INFO [Thread-61]
>>> > org.apache.hadoop.mapreduce.v2.app.MRAppMaster: We are finishing
>>> cleanly so
>>> > this is the last retry
>>> > 2016-01-19 20:04:28,433 INFO [Thread-61]
>>> > org.apache.hadoop.mapreduce.v2.app.MRAppMaster: Notify RMCommunicator
>>> > isAMLastRetry: true
>>> > 2016-01-19 20:04:28,433 INFO [Thread-61]
>>> > org.apache.hadoop.mapreduce.v2.app.rm.RMContainerAllocator:
>>> RMCommunicator
>>> > notified that shouldUnregistered is: true
>>> > 2016-01-19 20:04:28,433 INFO [Thread-61]
>>> > org.apache.hadoop.mapreduce.v2.app.MRAppMaster: Notify JHEH
>>> isAMLastRetry:
>>> > true
>>> > 2016-01-19 20:04:28,433 INFO [Thread-61]
>>> > org.apache.hadoop.mapreduce.jobhistory.JobHistoryEventHandler:
>>> > JobHistoryEventHandler notified that forceJobCompletion is true
>>> > 2016-01-19 20:04:28,434 INFO [Thread-61]
>>> > org.apache.hadoop.mapreduce.v2.app.MRAppMaster: Calling stop for all
>>> the
>>> > services
>>> > 2016-01-19 20:04:28,435 INFO [Thread-61]
>>> > org.apache.hadoop.mapreduce.jobhistory.JobHistoryEventHandler: Stopping
>>> > JobHistoryEventHandler. Size of the outstanding queue size is 0
>>> > 2016-01-19 20:04:29,362 INFO [RMCommunicator Allocator]
>>> > org.apache.hadoop.mapreduce.v2.app.rm.RMContainerAllocator: Before
>>> > Scheduling: PendingReds:1 ScheduledMaps:0 ScheduledReds:0
>>> AssignedMaps:0
>>> > AssignedReds:0 CompletedMaps:1 CompletedReds:0 ContAlloc:4 ContRel:0
>>> > HostLocal:1 RackLocal:0
>>> > 2016-01-19 20:04:29,362 INFO [eventHandlingThread]
>>> > org.apache.hadoop.mapreduce.jobhistory.JobHistoryEventHandler: Copying
>>> >
>>> hdfs://hdnode01:54310/tmp/hadoop-yarn/staging/hduser/.staging/job_1453244277886_0001/job_1453244277886_0001_1.jhist
>>> > to
>>> >
>>> hdfs://hdnode01:54310/tmp/hadoop-yarn/staging/history/done_intermediate/hduser/job_1453244277886_0001-1453244648033-hduser-word+count-1453244668381-0-0-FAILED-default-1453244654166.jhist_tmp
>>> > 2016-01-19 20:04:29,366 INFO [RMCommunicator Allocator]
>>> > org.apache.hadoop.mapreduce.v2.app.rm.RMContainerAllocator:
>>> Recalculating
>>> > schedule, headroom=768
>>> > 2016-01-19 20:04:29,366 INFO [RMCommunicator Allocator]
>>> > org.apache.hadoop.mapreduce.v2.app.rm.RMContainerAllocator: Reduce slow
>>> > start threshold reached. Scheduling reduces.
>>> > 2016-01-19 20:04:29,366 INFO [RMCommunicator Allocator]
>>> > org.apache.hadoop.mapreduce.v2.app.rm.RMContainerAllocator: All maps
>>> > assigned. Ramping up all remaining reduces:1
>>> > 2016-01-19 20:04:29,366 INFO [RMCommunicator Allocator]
>>> > org.apache.hadoop.mapreduce.v2.app.rm.RMContainerAllocator: After
>>> > Scheduling: PendingReds:0 ScheduledMaps:0 ScheduledReds:1
>>> AssignedMaps:0
>>> > AssignedReds:0 CompletedMaps:1 CompletedReds:0 ContAlloc:4 ContRel:0
>>> > HostLocal:1 RackLocal:0
>>> > 2016-01-19 20:04:29,544 INFO [eventHandlingThread]
>>> > org.apache.hadoop.mapreduce.jobhistory.JobHistoryEventHandler: Copied
>>> to
>>> > done location:
>>> >
>>> hdfs://hdnode01:54310/tmp/hadoop-yarn/staging/history/done_intermediate/hduser/job_1453244277886_0001-1453244648033-hduser-word+count-1453244668381-0-0-FAILED-default-1453244654166.jhist_tmp
>>> > 2016-01-19 20:04:29,598 INFO [eventHandlingThread]
>>> > org.apache.hadoop.mapreduce.jobhistory.JobHistoryEventHandler: Copying
>>> >
>>> hdfs://hdnode01:54310/tmp/hadoop-yarn/staging/hduser/.staging/job_1453244277886_0001/job_1453244277886_0001_1_conf.xml
>>> > to
>>> >
>>> hdfs://hdnode01:54310/tmp/hadoop-yarn/staging/history/done_intermediate/hduser/job_1453244277886_0001_conf.xml_tmp
>>> > 2016-01-19 20:04:29,801 INFO [eventHandlingThread]
>>> > org.apache.hadoop.mapreduce.jobhistory.JobHistoryEventHandler: Copied
>>> to
>>> > done location:
>>> >
>>> hdfs://hdnode01:54310/tmp/hadoop-yarn/staging/history/done_intermediate/hduser/job_1453244277886_0001_conf.xml_tmp
>>> > 2016-01-19 20:04:29,907 INFO [eventHandlingThread]
>>> > org.apache.hadoop.mapreduce.jobhistory.JobHistoryEventHandler: Moved
>>> tmp to
>>> > done:
>>> >
>>> hdfs://hdnode01:54310/tmp/hadoop-yarn/staging/history/done_intermediate/hduser/job_1453244277886_0001.summary_tmp
>>> > to
>>> >
>>> hdfs://hdnode01:54310/tmp/hadoop-yarn/staging/history/done_intermediate/hduser/job_1453244277886_0001.summary
>>> > 2016-01-19 20:04:30,008 INFO [eventHandlingThread]
>>> > org.apache.hadoop.mapreduce.jobhistory.JobHistoryEventHandler: Moved
>>> tmp to
>>> > done:
>>> >
>>> hdfs://hdnode01:54310/tmp/hadoop-yarn/staging/history/done_intermediate/hduser/job_1453244277886_0001_conf.xml_tmp
>>> > to
>>> >
>>> hdfs://hdnode01:54310/tmp/hadoop-yarn/staging/history/done_intermediate/hduser/job_1453244277886_0001_conf.xml
>>> > 2016-01-19 20:04:30,070 INFO [eventHandlingThread]
>>> > org.apache.hadoop.mapreduce.jobhistory.JobHistoryEventHandler: Moved
>>> tmp to
>>> > done:
>>> >
>>> hdfs://hdnode01:54310/tmp/hadoop-yarn/staging/history/done_intermediate/hduser/job_1453244277886_0001-1453244648033-hduser-word+count-1453244668381-0-0-FAILED-default-1453244654166.jhist_tmp
>>> > to
>>> >
>>> hdfs://hdnode01:54310/tmp/hadoop-yarn/staging/history/done_intermediate/hduser/job_1453244277886_0001-1453244648033-hduser-word+count-1453244668381-0-0-FAILED-default-1453244654166.jhist
>>> > 2016-01-19 20:04:30,071 INFO [Thread-61]
>>> > org.apache.hadoop.mapreduce.jobhistory.JobHistoryEventHandler: Stopped
>>> > JobHistoryEventHandler. super.stop()
>>> > 2016-01-19 20:04:30,078 INFO [Thread-61]
>>> > org.apache.hadoop.mapreduce.v2.app.rm.RMContainerAllocator: Setting job
>>> > diagnostics to Task failed task_1453244277886_0001_m_000000
>>> > Job failed as tasks failed. failedMaps:1 failedReduces:0
>>> >
>>> > 2016-01-19 20:04:30,080 INFO [Thread-61]
>>> > org.apache.hadoop.mapreduce.v2.app.rm.RMContainerAllocator: History
>>> url is
>>> > http://localhost:19888/jobhistory/job/job_1453244277886_0001
>>> > 2016-01-19 20:04:30,094 INFO [Thread-61]
>>> > org.apache.hadoop.mapreduce.v2.app.rm.RMContainerAllocator: Waiting for
>>> > application to be successfully unregistered.
>>> > 2016-01-19 20:04:31,099 INFO [Thread-61]
>>> > org.apache.hadoop.mapreduce.v2.app.rm.RMContainerAllocator: Final
>>> Stats:
>>> > PendingReds:0 ScheduledMaps:0 ScheduledReds:1 AssignedMaps:0
>>> AssignedReds:0
>>> > CompletedMaps:1 CompletedReds:0 ContAlloc:4 ContRel:0 HostLocal:1
>>> > RackLocal:0
>>> > 2016-01-19 20:04:31,104 INFO [Thread-61]
>>> > org.apache.hadoop.mapreduce.v2.app.MRAppMaster: Deleting staging
>>> directory
>>> > hdfs://hdnode01:54310
>>> > /tmp/hadoop-yarn/staging/hduser/.staging/job_1453244277886_0001
>>> > 2016-01-19 20:04:31,133 INFO [Thread-61] org.apache.hadoop.ipc.Server:
>>> > Stopping server on 45584
>>> > 2016-01-19 20:04:31,135 INFO [IPC Server listener on 45584]
>>> > org.apache.hadoop.ipc.Server: Stopping IPC Server listener on 45584
>>> > 2016-01-19 20:04:31,135 INFO [IPC Server Responder]
>>> > org.apache.hadoop.ipc.Server: Stopping IPC Server Responder
>>> > 2016-01-19 20:04:31,136 INFO [TaskHeartbeatHandler PingChecker]
>>> > org.apache.hadoop.mapreduce.v2.app.TaskHeartbeatHandler:
>>> > TaskHeartbeatHandler thread interrupted
>>> >
>>> >
>>> > Jps results; I believe everything is OK, right?:
>>> > 21267 DataNode
>>> > 21609 ResourceManager
>>> > 21974 JobHistoryServer
>>> > 21735 NodeManager
>>> > 24546 Jps
>>> > 16532 org.eclipse.equinox.launcher_1.3.0.v20140415-2008.jar
>>> > 21121 NameNode
>>> > 22098 QuorumPeerMain
>>> > 21456 SecondaryNameNode
>>> >
>>> >
>>>
>>
>>
>

Re: Can't run hadoop examples with YARN Single node cluster

Posted by Hitesh Shah <hi...@apache.org>.
+common-user 

On Mar 7, 2016, at 3:42 PM, Hitesh Shah <hi...@apache.org> wrote:

> 
> On Mar 7, 2016, at 1:50 PM, José Luis Larroque <la...@gmail.com> wrote:
> 
>> Hi again guys, I could finally find what the issue was!!!
>> 
>> 
> 
>> <property>
>> <name>mapreduce.map.java.opts</name>
>> <value>256</value>
>> </property>
>> 
>> <property>
>> <name>mapreduce.reduce.java.opts</name>
>> <value>256</value>
>> </property>
>> </configuration>
>> 
>> If I remove the last two properties (mapreduce.map.java.opts, mapreduce.reduce.java.opts), wordcount works!
>> 
> 
> That is because the java opts values are invalid. The value should be -Xmx256m for both values instead of just 256. 
> 
> thanks
> -- Hitesh
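
(For reference, a minimal sketch of how those two properties would look once corrected as described above; the 256 MB heap size is kept only as an example, and it should stay below the corresponding mapreduce.*.memory.mb container sizes:)

    <property>
      <name>mapreduce.map.java.opts</name>
      <value>-Xmx256m</value>
    </property>

    <property>
      <name>mapreduce.reduce.java.opts</name>
      <value>-Xmx256m</value>
    </property>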


Re: Can't run hadoop examples with YARN Single node cluster

Posted by José Luis Larroque <la...@gmail.com>.
Hi again guys, I could finally find what the issue was!!!

This is my mapred-site.xml; here is the problem:
<?xml version="1.0"?>
<?xml-stylesheet type="text/xsl" href="configuration.xsl"?>

<configuration>
<property>
<name>mapred.job.tracker</name>
<!--
<value>local</value> For debugging
<value>hdnode01:54311</value> For real runs
-->
<value>hdnode01:54311</value>
</property>

<property>
<name>mapred.tasktracker.map.tasks.maximum</name>
<value>4</value>
</property>

<property>
<name>mapreduce.job.maps</name>
<value>4</value>
</property>

<property>
<name>mapreduce.framework.name</name>
<value>yarn</value>
</property>

<property>
<name>mapreduce.map.memory.mb</name>
<value>512</value>
</property>

<property>
<name>mapreduce.reduce.memory.mb</name>
<value>512</value>
</property>

<property>
<name>mapreduce.map.java.opts</name>
<value>256</value>
</property>

<property>
<name>mapreduce.reduce.java.opts</name>
<value>256</value>
</property>
</configuration>

If I remove the last two properties (mapreduce.map.java.opts,
mapreduce.reduce.java.opts), wordcount works!

I remember adding those last two properties for some kind of memory issue,
but maybe for some reason they clash with the other two
(mapreduce.map.memory.mb, mapreduce.reduce.memory.mb)?

It would be great if someone could give me a short explanation to help me
better understand the memory management of a YARN cluster.
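
(As a rough sketch of how these settings usually relate, not something confirmed in this thread: mapreduce.map.memory.mb and mapreduce.reduce.memory.mb set the size of the YARN container, while mapreduce.map.java.opts and mapreduce.reduce.java.opts set the JVM heap launched inside that container, so the heap is typically set somewhat below the container size, commonly around 75-80% of it, to leave room for non-heap memory. For example, with the 512 MB containers above:)

    <property>
      <name>mapreduce.map.memory.mb</name>
      <value>512</value>
    </property>
    <property>
      <name>mapreduce.map.java.opts</name>
      <value>-Xmx410m</value>
    </property>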


PS: Thanks again to Namikaze and Gaurav for their help!!

Bye!
Jose

2016-01-25 21:19 GMT-03:00 José Luis Larroque <la...@gmail.com>:

> Thanks Namikaze for keeping at it, don't give up!! :D
>
> - I have these lines in *$HOME/.bashrc*
>
>
> export HADOOP_PREFIX=/usr/local/hadoop
>
> # Others variables
>
> export HADOOP_COMMON_HOME=${HADOOP_PREFIX}
>
> export HADOOP_MAPRED_HOME=${HADOOP_PREFIX}
>
> export HADOOP_HDFS_HOME=${HADOOP_PREFIX}
>
> export HADOOP_YARN_HOME=${HADOOP_PREFIX}
>
>
>   - in *hadoop-env.sh* i have:
>
> export HADOOP_CONF_DIR=${HADOOP_CONF_DIR:-"/usr/local/hadoop/etc/hadoop"}
>
>
>   - I read that SO question and all of its answers. The only useful answer,
> in my opinion, was the one about checking the yarn classpath. The following
> line appears three times in it:
>
> /usr/local/hadoop/etc/hadoop:
>
>
> I put yarn.application.classpath in yarn-site.xml because I don't know any
> other way to fix it, using the default value recommended here
> <https://hadoop.apache.org/docs/r2.4.1/hadoop-yarn/hadoop-yarn-common/yarn-default.xml>
> (see yarn.application.classpath):
>
>
> $HADOOP_CONF_DIR, $HADOOP_COMMON_HOME/share/hadoop/common/*,
> $HADOOP_COMMON_HOME/share/hadoop/common/lib/*, $HADOOP_HDFS_HOME/share/hadoop/hdfs/*,
> $HADOOP_HDFS_HOME/share/hadoop/hdfs/lib/*, $HADOOP_YARN_HOME/share/hadoop/yarn/*,
> $HADOOP_YARN_HOME/share/hadoop/yarn/lib/*
>
>
> But the classpath remains the same, and I can't find any other way to fix
> it. Maybe this is the problem?
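
(For illustration, a sketch of how that default value would be set explicitly in yarn-site.xml, using exactly the entries listed above; whether setting it this way is actually needed here remains an open question in this thread:)

    <property>
      <name>yarn.application.classpath</name>
      <value>$HADOOP_CONF_DIR,$HADOOP_COMMON_HOME/share/hadoop/common/*,$HADOOP_COMMON_HOME/share/hadoop/common/lib/*,$HADOOP_HDFS_HOME/share/hadoop/hdfs/*,$HADOOP_HDFS_HOME/share/hadoop/hdfs/lib/*,$HADOOP_YARN_HOME/share/hadoop/yarn/*,$HADOOP_YARN_HOME/share/hadoop/yarn/lib/*</value>
    </property>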
>
>
>  - yarn.log-aggregation-enable was always set to true. I couldn't find
> anything in the *datanode logs*; here they are:
>
> 2016-01-25 21:13:07,006 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: STARTUP_MSG:
> /************************************************************
> STARTUP_MSG: Starting DataNode
> STARTUP_MSG:   host = jose-ubuntu/127.0.0.1
> STARTUP_MSG:   args = []
> STARTUP_MSG:   version = 2.4.0
> STARTUP_MSG:   classpath = /usr/local/hadoop/etc/hadoop:/usr/local/hadoop/share/hadoop/common/lib/jackson-jaxrs-1.8.8.jar:/usr/local/hadoop/share/hadoop/common/lib/jersey-json-1.9.jar:/usr/local/hadoop/share/hadoop/common/lib/commons-compress-1.4.1.jar:/usr/local/hadoop/share/hadoop/common/lib/httpcore-4.2.5.jar:/usr/local/hadoop/share/hadoop/common/lib/httpclient-4.2.5.jar:/usr/local/hadoop/share/hadoop/common/lib/commons-beanutils-1.7.0.jar:/usr/local/hadoop/share/hadoop/common/lib/jettison-1.1.jar:/usr/local/hadoop/share/hadoop/common/lib/jaxb-impl-2.2.3-1.jar:/usr/local/hadoop/share/hadoop/common/lib/activation-1.1.jar:/usr/local/hadoop/share/hadoop/common/lib/jackson-core-asl-1.8.8.jar:/usr/local/hadoop/share/hadoop/common/lib/commons-net-3.1.jar:/usr/local/hadoop/share/hadoop/common/lib/commons-cli-1.2.jar:/usr/local/hadoop/share/hadoop/common/lib/commons-codec-1.4.jar:/usr/local/hadoop/share/hadoop/common/lib/junit-4.8.2.jar:/usr/local/hadoop/share/hadoop/common/lib/paranamer-2.3.jar:/usr/local/hadoop/share/hadoop/common/lib/jasper-compiler-5.5.23.jar:/usr/local/hadoop/share/hadoop/common/lib/protobuf-java-2.5.0.jar:/usr/local/hadoop/share/hadoop/common/lib/commons-digester-1.8.jar:/usr/local/hadoop/share/hadoop/common/lib/jets3t-0.9.0.jar:/usr/local/hadoop/share/hadoop/common/lib/jackson-xc-1.8.8.jar:/usr/local/hadoop/share/hadoop/common/lib/jersey-server-1.9.jar:/usr/local/hadoop/share/hadoop/common/lib/jetty-util-6.1.26.jar:/usr/local/hadoop/share/hadoop/common/lib/mockito-all-1.8.5.jar:/usr/local/hadoop/share/hadoop/common/lib/xmlenc-0.52.jar:/usr/local/hadoop/share/hadoop/common/lib/slf4j-api-1.7.5.jar:/usr/local/hadoop/share/hadoop/common/lib/stax-api-1.0-2.jar:/usr/local/hadoop/share/hadoop/common/lib/avro-1.7.4.jar:/usr/local/hadoop/share/hadoop/common/lib/hadoop-annotations-2.4.0.jar:/usr/local/hadoop/share/hadoop/common/lib/hadoop-auth-2.4.0.jar:/usr/local/hadoop/share/hadoop/common/lib/netty-3.6.2.Final.jar:/usr/local/hadoop/share/hadoop/common/lib/jasper-runtime-5.5.23.jar:/usr/local/hadoop/share/hadoop/common/lib/slf4j-log4j12-1.7.5.jar:/usr/local/hadoop/share/hadoop/common/lib/commons-logging-1.1.3.jar:/usr/local/hadoop/share/hadoop/common/lib/guava-11.0.2.jar:/usr/local/hadoop/share/hadoop/common/lib/jackson-mapper-asl-1.8.8.jar:/usr/local/hadoop/share/hadoop/common/lib/jsp-api-2.1.jar:/usr/local/hadoop/share/hadoop/common/lib/commons-configuration-1.6.jar:/usr/local/hadoop/share/hadoop/common/lib/snappy-java-1.0.4.1.jar:/usr/local/hadoop/share/hadoop/common/lib/log4j-1.2.17.jar:/usr/local/hadoop/share/hadoop/common/lib/jsch-0.1.42.jar:/usr/local/hadoop/share/hadoop/common/lib/commons-io-2.4.jar:/usr/local/hadoop/share/hadoop/common/lib/commons-beanutils-core-1.8.0.jar:/usr/local/hadoop/share/hadoop/common/lib/jsr305-1.3.9.jar:/usr/local/hadoop/share/hadoop/common/lib/jersey-core-1.9.jar:/usr/local/hadoop/share/hadoop/common/lib/asm-3.2.jar:/usr/local/hadoop/share/hadoop/common/lib/servlet-api-2.5.jar:/usr/local/hadoop/share/hadoop/common/lib/jaxb-api-2.2.2.jar:/usr/local/hadoop/share/hadoop/common/lib/jetty-6.1.26.jar:/usr/local/hadoop/share/hadoop/common/lib/commons-lang-2.6.jar:/usr/local/hadoop/share/hadoop/common/lib/java-xmlbuilder-0.4.jar:/usr/local/hadoop/share/hadoop/common/lib/commons-collections-3.2.1.jar:/usr/local/hadoop/share/hadoop/common/lib/commons-el-1.0.jar:/usr/local/hadoop/share/hadoop/common/lib/zookeeper-3.4.5.jar:/usr/local/hadoop/share/hadoop/common/lib/commons-httpclient-3.1.jar:/usr/local/hadoop/share/hadoop/common/lib/xz-1.0.jar:/usr/local
/hadoop/share/hadoop/common/lib/commons-math3-3.1.1.jar:/usr/local/hadoop/share/hadoop/common/hadoop-common-2.4.0-tests.jar:/usr/local/hadoop/share/hadoop/common/hadoop-nfs-2.4.0.jar:/usr/local/hadoop/share/hadoop/common/hadoop-common-2.4.0.jar:/usr/local/hadoop/share/hadoop/hdfs:/usr/local/hadoop/share/hadoop/hdfs/lib/jackson-core-asl-1.8.8.jar:/usr/local/hadoop/share/hadoop/hdfs/lib/commons-cli-1.2.jar:/usr/local/hadoop/share/hadoop/hdfs/lib/commons-codec-1.4.jar:/usr/local/hadoop/share/hadoop/hdfs/lib/protobuf-java-2.5.0.jar:/usr/local/hadoop/share/hadoop/hdfs/lib/jersey-server-1.9.jar:/usr/local/hadoop/share/hadoop/hdfs/lib/jetty-util-6.1.26.jar:/usr/local/hadoop/share/hadoop/hdfs/lib/xmlenc-0.52.jar:/usr/local/hadoop/share/hadoop/hdfs/lib/commons-daemon-1.0.13.jar:/usr/local/hadoop/share/hadoop/hdfs/lib/netty-3.6.2.Final.jar:/usr/local/hadoop/share/hadoop/hdfs/lib/jasper-runtime-5.5.23.jar:/usr/local/hadoop/share/hadoop/hdfs/lib/commons-logging-1.1.3.jar:/usr/local/hadoop/share/hadoop/hdfs/lib/guava-11.0.2.jar:/usr/local/hadoop/share/hadoop/hdfs/lib/jackson-mapper-asl-1.8.8.jar:/usr/local/hadoop/share/hadoop/hdfs/lib/jsp-api-2.1.jar:/usr/local/hadoop/share/hadoop/hdfs/lib/log4j-1.2.17.jar:/usr/local/hadoop/share/hadoop/hdfs/lib/commons-io-2.4.jar:/usr/local/hadoop/share/hadoop/hdfs/lib/jsr305-1.3.9.jar:/usr/local/hadoop/share/hadoop/hdfs/lib/jersey-core-1.9.jar:/usr/local/hadoop/share/hadoop/hdfs/lib/asm-3.2.jar:/usr/local/hadoop/share/hadoop/hdfs/lib/servlet-api-2.5.jar:/usr/local/hadoop/share/hadoop/hdfs/lib/jetty-6.1.26.jar:/usr/local/hadoop/share/hadoop/hdfs/lib/commons-lang-2.6.jar:/usr/local/hadoop/share/hadoop/hdfs/lib/commons-el-1.0.jar:/usr/local/hadoop/share/hadoop/hdfs/hadoop-hdfs-2.4.0-tests.jar:/usr/local/hadoop/share/hadoop/hdfs/hadoop-hdfs-2.4.0.jar:/usr/local/hadoop/share/hadoop/hdfs/hadoop-hdfs-nfs-2.4.0.jar:/usr/local/hadoop/share/hadoop/yarn/lib/jackson-jaxrs-1.8.8.jar:/usr/local/hadoop/share/hadoop/yarn/lib/jersey-json-1.9.jar:/usr/local/hadoop/share/hadoop/yarn/lib/commons-compress-1.4.1.jar:/usr/local/hadoop/share/hadoop/yarn/lib/jersey-client-1.9.jar:/usr/local/hadoop/share/hadoop/yarn/lib/guice-servlet-3.0.jar:/usr/local/hadoop/share/hadoop/yarn/lib/giraph-examples-1.1.0-for-hadoop-2.4.0-jar-with-dependencies.jar:/usr/local/hadoop/share/hadoop/yarn/lib/hadoop-mapreduce-examples-2.4.0.jar:/usr/local/hadoop/share/hadoop/yarn/lib/jettison-1.1.jar:/usr/local/hadoop/share/hadoop/yarn/lib/jaxb-impl-2.2.3-1.jar:/usr/local/hadoop/share/hadoop/yarn/lib/activation-1.1.jar:/usr/local/hadoop/share/hadoop/yarn/lib/jackson-core-asl-1.8.8.jar:/usr/local/hadoop/share/hadoop/yarn/lib/commons-cli-1.2.jar:/usr/local/hadoop/share/hadoop/yarn/lib/commons-codec-1.4.jar:/usr/local/hadoop/share/hadoop/yarn/lib/aopalliance-1.0.jar:/usr/local/hadoop/share/hadoop/yarn/lib/leveldbjni-all-1.8.jar:/usr/local/hadoop/share/hadoop/yarn/lib/protobuf-java-2.5.0.jar:/usr/local/hadoop/share/hadoop/yarn/lib/jackson-xc-1.8.8.jar:/usr/local/hadoop/share/hadoop/yarn/lib/jersey-server-1.9.jar:/usr/local/hadoop/share/hadoop/yarn/lib/jetty-util-6.1.26.jar:/usr/local/hadoop/share/hadoop/yarn/lib/jline-0.9.94.jar:/usr/local/hadoop/share/hadoop/yarn/lib/stax-api-1.0-2.jar:/usr/local/hadoop/share/hadoop/yarn/lib/guice-3.0.jar:/usr/local/hadoop/share/hadoop/yarn/lib/commons-logging-1.1.3.jar:/usr/local/hadoop/share/hadoop/yarn/lib/guava-11.0.2.jar:/usr/local/hadoop/share/hadoop/yarn/lib/jackson-mapper-asl-1.8.8.jar:/usr/local/hadoop/share/hadoop/yarn/lib/log4j-1.2.17.jar:/usr/local/hadoop/share/hadoop/yarn/lib
/commons-io-2.4.jar:/usr/local/hadoop/share/hadoop/yarn/lib/jsr305-1.3.9.jar:/usr/local/hadoop/share/hadoop/yarn/lib/jersey-core-1.9.jar:/usr/local/hadoop/share/hadoop/yarn/lib/asm-3.2.jar:/usr/local/hadoop/share/hadoop/yarn/lib/servlet-api-2.5.jar:/usr/local/hadoop/share/hadoop/yarn/lib/jaxb-api-2.2.2.jar:/usr/local/hadoop/share/hadoop/yarn/lib/jetty-6.1.26.jar:/usr/local/hadoop/share/hadoop/yarn/lib/commons-lang-2.6.jar:/usr/local/hadoop/share/hadoop/yarn/lib/commons-collections-3.2.1.jar:/usr/local/hadoop/share/hadoop/yarn/lib/jersey-guice-1.9.jar:/usr/local/hadoop/share/hadoop/yarn/lib/zookeeper-3.4.5.jar:/usr/local/hadoop/share/hadoop/yarn/lib/commons-httpclient-3.1.jar:/usr/local/hadoop/share/hadoop/yarn/lib/javax.inject-1.jar:/usr/local/hadoop/share/hadoop/yarn/lib/xz-1.0.jar:/usr/local/hadoop/share/hadoop/yarn/hadoop-yarn-server-common-2.4.0.jar:/usr/local/hadoop/share/hadoop/yarn/hadoop-yarn-client-2.4.0.jar:/usr/local/hadoop/share/hadoop/yarn/hadoop-yarn-api-2.4.0.jar:/usr/local/hadoop/share/hadoop/yarn/hadoop-yarn-applications-unmanaged-am-launcher-2.4.0.jar:/usr/local/hadoop/share/hadoop/yarn/hadoop-yarn-server-nodemanager-2.4.0.jar:/usr/local/hadoop/share/hadoop/yarn/hadoop-yarn-server-tests-2.4.0.jar:/usr/local/hadoop/share/hadoop/yarn/hadoop-yarn-server-resourcemanager-2.4.0.jar:/usr/local/hadoop/share/hadoop/yarn/hadoop-yarn-server-web-proxy-2.4.0.jar:/usr/local/hadoop/share/hadoop/yarn/hadoop-yarn-applications-distributedshell-2.4.0.jar:/usr/local/hadoop/share/hadoop/yarn/hadoop-yarn-server-applicationhistoryservice-2.4.0.jar:/usr/local/hadoop/share/hadoop/yarn/hadoop-yarn-common-2.4.0.jar:/usr/local/hadoop/share/hadoop/mapreduce/lib/commons-compress-1.4.1.jar:/usr/local/hadoop/share/hadoop/mapreduce/lib/guice-servlet-3.0.jar:/usr/local/hadoop/share/hadoop/mapreduce/lib/jackson-core-asl-1.8.8.jar:/usr/local/hadoop/share/hadoop/mapreduce/lib/hamcrest-core-1.1.jar:/usr/local/hadoop/share/hadoop/mapreduce/lib/paranamer-2.3.jar:/usr/local/hadoop/share/hadoop/mapreduce/lib/aopalliance-1.0.jar:/usr/local/hadoop/share/hadoop/mapreduce/lib/protobuf-java-2.5.0.jar:/usr/local/hadoop/share/hadoop/mapreduce/lib/jersey-server-1.9.jar:/usr/local/hadoop/share/hadoop/mapreduce/lib/avro-1.7.4.jar:/usr/local/hadoop/share/hadoop/mapreduce/lib/hadoop-annotations-2.4.0.jar:/usr/local/hadoop/share/hadoop/mapreduce/lib/guice-3.0.jar:/usr/local/hadoop/share/hadoop/mapreduce/lib/junit-4.10.jar:/usr/local/hadoop/share/hadoop/mapreduce/lib/netty-3.6.2.Final.jar:/usr/local/hadoop/share/hadoop/mapreduce/lib/jackson-mapper-asl-1.8.8.jar:/usr/local/hadoop/share/hadoop/mapreduce/lib/snappy-java-1.0.4.1.jar:/usr/local/hadoop/share/hadoop/mapreduce/lib/log4j-1.2.17.jar:/usr/local/hadoop/share/hadoop/mapreduce/lib/commons-io-2.4.jar:/usr/local/hadoop/share/hadoop/mapreduce/lib/jersey-core-1.9.jar:/usr/local/hadoop/share/hadoop/mapreduce/lib/asm-3.2.jar:/usr/local/hadoop/share/hadoop/mapreduce/lib/jersey-guice-1.9.jar:/usr/local/hadoop/share/hadoop/mapreduce/lib/javax.inject-1.jar:/usr/local/hadoop/share/hadoop/mapreduce/lib/xz-1.0.jar:/usr/local/hadoop/share/hadoop/mapreduce/giraph-examples-1.1.0-for-hadoop-2.4.0-jar-with-dependencies.jar:/usr/local/hadoop/share/hadoop/mapreduce/hadoop-mapreduce-examples-2.4.0.jar:/usr/local/hadoop/share/hadoop/mapreduce/procesamiento_salida_grafo_caminos_navegacionales-0.0.1.jar:/usr/local/hadoop/share/hadoop/mapreduce/zookeeper-3.4.6.jar:/usr/local/hadoop/share/hadoop/mapreduce/hadoop-mapreduce-client-shuffle-2.4.0.jar:/usr/local/hadoop/share/hadoop/mapreduce/procesamiento
_grafo_wikiquote-0.0.1.jar:/usr/local/hadoop/share/hadoop/mapreduce/hadoop-mapreduce-client-app-2.4.0.jar:/usr/local/hadoop/share/hadoop/mapreduce/hadoop-mapreduce-client-common-2.4.0.jar:/usr/local/hadoop/share/hadoop/mapreduce/hadoop-mapreduce-client-core-2.4.0.jar:/usr/local/hadoop/share/hadoop/mapreduce/generacion_grafo_wikiquote-0.0.1-SNAPSHOT.jar:/usr/local/hadoop/share/hadoop/mapreduce/hadoop-mapreduce-client-hs-plugins-2.4.0.jar:/usr/local/hadoop/share/hadoop/mapreduce/hadoop-mapreduce-client-jobclient-2.4.0.jar:/usr/local/hadoop/share/hadoop/mapreduce/hadoop-mapreduce-client-hs-2.4.0.jar:/usr/local/hadoop/share/hadoop/mapreduce/hadoop-mapreduce-client-jobclient-2.4.0-tests.jar:/usr/local/hadoop/contrib/capacity-scheduler/*.jar:/usr/local/hadoop/contrib/capacity-scheduler/*.jar:/usr/local/hadoop/contrib/capacity-scheduler/*.jar
> STARTUP_MSG:   build = http://svn.apache.org/repos/asf/hadoop/common -r 1583262; compiled by 'jenkins' on 2014-03-31T08:29Z
> STARTUP_MSG:   java = 1.7.0_79
> ************************************************************/
> 2016-01-25 21:13:07,015 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: registered UNIX signal handlers for [TERM, HUP, INT]
> 2016-01-25 21:13:07,188 WARN org.apache.hadoop.hdfs.server.common.Util: Path /usr/local/hadoop/dfs/name/data should be specified as a URI in configuration files. Please update hdfs configuration.
> 2016-01-25 21:13:07,356 WARN org.apache.hadoop.util.NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
> 2016-01-25 21:13:07,648 INFO org.apache.hadoop.metrics2.impl.MetricsConfig: loaded properties from hadoop-metrics2.properties
> 2016-01-25 21:13:07,723 INFO org.apache.hadoop.metrics2.impl.MetricsSystemImpl: Scheduled snapshot period at 10 second(s).
> 2016-01-25 21:13:07,723 INFO org.apache.hadoop.metrics2.impl.MetricsSystemImpl: DataNode metrics system started
> 2016-01-25 21:13:07,727 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: Configured hostname is localhost
> 2016-01-25 21:13:07,728 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: Starting DataNode with maxLockedMemory = 0
> 2016-01-25 21:13:07,757 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: Opened streaming server at /0.0.0.0:50010
> 2016-01-25 21:13:07,760 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: Balancing bandwith is 1048576 bytes/s
> 2016-01-25 21:13:07,839 INFO org.mortbay.log: Logging to org.slf4j.impl.Log4jLoggerAdapter(org.mortbay.log) via org.mortbay.log.Slf4jLog
> 2016-01-25 21:13:07,843 INFO org.apache.hadoop.http.HttpRequestLog: Http request log for http.requests.datanode is not defined
> 2016-01-25 21:13:07,853 INFO org.apache.hadoop.http.HttpServer2: Added global filter 'safety' (class=org.apache.hadoop.http.HttpServer2$QuotingInputFilter)
> 2016-01-25 21:13:07,856 INFO org.apache.hadoop.http.HttpServer2: Added filter static_user_filter (class=org.apache.hadoop.http.lib.StaticUserWebFilter$StaticUserFilter) to context datanode
> 2016-01-25 21:13:07,856 INFO org.apache.hadoop.http.HttpServer2: Added filter static_user_filter (class=org.apache.hadoop.http.lib.StaticUserWebFilter$StaticUserFilter) to context logs
> 2016-01-25 21:13:07,856 INFO org.apache.hadoop.http.HttpServer2: Added filter static_user_filter (class=org.apache.hadoop.http.lib.StaticUserWebFilter$StaticUserFilter) to context static
> 2016-01-25 21:13:07,872 INFO org.apache.hadoop.http.HttpServer2: addJerseyResourcePackage: packageName=org.apache.hadoop.hdfs.server.datanode.web.resources;org.apache.hadoop.hdfs.web.resources, pathSpec=/webhdfs/v1/*
> 2016-01-25 21:13:07,875 INFO org.apache.hadoop.http.HttpServer2: Jetty bound to port 50075
> 2016-01-25 21:13:07,875 INFO org.mortbay.log: jetty-6.1.26
> 2016-01-25 21:13:08,137 INFO org.mortbay.log: Started SelectChannelConnector@0.0.0.0:50075
> 2016-01-25 21:13:08,225 INFO org.apache.hadoop.ipc.CallQueueManager: Using callQueue class java.util.concurrent.LinkedBlockingQueue
> 2016-01-25 21:13:08,239 INFO org.apache.hadoop.ipc.Server: Starting Socket Reader #1 for port 50020
> 2016-01-25 21:13:08,288 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: Opened IPC server at /0.0.0.0:50020
> 2016-01-25 21:13:08,300 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: Refresh request received for nameservices: null
> 2016-01-25 21:13:08,316 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: Starting BPOfferServices for nameservices: <default>
> 2016-01-25 21:13:08,321 WARN org.apache.hadoop.hdfs.server.common.Util: Path /usr/local/hadoop/dfs/name/data should be specified as a URI in configuration files. Please update hdfs configuration.
> 2016-01-25 21:13:08,325 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: Block pool <registering> (Datanode Uuid unassigned) service to hdnode01/192.168.0.10:54310 starting to offer service
> 2016-01-25 21:13:08,326 INFO org.apache.hadoop.ipc.Server: IPC Server Responder: starting
> 2016-01-25 21:13:08,326 INFO org.apache.hadoop.ipc.Server: IPC Server listener on 50020: starting
> 2016-01-25 21:13:08,719 INFO org.apache.hadoop.hdfs.server.common.Storage: Data-node version: -55 and name-node layout version: -56
> 2016-01-25 21:13:08,828 INFO org.apache.hadoop.hdfs.server.common.Storage: Lock on /usr/local/hadoop/dfs/name/data/in_use.lock acquired by nodename 10365@jose-ubuntu
> 2016-01-25 21:13:08,833 INFO org.apache.hadoop.hdfs.server.common.Storage: Storage directory /usr/local/hadoop/dfs/name/data is not formatted
> 2016-01-25 21:13:08,833 INFO org.apache.hadoop.hdfs.server.common.Storage: Formatting ...
> 2016-01-25 21:13:09,017 INFO org.apache.hadoop.hdfs.server.common.Storage: Analyzing storage directories for bpid BP-216406264-127.0.0.1-1453767164845
> 2016-01-25 21:13:09,017 INFO org.apache.hadoop.hdfs.server.common.Storage: Locking is disabled
> 2016-01-25 21:13:09,018 INFO org.apache.hadoop.hdfs.server.common.Storage: Storage directory /usr/local/hadoop/dfs/name/data/current/BP-216406264-127.0.0.1-1453767164845 is not formatted.
> 2016-01-25 21:13:09,018 INFO org.apache.hadoop.hdfs.server.common.Storage: Formatting ...
> 2016-01-25 21:13:09,018 INFO org.apache.hadoop.hdfs.server.common.Storage: Formatting block pool BP-216406264-127.0.0.1-1453767164845 directory /usr/local/hadoop/dfs/name/data/current/BP-216406264-127.0.0.1-1453767164845/current
> 2016-01-25 21:13:09,072 INFO org.apache.hadoop.hdfs.server.common.Storage: Restored 0 block files from trash.
> 2016-01-25 21:13:09,198 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: Setting up storage: nsid=1479061672;bpid=BP-216406264-127.0.0.1-1453767164845;lv=-55;nsInfo=lv=-56;cid=CID-8fa0e75b-6942-452a-a4e6-8cd0c24de652;nsid=1479061672;c=0;bpid=BP-216406264-127.0.0.1-1453767164845;dnuuid=null
> 2016-01-25 21:13:09,248 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: Generated and persisted new Datanode UUID 6b4236c8-2183-49ba-84d7-a273298ba37a
> 2016-01-25 21:13:09,268 WARN org.apache.hadoop.hdfs.server.common.Util: Path /usr/local/hadoop/dfs/name/data should be specified as a URI in configuration files. Please update hdfs configuration.
> 2016-01-25 21:13:09,270 INFO org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsDatasetImpl: Added volume - /usr/local/hadoop/dfs/name/data/current, StorageType: DISK
> 2016-01-25 21:13:09,279 INFO org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsDatasetImpl: Registered FSDatasetState MBean
> 2016-01-25 21:13:09,282 INFO org.apache.hadoop.hdfs.server.datanode.DirectoryScanner: Periodic Directory Tree Verification scan starting at 1453784080282 with interval 21600000
> 2016-01-25 21:13:09,283 INFO org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsDatasetImpl: Adding block pool BP-216406264-127.0.0.1-1453767164845
> 2016-01-25 21:13:09,284 INFO org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsDatasetImpl: Scanning block pool BP-216406264-127.0.0.1-1453767164845 on volume /usr/local/hadoop/dfs/name/data/current...
> 2016-01-25 21:13:09,299 INFO org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsDatasetImpl: Time taken to scan block pool BP-216406264-127.0.0.1-1453767164845 on /usr/local/hadoop/dfs/name/data/current: 15ms
> 2016-01-25 21:13:09,300 INFO org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsDatasetImpl: Total time to scan all replicas for block pool BP-216406264-127.0.0.1-1453767164845: 17ms
> 2016-01-25 21:13:09,301 INFO org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsDatasetImpl: Adding replicas to map for block pool BP-216406264-127.0.0.1-1453767164845 on volume /usr/local/hadoop/dfs/name/data/current...
> 2016-01-25 21:13:09,301 INFO org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsDatasetImpl: Time to add replicas to map for block pool BP-216406264-127.0.0.1-1453767164845 on volume /usr/local/hadoop/dfs/name/data/current: 0ms
> 2016-01-25 21:13:09,301 INFO org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsDatasetImpl: Total time to add all replicas to map: 1ms
> 2016-01-25 21:13:09,305 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: Block pool BP-216406264-127.0.0.1-1453767164845 (Datanode Uuid null) service to hdnode01/192.168.0.10:54310 beginning handshake with NN
> 2016-01-25 21:13:09,355 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: Block pool Block pool BP-216406264-127.0.0.1-1453767164845 (Datanode Uuid null) service to hdnode01/192.168.0.10:54310 successfully registered with NN
> 2016-01-25 21:13:09,356 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: For namenode hdnode01/192.168.0.10:54310 using DELETEREPORT_INTERVAL of 300000 msec  BLOCKREPORT_INTERVAL of 21600000msec CACHEREPORT_INTERVAL of 10000msec Initial delay: 0msec; heartBeatInterval=3000
> 2016-01-25 21:13:09,444 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: Namenode Block pool BP-216406264-127.0.0.1-1453767164845 (Datanode Uuid 6b4236c8-2183-49ba-84d7-a273298ba37a) service to hdnode01/192.168.0.10:54310 trying to claim ACTIVE state with txid=1
> 2016-01-25 21:13:09,444 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: Acknowledging ACTIVE Namenode Block pool BP-216406264-127.0.0.1-1453767164845 (Datanode Uuid 6b4236c8-2183-49ba-84d7-a273298ba37a) service to hdnode01/192.168.0.10:54310
> 2016-01-25 21:13:09,487 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: Sent 1 blockreports 0 blocks total. Took 1 msec to generate and 42 msecs for RPC and NN processing.  Got back commands none
> 2016-01-25 21:13:09,492 INFO org.apache.hadoop.util.GSet: Computing capacity for map BlockMap
> 2016-01-25 21:13:09,493 INFO org.apache.hadoop.util.GSet: VM type       = 64-bit
> 2016-01-25 21:13:09,494 INFO org.apache.hadoop.util.GSet: 0.5% max memory 1.8 GB = 9.1 MB
> 2016-01-25 21:13:09,494 INFO org.apache.hadoop.util.GSet: capacity      = 2^20 = 1048576 entries
> 2016-01-25 21:13:09,495 INFO org.apache.hadoop.hdfs.server.datanode.BlockPoolSliceScanner: Periodic Block Verification Scanner initialized with interval 504 hours for block pool BP-216406264-127.0.0.1-1453767164845
> 2016-01-25 21:13:09,499 INFO org.apache.hadoop.hdfs.server.datanode.DataBlockScanner: Added bpid=BP-216406264-127.0.0.1-1453767164845 to blockPoolScannerMap, new size=1
> 2016-01-25 21:13:32,355 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: Receiving BP-216406264-127.0.0.1-1453767164845:blk_1073741825_1001 src: /192.168.0.10:58649 dest: /192.168.0.10:50010
> 2016-01-25 21:13:32,482 INFO org.apache.hadoop.hdfs.server.datanode.DataNode.clienttrace: src: /192.168.0.10:58649, dest: /192.168.0.10:50010, bytes: 343691, op: HDFS_WRITE, cliID: DFSClient_NONMAPREDUCE_538002429_1, offset: 0, srvID: 6b4236c8-2183-49ba-84d7-a273298ba37a, blockid: BP-216406264-127.0.0.1-1453767164845:blk_1073741825_1001, duration: 98632367
> 2016-01-25 21:13:32,482 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: PacketResponder: BP-216406264-127.0.0.1-1453767164845:blk_1073741825_1001, type=LAST_IN_PIPELINE, downstreams=0:[] terminating
> 2016-01-25 21:13:34,291 INFO org.apache.hadoop.hdfs.server.datanode.BlockPoolSliceScanner: Verification succeeded for BP-216406264-127.0.0.1-1453767164845:blk_1073741825_1001
> 2016-01-25 21:14:10,176 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: Receiving BP-216406264-127.0.0.1-1453767164845:blk_1073741826_1002 src: /192.168.0.10:58663 dest: /192.168.0.10:50010
> 2016-01-25 21:14:10,220 INFO org.apache.hadoop.hdfs.server.datanode.DataNode.clienttrace: src: /192.168.0.10:58663, dest: /192.168.0.10:50010, bytes: 270263, op: HDFS_WRITE, cliID: DFSClient_NONMAPREDUCE_342504113_1, offset: 0, srvID: 6b4236c8-2183-49ba-84d7-a273298ba37a, blockid: BP-216406264-127.0.0.1-1453767164845:blk_1073741826_1002, duration: 42378742
> 2016-01-25 21:14:10,221 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: PacketResponder: BP-216406264-127.0.0.1-1453767164845:blk_1073741826_1002, type=LAST_IN_PIPELINE, downstreams=0:[] terminating
> 2016-01-25 21:14:10,714 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: Receiving BP-216406264-127.0.0.1-1453767164845:blk_1073741827_1003 src: /192.168.0.10:58664 dest: /192.168.0.10:50010
> 2016-01-25 21:14:10,721 INFO org.apache.hadoop.hdfs.server.datanode.DataNode.clienttrace: src: /192.168.0.10:58664, dest: /192.168.0.10:50010, bytes: 121, op: HDFS_WRITE, cliID: DFSClient_NONMAPREDUCE_342504113_1, offset: 0, srvID: 6b4236c8-2183-49ba-84d7-a273298ba37a, blockid: BP-216406264-127.0.0.1-1453767164845:blk_1073741827_1003, duration: 2656758
> 2016-01-25 21:14:10,721 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: PacketResponder: BP-216406264-127.0.0.1-1453767164845:blk_1073741827_1003, type=LAST_IN_PIPELINE, downstreams=0:[] terminating
> 2016-01-25 21:14:10,853 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: Receiving BP-216406264-127.0.0.1-1453767164845:blk_1073741828_1004 src: /192.168.0.10:58665 dest: /192.168.0.10:50010
> 2016-01-25 21:14:10,860 INFO org.apache.hadoop.hdfs.server.datanode.DataNode.clienttrace: src: /192.168.0.10:58665, dest: /192.168.0.10:50010, bytes: 26, op: HDFS_WRITE, cliID: DFSClient_NONMAPREDUCE_342504113_1, offset: 0, srvID: 6b4236c8-2183-49ba-84d7-a273298ba37a, blockid: BP-216406264-127.0.0.1-1453767164845:blk_1073741828_1004, duration: 3257396
> 2016-01-25 21:14:10,861 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: PacketResponder: BP-216406264-127.0.0.1-1453767164845:blk_1073741828_1004, type=LAST_IN_PIPELINE, downstreams=0:[] terminating
> 2016-01-25 21:14:11,717 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: Receiving BP-216406264-127.0.0.1-1453767164845:blk_1073741829_1005 src: /192.168.0.10:58666 dest: /192.168.0.10:50010
> 2016-01-25 21:14:11,726 INFO org.apache.hadoop.hdfs.server.datanode.DataNode.clienttrace: src: /192.168.0.10:58666, dest: /192.168.0.10:50010, bytes: 77957, op: HDFS_WRITE, cliID: DFSClient_NONMAPREDUCE_342504113_1, offset: 0, srvID: 6b4236c8-2183-49ba-84d7-a273298ba37a, blockid: BP-216406264-127.0.0.1-1453767164845:blk_1073741829_1005, duration: 6180229
> 2016-01-25 21:14:11,727 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: PacketResponder: BP-216406264-127.0.0.1-1453767164845:blk_1073741829_1005, type=LAST_IN_PIPELINE, downstreams=0:[] terminating
> 2016-01-25 21:14:14,298 INFO org.apache.hadoop.hdfs.server.datanode.BlockPoolSliceScanner: Verification succeeded for BP-216406264-127.0.0.1-1453767164845:blk_1073741829_1005
> 2016-01-25 21:14:14,299 INFO org.apache.hadoop.hdfs.server.datanode.BlockPoolSliceScanner: Verification succeeded for BP-216406264-127.0.0.1-1453767164845:blk_1073741827_1003
> 2016-01-25 21:14:14,305 INFO org.apache.hadoop.hdfs.server.datanode.BlockPoolSliceScanner: Verification succeeded for BP-216406264-127.0.0.1-1453767164845:blk_1073741826_1002
> 2016-01-25 21:14:14,305 INFO org.apache.hadoop.hdfs.server.datanode.BlockPoolSliceScanner: Verification succeeded for BP-216406264-127.0.0.1-1453767164845:blk_1073741828_1004
> 2016-01-25 21:14:16,099 INFO org.apache.hadoop.hdfs.server.datanode.DataNode.clienttrace: src: /192.168.0.10:50010, dest: /192.168.0.10:58674, bytes: 272375, op: HDFS_READ, cliID: DFSClient_NONMAPREDUCE_76231625_102, offset: 0, srvID: 6b4236c8-2183-49ba-84d7-a273298ba37a, blockid: BP-216406264-127.0.0.1-1453767164845:blk_1073741826_1002, duration: 2878920
> 2016-01-25 21:14:16,253 INFO org.apache.hadoop.hdfs.server.datanode.DataNode.clienttrace: src: /192.168.0.10:50010, dest: /192.168.0.10:58674, bytes: 30, op: HDFS_READ, cliID: DFSClient_NONMAPREDUCE_76231625_102, offset: 0, srvID: 6b4236c8-2183-49ba-84d7-a273298ba37a, blockid: BP-216406264-127.0.0.1-1453767164845:blk_1073741828_1004, duration: 236423
> 2016-01-25 21:14:16,312 INFO org.apache.hadoop.hdfs.server.datanode.DataNode.clienttrace: src: /192.168.0.10:50010, dest: /192.168.0.10:58674, bytes: 125, op: HDFS_READ, cliID: DFSClient_NONMAPREDUCE_76231625_102, offset: 0, srvID: 6b4236c8-2183-49ba-84d7-a273298ba37a, blockid: BP-216406264-127.0.0.1-1453767164845:blk_1073741827_1003, duration: 909236
> 2016-01-25 21:14:16,364 INFO org.apache.hadoop.hdfs.server.datanode.DataNode.clienttrace: src: /192.168.0.10:50010, dest: /192.168.0.10:58674, bytes: 78569, op: HDFS_READ, cliID: DFSClient_NONMAPREDUCE_76231625_102, offset: 0, srvID: 6b4236c8-2183-49ba-84d7-a273298ba37a, blockid: BP-216406264-127.0.0.1-1453767164845:blk_1073741829_1005, duration: 1489437
> 2016-01-25 21:14:20,174 INFO org.apache.hadoop.hdfs.server.datanode.DataNode.clienttrace: src: /192.168.0.10:50010, dest: /192.168.0.10:58676, bytes: 30, op: HDFS_READ, cliID: DFSClient_NONMAPREDUCE_694066886_1, offset: 0, srvID: 6b4236c8-2183-49ba-84d7-a273298ba37a, blockid: BP-216406264-127.0.0.1-1453767164845:blk_1073741828_1004, duration: 899980
> 2016-01-25 21:14:22,692 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: Receiving BP-216406264-127.0.0.1-1453767164845:blk_1073741830_1006 src: /192.168.0.10:58679 dest: /192.168.0.10:50010
> 2016-01-25 21:14:22,754 INFO org.apache.hadoop.hdfs.server.datanode.DataNode.clienttrace: src: /192.168.0.10:58679, dest: /192.168.0.10:50010, bytes: 92684, op: HDFS_WRITE, cliID: DFSClient_NONMAPREDUCE_694066886_1, offset: 0, srvID: 6b4236c8-2183-49ba-84d7-a273298ba37a, blockid: BP-216406264-127.0.0.1-1453767164845:blk_1073741830_1006, duration: 60114851
> 2016-01-25 21:14:22,754 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: PacketResponder: BP-216406264-127.0.0.1-1453767164845:blk_1073741830_1006, type=LAST_IN_PIPELINE, downstreams=0:[] terminating
> 2016-01-25 21:14:24,319 INFO org.apache.hadoop.hdfs.server.datanode.BlockPoolSliceScanner: Verification succeeded for BP-216406264-127.0.0.1-1453767164845:blk_1073741830_1006
> 2016-01-25 21:14:25,808 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: Receiving BP-216406264-127.0.0.1-1453767164845:blk_1073741831_1007 src: /192.168.0.10:58681 dest: /192.168.0.10:50010
> 2016-01-25 21:14:35,846 INFO org.apache.hadoop.hdfs.server.datanode.DataNode.clienttrace: src: /192.168.0.10:58681, dest: /192.168.0.10:50010, bytes: 21176, op: HDFS_WRITE, cliID: DFSClient_NONMAPREDUCE_694066886_1, offset: 0, srvID: 6b4236c8-2183-49ba-84d7-a273298ba37a, blockid: BP-216406264-127.0.0.1-1453767164845:blk_1073741831_1007, duration: 9975409048
> 2016-01-25 21:14:35,846 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: PacketResponder: BP-216406264-127.0.0.1-1453767164845:blk_1073741831_1007, type=LAST_IN_PIPELINE, downstreams=0:[] terminating
> 2016-01-25 21:14:36,066 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: Receiving BP-216406264-127.0.0.1-1453767164845:blk_1073741832_1008 src: /192.168.0.10:58682 dest: /192.168.0.10:50010
> 2016-01-25 21:14:36,075 INFO org.apache.hadoop.hdfs.server.datanode.DataNode.clienttrace: src: /192.168.0.10:58682, dest: /192.168.0.10:50010, bytes: 332, op: HDFS_WRITE, cliID: DFSClient_NONMAPREDUCE_694066886_1, offset: 0, srvID: 6b4236c8-2183-49ba-84d7-a273298ba37a, blockid: BP-216406264-127.0.0.1-1453767164845:blk_1073741832_1008, duration: 4992595
> 2016-01-25 21:14:36,075 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: PacketResponder: BP-216406264-127.0.0.1-1453767164845:blk_1073741832_1008, type=LAST_IN_PIPELINE, downstreams=0:[] terminating
> 2016-01-25 21:14:36,548 INFO org.apache.hadoop.hdfs.server.datanode.DataNode.clienttrace: src: /192.168.0.10:50010, dest: /192.168.0.10:58683, bytes: 21344, op: HDFS_READ, cliID: DFSClient_NONMAPREDUCE_694066886_1, offset: 0, srvID: 6b4236c8-2183-49ba-84d7-a273298ba37a, blockid: BP-216406264-127.0.0.1-1453767164845:blk_1073741831_1007, duration: 497225
> 2016-01-25 21:14:36,564 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: Receiving BP-216406264-127.0.0.1-1453767164845:blk_1073741833_1009 src: /192.168.0.10:58684 dest: /192.168.0.10:50010
> 2016-01-25 21:14:36,572 INFO org.apache.hadoop.hdfs.server.datanode.DataNode.clienttrace: src: /192.168.0.10:58684, dest: /192.168.0.10:50010, bytes: 21176, op: HDFS_WRITE, cliID: DFSClient_NONMAPREDUCE_694066886_1, offset: 0, srvID: 6b4236c8-2183-49ba-84d7-a273298ba37a, blockid: BP-216406264-127.0.0.1-1453767164845:blk_1073741833_1009, duration: 2649337
> 2016-01-25 21:14:36,573 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: PacketResponder: BP-216406264-127.0.0.1-1453767164845:blk_1073741833_1009, type=LAST_IN_PIPELINE, downstreams=0:[] terminating
> 2016-01-25 21:14:36,622 INFO org.apache.hadoop.hdfs.server.datanode.DataNode.clienttrace: src: /192.168.0.10:50010, dest: /192.168.0.10:58683, bytes: 93412, op: HDFS_READ, cliID: DFSClient_NONMAPREDUCE_694066886_1, offset: 0, srvID: 6b4236c8-2183-49ba-84d7-a273298ba37a, blockid: BP-216406264-127.0.0.1-1453767164845:blk_1073741830_1006, duration: 379439
> 2016-01-25 21:14:36,638 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: Receiving BP-216406264-127.0.0.1-1453767164845:blk_1073741834_1010 src: /192.168.0.10:58685 dest: /192.168.0.10:50010
> 2016-01-25 21:14:36,646 INFO org.apache.hadoop.hdfs.server.datanode.DataNode.clienttrace: src: /192.168.0.10:58685, dest: /192.168.0.10:50010, bytes: 92684, op: HDFS_WRITE, cliID: DFSClient_NONMAPREDUCE_694066886_1, offset: 0, srvID: 6b4236c8-2183-49ba-84d7-a273298ba37a, blockid: BP-216406264-127.0.0.1-1453767164845:blk_1073741834_1010, duration: 3135698
> 2016-01-25 21:14:36,646 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: PacketResponder: BP-216406264-127.0.0.1-1453767164845:blk_1073741834_1010, type=LAST_IN_PIPELINE, downstreams=0:[] terminating
> 2016-01-25 21:14:39,335 INFO org.apache.hadoop.hdfs.server.datanode.BlockPoolSliceScanner: Verification succeeded for BP-216406264-127.0.0.1-1453767164845:blk_1073741833_1009
> 2016-01-25 21:14:39,336 INFO org.apache.hadoop.hdfs.server.datanode.BlockPoolSliceScanner: Verification succeeded for BP-216406264-127.0.0.1-1453767164845:blk_1073741831_1007
> 2016-01-25 21:14:39,337 INFO org.apache.hadoop.hdfs.server.datanode.BlockPoolSliceScanner: Verification succeeded for BP-216406264-127.0.0.1-1453767164845:blk_1073741832_1008
> 2016-01-25 21:14:39,338 INFO org.apache.hadoop.hdfs.server.datanode.BlockPoolSliceScanner: Verification succeeded for BP-216406264-127.0.0.1-1453767164845:blk_1073741834_1010
> 2016-01-25 21:14:39,376 INFO org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsDatasetAsyncDiskService: Scheduling blk_1073741826_1002 file /usr/local/hadoop/dfs/name/data/current/BP-216406264-127.0.0.1-1453767164845/current/finalized/blk_1073741826 for deletion
> 2016-01-25 21:14:39,379 INFO org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsDatasetAsyncDiskService: Scheduling blk_1073741827_1003 file /usr/local/hadoop/dfs/name/data/current/BP-216406264-127.0.0.1-1453767164845/current/finalized/blk_1073741827 for deletion
> 2016-01-25 21:14:39,379 INFO org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsDatasetAsyncDiskService: Deleted BP-216406264-127.0.0.1-1453767164845 blk_1073741826_1002 file /usr/local/hadoop/dfs/name/data/current/BP-216406264-127.0.0.1-1453767164845/current/finalized/blk_1073741826
> 2016-01-25 21:14:39,379 INFO org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsDatasetAsyncDiskService: Scheduling blk_1073741828_1004 file /usr/local/hadoop/dfs/name/data/current/BP-216406264-127.0.0.1-1453767164845/current/finalized/blk_1073741828 for deletion
> 2016-01-25 21:14:39,380 INFO org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsDatasetAsyncDiskService: Deleted BP-216406264-127.0.0.1-1453767164845 blk_1073741827_1003 file /usr/local/hadoop/dfs/name/data/current/BP-216406264-127.0.0.1-1453767164845/current/finalized/blk_1073741827
> 2016-01-25 21:14:39,380 INFO org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsDatasetAsyncDiskService: Scheduling blk_1073741829_1005 file /usr/local/hadoop/dfs/name/data/current/BP-216406264-127.0.0.1-1453767164845/current/finalized/blk_1073741829 for deletion
> 2016-01-25 21:14:39,380 INFO org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsDatasetAsyncDiskService: Deleted BP-216406264-127.0.0.1-1453767164845 blk_1073741828_1004 file /usr/local/hadoop/dfs/name/data/current/BP-216406264-127.0.0.1-1453767164845/current/finalized/blk_1073741828
> 2016-01-25 21:14:39,380 INFO org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsDatasetAsyncDiskService: Scheduling blk_1073741830_1006 file /usr/local/hadoop/dfs/name/data/current/BP-216406264-127.0.0.1-1453767164845/current/finalized/blk_1073741830 for deletion
> 2016-01-25 21:14:39,381 INFO org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsDatasetAsyncDiskService: Deleted BP-216406264-127.0.0.1-1453767164845 blk_1073741829_1005 file /usr/local/hadoop/dfs/name/data/current/BP-216406264-127.0.0.1-1453767164845/current/finalized/blk_1073741829
> 2016-01-25 21:14:39,381 INFO org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsDatasetAsyncDiskService: Scheduling blk_1073741831_1007 file /usr/local/hadoop/dfs/name/data/current/BP-216406264-127.0.0.1-1453767164845/current/finalized/blk_1073741831 for deletion
> 2016-01-25 21:14:39,382 INFO org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsDatasetAsyncDiskService: Deleted BP-216406264-127.0.0.1-1453767164845 blk_1073741830_1006 file /usr/local/hadoop/dfs/name/data/current/BP-216406264-127.0.0.1-1453767164845/current/finalized/blk_1073741830
> 2016-01-25 21:14:39,382 INFO org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsDatasetAsyncDiskService: Deleted BP-216406264-127.0.0.1-1453767164845 blk_1073741831_1007 file /usr/local/hadoop/dfs/name/data/current/BP-216406264-127.0.0.1-1453767164845/current/finalized/blk_1073741831
> 2016-01-25 21:14:44,797 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: Receiving BP-216406264-127.0.0.1-1453767164845:blk_1073741835_1011 src: /192.168.0.10:58688 dest: /192.168.0.10:50010
> 2016-01-25 21:14:44,834 INFO org.apache.hadoop.hdfs.server.datanode.DataNode.clienttrace: src: /192.168.0.10:58688, dest: /192.168.0.10:50010, bytes: 57450, op: HDFS_WRITE, cliID: DFSClient_NONMAPREDUCE_-433405715_88, offset: 0, srvID: 6b4236c8-2183-49ba-84d7-a273298ba37a, blockid: BP-216406264-127.0.0.1-1453767164845:blk_1073741835_1011, duration: 34522284
> 2016-01-25 21:14:44,834 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: PacketResponder: BP-216406264-127.0.0.1-1453767164845:blk_1073741835_1011, type=LAST_IN_PIPELINE, downstreams=0:[] terminating
> 2016-01-25 21:14:49,343 INFO org.apache.hadoop.hdfs.server.datanode.BlockPoolSliceScanner: Verification succeeded for BP-216406264-127.0.0.1-1453767164845:blk_1073741835_1011
> 2016-01-25 21:16:33,785 INFO org.apache.hadoop.hdfs.server.datanode.DataNode.clienttrace: src: /192.168.0.10:50010, dest: /192.168.0.10:58694, bytes: 336, op: HDFS_READ, cliID: DFSClient_NONMAPREDUCE_-1832227986_1, offset: 0, srvID: 6b4236c8-2183-49ba-84d7-a273298ba37a, blockid: BP-216406264-127.0.0.1-1453767164845:blk_1073741832_1008, duration: 284719
> 2016-01-25 21:16:36,371 INFO org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsDatasetAsyncDiskService: Scheduling blk_1073741832_1008 file /usr/local/hadoop/dfs/name/data/current/BP-216406264-127.0.0.1-1453767164845/current/finalized/blk_1073741832 for deletion
> 2016-01-25 21:16:36,372 INFO org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsDatasetAsyncDiskService: Deleted BP-216406264-127.0.0.1-1453767164845 blk_1073741832_1008 file /usr/local/hadoop/dfs/name/data/current/BP-216406264-127.0.0.1-1453767164845/current/finalized/blk_1073741832
>
>
>
>
> 2016-01-21 18:52 GMT-03:00 Namikaze Minato <ll...@gmail.com>:
>
>> It could be a classpath issue (see
>> http://stackoverflow.com/a/25090151/4486184); I strongly suspect
>> this is the case.
>>
>> You could drill down to the exact root cause by looking at the
>> datanode logs (see
>>
>> http://mail-archives.apache.org/mod_mbox/hadoop-user/201410.mbox/%3CCAEMetGubzq12LXbLRk6N4ejOoKse9dLEWMW8_WE6aRj=+RQtVw@mail.gmail.com%3E
>> )
>> But I'm not sure we would get a different error from what we already have...
>>
>> Check if your application has the correct values for the following
>> variables:
>> HADOOP_CONF_DIR
>> HADOOP_COMMON_HOME
>> HADOOP_HDFS_HOME
>> HADOOP_MAPRED_HOME
>> HADOOP_YARN_HOME
>>
>> I'm afraid I can't help you much more than this myself, sorry...
>>
>> LLoyd
>>
>> On 20 January 2016 at 02:08, José Luis Larroque <la...@gmail.com>
>> wrote:
>> > Hi guys, thanks for your answers.
>> >
>> > Wordcount logs:
>> >
>> > 16/01/19 21:58:32 INFO client.RMProxy: Connecting to ResourceManager at
>> > hdnode01/192.168.0.10:8050
>> > SLF4J: Class path contains multiple SLF4J bindings.
>> > SLF4J: Found binding in
>> >
>> [jar:file:/usr/local/hadoop/share/hadoop/common/lib/slf4j-log4j12-1.7.5.jar!/org/slf4j/impl/StaticLoggerBinder.class]
>> > SLF4J: Found binding in
>> >
>> [jar:file:/usr/local/hadoop/share/hadoop/yarn/lib/giraph-examples-1.1.0-for-hadoop-2.4.0-jar-with-dependencies.jar!/org/slf4j/impl/StaticLoggerBinder.class]
>> > SLF4J: Found binding in
>> >
>> [jar:file:/usr/local/hadoop/share/hadoop/mapreduce/giraph-examples-1.1.0-for-hadoop-2.4.0-jar-with-dependencies.jar!/org/slf4j/impl/StaticLoggerBinder.class]
>> > SLF4J: See http://www.slf4j.org/codes.html#multiple_bindings for an
>> > explanation.
>> > SLF4J: Actual binding is of type [org.slf4j.impl.Log4jLoggerFactory]
>> > 16/01/19 21:58:32 WARN util.NativeCodeLoader: Unable to load
>> native-hadoop
>> > library for your platform... using builtin-java classes where applicable
>> > hduser@jose-ubuntu:/usr/local/hadoop$ nano
>> >
>> /home/hduser/Desktop/Tesina/casos_de_prueba/resultados/resultado_cluster_modo_yarn_wordcount
>> > hduser@jose-ubuntu:/usr/local/hadoop$ nano
>> >
>> /home/hduser/Desktop/Tesina/casos_de_prueba/resultados/resultado_cluster_modo_yarn_wordcount
>> > hduser@jose-ubuntu:/usr/local/hadoop$ cat
>> >
>> /home/hduser/Desktop/Tesina/casos_de_prueba/resultados/resultado_cluster_modo_yarn_wordcount
>> >
>> >
>> > Container: container_1453244277886_0001_01_000002 on localhost_35711
>> > ======================================================================
>> > LogType: stderr
>> > LogLength: 45
>> > Log Contents:
>> > Error: Could not find or load main class 256
>> >
>> > LogType: stdout
>> > LogLength: 0
>> > Log Contents:
>> >
>> >
>> >
>> > Container: container_1453244277886_0001_01_000003 on localhost_35711
>> > ======================================================================
>> > LogType: stderr
>> > LogLength: 45
>> > Log Contents:
>> > Error: Could not find or load main class 256
>> >
>> > LogType: stdout
>> > LogLength: 0
>> > Log Contents:
>> >
>> >
>> >
>> > Container: container_1453244277886_0001_01_000004 on localhost_35711
>> > ======================================================================
>> > LogType: stderr
>> > LogLength: 45
>> > Log Contents:
>> > Error: Could not find or load main class 256
>> >
>> > LogType: stdout
>> > LogLength: 0
>> > Log Contents:
>> >
>> >
>> >
>> > Container: container_1453244277886_0001_01_000005 on localhost_35711
>> > ======================================================================
>> > LogType: stderr
>> > LogLength: 45
>> > Log Contents:
>> > Error: Could not find or load main class 256
>> >
>> > LogType: stdout
>> > LogLength: 0
>> > Log Contents:
>> >
>> >
>> >
>> > Container: container_1453244277886_0001_01_000001 on localhost_35711
>> > ======================================================================
>> > LogType: stderr
>> > LogLength: 929
>> > Log Contents:
>> > SLF4J: Class path contains multiple SLF4J bindings.
>> > SLF4J: Found binding in
>> >
>> [jar:file:/usr/local/hadoop/share/hadoop/common/lib/slf4j-log4j12-1.7.5.jar!/org/slf4j/impl/StaticLoggerBinder.class]
>> > SLF4J: Found binding in
>> >
>> [jar:file:/usr/local/hadoop/share/hadoop/yarn/lib/giraph-examples-1.1.0-for-hadoop-2.4.0-jar-with-dependencies.jar!/org/slf4j/impl/StaticLoggerBinder.class]
>> > SLF4J: Found binding in
>> >
>> [jar:file:/usr/local/hadoop/share/hadoop/mapreduce/giraph-examples-1.1.0-for-hadoop-2.4.0-jar-with-dependencies.jar!/org/slf4j/impl/StaticLoggerBinder.class]
>> > SLF4J: See http://www.slf4j.org/codes.html#multiple_bindings for an
>> > explanation.
>> > SLF4J: Actual binding is of type [org.slf4j.impl.Log4jLoggerFactory]
>> > log4j:WARN No appenders could be found for logger
>> > (org.apache.hadoop.ipc.Server).
>> > log4j:WARN Please initialize the log4j system properly.
>> > log4j:WARN See http://logging.apache.org/log4j/1.2/faq.html#noconfig
>> for
>> > more info.
>> >
>> > LogType: stdout
>> > LogLength: 0
>> > Log Contents:
>> >
>> > LogType: syslog
>> > LogLength: 56780
>> > Log Contents:
>> > 2016-01-19 20:04:11,329 INFO [main]
>> > org.apache.hadoop.mapreduce.v2.app.MRAppMaster: Created MRAppMaster for
>> > application appattempt_1453244277886_0001_000001
>> > 2016-01-19 20:04:11,657 WARN [main]
>> org.apache.hadoop.conf.Configuration:
>> > job.xml:an attempt to override final parameter:
>> > mapreduce.job.end-notification.max.retry.interval;  Ignoring.
>> > 2016-01-19 20:04:11,674 WARN [main]
>> org.apache.hadoop.conf.Configuration:
>> > job.xml:an attempt to override final parameter:
>> > mapreduce.job.end-notification.max.attempts;  Ignoring.
>> > 2016-01-19 20:04:11,765 WARN [main]
>> org.apache.hadoop.util.NativeCodeLoader:
>> > Unable to load native-hadoop library for your platform... using
>> builtin-java
>> > classes where applicable
>> > 2016-01-19 20:04:11,776 INFO [main]
>> > org.apache.hadoop.mapreduce.v2.app.MRAppMaster: Executing with tokens:
>> > 2016-01-19 20:04:11,776 INFO [main]
>> > org.apache.hadoop.mapreduce.v2.app.MRAppMaster: Kind: YARN_AM_RM_TOKEN,
>> > Service: , Ident:
>> > (org.apache.hadoop.yarn.security.AMRMTokenIdentifier@73e8f4b9)
>> > 2016-01-19 20:04:11,801 INFO [main]
>> > org.apache.hadoop.mapreduce.v2.app.MRAppMaster: The specific max
>> attempts: 2
>> > for application: 1. Attempt num: 1 is last retry: false
>> > 2016-01-19 20:04:11,806 INFO [main]
>> > org.apache.hadoop.mapreduce.v2.app.MRAppMaster: Using mapred
>> > newApiCommitter.
>> > 2016-01-19 20:04:11,934 WARN [main]
>> org.apache.hadoop.conf.Configuration:
>> > job.xml:an attempt to override final parameter: dfs.datanode.data.dir;
>> > Ignoring.
>> > 2016-01-19 20:04:11,939 WARN [main]
>> org.apache.hadoop.conf.Configuration:
>> > job.xml:an attempt to override final parameter:
>> > mapreduce.job.end-notification.max.retry.interval;  Ignoring.
>> > 2016-01-19 20:04:11,948 WARN [main]
>> org.apache.hadoop.conf.Configuration:
>> > job.xml:an attempt to override final parameter: dfs.namenode.name.dir;
>> > Ignoring.
>> > 2016-01-19 20:04:11,953 WARN [main]
>> org.apache.hadoop.conf.Configuration:
>> > job.xml:an attempt to override final parameter:
>> > mapreduce.job.end-notification.max.attempts;  Ignoring.
>> > 2016-01-19 20:04:12,464 INFO [main]
>> > org.apache.hadoop.mapreduce.v2.app.MRAppMaster: OutputCommitter set in
>> > config null
>> > 2016-01-19 20:04:12,526 INFO [main]
>> > org.apache.hadoop.mapreduce.v2.app.MRAppMaster: OutputCommitter is
>> > org.apache.hadoop.mapreduce.lib.output.FileOutputCommitter
>> > 2016-01-19 20:04:12,548 INFO [main]
>> > org.apache.hadoop.yarn.event.AsyncDispatcher: Registering class
>> > org.apache.hadoop.mapreduce.jobhistory.EventType for class
>> > org.apache.hadoop.mapreduce.jobhistory.JobHistoryEventHandler
>> > 2016-01-19 20:04:12,549 INFO [main]
>> > org.apache.hadoop.yarn.event.AsyncDispatcher: Registering class
>> > org.apache.hadoop.mapreduce.v2.app.job.event.JobEventType for class
>> > org.apache.hadoop.mapreduce.v2.app.MRAppMaster$JobEventDispatcher
>> > 2016-01-19 20:04:12,550 INFO [main]
>> > org.apache.hadoop.yarn.event.AsyncDispatcher: Registering class
>> > org.apache.hadoop.mapreduce.v2.app.job.event.TaskEventType for class
>> > org.apache.hadoop.mapreduce.v2.app.MRAppMaster$TaskEventDispatcher
>> > 2016-01-19 20:04:12,551 INFO [main]
>> > org.apache.hadoop.yarn.event.AsyncDispatcher: Registering class
>> > org.apache.hadoop.mapreduce.v2.app.job.event.TaskAttemptEventType for
>> class
>> >
>> org.apache.hadoop.mapreduce.v2.app.MRAppMaster$TaskAttemptEventDispatcher
>> > 2016-01-19 20:04:12,552 INFO [main]
>> > org.apache.hadoop.yarn.event.AsyncDispatcher: Registering class
>> > org.apache.hadoop.mapreduce.v2.app.commit.CommitterEventType for class
>> > org.apache.hadoop.mapreduce.v2.app.commit.CommitterEventHandler
>> > 2016-01-19 20:04:12,557 INFO [main]
>> > org.apache.hadoop.yarn.event.AsyncDispatcher: Registering class
>> > org.apache.hadoop.mapreduce.v2.app.speculate.Speculator$EventType for
>> class
>> > org.apache.hadoop.mapreduce.v2.app.MRAppMaster$SpeculatorEventDispatcher
>> > 2016-01-19 20:04:12,558 INFO [main]
>> > org.apache.hadoop.yarn.event.AsyncDispatcher: Registering class
>> > org.apache.hadoop.mapreduce.v2.app.rm.ContainerAllocator$EventType for
>> class
>> > org.apache.hadoop.mapreduce.v2.app.MRAppMaster$ContainerAllocatorRouter
>> > 2016-01-19 20:04:12,559 INFO [main]
>> > org.apache.hadoop.yarn.event.AsyncDispatcher: Registering class
>> > org.apache.hadoop.mapreduce.v2.app.launcher.ContainerLauncher$EventType
>> for
>> > class
>> org.apache.hadoop.mapreduce.v2.app.MRAppMaster$ContainerLauncherRouter
>> > 2016-01-19 20:04:12,615 INFO [main]
>> > org.apache.hadoop.mapreduce.jobhistory.JobHistoryEventHandler: Perms
>> after
>> > creating 488, Expected: 504
>> > 2016-01-19 20:04:12,615 INFO [main]
>> > org.apache.hadoop.mapreduce.jobhistory.JobHistoryEventHandler:
>> Explicitly
>> > setting permissions to : 504, rwxrwx---
>> > 2016-01-19 20:04:12,731 INFO [main]
>> > org.apache.hadoop.yarn.event.AsyncDispatcher: Registering class
>> > org.apache.hadoop.mapreduce.v2.app.job.event.JobFinishEvent$Type for
>> class
>> > org.apache.hadoop.mapreduce.v2.app.MRAppMaster$JobFinishEventHandler
>> > 2016-01-19 20:04:12,956 INFO [main]
>> > org.apache.hadoop.metrics2.impl.MetricsConfig: loaded properties from
>> > hadoop-metrics2.properties
>> > 2016-01-19 20:04:13,018 INFO [main]
>> > org.apache.hadoop.metrics2.impl.MetricsSystemImpl: Scheduled snapshot
>> period
>> > at 10 second(s).
>> > 2016-01-19 20:04:13,018 INFO [main]
>> > org.apache.hadoop.metrics2.impl.MetricsSystemImpl: MRAppMaster metrics
>> > system started
>> > 2016-01-19 20:04:13,026 INFO [main]
>> > org.apache.hadoop.mapreduce.v2.app.job.impl.JobImpl: Adding job token
>> for
>> > job_1453244277886_0001 to jobTokenSecretManager
>> > 2016-01-19 20:04:13,139 INFO [main]
>> > org.apache.hadoop.mapreduce.v2.app.job.impl.JobImpl: Not uberizing
>> > job_1453244277886_0001 because: not enabled;
>> > 2016-01-19 20:04:13,154 INFO [main]
>> > org.apache.hadoop.mapreduce.v2.app.job.impl.JobImpl: Input size for job
>> > job_1453244277886_0001 = 343691. Number of splits = 1
>> > 2016-01-19 20:04:13,156 INFO [main]
>> > org.apache.hadoop.mapreduce.v2.app.job.impl.JobImpl: Number of reduces
>> for
>> > job job_1453244277886_0001 = 1
>> > 2016-01-19 20:04:13,156 INFO [main]
>> > org.apache.hadoop.mapreduce.v2.app.job.impl.JobImpl:
>> > job_1453244277886_0001Job Transitioned from NEW to INITED
>> > 2016-01-19 20:04:13,157 INFO [main]
>> > org.apache.hadoop.mapreduce.v2.app.MRAppMaster: MRAppMaster launching
>> > normal, non-uberized, multi-container job job_1453244277886_0001.
>> > 2016-01-19 20:04:13,186 INFO [main]
>> org.apache.hadoop.ipc.CallQueueManager:
>> > Using callQueue class java.util.concurrent.LinkedBlockingQueue
>> > 2016-01-19 20:04:13,195 INFO [Socket Reader #1 for port 56461]
>> > org.apache.hadoop.ipc.Server: Starting Socket Reader #1 for port 56461
>> > 2016-01-19 20:04:13,237 INFO [main]
>> > org.apache.hadoop.yarn.factories.impl.pb.RpcServerFactoryPBImpl: Adding
>> > protocol org.apache.hadoop.mapreduce.v2.api.MRClientProtocolPB to the
>> server
>> > 2016-01-19 20:04:13,238 INFO [IPC Server Responder]
>> > org.apache.hadoop.ipc.Server: IPC Server Responder: starting
>> > 2016-01-19 20:04:13,238 INFO [IPC Server listener on 56461]
>> > org.apache.hadoop.ipc.Server: IPC Server listener on 56461: starting
>> > 2016-01-19 20:04:13,239 INFO [main]
>> > org.apache.hadoop.mapreduce.v2.app.client.MRClientService: Instantiated
>> > MRClientService at jose-ubuntu/127.0.0.1:56461
>> > 2016-01-19 20:04:13,300 INFO [main] org.mortbay.log: Logging to
>> > org.slf4j.impl.Log4jLoggerAdapter(org.mortbay.log) via
>> > org.mortbay.log.Slf4jLog
>> > 2016-01-19 20:04:13,304 INFO [main]
>> org.apache.hadoop.http.HttpRequestLog:
>> > Http request log for http.requests.mapreduce is not defined
>> > 2016-01-19 20:04:13,315 INFO [main] org.apache.hadoop.http.HttpServer2:
>> > Added global filter 'safety'
>> > (class=org.apache.hadoop.http.HttpServer2$QuotingInputFilter)
>> > 2016-01-19 20:04:13,320 INFO [main] org.apache.hadoop.http.HttpServer2:
>> > Added filter AM_PROXY_FILTER
>> > (class=org.apache.hadoop.yarn.server.webproxy.amfilter.AmIpFilter) to
>> > context mapreduce
>> > 2016-01-19 20:04:13,321 INFO [main] org.apache.hadoop.http.HttpServer2:
>> > Added filter AM_PROXY_FILTER
>> > (class=org.apache.hadoop.yarn.server.webproxy.amfilter.AmIpFilter) to
>> > context static
>> > 2016-01-19 20:04:13,324 INFO [main] org.apache.hadoop.http.HttpServer2:
>> > adding path spec: /mapreduce/*
>> > 2016-01-19 20:04:13,324 INFO [main] org.apache.hadoop.http.HttpServer2:
>> > adding path spec: /ws/*
>> > 2016-01-19 20:04:13,335 INFO [main] org.apache.hadoop.http.HttpServer2:
>> > Jetty bound to port 44070
>> > 2016-01-19 20:04:13,335 INFO [main] org.mortbay.log: jetty-6.1.26
>> > 2016-01-19 20:04:13,370 INFO [main] org.mortbay.log: Extract
>> >
>> jar:file:/usr/local/hadoop/share/hadoop/yarn/hadoop-yarn-common-2.4.0.jar!/webapps/mapreduce
>> > to /tmp/Jetty_0_0_0_0_44070_mapreduce____rdpvio/webapp
>> > 2016-01-19 20:04:13,647 INFO [main] org.mortbay.log: Started
>> > SelectChannelConnector@0.0.0.0:44070
>> > 2016-01-19 20:04:13,647 INFO [main]
>> org.apache.hadoop.yarn.webapp.WebApps:
>> > Web app /mapreduce started at 44070
>> > 2016-01-19 20:04:13,956 INFO [main]
>> org.apache.hadoop.yarn.webapp.WebApps:
>> > Registered webapp guice modules
>> > 2016-01-19 20:04:13,960 INFO [AsyncDispatcher event handler]
>> > org.apache.hadoop.mapreduce.v2.app.speculate.DefaultSpeculator:
>> JOB_CREATE
>> > job_1453244277886_0001
>> > 2016-01-19 20:04:13,961 INFO [main]
>> org.apache.hadoop.ipc.CallQueueManager:
>> > Using callQueue class java.util.concurrent.LinkedBlockingQueue
>> > 2016-01-19 20:04:13,961 INFO [Socket Reader #1 for port 45584]
>> > org.apache.hadoop.ipc.Server: Starting Socket Reader #1 for port 45584
>> > 2016-01-19 20:04:13,966 INFO [IPC Server Responder]
>> > org.apache.hadoop.ipc.Server: IPC Server Responder: starting
>> > 2016-01-19 20:04:13,966 INFO [IPC Server listener on 45584]
>> > org.apache.hadoop.ipc.Server: IPC Server listener on 45584: starting
>> > 2016-01-19 20:04:13,987 INFO [main]
>> > org.apache.hadoop.mapreduce.v2.app.rm.RMContainerRequestor:
>> > nodeBlacklistingEnabled:true
>> > 2016-01-19 20:04:13,987 INFO [main]
>> > org.apache.hadoop.mapreduce.v2.app.rm.RMContainerRequestor:
>> > maxTaskFailuresPerNode is 3
>> > 2016-01-19 20:04:13,988 INFO [main]
>> > org.apache.hadoop.mapreduce.v2.app.rm.RMContainerRequestor:
>> > blacklistDisablePercent is 33
>> > 2016-01-19 20:04:14,052 WARN [main]
>> org.apache.hadoop.conf.Configuration:
>> > job.xml:an attempt to override final parameter: dfs.datanode.data.dir;
>> > Ignoring.
>> > 2016-01-19 20:04:14,054 WARN [main]
>> org.apache.hadoop.conf.Configuration:
>> > job.xml:an attempt to override final parameter:
>> > mapreduce.job.end-notification.max.retry.interval;  Ignoring.
>> > 2016-01-19 20:04:14,057 WARN [main]
>> org.apache.hadoop.conf.Configuration:
>> > job.xml:an attempt to override final parameter: dfs.namenode.name.dir;
>> > Ignoring.
>> > 2016-01-19 20:04:14,059 WARN [main]
>> org.apache.hadoop.conf.Configuration:
>> > job.xml:an attempt to override final parameter:
>> > mapreduce.job.end-notification.max.attempts;  Ignoring.
>> > 2016-01-19 20:04:14,062 INFO [main]
>> org.apache.hadoop.yarn.client.RMProxy:
>> > Connecting to ResourceManager at hdnode01/192.168.0.10:8030
>> > 2016-01-19 20:04:14,158 INFO [main]
>> > org.apache.hadoop.mapreduce.v2.app.rm.RMContainerAllocator:
>> > maxContainerCapability: 2000
>> > 2016-01-19 20:04:14,158 INFO [main]
>> > org.apache.hadoop.mapreduce.v2.app.rm.RMContainerAllocator: queue:
>> default
>> > 2016-01-19 20:04:14,162 INFO [main]
>> > org.apache.hadoop.mapreduce.v2.app.launcher.ContainerLauncherImpl: Upper
>> > limit on the thread pool size is 500
>> > 2016-01-19 20:04:14,164 INFO [main]
>> > org.apache.hadoop.yarn.client.api.impl.ContainerManagementProtocolProxy:
>> > yarn.client.max-nodemanagers-proxies : 500
>> > 2016-01-19 20:04:14,172 INFO [AsyncDispatcher event handler]
>> > org.apache.hadoop.mapreduce.v2.app.job.impl.JobImpl:
>> > job_1453244277886_0001Job Transitioned from INITED to SETUP
>> > 2016-01-19 20:04:14,174 INFO [CommitterEvent Processor #0]
>> > org.apache.hadoop.mapreduce.v2.app.commit.CommitterEventHandler:
>> Processing
>> > the event EventType: JOB_SETUP
>> > 2016-01-19 20:04:14,210 INFO [AsyncDispatcher event handler]
>> > org.apache.hadoop.mapreduce.v2.app.job.impl.JobImpl:
>> > job_1453244277886_0001Job Transitioned from SETUP to RUNNING
>> > 2016-01-19 20:04:14,227 INFO [AsyncDispatcher event handler]
>> > org.apache.hadoop.yarn.util.RackResolver: Resolved localhost to
>> > /default-rack
>> > 2016-01-19 20:04:14,230 INFO [AsyncDispatcher event handler]
>> > org.apache.hadoop.mapreduce.v2.app.job.impl.TaskImpl:
>> > task_1453244277886_0001_m_000000 Task Transitioned from NEW to SCHEDULED
>> > 2016-01-19 20:04:14,230 INFO [AsyncDispatcher event handler]
>> > org.apache.hadoop.mapreduce.v2.app.job.impl.TaskImpl:
>> > task_1453244277886_0001_r_000000 Task Transitioned from NEW to SCHEDULED
>> > 2016-01-19 20:04:14,232 INFO [AsyncDispatcher event handler]
>> > org.apache.hadoop.mapreduce.v2.app.job.impl.TaskAttemptImpl:
>> > attempt_1453244277886_0001_m_000000_0 TaskAttempt Transitioned from NEW
>> to
>> > UNASSIGNED
>> > 2016-01-19 20:04:14,232 INFO [AsyncDispatcher event handler]
>> > org.apache.hadoop.mapreduce.v2.app.job.impl.TaskAttemptImpl:
>> > attempt_1453244277886_0001_r_000000_0 TaskAttempt Transitioned from NEW
>> to
>> > UNASSIGNED
>> > 2016-01-19 20:04:14,233 INFO [Thread-50]
>> > org.apache.hadoop.mapreduce.v2.app.rm.RMContainerAllocator:
>> > mapResourceReqt:512
>> > 2016-01-19 20:04:14,245 INFO [Thread-50]
>> > org.apache.hadoop.mapreduce.v2.app.rm.RMContainerAllocator:
>> > reduceResourceReqt:512
>> > 2016-01-19 20:04:14,324 INFO [eventHandlingThread]
>> > org.apache.hadoop.mapreduce.jobhistory.JobHistoryEventHandler: Event
>> Writer
>> > setup for JobId: job_1453244277886_0001, File:
>> >
>> hdfs://hdnode01:54310/tmp/hadoop-yarn/staging/hduser/.staging/job_1453244277886_0001/job_1453244277886_0001_1.jhist
>> > 2016-01-19 20:04:15,162 INFO [RMCommunicator Allocator]
>> > org.apache.hadoop.mapreduce.v2.app.rm.RMContainerAllocator: Before
>> > Scheduling: PendingReds:1 ScheduledMaps:1 ScheduledReds:0 AssignedMaps:0
>> > AssignedReds:0 CompletedMaps:0 CompletedReds:0 ContAlloc:0 ContRel:0
>> > HostLocal:0 RackLocal:0
>> > 2016-01-19 20:04:15,217 INFO [RMCommunicator Allocator]
>> > org.apache.hadoop.mapreduce.v2.app.rm.RMContainerRequestor:
>> getResources()
>> > for application_1453244277886_0001: ask=3 release= 0 newContainers=0
>> > finishedContainers=0 resourcelimit=<memory:1280, vCores:0> knownNMs=1
>> > 2016-01-19 20:04:15,217 INFO [RMCommunicator Allocator]
>> > org.apache.hadoop.mapreduce.v2.app.rm.RMContainerAllocator:
>> Recalculating
>> > schedule, headroom=1280
>> > 2016-01-19 20:04:15,217 INFO [RMCommunicator Allocator]
>> > org.apache.hadoop.mapreduce.v2.app.rm.RMContainerAllocator: Reduce slow
>> > start threshold not met. completedMapsForReduceSlowstart 1
>> > 2016-01-19 20:04:16,240 INFO [RMCommunicator Allocator]
>> > org.apache.hadoop.mapreduce.v2.app.rm.RMContainerAllocator: Got
>> allocated
>> > containers 1
>> > 2016-01-19 20:04:16,241 INFO [RMCommunicator Allocator]
>> > org.apache.hadoop.mapreduce.v2.app.rm.RMContainerAllocator: Assigned
>> > container container_1453244277886_0001_01_000002 to
>> > attempt_1453244277886_0001_m_000000_0
>> > 2016-01-19 20:04:16,243 INFO [RMCommunicator Allocator]
>> > org.apache.hadoop.mapreduce.v2.app.rm.RMContainerAllocator:
>> Recalculating
>> > schedule, headroom=768
>> > 2016-01-19 20:04:16,243 INFO [RMCommunicator Allocator]
>> > org.apache.hadoop.mapreduce.v2.app.rm.RMContainerAllocator: Reduce slow
>> > start threshold not met. completedMapsForReduceSlowstart 1
>> > 2016-01-19 20:04:16,243 INFO [RMCommunicator Allocator]
>> > org.apache.hadoop.mapreduce.v2.app.rm.RMContainerAllocator: After
>> > Scheduling: PendingReds:1 ScheduledMaps:0 ScheduledReds:0 AssignedMaps:1
>> > AssignedReds:0 CompletedMaps:0 CompletedReds:0 ContAlloc:1 ContRel:0
>> > HostLocal:1 RackLocal:0
>> > 2016-01-19 20:04:16,291 INFO [AsyncDispatcher event handler]
>> > org.apache.hadoop.yarn.util.RackResolver: Resolved localhost to
>> > /default-rack
>> > 2016-01-19 20:04:16,316 INFO [AsyncDispatcher event handler]
>> > org.apache.hadoop.mapreduce.v2.app.job.impl.TaskAttemptImpl: The job-jar
>> > file on the remote FS is
>> >
>> hdfs://hdnode01:54310/tmp/hadoop-yarn/staging/hduser/.staging/job_1453244277886_0001/job.jar
>> > 2016-01-19 20:04:16,322 INFO [AsyncDispatcher event handler]
>> > org.apache.hadoop.mapreduce.v2.app.job.impl.TaskAttemptImpl: The
>> job-conf
>> > file on the remote FS is
>> > /tmp/hadoop-yarn/staging/hduser/.staging/job_1453244277886_0001/job.xml
>> > 2016-01-19 20:04:16,325 INFO [AsyncDispatcher event handler]
>> > org.apache.hadoop.mapreduce.v2.app.job.impl.TaskAttemptImpl: Adding #0
>> > tokens and #1 secret keys for NM use for launching container
>> > 2016-01-19 20:04:16,325 INFO [AsyncDispatcher event handler]
>> > org.apache.hadoop.mapreduce.v2.app.job.impl.TaskAttemptImpl: Size of
>> > containertokens_dob is 1
>> > 2016-01-19 20:04:16,325 INFO [AsyncDispatcher event handler]
>> > org.apache.hadoop.mapreduce.v2.app.job.impl.TaskAttemptImpl: Putting
>> shuffle
>> > token in serviceData
>> > 2016-01-19 20:04:16,350 INFO [AsyncDispatcher event handler]
>> > org.apache.hadoop.mapreduce.v2.app.job.impl.TaskAttemptImpl:
>> > attempt_1453244277886_0001_m_000000_0 TaskAttempt Transitioned from
>> > UNASSIGNED to ASSIGNED
>> > 2016-01-19 20:04:16,354 INFO [ContainerLauncher #0]
>> > org.apache.hadoop.mapreduce.v2.app.launcher.ContainerLauncherImpl:
>> > Processing the event EventType: CONTAINER_REMOTE_LAUNCH for container
>> > container_1453244277886_0001_01_000002 taskAttempt
>> > attempt_1453244277886_0001_m_000000_0
>> > 2016-01-19 20:04:16,356 INFO [ContainerLauncher #0]
>> > org.apache.hadoop.mapreduce.v2.app.launcher.ContainerLauncherImpl:
>> Launching
>> > attempt_1453244277886_0001_m_000000_0
>> > 2016-01-19 20:04:16,357 INFO [ContainerLauncher #0]
>> > org.apache.hadoop.yarn.client.api.impl.ContainerManagementProtocolProxy:
>> > Opening proxy : localhost:35711
>> > 2016-01-19 20:04:16,411 INFO [ContainerLauncher #0]
>> > org.apache.hadoop.mapreduce.v2.app.launcher.ContainerLauncherImpl:
>> Shuffle
>> > port returned by ContainerManager for
>> attempt_1453244277886_0001_m_000000_0
>> > : 13562
>> > 2016-01-19 20:04:16,413 INFO [AsyncDispatcher event handler]
>> > org.apache.hadoop.mapreduce.v2.app.job.impl.TaskAttemptImpl:
>> TaskAttempt:
>> > [attempt_1453244277886_0001_m_000000_0] using containerId:
>> > [container_1453244277886_0001_01_000002 on NM: [localhost:35711]
>> > 2016-01-19 20:04:16,418 INFO [AsyncDispatcher event handler]
>> > org.apache.hadoop.mapreduce.v2.app.job.impl.TaskAttemptImpl:
>> > attempt_1453244277886_0001_m_000000_0 TaskAttempt Transitioned from
>> ASSIGNED
>> > to RUNNING
>> > 2016-01-19 20:04:16,419 INFO [AsyncDispatcher event handler]
>> > org.apache.hadoop.mapreduce.v2.app.speculate.DefaultSpeculator:
>> > ATTEMPT_START task_1453244277886_0001_m_000000
>> > 2016-01-19 20:04:16,419 INFO [AsyncDispatcher event handler]
>> > org.apache.hadoop.mapreduce.v2.app.job.impl.TaskImpl:
>> > task_1453244277886_0001_m_000000 Task Transitioned from SCHEDULED to
>> RUNNING
>> > 2016-01-19 20:04:17,251 INFO [RMCommunicator Allocator]
>> > org.apache.hadoop.mapreduce.v2.app.rm.RMContainerRequestor:
>> getResources()
>> > for application_1453244277886_0001: ask=3 release= 0 newContainers=0
>> > finishedContainers=0 resourcelimit=<memory:768, vCores:-1> knownNMs=1
>> > 2016-01-19 20:04:18,269 INFO [RMCommunicator Allocator]
>> > org.apache.hadoop.mapreduce.v2.app.rm.RMContainerAllocator: Received
>> > completed container container_1453244277886_0001_01_000002
>> > 2016-01-19 20:04:18,269 INFO [RMCommunicator Allocator]
>> > org.apache.hadoop.mapreduce.v2.app.rm.RMContainerAllocator:
>> Recalculating
>> > schedule, headroom=768
>> > 2016-01-19 20:04:18,269 INFO [RMCommunicator Allocator]
>> > org.apache.hadoop.mapreduce.v2.app.rm.RMContainerAllocator: Reduce slow
>> > start threshold not met. completedMapsForReduceSlowstart 1
>> > 2016-01-19 20:04:18,270 INFO [RMCommunicator Allocator]
>> > org.apache.hadoop.mapreduce.v2.app.rm.RMContainerAllocator: After
>> > Scheduling: PendingReds:1 ScheduledMaps:0 ScheduledReds:0 AssignedMaps:0
>> > AssignedReds:0 CompletedMaps:0 CompletedReds:0 ContAlloc:1 ContRel:0
>> > HostLocal:1 RackLocal:0
>> > 2016-01-19 20:04:18,280 INFO [AsyncDispatcher event handler]
>> > org.apache.hadoop.mapreduce.v2.app.job.impl.TaskAttemptImpl:
>> > attempt_1453244277886_0001_m_000000_0 TaskAttempt Transitioned from
>> RUNNING
>> > to FAIL_CONTAINER_CLEANUP
>> > 2016-01-19 20:04:18,280 INFO [AsyncDispatcher event handler]
>> > org.apache.hadoop.mapreduce.v2.app.job.impl.TaskAttemptImpl: Diagnostics
>> > report from attempt_1453244277886_0001_m_000000_0: Exception from
>> > container-launch: org.apache.hadoop.util.Shell$ExitCodeException:
>> > org.apache.hadoop.util.Shell$ExitCodeException:
>> >     at org.apache.hadoop.util.Shell.runCommand(Shell.java:505)
>> >     at org.apache.hadoop.util.Shell.run(Shell.java:418)
>> >     at
>> >
>> org.apache.hadoop.util.Shell$ShellCommandExecutor.execute(Shell.java:650)
>> >     at
>> >
>> org.apache.hadoop.yarn.server.nodemanager.DefaultContainerExecutor.launchContainer(DefaultContainerExecutor.java:195)
>> >     at
>> >
>> org.apache.hadoop.yarn.server.nodemanager.containermanager.launcher.ContainerLaunch.call(ContainerLaunch.java:300)
>> >     at
>> >
>> org.apache.hadoop.yarn.server.nodemanager.containermanager.launcher.ContainerLaunch.call(ContainerLaunch.java:81)
>> >     at java.util.concurrent.FutureTask.run(FutureTask.java:262)
>> >     at
>> >
>> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
>> >     at
>> >
>> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
>> >     at java.lang.Thread.run(Thread.java:745)
>> >
>> >
>> > Container exited with a non-zero exit code 1
>> >
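The diagnostics the AM records here only carry the generic "exit code 1"; the
actual cause (the "Could not find or load main class 256" stderr shown
further up) only shows in the per-container logs. Assuming log aggregation is
enabled (yarn.log-aggregation-enable set to true), those can be pulled in one
go after the application finishes with:

    $HADOOP_HOME/bin/yarn logs -applicationId application_1453244277886_0001

Otherwise the same stdout/stderr files sit on the node under the directories
configured in yarn.nodemanager.log-dirs.
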
>> > 2016-01-19 20:04:18,281 INFO [ContainerLauncher #1]
>> > org.apache.hadoop.mapreduce.v2.app.launcher.ContainerLauncherImpl:
>> > Processing the event EventType: CONTAINER_REMOTE_CLEANUP for container
>> > container_1453244277886_0001_01_000002 taskAttempt
>> > attempt_1453244277886_0001_m_000000_0
>> > 2016-01-19 20:04:18,281 INFO [ContainerLauncher #1]
>> > org.apache.hadoop.mapreduce.v2.app.launcher.ContainerLauncherImpl:
>> KILLING
>> > attempt_1453244277886_0001_m_000000_0
>> > 2016-01-19 20:04:18,299 INFO [AsyncDispatcher event handler]
>> > org.apache.hadoop.mapreduce.v2.app.job.impl.TaskAttemptImpl:
>> > attempt_1453244277886_0001_m_000000_0 TaskAttempt Transitioned from
>> > FAIL_CONTAINER_CLEANUP to FAIL_TASK_CLEANUP
>> > 2016-01-19 20:04:18,300 INFO [CommitterEvent Processor #1]
>> > org.apache.hadoop.mapreduce.v2.app.commit.CommitterEventHandler:
>> Processing
>> > the event EventType: TASK_ABORT
>> > 2016-01-19 20:04:18,313 WARN [CommitterEvent Processor #1]
>> > org.apache.hadoop.mapreduce.lib.output.FileOutputCommitter: Could not
>> delete
>> >
>> hdfs://hdnode01:54310/user/hduser/output/wordcount/_temporary/1/_temporary/attempt_1453244277886_0001_m_000000_0
>> > 2016-01-19 20:04:18,317 INFO [AsyncDispatcher event handler]
>> > org.apache.hadoop.mapreduce.v2.app.job.impl.TaskAttemptImpl:
>> > attempt_1453244277886_0001_m_000000_0 TaskAttempt Transitioned from
>> > FAIL_TASK_CLEANUP to FAILED
>> > 2016-01-19 20:04:18,326 INFO [AsyncDispatcher event handler]
>> > org.apache.hadoop.yarn.util.RackResolver: Resolved localhost to
>> > /default-rack
>> > 2016-01-19 20:04:18,327 INFO [Thread-50]
>> > org.apache.hadoop.mapreduce.v2.app.rm.RMContainerRequestor: 1 failures
>> on
>> > node localhost
>> > 2016-01-19 20:04:18,329 INFO [AsyncDispatcher event handler]
>> > org.apache.hadoop.mapreduce.v2.app.job.impl.TaskAttemptImpl:
>> > attempt_1453244277886_0001_m_000000_1 TaskAttempt Transitioned from NEW
>> to
>> > UNASSIGNED
>> > 2016-01-19 20:04:18,329 INFO [Thread-50]
>> > org.apache.hadoop.mapreduce.v2.app.rm.RMContainerAllocator: Added
>> > attempt_1453244277886_0001_m_000000_1 to list of failed maps
>> > 2016-01-19 20:04:19,270 INFO [RMCommunicator Allocator]
>> > org.apache.hadoop.mapreduce.v2.app.rm.RMContainerAllocator: Before
>> > Scheduling: PendingReds:1 ScheduledMaps:1 ScheduledReds:0 AssignedMaps:0
>> > AssignedReds:0 CompletedMaps:0 CompletedReds:0 ContAlloc:1 ContRel:0
>> > HostLocal:1 RackLocal:0
>> > 2016-01-19 20:04:19,277 INFO [RMCommunicator Allocator]
>> > org.apache.hadoop.mapreduce.v2.app.rm.RMContainerRequestor:
>> getResources()
>> > for application_1453244277886_0001: ask=1 release= 0 newContainers=0
>> > finishedContainers=0 resourcelimit=<memory:768, vCores:-1> knownNMs=1
>> > 2016-01-19 20:04:19,277 INFO [RMCommunicator Allocator]
>> > org.apache.hadoop.mapreduce.v2.app.rm.RMContainerAllocator:
>> Recalculating
>> > schedule, headroom=768
>> > 2016-01-19 20:04:19,278 INFO [RMCommunicator Allocator]
>> > org.apache.hadoop.mapreduce.v2.app.rm.RMContainerAllocator: Reduce slow
>> > start threshold not met. completedMapsForReduceSlowstart 1
>> > 2016-01-19 20:04:20,285 INFO [RMCommunicator Allocator]
>> > org.apache.hadoop.mapreduce.v2.app.rm.RMContainerAllocator: Got
>> allocated
>> > containers 1
>> > 2016-01-19 20:04:20,285 INFO [RMCommunicator Allocator]
>> > org.apache.hadoop.mapreduce.v2.app.rm.RMContainerAllocator: Assigning
>> > container Container: [ContainerId:
>> container_1453244277886_0001_01_000003,
>> > NodeId: localhost:35711, NodeHttpAddress: localhost:8042, Resource:
>> > <memory:512, vCores:1>, Priority: 5, Token: Token { kind:
>> ContainerToken,
>> > service: 127.0.0.1:35711 }, ] to fast fail map
>> > 2016-01-19 20:04:20,286 INFO [RMCommunicator Allocator]
>> > org.apache.hadoop.mapreduce.v2.app.rm.RMContainerAllocator: Assigned
>> from
>> > earlierFailedMaps
>> > 2016-01-19 20:04:20,286 INFO [RMCommunicator Allocator]
>> > org.apache.hadoop.mapreduce.v2.app.rm.RMContainerAllocator: Assigned
>> > container container_1453244277886_0001_01_000003 to
>> > attempt_1453244277886_0001_m_000000_1
>> > 2016-01-19 20:04:20,287 INFO [RMCommunicator Allocator]
>> > org.apache.hadoop.mapreduce.v2.app.rm.RMContainerAllocator:
>> Recalculating
>> > schedule, headroom=768
>> > 2016-01-19 20:04:20,287 INFO [RMCommunicator Allocator]
>> > org.apache.hadoop.mapreduce.v2.app.rm.RMContainerAllocator: Reduce slow
>> > start threshold not met. completedMapsForReduceSlowstart 1
>> > 2016-01-19 20:04:20,287 INFO [RMCommunicator Allocator]
>> > org.apache.hadoop.mapreduce.v2.app.rm.RMContainerAllocator: After
>> > Scheduling: PendingReds:1 ScheduledMaps:0 ScheduledReds:0 AssignedMaps:1
>> > AssignedReds:0 CompletedMaps:0 CompletedReds:0 ContAlloc:2 ContRel:0
>> > HostLocal:1 RackLocal:0
>> > 2016-01-19 20:04:20,287 INFO [AsyncDispatcher event handler]
>> > org.apache.hadoop.yarn.util.RackResolver: Resolved localhost to
>> > /default-rack
>> > 2016-01-19 20:04:20,289 INFO [AsyncDispatcher event handler]
>> > org.apache.hadoop.mapreduce.v2.app.job.impl.TaskAttemptImpl:
>> > attempt_1453244277886_0001_m_000000_1 TaskAttempt Transitioned from
>> > UNASSIGNED to ASSIGNED
>> > 2016-01-19 20:04:20,292 INFO [ContainerLauncher #2]
>> > org.apache.hadoop.mapreduce.v2.app.launcher.ContainerLauncherImpl:
>> > Processing the event EventType: CONTAINER_REMOTE_LAUNCH for container
>> > container_1453244277886_0001_01_000003 taskAttempt
>> > attempt_1453244277886_0001_m_000000_1
>> > 2016-01-19 20:04:20,292 INFO [ContainerLauncher #2]
>> > org.apache.hadoop.mapreduce.v2.app.launcher.ContainerLauncherImpl:
>> Launching
>> > attempt_1453244277886_0001_m_000000_1
>> > 2016-01-19 20:04:20,301 INFO [ContainerLauncher #2]
>> > org.apache.hadoop.mapreduce.v2.app.launcher.ContainerLauncherImpl:
>> Shuffle
>> > port returned by ContainerManager for
>> attempt_1453244277886_0001_m_000000_1
>> > : 13562
>> > 2016-01-19 20:04:20,302 INFO [AsyncDispatcher event handler]
>> > org.apache.hadoop.mapreduce.v2.app.job.impl.TaskAttemptImpl:
>> TaskAttempt:
>> > [attempt_1453244277886_0001_m_000000_1] using containerId:
>> > [container_1453244277886_0001_01_000003 on NM: [localhost:35711]
>> > 2016-01-19 20:04:20,303 INFO [AsyncDispatcher event handler]
>> > org.apache.hadoop.mapreduce.v2.app.job.impl.TaskAttemptImpl:
>> > attempt_1453244277886_0001_m_000000_1 TaskAttempt Transitioned from
>> ASSIGNED
>> > to RUNNING
>> > 2016-01-19 20:04:20,304 INFO [AsyncDispatcher event handler]
>> > org.apache.hadoop.mapreduce.v2.app.speculate.DefaultSpeculator:
>> > ATTEMPT_START task_1453244277886_0001_m_000000
>> > 2016-01-19 20:04:21,295 INFO [RMCommunicator Allocator]
>> > org.apache.hadoop.mapreduce.v2.app.rm.RMContainerRequestor:
>> getResources()
>> > for application_1453244277886_0001: ask=1 release= 0 newContainers=0
>> > finishedContainers=1 resourcelimit=<memory:768, vCores:-1> knownNMs=1
>> > 2016-01-19 20:04:21,296 INFO [RMCommunicator Allocator]
>> > org.apache.hadoop.mapreduce.v2.app.rm.RMContainerAllocator: Received
>> > completed container container_1453244277886_0001_01_000003
>> > 2016-01-19 20:04:21,296 INFO [RMCommunicator Allocator]
>> > org.apache.hadoop.mapreduce.v2.app.rm.RMContainerAllocator:
>> Recalculating
>> > schedule, headroom=768
>> > 2016-01-19 20:04:21,296 INFO [RMCommunicator Allocator]
>> > org.apache.hadoop.mapreduce.v2.app.rm.RMContainerAllocator: Reduce slow
>> > start threshold not met. completedMapsForReduceSlowstart 1
>> > 2016-01-19 20:04:21,297 INFO [RMCommunicator Allocator]
>> > org.apache.hadoop.mapreduce.v2.app.rm.RMContainerAllocator: After
>> > Scheduling: PendingReds:1 ScheduledMaps:0 ScheduledReds:0 AssignedMaps:0
>> > AssignedReds:0 CompletedMaps:0 CompletedReds:0 ContAlloc:2 ContRel:0
>> > HostLocal:1 RackLocal:0
>> > 2016-01-19 20:04:21,297 INFO [AsyncDispatcher event handler]
>> > org.apache.hadoop.mapreduce.v2.app.job.impl.TaskAttemptImpl:
>> > attempt_1453244277886_0001_m_000000_1 TaskAttempt Transitioned from
>> RUNNING
>> > to FAIL_CONTAINER_CLEANUP
>> > 2016-01-19 20:04:21,298 INFO [AsyncDispatcher event handler]
>> > org.apache.hadoop.mapreduce.v2.app.job.impl.TaskAttemptImpl: Diagnostics
>> > report from attempt_1453244277886_0001_m_000000_1: Exception from
>> > container-launch: org.apache.hadoop.util.Shell$ExitCodeException:
>> > org.apache.hadoop.util.Shell$ExitCodeException:
>> >     at org.apache.hadoop.util.Shell.runCommand(Shell.java:505)
>> >     at org.apache.hadoop.util.Shell.run(Shell.java:418)
>> >     at
>> >
>> org.apache.hadoop.util.Shell$ShellCommandExecutor.execute(Shell.java:650)
>> >     at
>> >
>> org.apache.hadoop.yarn.server.nodemanager.DefaultContainerExecutor.launchContainer(DefaultContainerExecutor.java:195)
>> >     at
>> >
>> org.apache.hadoop.yarn.server.nodemanager.containermanager.launcher.ContainerLaunch.call(ContainerLaunch.java:300)
>> >     at
>> >
>> org.apache.hadoop.yarn.server.nodemanager.containermanager.launcher.ContainerLaunch.call(ContainerLaunch.java:81)
>> >     at java.util.concurrent.FutureTask.run(FutureTask.java:262)
>> >     at
>> >
>> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
>> >     at
>> >
>> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
>> >     at java.lang.Thread.run(Thread.java:745)
>> >
>> >
>> > Container exited with a non-zero exit code 1
>> >
>> > 2016-01-19 20:04:21,300 INFO [ContainerLauncher #3]
>> > org.apache.hadoop.mapreduce.v2.app.launcher.ContainerLauncherImpl:
>> > Processing the event EventType: CONTAINER_REMOTE_CLEANUP for container
>> > container_1453244277886_0001_01_000003 taskAttempt
>> > attempt_1453244277886_0001_m_000000_1
>> > 2016-01-19 20:04:21,301 INFO [ContainerLauncher #3]
>> > org.apache.hadoop.mapreduce.v2.app.launcher.ContainerLauncherImpl:
>> KILLING
>> > attempt_1453244277886_0001_m_000000_1
>> > 2016-01-19 20:04:21,307 INFO [AsyncDispatcher event handler]
>> > org.apache.hadoop.mapreduce.v2.app.job.impl.TaskAttemptImpl:
>> > attempt_1453244277886_0001_m_000000_1 TaskAttempt Transitioned from
>> > FAIL_CONTAINER_CLEANUP to FAIL_TASK_CLEANUP
>> > 2016-01-19 20:04:21,308 INFO [CommitterEvent Processor #2]
>> > org.apache.hadoop.mapreduce.v2.app.commit.CommitterEventHandler:
>> Processing
>> > the event EventType: TASK_ABORT
>> > 2016-01-19 20:04:21,312 WARN [CommitterEvent Processor #2]
>> > org.apache.hadoop.mapreduce.lib.output.FileOutputCommitter: Could not
>> delete
>> >
>> hdfs://hdnode01:54310/user/hduser/output/wordcount/_temporary/1/_temporary/attempt_1453244277886_0001_m_000000_1
>> > 2016-01-19 20:04:21,312 INFO [AsyncDispatcher event handler]
>> > org.apache.hadoop.mapreduce.v2.app.job.impl.TaskAttemptImpl:
>> > attempt_1453244277886_0001_m_000000_1 TaskAttempt Transitioned from
>> > FAIL_TASK_CLEANUP to FAILED
>> > 2016-01-19 20:04:21,313 INFO [AsyncDispatcher event handler]
>> > org.apache.hadoop.yarn.util.RackResolver: Resolved localhost to
>> > /default-rack
>> > 2016-01-19 20:04:21,313 INFO [Thread-50]
>> > org.apache.hadoop.mapreduce.v2.app.rm.RMContainerRequestor: 2 failures
>> on
>> > node localhost
>> > 2016-01-19 20:04:21,313 INFO [AsyncDispatcher event handler]
>> > org.apache.hadoop.mapreduce.v2.app.job.impl.TaskAttemptImpl:
>> > attempt_1453244277886_0001_m_000000_2 TaskAttempt Transitioned from NEW
>> to
>> > UNASSIGNED
>> > 2016-01-19 20:04:21,314 INFO [Thread-50]
>> > org.apache.hadoop.mapreduce.v2.app.rm.RMContainerAllocator: Added
>> > attempt_1453244277886_0001_m_000000_2 to list of failed maps
>> > 2016-01-19 20:04:22,297 INFO [RMCommunicator Allocator]
>> > org.apache.hadoop.mapreduce.v2.app.rm.RMContainerAllocator: Before
>> > Scheduling: PendingReds:1 ScheduledMaps:1 ScheduledReds:0 AssignedMaps:0
>> > AssignedReds:0 CompletedMaps:0 CompletedReds:0 ContAlloc:2 ContRel:0
>> > HostLocal:1 RackLocal:0
>> > 2016-01-19 20:04:22,304 INFO [RMCommunicator Allocator]
>> > org.apache.hadoop.mapreduce.v2.app.rm.RMContainerRequestor:
>> getResources()
>> > for application_1453244277886_0001: ask=1 release= 0 newContainers=0
>> > finishedContainers=0 resourcelimit=<memory:768, vCores:-1> knownNMs=1
>> > 2016-01-19 20:04:22,305 INFO [RMCommunicator Allocator]
>> > org.apache.hadoop.mapreduce.v2.app.rm.RMContainerAllocator:
>> Recalculating
>> > schedule, headroom=768
>> > 2016-01-19 20:04:22,305 INFO [RMCommunicator Allocator]
>> > org.apache.hadoop.mapreduce.v2.app.rm.RMContainerAllocator: Reduce slow
>> > start threshold not met. completedMapsForReduceSlowstart 1
>> > 2016-01-19 20:04:23,316 INFO [RMCommunicator Allocator]
>> > org.apache.hadoop.mapreduce.v2.app.rm.RMContainerAllocator: Got
>> allocated
>> > containers 1
>> > 2016-01-19 20:04:23,316 INFO [RMCommunicator Allocator]
>> > org.apache.hadoop.mapreduce.v2.app.rm.RMContainerAllocator: Assigning
>> > container Container: [ContainerId:
>> container_1453244277886_0001_01_000004,
>> > NodeId: localhost:35711, NodeHttpAddress: localhost:8042, Resource:
>> > <memory:512, vCores:1>, Priority: 5, Token: Token { kind:
>> ContainerToken,
>> > service: 127.0.0.1:35711 }, ] to fast fail map
>> > 2016-01-19 20:04:23,317 INFO [RMCommunicator Allocator]
>> > org.apache.hadoop.mapreduce.v2.app.rm.RMContainerAllocator: Assigned
>> from
>> > earlierFailedMaps
>> > 2016-01-19 20:04:23,317 INFO [RMCommunicator Allocator]
>> > org.apache.hadoop.mapreduce.v2.app.rm.RMContainerAllocator: Assigned
>> > container container_1453244277886_0001_01_000004 to
>> > attempt_1453244277886_0001_m_000000_2
>> > 2016-01-19 20:04:23,318 INFO [RMCommunicator Allocator]
>> > org.apache.hadoop.mapreduce.v2.app.rm.RMContainerAllocator:
>> Recalculating
>> > schedule, headroom=768
>> > 2016-01-19 20:04:23,318 INFO [RMCommunicator Allocator]
>> > org.apache.hadoop.mapreduce.v2.app.rm.RMContainerAllocator: Reduce slow
>> > start threshold not met. completedMapsForReduceSlowstart 1
>> > 2016-01-19 20:04:23,318 INFO [RMCommunicator Allocator]
>> > org.apache.hadoop.mapreduce.v2.app.rm.RMContainerAllocator: After
>> > Scheduling: PendingReds:1 ScheduledMaps:0 ScheduledReds:0 AssignedMaps:1
>> > AssignedReds:0 CompletedMaps:0 CompletedReds:0 ContAlloc:3 ContRel:0
>> > HostLocal:1 RackLocal:0
>> > 2016-01-19 20:04:23,318 INFO [AsyncDispatcher event handler]
>> > org.apache.hadoop.yarn.util.RackResolver: Resolved localhost to
>> > /default-rack
>> > 2016-01-19 20:04:23,320 INFO [AsyncDispatcher event handler]
>> > org.apache.hadoop.mapreduce.v2.app.job.impl.TaskAttemptImpl:
>> > attempt_1453244277886_0001_m_000000_2 TaskAttempt Transitioned from
>> > UNASSIGNED to ASSIGNED
>> > 2016-01-19 20:04:23,323 INFO [ContainerLauncher #4]
>> > org.apache.hadoop.mapreduce.v2.app.launcher.ContainerLauncherImpl:
>> > Processing the event EventType: CONTAINER_REMOTE_LAUNCH for container
>> > container_1453244277886_0001_01_000004 taskAttempt
>> > attempt_1453244277886_0001_m_000000_2
>> > 2016-01-19 20:04:23,323 INFO [ContainerLauncher #4]
>> > org.apache.hadoop.mapreduce.v2.app.launcher.ContainerLauncherImpl:
>> Launching
>> > attempt_1453244277886_0001_m_000000_2
>> > 2016-01-19 20:04:23,334 INFO [ContainerLauncher #4]
>> > org.apache.hadoop.mapreduce.v2.app.launcher.ContainerLauncherImpl:
>> Shuffle
>> > port returned by ContainerManager for
>> attempt_1453244277886_0001_m_000000_2
>> > : 13562
>> > 2016-01-19 20:04:23,335 INFO [AsyncDispatcher event handler]
>> > org.apache.hadoop.mapreduce.v2.app.job.impl.TaskAttemptImpl:
>> TaskAttempt:
>> > [attempt_1453244277886_0001_m_000000_2] using containerId:
>> > [container_1453244277886_0001_01_000004 on NM: [localhost:35711]
>> > 2016-01-19 20:04:23,335 INFO [AsyncDispatcher event handler]
>> > org.apache.hadoop.mapreduce.v2.app.job.impl.TaskAttemptImpl:
>> > attempt_1453244277886_0001_m_000000_2 TaskAttempt Transitioned from
>> ASSIGNED
>> > to RUNNING
>> > 2016-01-19 20:04:23,335 INFO [AsyncDispatcher event handler]
>> > org.apache.hadoop.mapreduce.v2.app.speculate.DefaultSpeculator:
>> > ATTEMPT_START task_1453244277886_0001_m_000000
>> > 2016-01-19 20:04:24,326 INFO [RMCommunicator Allocator]
>> > org.apache.hadoop.mapreduce.v2.app.rm.RMContainerRequestor:
>> getResources()
>> > for application_1453244277886_0001: ask=1 release= 0 newContainers=0
>> > finishedContainers=1 resourcelimit=<memory:768, vCores:-1> knownNMs=1
>> > 2016-01-19 20:04:24,327 INFO [RMCommunicator Allocator]
>> > org.apache.hadoop.mapreduce.v2.app.rm.RMContainerAllocator: Received
>> > completed container container_1453244277886_0001_01_000004
>> > 2016-01-19 20:04:24,327 INFO [RMCommunicator Allocator]
>> > org.apache.hadoop.mapreduce.v2.app.rm.RMContainerAllocator:
>> Recalculating
>> > schedule, headroom=768
>> > 2016-01-19 20:04:24,327 INFO [RMCommunicator Allocator]
>> > org.apache.hadoop.mapreduce.v2.app.rm.RMContainerAllocator: Reduce slow
>> > start threshold not met. completedMapsForReduceSlowstart 1
>> > 2016-01-19 20:04:24,328 INFO [RMCommunicator Allocator]
>> > org.apache.hadoop.mapreduce.v2.app.rm.RMContainerAllocator: After
>> > Scheduling: PendingReds:1 ScheduledMaps:0 ScheduledReds:0 AssignedMaps:0
>> > AssignedReds:0 CompletedMaps:0 CompletedReds:0 ContAlloc:3 ContRel:0
>> > HostLocal:1 RackLocal:0
>> > 2016-01-19 20:04:24,328 INFO [AsyncDispatcher event handler]
>> > org.apache.hadoop.mapreduce.v2.app.job.impl.TaskAttemptImpl:
>> > attempt_1453244277886_0001_m_000000_2 TaskAttempt Transitioned from
>> RUNNING
>> > to FAIL_CONTAINER_CLEANUP
>> > 2016-01-19 20:04:24,328 INFO [AsyncDispatcher event handler]
>> > org.apache.hadoop.mapreduce.v2.app.job.impl.TaskAttemptImpl: Diagnostics
>> > report from attempt_1453244277886_0001_m_000000_2: Exception from
>> > container-launch: org.apache.hadoop.util.Shell$ExitCodeException:
>> > org.apache.hadoop.util.Shell$ExitCodeException:
>> >     at org.apache.hadoop.util.Shell.runCommand(Shell.java:505)
>> >     at org.apache.hadoop.util.Shell.run(Shell.java:418)
>> >     at
>> >
>> org.apache.hadoop.util.Shell$ShellCommandExecutor.execute(Shell.java:650)
>> >     at
>> >
>> org.apache.hadoop.yarn.server.nodemanager.DefaultContainerExecutor.launchContainer(DefaultContainerExecutor.java:195)
>> >     at
>> >
>> org.apache.hadoop.yarn.server.nodemanager.containermanager.launcher.ContainerLaunch.call(ContainerLaunch.java:300)
>> >     at
>> >
>> org.apache.hadoop.yarn.server.nodemanager.containermanager.launcher.ContainerLaunch.call(ContainerLaunch.java:81)
>> >     at java.util.concurrent.FutureTask.run(FutureTask.java:262)
>> >     at
>> >
>> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
>> >     at
>> >
>> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
>> >     at java.lang.Thread.run(Thread.java:745)
>> >
>> >
>> > Container exited with a non-zero exit code 1
>> >
>> > 2016-01-19 20:04:24,331 INFO [ContainerLauncher #5]
>> > org.apache.hadoop.mapreduce.v2.app.launcher.ContainerLauncherImpl:
>> > Processing the event EventType: CONTAINER_REMOTE_CLEANUP for container
>> > container_1453244277886_0001_01_000004 taskAttempt
>> > attempt_1453244277886_0001_m_000000_2
>> > 2016-01-19 20:04:24,331 INFO [ContainerLauncher #5]
>> > org.apache.hadoop.mapreduce.v2.app.launcher.ContainerLauncherImpl:
>> KILLING
>> > attempt_1453244277886_0001_m_000000_2
>> > 2016-01-19 20:04:24,335 INFO [AsyncDispatcher event handler]
>> > org.apache.hadoop.mapreduce.v2.app.job.impl.TaskAttemptImpl:
>> > attempt_1453244277886_0001_m_000000_2 TaskAttempt Transitioned from
>> > FAIL_CONTAINER_CLEANUP to FAIL_TASK_CLEANUP
>> > 2016-01-19 20:04:24,336 INFO [CommitterEvent Processor #3]
>> > org.apache.hadoop.mapreduce.v2.app.commit.CommitterEventHandler:
>> Processing
>> > the event EventType: TASK_ABORT
>> > 2016-01-19 20:04:24,340 WARN [CommitterEvent Processor #3]
>> > org.apache.hadoop.mapreduce.lib.output.FileOutputCommitter: Could not
>> delete
>> >
>> hdfs://hdnode01:54310/user/hduser/output/wordcount/_temporary/1/_temporary/attempt_1453244277886_0001_m_000000_2
>> > 2016-01-19 20:04:24,341 INFO [AsyncDispatcher event handler]
>> > org.apache.hadoop.mapreduce.v2.app.job.impl.TaskAttemptImpl:
>> > attempt_1453244277886_0001_m_000000_2 TaskAttempt Transitioned from
>> > FAIL_TASK_CLEANUP to FAILED
>> > 2016-01-19 20:04:24,341 INFO [AsyncDispatcher event handler]
>> > org.apache.hadoop.yarn.util.RackResolver: Resolved localhost to
>> > /default-rack
>> > 2016-01-19 20:04:24,342 INFO [Thread-50]
>> > org.apache.hadoop.mapreduce.v2.app.rm.RMContainerRequestor: 3 failures
>> on
>> > node localhost
>> > 2016-01-19 20:04:24,342 INFO [Thread-50]
>> > org.apache.hadoop.mapreduce.v2.app.rm.RMContainerRequestor: Blacklisted
>> host
>> > localhost
>> > 2016-01-19 20:04:24,342 INFO [AsyncDispatcher event handler]
>> > org.apache.hadoop.mapreduce.v2.app.job.impl.TaskAttemptImpl:
>> > attempt_1453244277886_0001_m_000000_3 TaskAttempt Transitioned from NEW
>> to
>> > UNASSIGNED
>> > 2016-01-19 20:04:24,343 INFO [Thread-50]
>> > org.apache.hadoop.mapreduce.v2.app.rm.RMContainerAllocator: Added
>> > attempt_1453244277886_0001_m_000000_3 to list of failed maps
>> > 2016-01-19 20:04:25,328 INFO [RMCommunicator Allocator]
>> > org.apache.hadoop.mapreduce.v2.app.rm.RMContainerAllocator: Before
>> > Scheduling: PendingReds:1 ScheduledMaps:1 ScheduledReds:0 AssignedMaps:0
>> > AssignedReds:0 CompletedMaps:0 CompletedReds:0 ContAlloc:3 ContRel:0
>> > HostLocal:1 RackLocal:0
>> > 2016-01-19 20:04:25,336 INFO [RMCommunicator Allocator]
>> > org.apache.hadoop.mapreduce.v2.app.rm.RMContainerRequestor:
>> getResources()
>> > for application_1453244277886_0001: ask=1 release= 0 newContainers=0
>> > finishedContainers=0 resourcelimit=<memory:768, vCores:-1> knownNMs=1
>> > 2016-01-19 20:04:25,336 INFO [RMCommunicator Allocator]
>> > org.apache.hadoop.mapreduce.v2.app.rm.RMContainerRequestor: Update the
>> > blacklist for application_1453244277886_0001: blacklistAdditions=1
>> > blacklistRemovals=0
>> > 2016-01-19 20:04:25,337 INFO [RMCommunicator Allocator]
>> > org.apache.hadoop.mapreduce.v2.app.rm.RMContainerRequestor: Ignore
>> > blacklisting set to true. Known: 1, Blacklisted: 1, 100%
>> > 2016-01-19 20:04:25,337 INFO [RMCommunicator Allocator]
>> > org.apache.hadoop.mapreduce.v2.app.rm.RMContainerAllocator:
>> Recalculating
>> > schedule, headroom=768
>> > 2016-01-19 20:04:25,337 INFO [RMCommunicator Allocator]
>> > org.apache.hadoop.mapreduce.v2.app.rm.RMContainerAllocator: Reduce slow
>> > start threshold not met. completedMapsForReduceSlowstart 1
>> > 2016-01-19 20:04:26,342 INFO [RMCommunicator Allocator]
>> > org.apache.hadoop.mapreduce.v2.app.rm.RMContainerRequestor: Update the
>> > blacklist for application_1453244277886_0001: blacklistAdditions=0
>> > blacklistRemovals=1
>> > 2016-01-19 20:04:26,342 INFO [RMCommunicator Allocator]
>> > org.apache.hadoop.mapreduce.v2.app.rm.RMContainerAllocator:
>> Recalculating
>> > schedule, headroom=768
>> > 2016-01-19 20:04:26,342 INFO [RMCommunicator Allocator]
>> > org.apache.hadoop.mapreduce.v2.app.rm.RMContainerAllocator: Reduce slow
>> > start threshold not met. completedMapsForReduceSlowstart 1
>> > 2016-01-19 20:04:27,351 INFO [RMCommunicator Allocator]
>> > org.apache.hadoop.mapreduce.v2.app.rm.RMContainerAllocator: Got
>> allocated
>> > containers 1
>> > 2016-01-19 20:04:27,352 INFO [RMCommunicator Allocator]
>> > org.apache.hadoop.mapreduce.v2.app.rm.RMContainerAllocator: Assigning
>> > container Container: [ContainerId:
>> container_1453244277886_0001_01_000005,
>> > NodeId: localhost:35711, NodeHttpAddress: localhost:8042, Resource:
>> > <memory:512, vCores:1>, Priority: 5, Token: Token { kind:
>> ContainerToken,
>> > service: 127.0.0.1:35711 }, ] to fast fail map
>> > 2016-01-19 20:04:27,352 INFO [RMCommunicator Allocator]
>> > org.apache.hadoop.mapreduce.v2.app.rm.RMContainerAllocator: Assigned
>> from
>> > earlierFailedMaps
>> > 2016-01-19 20:04:27,352 INFO [RMCommunicator Allocator]
>> > org.apache.hadoop.mapreduce.v2.app.rm.RMContainerAllocator: Assigned
>> > container container_1453244277886_0001_01_000005 to
>> > attempt_1453244277886_0001_m_000000_3
>> > 2016-01-19 20:04:27,353 INFO [RMCommunicator Allocator]
>> > org.apache.hadoop.mapreduce.v2.app.rm.RMContainerAllocator:
>> Recalculating
>> > schedule, headroom=768
>> > 2016-01-19 20:04:27,353 INFO [RMCommunicator Allocator]
>> > org.apache.hadoop.mapreduce.v2.app.rm.RMContainerAllocator: Reduce slow
>> > start threshold not met. completedMapsForReduceSlowstart 1
>> > 2016-01-19 20:04:27,353 INFO [AsyncDispatcher event handler]
>> > org.apache.hadoop.yarn.util.RackResolver: Resolved localhost to
>> > /default-rack
>> > 2016-01-19 20:04:27,353 INFO [RMCommunicator Allocator]
>> > org.apache.hadoop.mapreduce.v2.app.rm.RMContainerAllocator: After
>> > Scheduling: PendingReds:1 ScheduledMaps:0 ScheduledReds:0 AssignedMaps:1
>> > AssignedReds:0 CompletedMaps:0 CompletedReds:0 ContAlloc:4 ContRel:0
>> > HostLocal:1 RackLocal:0
>> > 2016-01-19 20:04:27,354 INFO [AsyncDispatcher event handler]
>> > org.apache.hadoop.mapreduce.v2.app.job.impl.TaskAttemptImpl:
>> > attempt_1453244277886_0001_m_000000_3 TaskAttempt Transitioned from
>> > UNASSIGNED to ASSIGNED
>> > 2016-01-19 20:04:27,355 INFO [ContainerLauncher #6]
>> > org.apache.hadoop.mapreduce.v2.app.launcher.ContainerLauncherImpl:
>> > Processing the event EventType: CONTAINER_REMOTE_LAUNCH for container
>> > container_1453244277886_0001_01_000005 taskAttempt
>> > attempt_1453244277886_0001_m_000000_3
>> > 2016-01-19 20:04:27,355 INFO [ContainerLauncher #6]
>> > org.apache.hadoop.mapreduce.v2.app.launcher.ContainerLauncherImpl:
>> Launching
>> > attempt_1453244277886_0001_m_000000_3
>> > 2016-01-19 20:04:27,365 INFO [ContainerLauncher #6]
>> > org.apache.hadoop.mapreduce.v2.app.launcher.ContainerLauncherImpl:
>> Shuffle
>> > port returned by ContainerManager for
>> attempt_1453244277886_0001_m_000000_3
>> > : 13562
>> > 2016-01-19 20:04:27,365 INFO [AsyncDispatcher event handler]
>> > org.apache.hadoop.mapreduce.v2.app.job.impl.TaskAttemptImpl:
>> TaskAttempt:
>> > [attempt_1453244277886_0001_m_000000_3] using containerId:
>> > [container_1453244277886_0001_01_000005 on NM: [localhost:35711]
>> > 2016-01-19 20:04:27,366 INFO [AsyncDispatcher event handler]
>> > org.apache.hadoop.mapreduce.v2.app.job.impl.TaskAttemptImpl:
>> > attempt_1453244277886_0001_m_000000_3 TaskAttempt Transitioned from
>> ASSIGNED
>> > to RUNNING
>> > 2016-01-19 20:04:27,366 INFO [AsyncDispatcher event handler]
>> > org.apache.hadoop.mapreduce.v2.app.speculate.DefaultSpeculator:
>> > ATTEMPT_START task_1453244277886_0001_m_000000
>> > 2016-01-19 20:04:28,361 INFO [RMCommunicator Allocator]
>> > org.apache.hadoop.mapreduce.v2.app.rm.RMContainerRequestor:
>> getResources()
>> > for application_1453244277886_0001: ask=1 release= 0 newContainers=0
>> > finishedContainers=1 resourcelimit=<memory:768, vCores:-1> knownNMs=1
>> > 2016-01-19 20:04:28,361 INFO [RMCommunicator Allocator]
>> > org.apache.hadoop.mapreduce.v2.app.rm.RMContainerAllocator: Received
>> > completed container container_1453244277886_0001_01_000005
>> > 2016-01-19 20:04:28,361 INFO [RMCommunicator Allocator]
>> > org.apache.hadoop.mapreduce.v2.app.rm.RMContainerAllocator:
>> Recalculating
>> > schedule, headroom=768
>> > 2016-01-19 20:04:28,361 INFO [RMCommunicator Allocator]
>> > org.apache.hadoop.mapreduce.v2.app.rm.RMContainerAllocator: Reduce slow
>> > start threshold not met. completedMapsForReduceSlowstart 1
>> > 2016-01-19 20:04:28,362 INFO [RMCommunicator Allocator]
>> > org.apache.hadoop.mapreduce.v2.app.rm.RMContainerAllocator: After
>> > Scheduling: PendingReds:1 ScheduledMaps:0 ScheduledReds:0 AssignedMaps:0
>> > AssignedReds:0 CompletedMaps:0 CompletedReds:0 ContAlloc:4 ContRel:0
>> > HostLocal:1 RackLocal:0
>> > 2016-01-19 20:04:28,362 INFO [AsyncDispatcher event handler]
>> > org.apache.hadoop.mapreduce.v2.app.job.impl.TaskAttemptImpl:
>> > attempt_1453244277886_0001_m_000000_3 TaskAttempt Transitioned from
>> RUNNING
>> > to FAIL_CONTAINER_CLEANUP
>> > 2016-01-19 20:04:28,362 INFO [AsyncDispatcher event handler]
>> > org.apache.hadoop.mapreduce.v2.app.job.impl.TaskAttemptImpl: Diagnostics
>> > report from attempt_1453244277886_0001_m_000000_3: Exception from
>> > container-launch: org.apache.hadoop.util.Shell$ExitCodeException:
>> > org.apache.hadoop.util.Shell$ExitCodeException:
>> >     at org.apache.hadoop.util.Shell.runCommand(Shell.java:505)
>> >     at org.apache.hadoop.util.Shell.run(Shell.java:418)
>> >     at
>> >
>> org.apache.hadoop.util.Shell$ShellCommandExecutor.execute(Shell.java:650)
>> >     at
>> >
>> org.apache.hadoop.yarn.server.nodemanager.DefaultContainerExecutor.launchContainer(DefaultContainerExecutor.java:195)
>> >     at
>> >
>> org.apache.hadoop.yarn.server.nodemanager.containermanager.launcher.ContainerLaunch.call(ContainerLaunch.java:300)
>> >     at
>> >
>> org.apache.hadoop.yarn.server.nodemanager.containermanager.launcher.ContainerLaunch.call(ContainerLaunch.java:81)
>> >     at java.util.concurrent.FutureTask.run(FutureTask.java:262)
>> >     at
>> >
>> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
>> >     at
>> >
>> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
>> >     at java.lang.Thread.run(Thread.java:745)
>> >
>> >
>> > Container exited with a non-zero exit code 1
>> >
>> > 2016-01-19 20:04:28,364 INFO [ContainerLauncher #7]
>> > org.apache.hadoop.mapreduce.v2.app.launcher.ContainerLauncherImpl:
>> > Processing the event EventType: CONTAINER_REMOTE_CLEANUP for container
>> > container_1453244277886_0001_01_000005 taskAttempt
>> > attempt_1453244277886_0001_m_000000_3
>> > 2016-01-19 20:04:28,365 INFO [ContainerLauncher #7]
>> > org.apache.hadoop.mapreduce.v2.app.launcher.ContainerLauncherImpl:
>> KILLING
>> > attempt_1453244277886_0001_m_000000_3
>> > 2016-01-19 20:04:28,373 INFO [AsyncDispatcher event handler]
>> > org.apache.hadoop.mapreduce.v2.app.job.impl.TaskAttemptImpl:
>> > attempt_1453244277886_0001_m_000000_3 TaskAttempt Transitioned from
>> > FAIL_CONTAINER_CLEANUP to FAIL_TASK_CLEANUP
>> > 2016-01-19 20:04:28,374 INFO [CommitterEvent Processor #4]
>> > org.apache.hadoop.mapreduce.v2.app.commit.CommitterEventHandler:
>> Processing
>> > the event EventType: TASK_ABORT
>> > 2016-01-19 20:04:28,377 WARN [CommitterEvent Processor #4]
>> > org.apache.hadoop.mapreduce.lib.output.FileOutputCommitter: Could not
>> delete
>> >
>> hdfs://hdnode01:54310/user/hduser/output/wordcount/_temporary/1/_temporary/attempt_1453244277886_0001_m_000000_3
>> > 2016-01-19 20:04:28,378 INFO [AsyncDispatcher event handler]
>> > org.apache.hadoop.mapreduce.v2.app.job.impl.TaskAttemptImpl:
>> > attempt_1453244277886_0001_m_000000_3 TaskAttempt Transitioned from
>> > FAIL_TASK_CLEANUP to FAILED
>> > 2016-01-19 20:04:28,380 INFO [AsyncDispatcher event handler]
>> > org.apache.hadoop.mapreduce.v2.app.job.impl.TaskImpl:
>> > task_1453244277886_0001_m_000000 Task Transitioned from RUNNING to
>> FAILED
>> > 2016-01-19 20:04:28,381 INFO [AsyncDispatcher event handler]
>> > org.apache.hadoop.mapreduce.v2.app.job.impl.JobImpl: Num completed
>> Tasks: 1
>> > 2016-01-19 20:04:28,381 INFO [AsyncDispatcher event handler]
>> > org.apache.hadoop.mapreduce.v2.app.job.impl.JobImpl: Job failed as tasks
>> > failed. failedMaps:1 failedReduces:0
>> > 2016-01-19 20:04:28,382 INFO [AsyncDispatcher event handler]
>> > org.apache.hadoop.mapreduce.v2.app.job.impl.JobImpl:
>> > job_1453244277886_0001Job Transitioned from RUNNING to FAIL_WAIT
>> > 2016-01-19 20:04:28,383 INFO [AsyncDispatcher event handler]
>> > org.apache.hadoop.mapreduce.v2.app.job.impl.TaskImpl:
>> > task_1453244277886_0001_r_000000 Task Transitioned from SCHEDULED to
>> > KILL_WAIT
>> > 2016-01-19 20:04:28,383 INFO [AsyncDispatcher event handler]
>> > org.apache.hadoop.mapreduce.v2.app.job.impl.TaskAttemptImpl:
>> > attempt_1453244277886_0001_r_000000_0 TaskAttempt Transitioned from
>> > UNASSIGNED to KILLED
>> > 2016-01-19 20:04:28,383 INFO [Thread-50]
>> > org.apache.hadoop.mapreduce.v2.app.rm.RMContainerAllocator: Processing
>> the
>> > event EventType: CONTAINER_DEALLOCATE
>> > 2016-01-19 20:04:28,383 ERROR [Thread-50]
>> > org.apache.hadoop.mapreduce.v2.app.rm.RMContainerAllocator: Could not
>> > deallocate container for task attemptId
>> > attempt_1453244277886_0001_r_000000_0
>> > 2016-01-19 20:04:28,383 INFO [AsyncDispatcher event handler]
>> > org.apache.hadoop.mapreduce.v2.app.job.impl.TaskImpl:
>> > task_1453244277886_0001_r_000000 Task Transitioned from KILL_WAIT to
>> KILLED
>> > 2016-01-19 20:04:28,384 INFO [AsyncDispatcher event handler]
>> > org.apache.hadoop.mapreduce.v2.app.job.impl.JobImpl:
>> > job_1453244277886_0001Job Transitioned from FAIL_WAIT to FAIL_ABORT
>> > 2016-01-19 20:04:28,390 INFO [CommitterEvent Processor #0]
>> > org.apache.hadoop.mapreduce.v2.app.commit.CommitterEventHandler:
>> Processing
>> > the event EventType: JOB_ABORT
>> > 2016-01-19 20:04:28,432 INFO [AsyncDispatcher event handler]
>> > org.apache.hadoop.mapreduce.v2.app.job.impl.JobImpl:
>> > job_1453244277886_0001Job Transitioned from FAIL_ABORT to FAILED
>> > 2016-01-19 20:04:28,433 INFO [Thread-61]
>> > org.apache.hadoop.mapreduce.v2.app.MRAppMaster: We are finishing
>> cleanly so
>> > this is the last retry
>> > 2016-01-19 20:04:28,433 INFO [Thread-61]
>> > org.apache.hadoop.mapreduce.v2.app.MRAppMaster: Notify RMCommunicator
>> > isAMLastRetry: true
>> > 2016-01-19 20:04:28,433 INFO [Thread-61]
>> > org.apache.hadoop.mapreduce.v2.app.rm.RMContainerAllocator:
>> RMCommunicator
>> > notified that shouldUnregistered is: true
>> > 2016-01-19 20:04:28,433 INFO [Thread-61]
>> > org.apache.hadoop.mapreduce.v2.app.MRAppMaster: Notify JHEH
>> isAMLastRetry:
>> > true
>> > 2016-01-19 20:04:28,433 INFO [Thread-61]
>> > org.apache.hadoop.mapreduce.jobhistory.JobHistoryEventHandler:
>> > JobHistoryEventHandler notified that forceJobCompletion is true
>> > 2016-01-19 20:04:28,434 INFO [Thread-61]
>> > org.apache.hadoop.mapreduce.v2.app.MRAppMaster: Calling stop for all the
>> > services
>> > 2016-01-19 20:04:28,435 INFO [Thread-61]
>> > org.apache.hadoop.mapreduce.jobhistory.JobHistoryEventHandler: Stopping
>> > JobHistoryEventHandler. Size of the outstanding queue size is 0
>> > 2016-01-19 20:04:29,362 INFO [RMCommunicator Allocator]
>> > org.apache.hadoop.mapreduce.v2.app.rm.RMContainerAllocator: Before
>> > Scheduling: PendingReds:1 ScheduledMaps:0 ScheduledReds:0 AssignedMaps:0
>> > AssignedReds:0 CompletedMaps:1 CompletedReds:0 ContAlloc:4 ContRel:0
>> > HostLocal:1 RackLocal:0
>> > 2016-01-19 20:04:29,362 INFO [eventHandlingThread]
>> > org.apache.hadoop.mapreduce.jobhistory.JobHistoryEventHandler: Copying
>> >
>> hdfs://hdnode01:54310/tmp/hadoop-yarn/staging/hduser/.staging/job_1453244277886_0001/job_1453244277886_0001_1.jhist
>> > to
>> >
>> hdfs://hdnode01:54310/tmp/hadoop-yarn/staging/history/done_intermediate/hduser/job_1453244277886_0001-1453244648033-hduser-word+count-1453244668381-0-0-FAILED-default-1453244654166.jhist_tmp
>> > 2016-01-19 20:04:29,366 INFO [RMCommunicator Allocator]
>> > org.apache.hadoop.mapreduce.v2.app.rm.RMContainerAllocator:
>> Recalculating
>> > schedule, headroom=768
>> > 2016-01-19 20:04:29,366 INFO [RMCommunicator Allocator]
>> > org.apache.hadoop.mapreduce.v2.app.rm.RMContainerAllocator: Reduce slow
>> > start threshold reached. Scheduling reduces.
>> > 2016-01-19 20:04:29,366 INFO [RMCommunicator Allocator]
>> > org.apache.hadoop.mapreduce.v2.app.rm.RMContainerAllocator: All maps
>> > assigned. Ramping up all remaining reduces:1
>> > 2016-01-19 20:04:29,366 INFO [RMCommunicator Allocator]
>> > org.apache.hadoop.mapreduce.v2.app.rm.RMContainerAllocator: After
>> > Scheduling: PendingReds:0 ScheduledMaps:0 ScheduledReds:1 AssignedMaps:0
>> > AssignedReds:0 CompletedMaps:1 CompletedReds:0 ContAlloc:4 ContRel:0
>> > HostLocal:1 RackLocal:0
>> > 2016-01-19 20:04:29,544 INFO [eventHandlingThread]
>> > org.apache.hadoop.mapreduce.jobhistory.JobHistoryEventHandler: Copied to
>> > done location:
>> >
>> hdfs://hdnode01:54310/tmp/hadoop-yarn/staging/history/done_intermediate/hduser/job_1453244277886_0001-1453244648033-hduser-word+count-1453244668381-0-0-FAILED-default-1453244654166.jhist_tmp
>> > 2016-01-19 20:04:29,598 INFO [eventHandlingThread]
>> > org.apache.hadoop.mapreduce.jobhistory.JobHistoryEventHandler: Copying
>> >
>> hdfs://hdnode01:54310/tmp/hadoop-yarn/staging/hduser/.staging/job_1453244277886_0001/job_1453244277886_0001_1_conf.xml
>> > to
>> >
>> hdfs://hdnode01:54310/tmp/hadoop-yarn/staging/history/done_intermediate/hduser/job_1453244277886_0001_conf.xml_tmp
>> > 2016-01-19 20:04:29,801 INFO [eventHandlingThread]
>> > org.apache.hadoop.mapreduce.jobhistory.JobHistoryEventHandler: Copied to
>> > done location:
>> >
>> hdfs://hdnode01:54310/tmp/hadoop-yarn/staging/history/done_intermediate/hduser/job_1453244277886_0001_conf.xml_tmp
>> > 2016-01-19 20:04:29,907 INFO [eventHandlingThread]
>> > org.apache.hadoop.mapreduce.jobhistory.JobHistoryEventHandler: Moved
>> tmp to
>> > done:
>> >
>> hdfs://hdnode01:54310/tmp/hadoop-yarn/staging/history/done_intermediate/hduser/job_1453244277886_0001.summary_tmp
>> > to
>> >
>> hdfs://hdnode01:54310/tmp/hadoop-yarn/staging/history/done_intermediate/hduser/job_1453244277886_0001.summary
>> > 2016-01-19 20:04:30,008 INFO [eventHandlingThread]
>> > org.apache.hadoop.mapreduce.jobhistory.JobHistoryEventHandler: Moved
>> tmp to
>> > done:
>> >
>> hdfs://hdnode01:54310/tmp/hadoop-yarn/staging/history/done_intermediate/hduser/job_1453244277886_0001_conf.xml_tmp
>> > to
>> >
>> hdfs://hdnode01:54310/tmp/hadoop-yarn/staging/history/done_intermediate/hduser/job_1453244277886_0001_conf.xml
>> > 2016-01-19 20:04:30,070 INFO [eventHandlingThread]
>> > org.apache.hadoop.mapreduce.jobhistory.JobHistoryEventHandler: Moved
>> tmp to
>> > done:
>> >
>> hdfs://hdnode01:54310/tmp/hadoop-yarn/staging/history/done_intermediate/hduser/job_1453244277886_0001-1453244648033-hduser-word+count-1453244668381-0-0-FAILED-default-1453244654166.jhist_tmp
>> > to
>> >
>> hdfs://hdnode01:54310/tmp/hadoop-yarn/staging/history/done_intermediate/hduser/job_1453244277886_0001-1453244648033-hduser-word+count-1453244668381-0-0-FAILED-default-1453244654166.jhist
>> > 2016-01-19 20:04:30,071 INFO [Thread-61]
>> > org.apache.hadoop.mapreduce.jobhistory.JobHistoryEventHandler: Stopped
>> > JobHistoryEventHandler. super.stop()
>> > 2016-01-19 20:04:30,078 INFO [Thread-61]
>> > org.apache.hadoop.mapreduce.v2.app.rm.RMContainerAllocator: Setting job
>> > diagnostics to Task failed task_1453244277886_0001_m_000000
>> > Job failed as tasks failed. failedMaps:1 failedReduces:0
>> >
>> > 2016-01-19 20:04:30,080 INFO [Thread-61]
>> > org.apache.hadoop.mapreduce.v2.app.rm.RMContainerAllocator: History url
>> is
>> > http://localhost:19888/jobhistory/job/job_1453244277886_0001
>> > 2016-01-19 20:04:30,094 INFO [Thread-61]
>> > org.apache.hadoop.mapreduce.v2.app.rm.RMContainerAllocator: Waiting for
>> > application to be successfully unregistered.
>> > 2016-01-19 20:04:31,099 INFO [Thread-61]
>> > org.apache.hadoop.mapreduce.v2.app.rm.RMContainerAllocator: Final Stats:
>> > PendingReds:0 ScheduledMaps:0 ScheduledReds:1 AssignedMaps:0
>> AssignedReds:0
>> > CompletedMaps:1 CompletedReds:0 ContAlloc:4 ContRel:0 HostLocal:1
>> > RackLocal:0
>> > 2016-01-19 20:04:31,104 INFO [Thread-61]
>> > org.apache.hadoop.mapreduce.v2.app.MRAppMaster: Deleting staging
>> directory
>> > hdfs://hdnode01:54310
>> > /tmp/hadoop-yarn/staging/hduser/.staging/job_1453244277886_0001
>> > 2016-01-19 20:04:31,133 INFO [Thread-61] org.apache.hadoop.ipc.Server:
>> > Stopping server on 45584
>> > 2016-01-19 20:04:31,135 INFO [IPC Server listener on 45584]
>> > org.apache.hadoop.ipc.Server: Stopping IPC Server listener on 45584
>> > 2016-01-19 20:04:31,135 INFO [IPC Server Responder]
>> > org.apache.hadoop.ipc.Server: Stopping IPC Server Responder
>> > 2016-01-19 20:04:31,136 INFO [TaskHeartbeatHandler PingChecker]
>> > org.apache.hadoop.mapreduce.v2.app.TaskHeartbeatHandler:
>> > TaskHeartbeatHandler thread interrupted
>> >
>> >
>> > Jps results; I believe everything is ok here, right?:
>> > 21267 DataNode
>> > 21609 ResourceManager
>> > 21974 JobHistoryServer
>> > 21735 NodeManager
>> > 24546 Jps
>> > 16532 org.eclipse.equinox.launcher_1.3.0.v20140415-2008.jar
>> > 21121 NameNode
>> > 22098 QuorumPeerMain
>> > 21456 SecondaryNameNode
>> >
>> >
>>
>
>

Re: Can't run hadoop examples with YARN Single node cluster

Posted by José Luis Larroque <la...@gmail.com>.
Hi again guys, I finally found what the issue was!!!

This is my mapred-site.xml; here is the problem:
<?xml version="1.0"?>
<?xml-stylesheet type="text/xsl" href="configuration.xsl"?>

<configuration>
<property>
<name>mapred.job.tracker</name>
<!--
<value>local</value> For debugging
<value>hdnode01:54311</value> For the real thing
-->
<value>hdnode01:54311</value>
</property>

<property>
<name>mapred.tasktracker.map.tasks.maximum</name>
<value>4</value>
</property>

<property>
<name>mapreduce.job.maps</name>
<value>4</value>
</property>

<property>
<name>mapreduce.framework.name</name>
<value>yarn</value>
</property>

<property>
<name>mapreduce.map.memory.mb</name>
<value>512</value>
</property>

<property>
<name>mapreduce.reduce.memory.mb</name>
<value>512</value>
</property>

<property>
<name>mapreduce.map.java.opts</name>
<value>256</value>
</property>

<property>
<name>mapreduce.reduce.java.opts</name>
<value>256</value>
</property>
</configuration>

If I remove the last two properties (mapreduce.map.java.opts and
mapreduce.reduce.java.opts), wordcount works!

I remember adding those last two properties for some kind of memory issue,
but maybe they somehow clash with the other two
(mapreduce.map.memory.mb, mapreduce.reduce.memory.mb)?

It would be great if someone could give me a short explanation so I can
better understand the memory management of a YARN cluster.
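
In the meantime, here is my own guess, in case it helps anyone who hits the
same error: mapreduce.map.java.opts and mapreduce.reduce.java.opts seem to be
passed straight onto the java command line, so a bare value like 256 ends up
being treated as the class to run, which would explain the "Could not find or
load main class 256" message in the container logs. A minimal sketch of how I
think those two properties should look instead (assuming they expect JVM flags
such as -Xmx, and that the heap must fit inside the 512 MB containers set
above; the 400m figure is only illustrative, please correct me if I am wrong):

<property>
<name>mapreduce.map.java.opts</name>
<!-- illustrative heap size; must stay below mapreduce.map.memory.mb -->
<value>-Xmx400m</value>
</property>

<property>
<name>mapreduce.reduce.java.opts</name>
<!-- illustrative heap size; must stay below mapreduce.reduce.memory.mb -->
<value>-Xmx400m</value>
</property>

With values like these, mapreduce.*.memory.mb would remain the size of the
YARN container and -Xmx would cap the JVM heap inside it, but again this is
just my guess.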


PS: Thanks again to Namikaze and Gaurav for their help!!

Bye!
Jose

2016-01-25 21:19 GMT-03:00 José Luis Larroque <la...@gmail.com>:

> Thanks Namikaze for keeping at it, don't give up!! :D
>
> - I have these lines in *$HOME/.bashrc*
>
>
> export HADOOP_PREFIX=/usr/local/hadoop
>
> # Others variables
>
> export HADOOP_COMMON_HOME=${HADOOP_PREFIX}
>
> export HADOOP_MAPRED_HOME=${HADOOP_PREFIX}
>
> export HADOOP_HDFS_HOME=${HADOOP_PREFIX}
>
> export HADOOP_YARN_HOME=${HADOOP_PREFIX}
>
>
>   - in *hadoop-env.sh* i have:
>
> export HADOOP_CONF_DIR=${HADOOP_CONF_DIR:-"/usr/local/hadoop/etc/hadoop"}
>
>
>   - I read that SO question and all the answers to it. The only useful answer,
> in my opinion, was checking the yarn classpath. The following line appears
> three times:
>
> /usr/local/hadoop/etc/hadoop:
>
>
> I put yarn.application.classpath in yarn-site.xml because I don't know any
> other way to fix it, using the default value recommended here
> <https://hadoop.apache.org/docs/r2.4.1/hadoop-yarn/hadoop-yarn-common/yarn-default.xml>
> (see yarn.application.classpath):
>
>
> $HADOOP_CONF_DIR, $HADOOP_COMMON_HOME/share/hadoop/common/*,
> $HADOOP_COMMON_HOME/share/hadoop/common/lib/*, $HADOOP_HDFS_HOME/share/hadoop/hdfs/*,
> $HADOOP_HDFS_HOME/share/hadoop/hdfs/lib/*, $HADOOP_YARN_HOME/share/hadoop/yarn/*,
> $HADOOP_YARN_HOME/share/hadoop/yarn/lib/*
>
>
> But the classpath remains the same, and I can't find any other way to fix
> it. Maybe this is the problem?
>
>
>  - yarn.log-aggregation-enable was always set to true. I couldn't find
> anything in the *datanode logs*; here they are:
>
> 2016-01-25 21:13:07,006 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: STARTUP_MSG:
> /************************************************************
> STARTUP_MSG: Starting DataNode
> STARTUP_MSG:   host = jose-ubuntu/127.0.0.1
> STARTUP_MSG:   args = []
> STARTUP_MSG:   version = 2.4.0
> STARTUP_MSG:   classpath = /usr/local/hadoop/etc/hadoop:/usr/local/hadoop/share/hadoop/common/lib/jackson-jaxrs-1.8.8.jar:/usr/local/hadoop/share/hadoop/common/lib/jersey-json-1.9.jar:/usr/local/hadoop/share/hadoop/common/lib/commons-compress-1.4.1.jar:/usr/local/hadoop/share/hadoop/common/lib/httpcore-4.2.5.jar:/usr/local/hadoop/share/hadoop/common/lib/httpclient-4.2.5.jar:/usr/local/hadoop/share/hadoop/common/lib/commons-beanutils-1.7.0.jar:/usr/local/hadoop/share/hadoop/common/lib/jettison-1.1.jar:/usr/local/hadoop/share/hadoop/common/lib/jaxb-impl-2.2.3-1.jar:/usr/local/hadoop/share/hadoop/common/lib/activation-1.1.jar:/usr/local/hadoop/share/hadoop/common/lib/jackson-core-asl-1.8.8.jar:/usr/local/hadoop/share/hadoop/common/lib/commons-net-3.1.jar:/usr/local/hadoop/share/hadoop/common/lib/commons-cli-1.2.jar:/usr/local/hadoop/share/hadoop/common/lib/commons-codec-1.4.jar:/usr/local/hadoop/share/hadoop/common/lib/junit-4.8.2.jar:/usr/local/hadoop/share/hadoop/common/lib/paranamer-2.3.jar:/usr/local/hadoop/share/hadoop/common/lib/jasper-compiler-5.5.23.jar:/usr/local/hadoop/share/hadoop/common/lib/protobuf-java-2.5.0.jar:/usr/local/hadoop/share/hadoop/common/lib/commons-digester-1.8.jar:/usr/local/hadoop/share/hadoop/common/lib/jets3t-0.9.0.jar:/usr/local/hadoop/share/hadoop/common/lib/jackson-xc-1.8.8.jar:/usr/local/hadoop/share/hadoop/common/lib/jersey-server-1.9.jar:/usr/local/hadoop/share/hadoop/common/lib/jetty-util-6.1.26.jar:/usr/local/hadoop/share/hadoop/common/lib/mockito-all-1.8.5.jar:/usr/local/hadoop/share/hadoop/common/lib/xmlenc-0.52.jar:/usr/local/hadoop/share/hadoop/common/lib/slf4j-api-1.7.5.jar:/usr/local/hadoop/share/hadoop/common/lib/stax-api-1.0-2.jar:/usr/local/hadoop/share/hadoop/common/lib/avro-1.7.4.jar:/usr/local/hadoop/share/hadoop/common/lib/hadoop-annotations-2.4.0.jar:/usr/local/hadoop/share/hadoop/common/lib/hadoop-auth-2.4.0.jar:/usr/local/hadoop/share/hadoop/common/lib/netty-3.6.2.Final.jar:/usr/local/hadoop/share/hadoop/common/lib/jasper-runtime-5.5.23.jar:/usr/local/hadoop/share/hadoop/common/lib/slf4j-log4j12-1.7.5.jar:/usr/local/hadoop/share/hadoop/common/lib/commons-logging-1.1.3.jar:/usr/local/hadoop/share/hadoop/common/lib/guava-11.0.2.jar:/usr/local/hadoop/share/hadoop/common/lib/jackson-mapper-asl-1.8.8.jar:/usr/local/hadoop/share/hadoop/common/lib/jsp-api-2.1.jar:/usr/local/hadoop/share/hadoop/common/lib/commons-configuration-1.6.jar:/usr/local/hadoop/share/hadoop/common/lib/snappy-java-1.0.4.1.jar:/usr/local/hadoop/share/hadoop/common/lib/log4j-1.2.17.jar:/usr/local/hadoop/share/hadoop/common/lib/jsch-0.1.42.jar:/usr/local/hadoop/share/hadoop/common/lib/commons-io-2.4.jar:/usr/local/hadoop/share/hadoop/common/lib/commons-beanutils-core-1.8.0.jar:/usr/local/hadoop/share/hadoop/common/lib/jsr305-1.3.9.jar:/usr/local/hadoop/share/hadoop/common/lib/jersey-core-1.9.jar:/usr/local/hadoop/share/hadoop/common/lib/asm-3.2.jar:/usr/local/hadoop/share/hadoop/common/lib/servlet-api-2.5.jar:/usr/local/hadoop/share/hadoop/common/lib/jaxb-api-2.2.2.jar:/usr/local/hadoop/share/hadoop/common/lib/jetty-6.1.26.jar:/usr/local/hadoop/share/hadoop/common/lib/commons-lang-2.6.jar:/usr/local/hadoop/share/hadoop/common/lib/java-xmlbuilder-0.4.jar:/usr/local/hadoop/share/hadoop/common/lib/commons-collections-3.2.1.jar:/usr/local/hadoop/share/hadoop/common/lib/commons-el-1.0.jar:/usr/local/hadoop/share/hadoop/common/lib/zookeeper-3.4.5.jar:/usr/local/hadoop/share/hadoop/common/lib/commons-httpclient-3.1.jar:/usr/local/hadoop/share/hadoop/common/lib/xz-1.0.jar:/usr/local
/hadoop/share/hadoop/common/lib/commons-math3-3.1.1.jar:/usr/local/hadoop/share/hadoop/common/hadoop-common-2.4.0-tests.jar:/usr/local/hadoop/share/hadoop/common/hadoop-nfs-2.4.0.jar:/usr/local/hadoop/share/hadoop/common/hadoop-common-2.4.0.jar:/usr/local/hadoop/share/hadoop/hdfs:/usr/local/hadoop/share/hadoop/hdfs/lib/jackson-core-asl-1.8.8.jar:/usr/local/hadoop/share/hadoop/hdfs/lib/commons-cli-1.2.jar:/usr/local/hadoop/share/hadoop/hdfs/lib/commons-codec-1.4.jar:/usr/local/hadoop/share/hadoop/hdfs/lib/protobuf-java-2.5.0.jar:/usr/local/hadoop/share/hadoop/hdfs/lib/jersey-server-1.9.jar:/usr/local/hadoop/share/hadoop/hdfs/lib/jetty-util-6.1.26.jar:/usr/local/hadoop/share/hadoop/hdfs/lib/xmlenc-0.52.jar:/usr/local/hadoop/share/hadoop/hdfs/lib/commons-daemon-1.0.13.jar:/usr/local/hadoop/share/hadoop/hdfs/lib/netty-3.6.2.Final.jar:/usr/local/hadoop/share/hadoop/hdfs/lib/jasper-runtime-5.5.23.jar:/usr/local/hadoop/share/hadoop/hdfs/lib/commons-logging-1.1.3.jar:/usr/local/hadoop/share/hadoop/hdfs/lib/guava-11.0.2.jar:/usr/local/hadoop/share/hadoop/hdfs/lib/jackson-mapper-asl-1.8.8.jar:/usr/local/hadoop/share/hadoop/hdfs/lib/jsp-api-2.1.jar:/usr/local/hadoop/share/hadoop/hdfs/lib/log4j-1.2.17.jar:/usr/local/hadoop/share/hadoop/hdfs/lib/commons-io-2.4.jar:/usr/local/hadoop/share/hadoop/hdfs/lib/jsr305-1.3.9.jar:/usr/local/hadoop/share/hadoop/hdfs/lib/jersey-core-1.9.jar:/usr/local/hadoop/share/hadoop/hdfs/lib/asm-3.2.jar:/usr/local/hadoop/share/hadoop/hdfs/lib/servlet-api-2.5.jar:/usr/local/hadoop/share/hadoop/hdfs/lib/jetty-6.1.26.jar:/usr/local/hadoop/share/hadoop/hdfs/lib/commons-lang-2.6.jar:/usr/local/hadoop/share/hadoop/hdfs/lib/commons-el-1.0.jar:/usr/local/hadoop/share/hadoop/hdfs/hadoop-hdfs-2.4.0-tests.jar:/usr/local/hadoop/share/hadoop/hdfs/hadoop-hdfs-2.4.0.jar:/usr/local/hadoop/share/hadoop/hdfs/hadoop-hdfs-nfs-2.4.0.jar:/usr/local/hadoop/share/hadoop/yarn/lib/jackson-jaxrs-1.8.8.jar:/usr/local/hadoop/share/hadoop/yarn/lib/jersey-json-1.9.jar:/usr/local/hadoop/share/hadoop/yarn/lib/commons-compress-1.4.1.jar:/usr/local/hadoop/share/hadoop/yarn/lib/jersey-client-1.9.jar:/usr/local/hadoop/share/hadoop/yarn/lib/guice-servlet-3.0.jar:/usr/local/hadoop/share/hadoop/yarn/lib/giraph-examples-1.1.0-for-hadoop-2.4.0-jar-with-dependencies.jar:/usr/local/hadoop/share/hadoop/yarn/lib/hadoop-mapreduce-examples-2.4.0.jar:/usr/local/hadoop/share/hadoop/yarn/lib/jettison-1.1.jar:/usr/local/hadoop/share/hadoop/yarn/lib/jaxb-impl-2.2.3-1.jar:/usr/local/hadoop/share/hadoop/yarn/lib/activation-1.1.jar:/usr/local/hadoop/share/hadoop/yarn/lib/jackson-core-asl-1.8.8.jar:/usr/local/hadoop/share/hadoop/yarn/lib/commons-cli-1.2.jar:/usr/local/hadoop/share/hadoop/yarn/lib/commons-codec-1.4.jar:/usr/local/hadoop/share/hadoop/yarn/lib/aopalliance-1.0.jar:/usr/local/hadoop/share/hadoop/yarn/lib/leveldbjni-all-1.8.jar:/usr/local/hadoop/share/hadoop/yarn/lib/protobuf-java-2.5.0.jar:/usr/local/hadoop/share/hadoop/yarn/lib/jackson-xc-1.8.8.jar:/usr/local/hadoop/share/hadoop/yarn/lib/jersey-server-1.9.jar:/usr/local/hadoop/share/hadoop/yarn/lib/jetty-util-6.1.26.jar:/usr/local/hadoop/share/hadoop/yarn/lib/jline-0.9.94.jar:/usr/local/hadoop/share/hadoop/yarn/lib/stax-api-1.0-2.jar:/usr/local/hadoop/share/hadoop/yarn/lib/guice-3.0.jar:/usr/local/hadoop/share/hadoop/yarn/lib/commons-logging-1.1.3.jar:/usr/local/hadoop/share/hadoop/yarn/lib/guava-11.0.2.jar:/usr/local/hadoop/share/hadoop/yarn/lib/jackson-mapper-asl-1.8.8.jar:/usr/local/hadoop/share/hadoop/yarn/lib/log4j-1.2.17.jar:/usr/local/hadoop/share/hadoop/yarn/lib
/commons-io-2.4.jar:/usr/local/hadoop/share/hadoop/yarn/lib/jsr305-1.3.9.jar:/usr/local/hadoop/share/hadoop/yarn/lib/jersey-core-1.9.jar:/usr/local/hadoop/share/hadoop/yarn/lib/asm-3.2.jar:/usr/local/hadoop/share/hadoop/yarn/lib/servlet-api-2.5.jar:/usr/local/hadoop/share/hadoop/yarn/lib/jaxb-api-2.2.2.jar:/usr/local/hadoop/share/hadoop/yarn/lib/jetty-6.1.26.jar:/usr/local/hadoop/share/hadoop/yarn/lib/commons-lang-2.6.jar:/usr/local/hadoop/share/hadoop/yarn/lib/commons-collections-3.2.1.jar:/usr/local/hadoop/share/hadoop/yarn/lib/jersey-guice-1.9.jar:/usr/local/hadoop/share/hadoop/yarn/lib/zookeeper-3.4.5.jar:/usr/local/hadoop/share/hadoop/yarn/lib/commons-httpclient-3.1.jar:/usr/local/hadoop/share/hadoop/yarn/lib/javax.inject-1.jar:/usr/local/hadoop/share/hadoop/yarn/lib/xz-1.0.jar:/usr/local/hadoop/share/hadoop/yarn/hadoop-yarn-server-common-2.4.0.jar:/usr/local/hadoop/share/hadoop/yarn/hadoop-yarn-client-2.4.0.jar:/usr/local/hadoop/share/hadoop/yarn/hadoop-yarn-api-2.4.0.jar:/usr/local/hadoop/share/hadoop/yarn/hadoop-yarn-applications-unmanaged-am-launcher-2.4.0.jar:/usr/local/hadoop/share/hadoop/yarn/hadoop-yarn-server-nodemanager-2.4.0.jar:/usr/local/hadoop/share/hadoop/yarn/hadoop-yarn-server-tests-2.4.0.jar:/usr/local/hadoop/share/hadoop/yarn/hadoop-yarn-server-resourcemanager-2.4.0.jar:/usr/local/hadoop/share/hadoop/yarn/hadoop-yarn-server-web-proxy-2.4.0.jar:/usr/local/hadoop/share/hadoop/yarn/hadoop-yarn-applications-distributedshell-2.4.0.jar:/usr/local/hadoop/share/hadoop/yarn/hadoop-yarn-server-applicationhistoryservice-2.4.0.jar:/usr/local/hadoop/share/hadoop/yarn/hadoop-yarn-common-2.4.0.jar:/usr/local/hadoop/share/hadoop/mapreduce/lib/commons-compress-1.4.1.jar:/usr/local/hadoop/share/hadoop/mapreduce/lib/guice-servlet-3.0.jar:/usr/local/hadoop/share/hadoop/mapreduce/lib/jackson-core-asl-1.8.8.jar:/usr/local/hadoop/share/hadoop/mapreduce/lib/hamcrest-core-1.1.jar:/usr/local/hadoop/share/hadoop/mapreduce/lib/paranamer-2.3.jar:/usr/local/hadoop/share/hadoop/mapreduce/lib/aopalliance-1.0.jar:/usr/local/hadoop/share/hadoop/mapreduce/lib/protobuf-java-2.5.0.jar:/usr/local/hadoop/share/hadoop/mapreduce/lib/jersey-server-1.9.jar:/usr/local/hadoop/share/hadoop/mapreduce/lib/avro-1.7.4.jar:/usr/local/hadoop/share/hadoop/mapreduce/lib/hadoop-annotations-2.4.0.jar:/usr/local/hadoop/share/hadoop/mapreduce/lib/guice-3.0.jar:/usr/local/hadoop/share/hadoop/mapreduce/lib/junit-4.10.jar:/usr/local/hadoop/share/hadoop/mapreduce/lib/netty-3.6.2.Final.jar:/usr/local/hadoop/share/hadoop/mapreduce/lib/jackson-mapper-asl-1.8.8.jar:/usr/local/hadoop/share/hadoop/mapreduce/lib/snappy-java-1.0.4.1.jar:/usr/local/hadoop/share/hadoop/mapreduce/lib/log4j-1.2.17.jar:/usr/local/hadoop/share/hadoop/mapreduce/lib/commons-io-2.4.jar:/usr/local/hadoop/share/hadoop/mapreduce/lib/jersey-core-1.9.jar:/usr/local/hadoop/share/hadoop/mapreduce/lib/asm-3.2.jar:/usr/local/hadoop/share/hadoop/mapreduce/lib/jersey-guice-1.9.jar:/usr/local/hadoop/share/hadoop/mapreduce/lib/javax.inject-1.jar:/usr/local/hadoop/share/hadoop/mapreduce/lib/xz-1.0.jar:/usr/local/hadoop/share/hadoop/mapreduce/giraph-examples-1.1.0-for-hadoop-2.4.0-jar-with-dependencies.jar:/usr/local/hadoop/share/hadoop/mapreduce/hadoop-mapreduce-examples-2.4.0.jar:/usr/local/hadoop/share/hadoop/mapreduce/procesamiento_salida_grafo_caminos_navegacionales-0.0.1.jar:/usr/local/hadoop/share/hadoop/mapreduce/zookeeper-3.4.6.jar:/usr/local/hadoop/share/hadoop/mapreduce/hadoop-mapreduce-client-shuffle-2.4.0.jar:/usr/local/hadoop/share/hadoop/mapreduce/procesamiento
_grafo_wikiquote-0.0.1.jar:/usr/local/hadoop/share/hadoop/mapreduce/hadoop-mapreduce-client-app-2.4.0.jar:/usr/local/hadoop/share/hadoop/mapreduce/hadoop-mapreduce-client-common-2.4.0.jar:/usr/local/hadoop/share/hadoop/mapreduce/hadoop-mapreduce-client-core-2.4.0.jar:/usr/local/hadoop/share/hadoop/mapreduce/generacion_grafo_wikiquote-0.0.1-SNAPSHOT.jar:/usr/local/hadoop/share/hadoop/mapreduce/hadoop-mapreduce-client-hs-plugins-2.4.0.jar:/usr/local/hadoop/share/hadoop/mapreduce/hadoop-mapreduce-client-jobclient-2.4.0.jar:/usr/local/hadoop/share/hadoop/mapreduce/hadoop-mapreduce-client-hs-2.4.0.jar:/usr/local/hadoop/share/hadoop/mapreduce/hadoop-mapreduce-client-jobclient-2.4.0-tests.jar:/usr/local/hadoop/contrib/capacity-scheduler/*.jar:/usr/local/hadoop/contrib/capacity-scheduler/*.jar:/usr/local/hadoop/contrib/capacity-scheduler/*.jar
> STARTUP_MSG:   build = http://svn.apache.org/repos/asf/hadoop/common -r 1583262; compiled by 'jenkins' on 2014-03-31T08:29Z
> STARTUP_MSG:   java = 1.7.0_79
> ************************************************************/
> 2016-01-25 21:13:07,015 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: registered UNIX signal handlers for [TERM, HUP, INT]
> 2016-01-25 21:13:07,188 WARN org.apache.hadoop.hdfs.server.common.Util: Path /usr/local/hadoop/dfs/name/data should be specified as a URI in configuration files. Please update hdfs configuration.
> 2016-01-25 21:13:07,356 WARN org.apache.hadoop.util.NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
> 2016-01-25 21:13:07,648 INFO org.apache.hadoop.metrics2.impl.MetricsConfig: loaded properties from hadoop-metrics2.properties
> 2016-01-25 21:13:07,723 INFO org.apache.hadoop.metrics2.impl.MetricsSystemImpl: Scheduled snapshot period at 10 second(s).
> 2016-01-25 21:13:07,723 INFO org.apache.hadoop.metrics2.impl.MetricsSystemImpl: DataNode metrics system started
> 2016-01-25 21:13:07,727 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: Configured hostname is localhost
> 2016-01-25 21:13:07,728 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: Starting DataNode with maxLockedMemory = 0
> 2016-01-25 21:13:07,757 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: Opened streaming server at /0.0.0.0:50010
> 2016-01-25 21:13:07,760 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: Balancing bandwith is 1048576 bytes/s
> 2016-01-25 21:13:07,839 INFO org.mortbay.log: Logging to org.slf4j.impl.Log4jLoggerAdapter(org.mortbay.log) via org.mortbay.log.Slf4jLog
> 2016-01-25 21:13:07,843 INFO org.apache.hadoop.http.HttpRequestLog: Http request log for http.requests.datanode is not defined
> 2016-01-25 21:13:07,853 INFO org.apache.hadoop.http.HttpServer2: Added global filter 'safety' (class=org.apache.hadoop.http.HttpServer2$QuotingInputFilter)
> 2016-01-25 21:13:07,856 INFO org.apache.hadoop.http.HttpServer2: Added filter static_user_filter (class=org.apache.hadoop.http.lib.StaticUserWebFilter$StaticUserFilter) to context datanode
> 2016-01-25 21:13:07,856 INFO org.apache.hadoop.http.HttpServer2: Added filter static_user_filter (class=org.apache.hadoop.http.lib.StaticUserWebFilter$StaticUserFilter) to context logs
> 2016-01-25 21:13:07,856 INFO org.apache.hadoop.http.HttpServer2: Added filter static_user_filter (class=org.apache.hadoop.http.lib.StaticUserWebFilter$StaticUserFilter) to context static
> 2016-01-25 21:13:07,872 INFO org.apache.hadoop.http.HttpServer2: addJerseyResourcePackage: packageName=org.apache.hadoop.hdfs.server.datanode.web.resources;org.apache.hadoop.hdfs.web.resources, pathSpec=/webhdfs/v1/*
> 2016-01-25 21:13:07,875 INFO org.apache.hadoop.http.HttpServer2: Jetty bound to port 50075
> 2016-01-25 21:13:07,875 INFO org.mortbay.log: jetty-6.1.26
> 2016-01-25 21:13:08,137 INFO org.mortbay.log: Started SelectChannelConnector@0.0.0.0:50075
> 2016-01-25 21:13:08,225 INFO org.apache.hadoop.ipc.CallQueueManager: Using callQueue class java.util.concurrent.LinkedBlockingQueue
> 2016-01-25 21:13:08,239 INFO org.apache.hadoop.ipc.Server: Starting Socket Reader #1 for port 50020
> 2016-01-25 21:13:08,288 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: Opened IPC server at /0.0.0.0:50020
> 2016-01-25 21:13:08,300 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: Refresh request received for nameservices: null
> 2016-01-25 21:13:08,316 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: Starting BPOfferServices for nameservices: <default>
> 2016-01-25 21:13:08,321 WARN org.apache.hadoop.hdfs.server.common.Util: Path /usr/local/hadoop/dfs/name/data should be specified as a URI in configuration files. Please update hdfs configuration.
> 2016-01-25 21:13:08,325 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: Block pool <registering> (Datanode Uuid unassigned) service to hdnode01/192.168.0.10:54310 starting to offer service
> 2016-01-25 21:13:08,326 INFO org.apache.hadoop.ipc.Server: IPC Server Responder: starting
> 2016-01-25 21:13:08,326 INFO org.apache.hadoop.ipc.Server: IPC Server listener on 50020: starting
> 2016-01-25 21:13:08,719 INFO org.apache.hadoop.hdfs.server.common.Storage: Data-node version: -55 and name-node layout version: -56
> 2016-01-25 21:13:08,828 INFO org.apache.hadoop.hdfs.server.common.Storage: Lock on /usr/local/hadoop/dfs/name/data/in_use.lock acquired by nodename 10365@jose-ubuntu
> 2016-01-25 21:13:08,833 INFO org.apache.hadoop.hdfs.server.common.Storage: Storage directory /usr/local/hadoop/dfs/name/data is not formatted
> 2016-01-25 21:13:08,833 INFO org.apache.hadoop.hdfs.server.common.Storage: Formatting ...
> 2016-01-25 21:13:09,017 INFO org.apache.hadoop.hdfs.server.common.Storage: Analyzing storage directories for bpid BP-216406264-127.0.0.1-1453767164845
> 2016-01-25 21:13:09,017 INFO org.apache.hadoop.hdfs.server.common.Storage: Locking is disabled
> 2016-01-25 21:13:09,018 INFO org.apache.hadoop.hdfs.server.common.Storage: Storage directory /usr/local/hadoop/dfs/name/data/current/BP-216406264-127.0.0.1-1453767164845 is not formatted.
> 2016-01-25 21:13:09,018 INFO org.apache.hadoop.hdfs.server.common.Storage: Formatting ...
> 2016-01-25 21:13:09,018 INFO org.apache.hadoop.hdfs.server.common.Storage: Formatting block pool BP-216406264-127.0.0.1-1453767164845 directory /usr/local/hadoop/dfs/name/data/current/BP-216406264-127.0.0.1-1453767164845/current
> 2016-01-25 21:13:09,072 INFO org.apache.hadoop.hdfs.server.common.Storage: Restored 0 block files from trash.
> 2016-01-25 21:13:09,198 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: Setting up storage: nsid=1479061672;bpid=BP-216406264-127.0.0.1-1453767164845;lv=-55;nsInfo=lv=-56;cid=CID-8fa0e75b-6942-452a-a4e6-8cd0c24de652;nsid=1479061672;c=0;bpid=BP-216406264-127.0.0.1-1453767164845;dnuuid=null
> 2016-01-25 21:13:09,248 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: Generated and persisted new Datanode UUID 6b4236c8-2183-49ba-84d7-a273298ba37a
> 2016-01-25 21:13:09,268 WARN org.apache.hadoop.hdfs.server.common.Util: Path /usr/local/hadoop/dfs/name/data should be specified as a URI in configuration files. Please update hdfs configuration.
> 2016-01-25 21:13:09,270 INFO org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsDatasetImpl: Added volume - /usr/local/hadoop/dfs/name/data/current, StorageType: DISK
> 2016-01-25 21:13:09,279 INFO org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsDatasetImpl: Registered FSDatasetState MBean
> 2016-01-25 21:13:09,282 INFO org.apache.hadoop.hdfs.server.datanode.DirectoryScanner: Periodic Directory Tree Verification scan starting at 1453784080282 with interval 21600000
> 2016-01-25 21:13:09,283 INFO org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsDatasetImpl: Adding block pool BP-216406264-127.0.0.1-1453767164845
> 2016-01-25 21:13:09,284 INFO org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsDatasetImpl: Scanning block pool BP-216406264-127.0.0.1-1453767164845 on volume /usr/local/hadoop/dfs/name/data/current...
> 2016-01-25 21:13:09,299 INFO org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsDatasetImpl: Time taken to scan block pool BP-216406264-127.0.0.1-1453767164845 on /usr/local/hadoop/dfs/name/data/current: 15ms
> 2016-01-25 21:13:09,300 INFO org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsDatasetImpl: Total time to scan all replicas for block pool BP-216406264-127.0.0.1-1453767164845: 17ms
> 2016-01-25 21:13:09,301 INFO org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsDatasetImpl: Adding replicas to map for block pool BP-216406264-127.0.0.1-1453767164845 on volume /usr/local/hadoop/dfs/name/data/current...
> 2016-01-25 21:13:09,301 INFO org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsDatasetImpl: Time to add replicas to map for block pool BP-216406264-127.0.0.1-1453767164845 on volume /usr/local/hadoop/dfs/name/data/current: 0ms
> 2016-01-25 21:13:09,301 INFO org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsDatasetImpl: Total time to add all replicas to map: 1ms
> 2016-01-25 21:13:09,305 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: Block pool BP-216406264-127.0.0.1-1453767164845 (Datanode Uuid null) service to hdnode01/192.168.0.10:54310 beginning handshake with NN
> 2016-01-25 21:13:09,355 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: Block pool Block pool BP-216406264-127.0.0.1-1453767164845 (Datanode Uuid null) service to hdnode01/192.168.0.10:54310 successfully registered with NN
> 2016-01-25 21:13:09,356 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: For namenode hdnode01/192.168.0.10:54310 using DELETEREPORT_INTERVAL of 300000 msec  BLOCKREPORT_INTERVAL of 21600000msec CACHEREPORT_INTERVAL of 10000msec Initial delay: 0msec; heartBeatInterval=3000
> 2016-01-25 21:13:09,444 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: Namenode Block pool BP-216406264-127.0.0.1-1453767164845 (Datanode Uuid 6b4236c8-2183-49ba-84d7-a273298ba37a) service to hdnode01/192.168.0.10:54310 trying to claim ACTIVE state with txid=1
> 2016-01-25 21:13:09,444 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: Acknowledging ACTIVE Namenode Block pool BP-216406264-127.0.0.1-1453767164845 (Datanode Uuid 6b4236c8-2183-49ba-84d7-a273298ba37a) service to hdnode01/192.168.0.10:54310
> 2016-01-25 21:13:09,487 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: Sent 1 blockreports 0 blocks total. Took 1 msec to generate and 42 msecs for RPC and NN processing.  Got back commands none
> 2016-01-25 21:13:09,492 INFO org.apache.hadoop.util.GSet: Computing capacity for map BlockMap
> 2016-01-25 21:13:09,493 INFO org.apache.hadoop.util.GSet: VM type       = 64-bit
> 2016-01-25 21:13:09,494 INFO org.apache.hadoop.util.GSet: 0.5% max memory 1.8 GB = 9.1 MB
> 2016-01-25 21:13:09,494 INFO org.apache.hadoop.util.GSet: capacity      = 2^20 = 1048576 entries
> 2016-01-25 21:13:09,495 INFO org.apache.hadoop.hdfs.server.datanode.BlockPoolSliceScanner: Periodic Block Verification Scanner initialized with interval 504 hours for block pool BP-216406264-127.0.0.1-1453767164845
> 2016-01-25 21:13:09,499 INFO org.apache.hadoop.hdfs.server.datanode.DataBlockScanner: Added bpid=BP-216406264-127.0.0.1-1453767164845 to blockPoolScannerMap, new size=1
> 2016-01-25 21:13:32,355 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: Receiving BP-216406264-127.0.0.1-1453767164845:blk_1073741825_1001 src: /192.168.0.10:58649 dest: /192.168.0.10:50010
> 2016-01-25 21:13:32,482 INFO org.apache.hadoop.hdfs.server.datanode.DataNode.clienttrace: src: /192.168.0.10:58649, dest: /192.168.0.10:50010, bytes: 343691, op: HDFS_WRITE, cliID: DFSClient_NONMAPREDUCE_538002429_1, offset: 0, srvID: 6b4236c8-2183-49ba-84d7-a273298ba37a, blockid: BP-216406264-127.0.0.1-1453767164845:blk_1073741825_1001, duration: 98632367
> 2016-01-25 21:13:32,482 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: PacketResponder: BP-216406264-127.0.0.1-1453767164845:blk_1073741825_1001, type=LAST_IN_PIPELINE, downstreams=0:[] terminating
> 2016-01-25 21:13:34,291 INFO org.apache.hadoop.hdfs.server.datanode.BlockPoolSliceScanner: Verification succeeded for BP-216406264-127.0.0.1-1453767164845:blk_1073741825_1001
> 2016-01-25 21:14:10,176 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: Receiving BP-216406264-127.0.0.1-1453767164845:blk_1073741826_1002 src: /192.168.0.10:58663 dest: /192.168.0.10:50010
> 2016-01-25 21:14:10,220 INFO org.apache.hadoop.hdfs.server.datanode.DataNode.clienttrace: src: /192.168.0.10:58663, dest: /192.168.0.10:50010, bytes: 270263, op: HDFS_WRITE, cliID: DFSClient_NONMAPREDUCE_342504113_1, offset: 0, srvID: 6b4236c8-2183-49ba-84d7-a273298ba37a, blockid: BP-216406264-127.0.0.1-1453767164845:blk_1073741826_1002, duration: 42378742
> 2016-01-25 21:14:10,221 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: PacketResponder: BP-216406264-127.0.0.1-1453767164845:blk_1073741826_1002, type=LAST_IN_PIPELINE, downstreams=0:[] terminating
> 2016-01-25 21:14:10,714 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: Receiving BP-216406264-127.0.0.1-1453767164845:blk_1073741827_1003 src: /192.168.0.10:58664 dest: /192.168.0.10:50010
> 2016-01-25 21:14:10,721 INFO org.apache.hadoop.hdfs.server.datanode.DataNode.clienttrace: src: /192.168.0.10:58664, dest: /192.168.0.10:50010, bytes: 121, op: HDFS_WRITE, cliID: DFSClient_NONMAPREDUCE_342504113_1, offset: 0, srvID: 6b4236c8-2183-49ba-84d7-a273298ba37a, blockid: BP-216406264-127.0.0.1-1453767164845:blk_1073741827_1003, duration: 2656758
> 2016-01-25 21:14:10,721 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: PacketResponder: BP-216406264-127.0.0.1-1453767164845:blk_1073741827_1003, type=LAST_IN_PIPELINE, downstreams=0:[] terminating
> 2016-01-25 21:14:10,853 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: Receiving BP-216406264-127.0.0.1-1453767164845:blk_1073741828_1004 src: /192.168.0.10:58665 dest: /192.168.0.10:50010
> 2016-01-25 21:14:10,860 INFO org.apache.hadoop.hdfs.server.datanode.DataNode.clienttrace: src: /192.168.0.10:58665, dest: /192.168.0.10:50010, bytes: 26, op: HDFS_WRITE, cliID: DFSClient_NONMAPREDUCE_342504113_1, offset: 0, srvID: 6b4236c8-2183-49ba-84d7-a273298ba37a, blockid: BP-216406264-127.0.0.1-1453767164845:blk_1073741828_1004, duration: 3257396
> 2016-01-25 21:14:10,861 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: PacketResponder: BP-216406264-127.0.0.1-1453767164845:blk_1073741828_1004, type=LAST_IN_PIPELINE, downstreams=0:[] terminating
> 2016-01-25 21:14:11,717 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: Receiving BP-216406264-127.0.0.1-1453767164845:blk_1073741829_1005 src: /192.168.0.10:58666 dest: /192.168.0.10:50010
> 2016-01-25 21:14:11,726 INFO org.apache.hadoop.hdfs.server.datanode.DataNode.clienttrace: src: /192.168.0.10:58666, dest: /192.168.0.10:50010, bytes: 77957, op: HDFS_WRITE, cliID: DFSClient_NONMAPREDUCE_342504113_1, offset: 0, srvID: 6b4236c8-2183-49ba-84d7-a273298ba37a, blockid: BP-216406264-127.0.0.1-1453767164845:blk_1073741829_1005, duration: 6180229
> 2016-01-25 21:14:11,727 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: PacketResponder: BP-216406264-127.0.0.1-1453767164845:blk_1073741829_1005, type=LAST_IN_PIPELINE, downstreams=0:[] terminating
> 2016-01-25 21:14:14,298 INFO org.apache.hadoop.hdfs.server.datanode.BlockPoolSliceScanner: Verification succeeded for BP-216406264-127.0.0.1-1453767164845:blk_1073741829_1005
> 2016-01-25 21:14:14,299 INFO org.apache.hadoop.hdfs.server.datanode.BlockPoolSliceScanner: Verification succeeded for BP-216406264-127.0.0.1-1453767164845:blk_1073741827_1003
> 2016-01-25 21:14:14,305 INFO org.apache.hadoop.hdfs.server.datanode.BlockPoolSliceScanner: Verification succeeded for BP-216406264-127.0.0.1-1453767164845:blk_1073741826_1002
> 2016-01-25 21:14:14,305 INFO org.apache.hadoop.hdfs.server.datanode.BlockPoolSliceScanner: Verification succeeded for BP-216406264-127.0.0.1-1453767164845:blk_1073741828_1004
> 2016-01-25 21:14:16,099 INFO org.apache.hadoop.hdfs.server.datanode.DataNode.clienttrace: src: /192.168.0.10:50010, dest: /192.168.0.10:58674, bytes: 272375, op: HDFS_READ, cliID: DFSClient_NONMAPREDUCE_76231625_102, offset: 0, srvID: 6b4236c8-2183-49ba-84d7-a273298ba37a, blockid: BP-216406264-127.0.0.1-1453767164845:blk_1073741826_1002, duration: 2878920
> 2016-01-25 21:14:16,253 INFO org.apache.hadoop.hdfs.server.datanode.DataNode.clienttrace: src: /192.168.0.10:50010, dest: /192.168.0.10:58674, bytes: 30, op: HDFS_READ, cliID: DFSClient_NONMAPREDUCE_76231625_102, offset: 0, srvID: 6b4236c8-2183-49ba-84d7-a273298ba37a, blockid: BP-216406264-127.0.0.1-1453767164845:blk_1073741828_1004, duration: 236423
> 2016-01-25 21:14:16,312 INFO org.apache.hadoop.hdfs.server.datanode.DataNode.clienttrace: src: /192.168.0.10:50010, dest: /192.168.0.10:58674, bytes: 125, op: HDFS_READ, cliID: DFSClient_NONMAPREDUCE_76231625_102, offset: 0, srvID: 6b4236c8-2183-49ba-84d7-a273298ba37a, blockid: BP-216406264-127.0.0.1-1453767164845:blk_1073741827_1003, duration: 909236
> 2016-01-25 21:14:16,364 INFO org.apache.hadoop.hdfs.server.datanode.DataNode.clienttrace: src: /192.168.0.10:50010, dest: /192.168.0.10:58674, bytes: 78569, op: HDFS_READ, cliID: DFSClient_NONMAPREDUCE_76231625_102, offset: 0, srvID: 6b4236c8-2183-49ba-84d7-a273298ba37a, blockid: BP-216406264-127.0.0.1-1453767164845:blk_1073741829_1005, duration: 1489437
> 2016-01-25 21:14:20,174 INFO org.apache.hadoop.hdfs.server.datanode.DataNode.clienttrace: src: /192.168.0.10:50010, dest: /192.168.0.10:58676, bytes: 30, op: HDFS_READ, cliID: DFSClient_NONMAPREDUCE_694066886_1, offset: 0, srvID: 6b4236c8-2183-49ba-84d7-a273298ba37a, blockid: BP-216406264-127.0.0.1-1453767164845:blk_1073741828_1004, duration: 899980
> 2016-01-25 21:14:22,692 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: Receiving BP-216406264-127.0.0.1-1453767164845:blk_1073741830_1006 src: /192.168.0.10:58679 dest: /192.168.0.10:50010
> 2016-01-25 21:14:22,754 INFO org.apache.hadoop.hdfs.server.datanode.DataNode.clienttrace: src: /192.168.0.10:58679, dest: /192.168.0.10:50010, bytes: 92684, op: HDFS_WRITE, cliID: DFSClient_NONMAPREDUCE_694066886_1, offset: 0, srvID: 6b4236c8-2183-49ba-84d7-a273298ba37a, blockid: BP-216406264-127.0.0.1-1453767164845:blk_1073741830_1006, duration: 60114851
> 2016-01-25 21:14:22,754 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: PacketResponder: BP-216406264-127.0.0.1-1453767164845:blk_1073741830_1006, type=LAST_IN_PIPELINE, downstreams=0:[] terminating
> 2016-01-25 21:14:24,319 INFO org.apache.hadoop.hdfs.server.datanode.BlockPoolSliceScanner: Verification succeeded for BP-216406264-127.0.0.1-1453767164845:blk_1073741830_1006
> 2016-01-25 21:14:25,808 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: Receiving BP-216406264-127.0.0.1-1453767164845:blk_1073741831_1007 src: /192.168.0.10:58681 dest: /192.168.0.10:50010
> 2016-01-25 21:14:35,846 INFO org.apache.hadoop.hdfs.server.datanode.DataNode.clienttrace: src: /192.168.0.10:58681, dest: /192.168.0.10:50010, bytes: 21176, op: HDFS_WRITE, cliID: DFSClient_NONMAPREDUCE_694066886_1, offset: 0, srvID: 6b4236c8-2183-49ba-84d7-a273298ba37a, blockid: BP-216406264-127.0.0.1-1453767164845:blk_1073741831_1007, duration: 9975409048
> 2016-01-25 21:14:35,846 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: PacketResponder: BP-216406264-127.0.0.1-1453767164845:blk_1073741831_1007, type=LAST_IN_PIPELINE, downstreams=0:[] terminating
> 2016-01-25 21:14:36,066 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: Receiving BP-216406264-127.0.0.1-1453767164845:blk_1073741832_1008 src: /192.168.0.10:58682 dest: /192.168.0.10:50010
> 2016-01-25 21:14:36,075 INFO org.apache.hadoop.hdfs.server.datanode.DataNode.clienttrace: src: /192.168.0.10:58682, dest: /192.168.0.10:50010, bytes: 332, op: HDFS_WRITE, cliID: DFSClient_NONMAPREDUCE_694066886_1, offset: 0, srvID: 6b4236c8-2183-49ba-84d7-a273298ba37a, blockid: BP-216406264-127.0.0.1-1453767164845:blk_1073741832_1008, duration: 4992595
> 2016-01-25 21:14:36,075 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: PacketResponder: BP-216406264-127.0.0.1-1453767164845:blk_1073741832_1008, type=LAST_IN_PIPELINE, downstreams=0:[] terminating
> 2016-01-25 21:14:36,548 INFO org.apache.hadoop.hdfs.server.datanode.DataNode.clienttrace: src: /192.168.0.10:50010, dest: /192.168.0.10:58683, bytes: 21344, op: HDFS_READ, cliID: DFSClient_NONMAPREDUCE_694066886_1, offset: 0, srvID: 6b4236c8-2183-49ba-84d7-a273298ba37a, blockid: BP-216406264-127.0.0.1-1453767164845:blk_1073741831_1007, duration: 497225
> 2016-01-25 21:14:36,564 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: Receiving BP-216406264-127.0.0.1-1453767164845:blk_1073741833_1009 src: /192.168.0.10:58684 dest: /192.168.0.10:50010
> 2016-01-25 21:14:36,572 INFO org.apache.hadoop.hdfs.server.datanode.DataNode.clienttrace: src: /192.168.0.10:58684, dest: /192.168.0.10:50010, bytes: 21176, op: HDFS_WRITE, cliID: DFSClient_NONMAPREDUCE_694066886_1, offset: 0, srvID: 6b4236c8-2183-49ba-84d7-a273298ba37a, blockid: BP-216406264-127.0.0.1-1453767164845:blk_1073741833_1009, duration: 2649337
> 2016-01-25 21:14:36,573 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: PacketResponder: BP-216406264-127.0.0.1-1453767164845:blk_1073741833_1009, type=LAST_IN_PIPELINE, downstreams=0:[] terminating
> 2016-01-25 21:14:36,622 INFO org.apache.hadoop.hdfs.server.datanode.DataNode.clienttrace: src: /192.168.0.10:50010, dest: /192.168.0.10:58683, bytes: 93412, op: HDFS_READ, cliID: DFSClient_NONMAPREDUCE_694066886_1, offset: 0, srvID: 6b4236c8-2183-49ba-84d7-a273298ba37a, blockid: BP-216406264-127.0.0.1-1453767164845:blk_1073741830_1006, duration: 379439
> 2016-01-25 21:14:36,638 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: Receiving BP-216406264-127.0.0.1-1453767164845:blk_1073741834_1010 src: /192.168.0.10:58685 dest: /192.168.0.10:50010
> 2016-01-25 21:14:36,646 INFO org.apache.hadoop.hdfs.server.datanode.DataNode.clienttrace: src: /192.168.0.10:58685, dest: /192.168.0.10:50010, bytes: 92684, op: HDFS_WRITE, cliID: DFSClient_NONMAPREDUCE_694066886_1, offset: 0, srvID: 6b4236c8-2183-49ba-84d7-a273298ba37a, blockid: BP-216406264-127.0.0.1-1453767164845:blk_1073741834_1010, duration: 3135698
> 2016-01-25 21:14:36,646 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: PacketResponder: BP-216406264-127.0.0.1-1453767164845:blk_1073741834_1010, type=LAST_IN_PIPELINE, downstreams=0:[] terminating
> 2016-01-25 21:14:39,335 INFO org.apache.hadoop.hdfs.server.datanode.BlockPoolSliceScanner: Verification succeeded for BP-216406264-127.0.0.1-1453767164845:blk_1073741833_1009
> 2016-01-25 21:14:39,336 INFO org.apache.hadoop.hdfs.server.datanode.BlockPoolSliceScanner: Verification succeeded for BP-216406264-127.0.0.1-1453767164845:blk_1073741831_1007
> 2016-01-25 21:14:39,337 INFO org.apache.hadoop.hdfs.server.datanode.BlockPoolSliceScanner: Verification succeeded for BP-216406264-127.0.0.1-1453767164845:blk_1073741832_1008
> 2016-01-25 21:14:39,338 INFO org.apache.hadoop.hdfs.server.datanode.BlockPoolSliceScanner: Verification succeeded for BP-216406264-127.0.0.1-1453767164845:blk_1073741834_1010
> 2016-01-25 21:14:39,376 INFO org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsDatasetAsyncDiskService: Scheduling blk_1073741826_1002 file /usr/local/hadoop/dfs/name/data/current/BP-216406264-127.0.0.1-1453767164845/current/finalized/blk_1073741826 for deletion
> 2016-01-25 21:14:39,379 INFO org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsDatasetAsyncDiskService: Scheduling blk_1073741827_1003 file /usr/local/hadoop/dfs/name/data/current/BP-216406264-127.0.0.1-1453767164845/current/finalized/blk_1073741827 for deletion
> 2016-01-25 21:14:39,379 INFO org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsDatasetAsyncDiskService: Deleted BP-216406264-127.0.0.1-1453767164845 blk_1073741826_1002 file /usr/local/hadoop/dfs/name/data/current/BP-216406264-127.0.0.1-1453767164845/current/finalized/blk_1073741826
> 2016-01-25 21:14:39,379 INFO org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsDatasetAsyncDiskService: Scheduling blk_1073741828_1004 file /usr/local/hadoop/dfs/name/data/current/BP-216406264-127.0.0.1-1453767164845/current/finalized/blk_1073741828 for deletion
> 2016-01-25 21:14:39,380 INFO org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsDatasetAsyncDiskService: Deleted BP-216406264-127.0.0.1-1453767164845 blk_1073741827_1003 file /usr/local/hadoop/dfs/name/data/current/BP-216406264-127.0.0.1-1453767164845/current/finalized/blk_1073741827
> 2016-01-25 21:14:39,380 INFO org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsDatasetAsyncDiskService: Scheduling blk_1073741829_1005 file /usr/local/hadoop/dfs/name/data/current/BP-216406264-127.0.0.1-1453767164845/current/finalized/blk_1073741829 for deletion
> 2016-01-25 21:14:39,380 INFO org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsDatasetAsyncDiskService: Deleted BP-216406264-127.0.0.1-1453767164845 blk_1073741828_1004 file /usr/local/hadoop/dfs/name/data/current/BP-216406264-127.0.0.1-1453767164845/current/finalized/blk_1073741828
> 2016-01-25 21:14:39,380 INFO org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsDatasetAsyncDiskService: Scheduling blk_1073741830_1006 file /usr/local/hadoop/dfs/name/data/current/BP-216406264-127.0.0.1-1453767164845/current/finalized/blk_1073741830 for deletion
> 2016-01-25 21:14:39,381 INFO org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsDatasetAsyncDiskService: Deleted BP-216406264-127.0.0.1-1453767164845 blk_1073741829_1005 file /usr/local/hadoop/dfs/name/data/current/BP-216406264-127.0.0.1-1453767164845/current/finalized/blk_1073741829
> 2016-01-25 21:14:39,381 INFO org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsDatasetAsyncDiskService: Scheduling blk_1073741831_1007 file /usr/local/hadoop/dfs/name/data/current/BP-216406264-127.0.0.1-1453767164845/current/finalized/blk_1073741831 for deletion
> 2016-01-25 21:14:39,382 INFO org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsDatasetAsyncDiskService: Deleted BP-216406264-127.0.0.1-1453767164845 blk_1073741830_1006 file /usr/local/hadoop/dfs/name/data/current/BP-216406264-127.0.0.1-1453767164845/current/finalized/blk_1073741830
> 2016-01-25 21:14:39,382 INFO org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsDatasetAsyncDiskService: Deleted BP-216406264-127.0.0.1-1453767164845 blk_1073741831_1007 file /usr/local/hadoop/dfs/name/data/current/BP-216406264-127.0.0.1-1453767164845/current/finalized/blk_1073741831
> 2016-01-25 21:14:44,797 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: Receiving BP-216406264-127.0.0.1-1453767164845:blk_1073741835_1011 src: /192.168.0.10:58688 dest: /192.168.0.10:50010
> 2016-01-25 21:14:44,834 INFO org.apache.hadoop.hdfs.server.datanode.DataNode.clienttrace: src: /192.168.0.10:58688, dest: /192.168.0.10:50010, bytes: 57450, op: HDFS_WRITE, cliID: DFSClient_NONMAPREDUCE_-433405715_88, offset: 0, srvID: 6b4236c8-2183-49ba-84d7-a273298ba37a, blockid: BP-216406264-127.0.0.1-1453767164845:blk_1073741835_1011, duration: 34522284
> 2016-01-25 21:14:44,834 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: PacketResponder: BP-216406264-127.0.0.1-1453767164845:blk_1073741835_1011, type=LAST_IN_PIPELINE, downstreams=0:[] terminating
> 2016-01-25 21:14:49,343 INFO org.apache.hadoop.hdfs.server.datanode.BlockPoolSliceScanner: Verification succeeded for BP-216406264-127.0.0.1-1453767164845:blk_1073741835_1011
> 2016-01-25 21:16:33,785 INFO org.apache.hadoop.hdfs.server.datanode.DataNode.clienttrace: src: /192.168.0.10:50010, dest: /192.168.0.10:58694, bytes: 336, op: HDFS_READ, cliID: DFSClient_NONMAPREDUCE_-1832227986_1, offset: 0, srvID: 6b4236c8-2183-49ba-84d7-a273298ba37a, blockid: BP-216406264-127.0.0.1-1453767164845:blk_1073741832_1008, duration: 284719
> 2016-01-25 21:16:36,371 INFO org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsDatasetAsyncDiskService: Scheduling blk_1073741832_1008 file /usr/local/hadoop/dfs/name/data/current/BP-216406264-127.0.0.1-1453767164845/current/finalized/blk_1073741832 for deletion
> 2016-01-25 21:16:36,372 INFO org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsDatasetAsyncDiskService: Deleted BP-216406264-127.0.0.1-1453767164845 blk_1073741832_1008 file /usr/local/hadoop/dfs/name/data/current/BP-216406264-127.0.0.1-1453767164845/current/finalized/blk_1073741832
>
>
>
>
> 2016-01-21 18:52 GMT-03:00 Namikaze Minato <ll...@gmail.com>:
>
>> It could be a classpath issue (see
>> http://stackoverflow.com/a/25090151/4486184), I'm strongly thinking
>> this is the case.
>>
>> You could drill down to the exact root cause by looking at the
>> datanode logs (see
>>
>> http://mail-archives.apache.org/mod_mbox/hadoop-user/201410.mbox/%3CCAEMetGubzq12LXbLRk6N4ejOoKse9dLEWMW8_WE6aRj=+RQtVw@mail.gmail.com%3E
>> )
>> But I'm not sure we would get another error than what we had...
>>
>> Check if your application has the correct values for the following
>> variables:
>> HADOOP_CONF_DIR
>> HADOOP_COMMON_HOME
>> HADOOP_HDFS_HOME
>> HADOOP_MAPRED_HOME
>> HADOOP_YARN_HOME
>>
>> I'm afraid I can't help you much more than this myself, sorry...
>>
>> LLoyd
>>
>> On 20 January 2016 at 02:08, José Luis Larroque <la...@gmail.com>
>> wrote:
>> > Hi guys, thanks for your answers.
>> >
>> > Wordcount logs:
>> >
>> > 16/01/19 21:58:32 INFO client.RMProxy: Connecting to ResourceManager at
>> > hdnode01/192.168.0.10:8050
>> > SLF4J: Class path contains multiple SLF4J bindings.
>> > SLF4J: Found binding in
>> >
>> [jar:file:/usr/local/hadoop/share/hadoop/common/lib/slf4j-log4j12-1.7.5.jar!/org/slf4j/impl/StaticLoggerBinder.class]
>> > SLF4J: Found binding in
>> >
>> [jar:file:/usr/local/hadoop/share/hadoop/yarn/lib/giraph-examples-1.1.0-for-hadoop-2.4.0-jar-with-dependencies.jar!/org/slf4j/impl/StaticLoggerBinder.class]
>> > SLF4J: Found binding in
>> >
>> [jar:file:/usr/local/hadoop/share/hadoop/mapreduce/giraph-examples-1.1.0-for-hadoop-2.4.0-jar-with-dependencies.jar!/org/slf4j/impl/StaticLoggerBinder.class]
>> > SLF4J: See http://www.slf4j.org/codes.html#multiple_bindings for an
>> > explanation.
>> > SLF4J: Actual binding is of type [org.slf4j.impl.Log4jLoggerFactory]
>> > 16/01/19 21:58:32 WARN util.NativeCodeLoader: Unable to load
>> native-hadoop
>> > library for your platform... using builtin-java classes where applicable
>> > hduser@jose-ubuntu:/usr/local/hadoop$ nano
>> >
>> /home/hduser/Desktop/Tesina/casos_de_prueba/resultados/resultado_cluster_modo_yarn_wordcount
>> > hduser@jose-ubuntu:/usr/local/hadoop$ nano
>> >
>> /home/hduser/Desktop/Tesina/casos_de_prueba/resultados/resultado_cluster_modo_yarn_wordcount
>> > hduser@jose-ubuntu:/usr/local/hadoop$ cat
>> >
>> /home/hduser/Desktop/Tesina/casos_de_prueba/resultados/resultado_cluster_modo_yarn_wordcount
>> >
>> >
>> > Container: container_1453244277886_0001_01_000002 on localhost_35711
>> > ======================================================================
>> > LogType: stderr
>> > LogLength: 45
>> > Log Contents:
>> > Error: Could not find or load main class 256
>> >
>> > LogType: stdout
>> > LogLength: 0
>> > Log Contents:
>> >
>> >
>> >
>> > Container: container_1453244277886_0001_01_000003 on localhost_35711
>> > ======================================================================
>> > LogType: stderr
>> > LogLength: 45
>> > Log Contents:
>> > Error: Could not find or load main class 256
>> >
>> > LogType: stdout
>> > LogLength: 0
>> > Log Contents:
>> >
>> >
>> >
>> > Container: container_1453244277886_0001_01_000004 on localhost_35711
>> > ======================================================================
>> > LogType: stderr
>> > LogLength: 45
>> > Log Contents:
>> > Error: Could not find or load main class 256
>> >
>> > LogType: stdout
>> > LogLength: 0
>> > Log Contents:
>> >
>> >
>> >
>> > Container: container_1453244277886_0001_01_000005 on localhost_35711
>> > ======================================================================
>> > LogType: stderr
>> > LogLength: 45
>> > Log Contents:
>> > Error: Could not find or load main class 256
>> >
>> > LogType: stdout
>> > LogLength: 0
>> > Log Contents:
>> >
>> >
>> >
>> > Container: container_1453244277886_0001_01_000001 on localhost_35711
>> > ======================================================================
>> > LogType: stderr
>> > LogLength: 929
>> > Log Contents:
>> > SLF4J: Class path contains multiple SLF4J bindings.
>> > SLF4J: Found binding in
>> >
>> [jar:file:/usr/local/hadoop/share/hadoop/common/lib/slf4j-log4j12-1.7.5.jar!/org/slf4j/impl/StaticLoggerBinder.class]
>> > SLF4J: Found binding in
>> >
>> [jar:file:/usr/local/hadoop/share/hadoop/yarn/lib/giraph-examples-1.1.0-for-hadoop-2.4.0-jar-with-dependencies.jar!/org/slf4j/impl/StaticLoggerBinder.class]
>> > SLF4J: Found binding in
>> >
>> [jar:file:/usr/local/hadoop/share/hadoop/mapreduce/giraph-examples-1.1.0-for-hadoop-2.4.0-jar-with-dependencies.jar!/org/slf4j/impl/StaticLoggerBinder.class]
>> > SLF4J: See http://www.slf4j.org/codes.html#multiple_bindings for an
>> > explanation.
>> > SLF4J: Actual binding is of type [org.slf4j.impl.Log4jLoggerFactory]
>> > log4j:WARN No appenders could be found for logger
>> > (org.apache.hadoop.ipc.Server).
>> > log4j:WARN Please initialize the log4j system properly.
>> > log4j:WARN See http://logging.apache.org/log4j/1.2/faq.html#noconfig
>> for
>> > more info.
>> >
>> > LogType: stdout
>> > LogLength: 0
>> > Log Contents:
>> >
>> > LogType: syslog
>> > LogLength: 56780
>> > Log Contents:
>> > 2016-01-19 20:04:11,329 INFO [main]
>> > org.apache.hadoop.mapreduce.v2.app.MRAppMaster: Created MRAppMaster for
>> > application appattempt_1453244277886_0001_000001
>> > 2016-01-19 20:04:11,657 WARN [main]
>> org.apache.hadoop.conf.Configuration:
>> > job.xml:an attempt to override final parameter:
>> > mapreduce.job.end-notification.max.retry.interval;  Ignoring.
>> > 2016-01-19 20:04:11,674 WARN [main]
>> org.apache.hadoop.conf.Configuration:
>> > job.xml:an attempt to override final parameter:
>> > mapreduce.job.end-notification.max.attempts;  Ignoring.
>> > 2016-01-19 20:04:11,765 WARN [main]
>> org.apache.hadoop.util.NativeCodeLoader:
>> > Unable to load native-hadoop library for your platform... using
>> builtin-java
>> > classes where applicable
>> > 2016-01-19 20:04:11,776 INFO [main]
>> > org.apache.hadoop.mapreduce.v2.app.MRAppMaster: Executing with tokens:
>> > 2016-01-19 20:04:11,776 INFO [main]
>> > org.apache.hadoop.mapreduce.v2.app.MRAppMaster: Kind: YARN_AM_RM_TOKEN,
>> > Service: , Ident:
>> > (org.apache.hadoop.yarn.security.AMRMTokenIdentifier@73e8f4b9)
>> > 2016-01-19 20:04:11,801 INFO [main]
>> > org.apache.hadoop.mapreduce.v2.app.MRAppMaster: The specific max
>> attempts: 2
>> > for application: 1. Attempt num: 1 is last retry: false
>> > 2016-01-19 20:04:11,806 INFO [main]
>> > org.apache.hadoop.mapreduce.v2.app.MRAppMaster: Using mapred
>> > newApiCommitter.
>> > 2016-01-19 20:04:11,934 WARN [main]
>> org.apache.hadoop.conf.Configuration:
>> > job.xml:an attempt to override final parameter: dfs.datanode.data.dir;
>> > Ignoring.
>> > 2016-01-19 20:04:11,939 WARN [main]
>> org.apache.hadoop.conf.Configuration:
>> > job.xml:an attempt to override final parameter:
>> > mapreduce.job.end-notification.max.retry.interval;  Ignoring.
>> > 2016-01-19 20:04:11,948 WARN [main]
>> org.apache.hadoop.conf.Configuration:
>> > job.xml:an attempt to override final parameter: dfs.namenode.name.dir;
>> > Ignoring.
>> > 2016-01-19 20:04:11,953 WARN [main]
>> org.apache.hadoop.conf.Configuration:
>> > job.xml:an attempt to override final parameter:
>> > mapreduce.job.end-notification.max.attempts;  Ignoring.
>> > 2016-01-19 20:04:12,464 INFO [main]
>> > org.apache.hadoop.mapreduce.v2.app.MRAppMaster: OutputCommitter set in
>> > config null
>> > 2016-01-19 20:04:12,526 INFO [main]
>> > org.apache.hadoop.mapreduce.v2.app.MRAppMaster: OutputCommitter is
>> > org.apache.hadoop.mapreduce.lib.output.FileOutputCommitter
>> > 2016-01-19 20:04:12,548 INFO [main]
>> > org.apache.hadoop.yarn.event.AsyncDispatcher: Registering class
>> > org.apache.hadoop.mapreduce.jobhistory.EventType for class
>> > org.apache.hadoop.mapreduce.jobhistory.JobHistoryEventHandler
>> > 2016-01-19 20:04:12,549 INFO [main]
>> > org.apache.hadoop.yarn.event.AsyncDispatcher: Registering class
>> > org.apache.hadoop.mapreduce.v2.app.job.event.JobEventType for class
>> > org.apache.hadoop.mapreduce.v2.app.MRAppMaster$JobEventDispatcher
>> > 2016-01-19 20:04:12,550 INFO [main]
>> > org.apache.hadoop.yarn.event.AsyncDispatcher: Registering class
>> > org.apache.hadoop.mapreduce.v2.app.job.event.TaskEventType for class
>> > org.apache.hadoop.mapreduce.v2.app.MRAppMaster$TaskEventDispatcher
>> > 2016-01-19 20:04:12,551 INFO [main]
>> > org.apache.hadoop.yarn.event.AsyncDispatcher: Registering class
>> > org.apache.hadoop.mapreduce.v2.app.job.event.TaskAttemptEventType for
>> class
>> >
>> org.apache.hadoop.mapreduce.v2.app.MRAppMaster$TaskAttemptEventDispatcher
>> > 2016-01-19 20:04:12,552 INFO [main]
>> > org.apache.hadoop.yarn.event.AsyncDispatcher: Registering class
>> > org.apache.hadoop.mapreduce.v2.app.commit.CommitterEventType for class
>> > org.apache.hadoop.mapreduce.v2.app.commit.CommitterEventHandler
>> > 2016-01-19 20:04:12,557 INFO [main]
>> > org.apache.hadoop.yarn.event.AsyncDispatcher: Registering class
>> > org.apache.hadoop.mapreduce.v2.app.speculate.Speculator$EventType for
>> class
>> > org.apache.hadoop.mapreduce.v2.app.MRAppMaster$SpeculatorEventDispatcher
>> > 2016-01-19 20:04:12,558 INFO [main]
>> > org.apache.hadoop.yarn.event.AsyncDispatcher: Registering class
>> > org.apache.hadoop.mapreduce.v2.app.rm.ContainerAllocator$EventType for
>> class
>> > org.apache.hadoop.mapreduce.v2.app.MRAppMaster$ContainerAllocatorRouter
>> > 2016-01-19 20:04:12,559 INFO [main]
>> > org.apache.hadoop.yarn.event.AsyncDispatcher: Registering class
>> > org.apache.hadoop.mapreduce.v2.app.launcher.ContainerLauncher$EventType
>> for
>> > class
>> org.apache.hadoop.mapreduce.v2.app.MRAppMaster$ContainerLauncherRouter
>> > 2016-01-19 20:04:12,615 INFO [main]
>> > org.apache.hadoop.mapreduce.jobhistory.JobHistoryEventHandler: Perms
>> after
>> > creating 488, Expected: 504
>> > 2016-01-19 20:04:12,615 INFO [main]
>> > org.apache.hadoop.mapreduce.jobhistory.JobHistoryEventHandler:
>> Explicitly
>> > setting permissions to : 504, rwxrwx---
>> > 2016-01-19 20:04:12,731 INFO [main]
>> > org.apache.hadoop.yarn.event.AsyncDispatcher: Registering class
>> > org.apache.hadoop.mapreduce.v2.app.job.event.JobFinishEvent$Type for
>> class
>> > org.apache.hadoop.mapreduce.v2.app.MRAppMaster$JobFinishEventHandler
>> > 2016-01-19 20:04:12,956 INFO [main]
>> > org.apache.hadoop.metrics2.impl.MetricsConfig: loaded properties from
>> > hadoop-metrics2.properties
>> > 2016-01-19 20:04:13,018 INFO [main]
>> > org.apache.hadoop.metrics2.impl.MetricsSystemImpl: Scheduled snapshot
>> period
>> > at 10 second(s).
>> > 2016-01-19 20:04:13,018 INFO [main]
>> > org.apache.hadoop.metrics2.impl.MetricsSystemImpl: MRAppMaster metrics
>> > system started
>> > 2016-01-19 20:04:13,026 INFO [main]
>> > org.apache.hadoop.mapreduce.v2.app.job.impl.JobImpl: Adding job token
>> for
>> > job_1453244277886_0001 to jobTokenSecretManager
>> > 2016-01-19 20:04:13,139 INFO [main]
>> > org.apache.hadoop.mapreduce.v2.app.job.impl.JobImpl: Not uberizing
>> > job_1453244277886_0001 because: not enabled;
>> > 2016-01-19 20:04:13,154 INFO [main]
>> > org.apache.hadoop.mapreduce.v2.app.job.impl.JobImpl: Input size for job
>> > job_1453244277886_0001 = 343691. Number of splits = 1
>> > 2016-01-19 20:04:13,156 INFO [main]
>> > org.apache.hadoop.mapreduce.v2.app.job.impl.JobImpl: Number of reduces
>> for
>> > job job_1453244277886_0001 = 1
>> > 2016-01-19 20:04:13,156 INFO [main]
>> > org.apache.hadoop.mapreduce.v2.app.job.impl.JobImpl:
>> > job_1453244277886_0001Job Transitioned from NEW to INITED
>> > 2016-01-19 20:04:13,157 INFO [main]
>> > org.apache.hadoop.mapreduce.v2.app.MRAppMaster: MRAppMaster launching
>> > normal, non-uberized, multi-container job job_1453244277886_0001.
>> > 2016-01-19 20:04:13,186 INFO [main]
>> org.apache.hadoop.ipc.CallQueueManager:
>> > Using callQueue class java.util.concurrent.LinkedBlockingQueue
>> > 2016-01-19 20:04:13,195 INFO [Socket Reader #1 for port 56461]
>> > org.apache.hadoop.ipc.Server: Starting Socket Reader #1 for port 56461
>> > 2016-01-19 20:04:13,237 INFO [main]
>> > org.apache.hadoop.yarn.factories.impl.pb.RpcServerFactoryPBImpl: Adding
>> > protocol org.apache.hadoop.mapreduce.v2.api.MRClientProtocolPB to the
>> server
>> > 2016-01-19 20:04:13,238 INFO [IPC Server Responder]
>> > org.apache.hadoop.ipc.Server: IPC Server Responder: starting
>> > 2016-01-19 20:04:13,238 INFO [IPC Server listener on 56461]
>> > org.apache.hadoop.ipc.Server: IPC Server listener on 56461: starting
>> > 2016-01-19 20:04:13,239 INFO [main]
>> > org.apache.hadoop.mapreduce.v2.app.client.MRClientService: Instantiated
>> > MRClientService at jose-ubuntu/127.0.0.1:56461
>> > 2016-01-19 20:04:13,300 INFO [main] org.mortbay.log: Logging to
>> > org.slf4j.impl.Log4jLoggerAdapter(org.mortbay.log) via
>> > org.mortbay.log.Slf4jLog
>> > 2016-01-19 20:04:13,304 INFO [main]
>> org.apache.hadoop.http.HttpRequestLog:
>> > Http request log for http.requests.mapreduce is not defined
>> > 2016-01-19 20:04:13,315 INFO [main] org.apache.hadoop.http.HttpServer2:
>> > Added global filter 'safety'
>> > (class=org.apache.hadoop.http.HttpServer2$QuotingInputFilter)
>> > 2016-01-19 20:04:13,320 INFO [main] org.apache.hadoop.http.HttpServer2:
>> > Added filter AM_PROXY_FILTER
>> > (class=org.apache.hadoop.yarn.server.webproxy.amfilter.AmIpFilter) to
>> > context mapreduce
>> > 2016-01-19 20:04:13,321 INFO [main] org.apache.hadoop.http.HttpServer2:
>> > Added filter AM_PROXY_FILTER
>> > (class=org.apache.hadoop.yarn.server.webproxy.amfilter.AmIpFilter) to
>> > context static
>> > 2016-01-19 20:04:13,324 INFO [main] org.apache.hadoop.http.HttpServer2:
>> > adding path spec: /mapreduce/*
>> > 2016-01-19 20:04:13,324 INFO [main] org.apache.hadoop.http.HttpServer2:
>> > adding path spec: /ws/*
>> > 2016-01-19 20:04:13,335 INFO [main] org.apache.hadoop.http.HttpServer2:
>> > Jetty bound to port 44070
>> > 2016-01-19 20:04:13,335 INFO [main] org.mortbay.log: jetty-6.1.26
>> > 2016-01-19 20:04:13,370 INFO [main] org.mortbay.log: Extract
>> >
>> jar:file:/usr/local/hadoop/share/hadoop/yarn/hadoop-yarn-common-2.4.0.jar!/webapps/mapreduce
>> > to /tmp/Jetty_0_0_0_0_44070_mapreduce____rdpvio/webapp
>> > 2016-01-19 20:04:13,647 INFO [main] org.mortbay.log: Started
>> > SelectChannelConnector@0.0.0.0:44070
>> > 2016-01-19 20:04:13,647 INFO [main]
>> org.apache.hadoop.yarn.webapp.WebApps:
>> > Web app /mapreduce started at 44070
>> > 2016-01-19 20:04:13,956 INFO [main]
>> org.apache.hadoop.yarn.webapp.WebApps:
>> > Registered webapp guice modules
>> > 2016-01-19 20:04:13,960 INFO [AsyncDispatcher event handler]
>> > org.apache.hadoop.mapreduce.v2.app.speculate.DefaultSpeculator:
>> JOB_CREATE
>> > job_1453244277886_0001
>> > 2016-01-19 20:04:13,961 INFO [main]
>> org.apache.hadoop.ipc.CallQueueManager:
>> > Using callQueue class java.util.concurrent.LinkedBlockingQueue
>> > 2016-01-19 20:04:13,961 INFO [Socket Reader #1 for port 45584]
>> > org.apache.hadoop.ipc.Server: Starting Socket Reader #1 for port 45584
>> > 2016-01-19 20:04:13,966 INFO [IPC Server Responder]
>> > org.apache.hadoop.ipc.Server: IPC Server Responder: starting
>> > 2016-01-19 20:04:13,966 INFO [IPC Server listener on 45584]
>> > org.apache.hadoop.ipc.Server: IPC Server listener on 45584: starting
>> > 2016-01-19 20:04:13,987 INFO [main]
>> > org.apache.hadoop.mapreduce.v2.app.rm.RMContainerRequestor:
>> > nodeBlacklistingEnabled:true
>> > 2016-01-19 20:04:13,987 INFO [main]
>> > org.apache.hadoop.mapreduce.v2.app.rm.RMContainerRequestor:
>> > maxTaskFailuresPerNode is 3
>> > 2016-01-19 20:04:13,988 INFO [main]
>> > org.apache.hadoop.mapreduce.v2.app.rm.RMContainerRequestor:
>> > blacklistDisablePercent is 33
>> > 2016-01-19 20:04:14,052 WARN [main]
>> org.apache.hadoop.conf.Configuration:
>> > job.xml:an attempt to override final parameter: dfs.datanode.data.dir;
>> > Ignoring.
>> > 2016-01-19 20:04:14,054 WARN [main]
>> org.apache.hadoop.conf.Configuration:
>> > job.xml:an attempt to override final parameter:
>> > mapreduce.job.end-notification.max.retry.interval;  Ignoring.
>> > 2016-01-19 20:04:14,057 WARN [main]
>> org.apache.hadoop.conf.Configuration:
>> > job.xml:an attempt to override final parameter: dfs.namenode.name.dir;
>> > Ignoring.
>> > 2016-01-19 20:04:14,059 WARN [main]
>> org.apache.hadoop.conf.Configuration:
>> > job.xml:an attempt to override final parameter:
>> > mapreduce.job.end-notification.max.attempts;  Ignoring.
>> > 2016-01-19 20:04:14,062 INFO [main]
>> org.apache.hadoop.yarn.client.RMProxy:
>> > Connecting to ResourceManager at hdnode01/192.168.0.10:8030
>> > 2016-01-19 20:04:14,158 INFO [main]
>> > org.apache.hadoop.mapreduce.v2.app.rm.RMContainerAllocator:
>> > maxContainerCapability: 2000
>> > 2016-01-19 20:04:14,158 INFO [main]
>> > org.apache.hadoop.mapreduce.v2.app.rm.RMContainerAllocator: queue:
>> default
>> > 2016-01-19 20:04:14,162 INFO [main]
>> > org.apache.hadoop.mapreduce.v2.app.launcher.ContainerLauncherImpl: Upper
>> > limit on the thread pool size is 500
>> > 2016-01-19 20:04:14,164 INFO [main]
>> > org.apache.hadoop.yarn.client.api.impl.ContainerManagementProtocolProxy:
>> > yarn.client.max-nodemanagers-proxies : 500
>> > 2016-01-19 20:04:14,172 INFO [AsyncDispatcher event handler]
>> > org.apache.hadoop.mapreduce.v2.app.job.impl.JobImpl:
>> > job_1453244277886_0001Job Transitioned from INITED to SETUP
>> > 2016-01-19 20:04:14,174 INFO [CommitterEvent Processor #0]
>> > org.apache.hadoop.mapreduce.v2.app.commit.CommitterEventHandler:
>> Processing
>> > the event EventType: JOB_SETUP
>> > 2016-01-19 20:04:14,210 INFO [AsyncDispatcher event handler]
>> > org.apache.hadoop.mapreduce.v2.app.job.impl.JobImpl:
>> > job_1453244277886_0001Job Transitioned from SETUP to RUNNING
>> > 2016-01-19 20:04:14,227 INFO [AsyncDispatcher event handler]
>> > org.apache.hadoop.yarn.util.RackResolver: Resolved localhost to
>> > /default-rack
>> > 2016-01-19 20:04:14,230 INFO [AsyncDispatcher event handler]
>> > org.apache.hadoop.mapreduce.v2.app.job.impl.TaskImpl:
>> > task_1453244277886_0001_m_000000 Task Transitioned from NEW to SCHEDULED
>> > 2016-01-19 20:04:14,230 INFO [AsyncDispatcher event handler]
>> > org.apache.hadoop.mapreduce.v2.app.job.impl.TaskImpl:
>> > task_1453244277886_0001_r_000000 Task Transitioned from NEW to SCHEDULED
>> > 2016-01-19 20:04:14,232 INFO [AsyncDispatcher event handler]
>> > org.apache.hadoop.mapreduce.v2.app.job.impl.TaskAttemptImpl:
>> > attempt_1453244277886_0001_m_000000_0 TaskAttempt Transitioned from NEW
>> to
>> > UNASSIGNED
>> > 2016-01-19 20:04:14,232 INFO [AsyncDispatcher event handler]
>> > org.apache.hadoop.mapreduce.v2.app.job.impl.TaskAttemptImpl:
>> > attempt_1453244277886_0001_r_000000_0 TaskAttempt Transitioned from NEW
>> to
>> > UNASSIGNED
>> > 2016-01-19 20:04:14,233 INFO [Thread-50]
>> > org.apache.hadoop.mapreduce.v2.app.rm.RMContainerAllocator:
>> > mapResourceReqt:512
>> > 2016-01-19 20:04:14,245 INFO [Thread-50]
>> > org.apache.hadoop.mapreduce.v2.app.rm.RMContainerAllocator:
>> > reduceResourceReqt:512
>> > 2016-01-19 20:04:14,324 INFO [eventHandlingThread]
>> > org.apache.hadoop.mapreduce.jobhistory.JobHistoryEventHandler: Event
>> Writer
>> > setup for JobId: job_1453244277886_0001, File:
>> >
>> hdfs://hdnode01:54310/tmp/hadoop-yarn/staging/hduser/.staging/job_1453244277886_0001/job_1453244277886_0001_1.jhist
>> > 2016-01-19 20:04:15,162 INFO [RMCommunicator Allocator]
>> > org.apache.hadoop.mapreduce.v2.app.rm.RMContainerAllocator: Before
>> > Scheduling: PendingReds:1 ScheduledMaps:1 ScheduledReds:0 AssignedMaps:0
>> > AssignedReds:0 CompletedMaps:0 CompletedReds:0 ContAlloc:0 ContRel:0
>> > HostLocal:0 RackLocal:0
>> > 2016-01-19 20:04:15,217 INFO [RMCommunicator Allocator]
>> > org.apache.hadoop.mapreduce.v2.app.rm.RMContainerRequestor:
>> getResources()
>> > for application_1453244277886_0001: ask=3 release= 0 newContainers=0
>> > finishedContainers=0 resourcelimit=<memory:1280, vCores:0> knownNMs=1
>> > 2016-01-19 20:04:15,217 INFO [RMCommunicator Allocator]
>> > org.apache.hadoop.mapreduce.v2.app.rm.RMContainerAllocator:
>> Recalculating
>> > schedule, headroom=1280
>> > 2016-01-19 20:04:15,217 INFO [RMCommunicator Allocator]
>> > org.apache.hadoop.mapreduce.v2.app.rm.RMContainerAllocator: Reduce slow
>> > start threshold not met. completedMapsForReduceSlowstart 1
>> > 2016-01-19 20:04:16,240 INFO [RMCommunicator Allocator]
>> > org.apache.hadoop.mapreduce.v2.app.rm.RMContainerAllocator: Got
>> allocated
>> > containers 1
>> > 2016-01-19 20:04:16,241 INFO [RMCommunicator Allocator]
>> > org.apache.hadoop.mapreduce.v2.app.rm.RMContainerAllocator: Assigned
>> > container container_1453244277886_0001_01_000002 to
>> > attempt_1453244277886_0001_m_000000_0
>> > 2016-01-19 20:04:16,243 INFO [RMCommunicator Allocator]
>> > org.apache.hadoop.mapreduce.v2.app.rm.RMContainerAllocator:
>> Recalculating
>> > schedule, headroom=768
>> > 2016-01-19 20:04:16,243 INFO [RMCommunicator Allocator]
>> > org.apache.hadoop.mapreduce.v2.app.rm.RMContainerAllocator: Reduce slow
>> > start threshold not met. completedMapsForReduceSlowstart 1
>> > 2016-01-19 20:04:16,243 INFO [RMCommunicator Allocator]
>> > org.apache.hadoop.mapreduce.v2.app.rm.RMContainerAllocator: After
>> > Scheduling: PendingReds:1 ScheduledMaps:0 ScheduledReds:0 AssignedMaps:1
>> > AssignedReds:0 CompletedMaps:0 CompletedReds:0 ContAlloc:1 ContRel:0
>> > HostLocal:1 RackLocal:0
>> > 2016-01-19 20:04:16,291 INFO [AsyncDispatcher event handler]
>> > org.apache.hadoop.yarn.util.RackResolver: Resolved localhost to
>> > /default-rack
>> > 2016-01-19 20:04:16,316 INFO [AsyncDispatcher event handler]
>> > org.apache.hadoop.mapreduce.v2.app.job.impl.TaskAttemptImpl: The job-jar
>> > file on the remote FS is
>> >
>> hdfs://hdnode01:54310/tmp/hadoop-yarn/staging/hduser/.staging/job_1453244277886_0001/job.jar
>> > 2016-01-19 20:04:16,322 INFO [AsyncDispatcher event handler]
>> > org.apache.hadoop.mapreduce.v2.app.job.impl.TaskAttemptImpl: The
>> job-conf
>> > file on the remote FS is
>> > /tmp/hadoop-yarn/staging/hduser/.staging/job_1453244277886_0001/job.xml
>> > 2016-01-19 20:04:16,325 INFO [AsyncDispatcher event handler]
>> > org.apache.hadoop.mapreduce.v2.app.job.impl.TaskAttemptImpl: Adding #0
>> > tokens and #1 secret keys for NM use for launching container
>> > 2016-01-19 20:04:16,325 INFO [AsyncDispatcher event handler]
>> > org.apache.hadoop.mapreduce.v2.app.job.impl.TaskAttemptImpl: Size of
>> > containertokens_dob is 1
>> > 2016-01-19 20:04:16,325 INFO [AsyncDispatcher event handler]
>> > org.apache.hadoop.mapreduce.v2.app.job.impl.TaskAttemptImpl: Putting
>> shuffle
>> > token in serviceData
>> > 2016-01-19 20:04:16,350 INFO [AsyncDispatcher event handler]
>> > org.apache.hadoop.mapreduce.v2.app.job.impl.TaskAttemptImpl:
>> > attempt_1453244277886_0001_m_000000_0 TaskAttempt Transitioned from
>> > UNASSIGNED to ASSIGNED
>> > 2016-01-19 20:04:16,354 INFO [ContainerLauncher #0]
>> > org.apache.hadoop.mapreduce.v2.app.launcher.ContainerLauncherImpl:
>> > Processing the event EventType: CONTAINER_REMOTE_LAUNCH for container
>> > container_1453244277886_0001_01_000002 taskAttempt
>> > attempt_1453244277886_0001_m_000000_0
>> > 2016-01-19 20:04:16,356 INFO [ContainerLauncher #0]
>> > org.apache.hadoop.mapreduce.v2.app.launcher.ContainerLauncherImpl:
>> Launching
>> > attempt_1453244277886_0001_m_000000_0
>> > 2016-01-19 20:04:16,357 INFO [ContainerLauncher #0]
>> > org.apache.hadoop.yarn.client.api.impl.ContainerManagementProtocolProxy:
>> > Opening proxy : localhost:35711
>> > 2016-01-19 20:04:16,411 INFO [ContainerLauncher #0]
>> > org.apache.hadoop.mapreduce.v2.app.launcher.ContainerLauncherImpl:
>> Shuffle
>> > port returned by ContainerManager for
>> attempt_1453244277886_0001_m_000000_0
>> > : 13562
>> > 2016-01-19 20:04:16,413 INFO [AsyncDispatcher event handler]
>> > org.apache.hadoop.mapreduce.v2.app.job.impl.TaskAttemptImpl:
>> TaskAttempt:
>> > [attempt_1453244277886_0001_m_000000_0] using containerId:
>> > [container_1453244277886_0001_01_000002 on NM: [localhost:35711]
>> > 2016-01-19 20:04:16,418 INFO [AsyncDispatcher event handler]
>> > org.apache.hadoop.mapreduce.v2.app.job.impl.TaskAttemptImpl:
>> > attempt_1453244277886_0001_m_000000_0 TaskAttempt Transitioned from
>> ASSIGNED
>> > to RUNNING
>> > 2016-01-19 20:04:16,419 INFO [AsyncDispatcher event handler]
>> > org.apache.hadoop.mapreduce.v2.app.speculate.DefaultSpeculator:
>> > ATTEMPT_START task_1453244277886_0001_m_000000
>> > 2016-01-19 20:04:16,419 INFO [AsyncDispatcher event handler]
>> > org.apache.hadoop.mapreduce.v2.app.job.impl.TaskImpl:
>> > task_1453244277886_0001_m_000000 Task Transitioned from SCHEDULED to
>> RUNNING
>> > 2016-01-19 20:04:17,251 INFO [RMCommunicator Allocator]
>> > org.apache.hadoop.mapreduce.v2.app.rm.RMContainerRequestor:
>> getResources()
>> > for application_1453244277886_0001: ask=3 release= 0 newContainers=0
>> > finishedContainers=0 resourcelimit=<memory:768, vCores:-1> knownNMs=1
>> > 2016-01-19 20:04:18,269 INFO [RMCommunicator Allocator]
>> > org.apache.hadoop.mapreduce.v2.app.rm.RMContainerAllocator: Received
>> > completed container container_1453244277886_0001_01_000002
>> > 2016-01-19 20:04:18,269 INFO [RMCommunicator Allocator]
>> > org.apache.hadoop.mapreduce.v2.app.rm.RMContainerAllocator:
>> Recalculating
>> > schedule, headroom=768
>> > 2016-01-19 20:04:18,269 INFO [RMCommunicator Allocator]
>> > org.apache.hadoop.mapreduce.v2.app.rm.RMContainerAllocator: Reduce slow
>> > start threshold not met. completedMapsForReduceSlowstart 1
>> > 2016-01-19 20:04:18,270 INFO [RMCommunicator Allocator]
>> > org.apache.hadoop.mapreduce.v2.app.rm.RMContainerAllocator: After
>> > Scheduling: PendingReds:1 ScheduledMaps:0 ScheduledReds:0 AssignedMaps:0
>> > AssignedReds:0 CompletedMaps:0 CompletedReds:0 ContAlloc:1 ContRel:0
>> > HostLocal:1 RackLocal:0
>> > 2016-01-19 20:04:18,280 INFO [AsyncDispatcher event handler]
>> > org.apache.hadoop.mapreduce.v2.app.job.impl.TaskAttemptImpl:
>> > attempt_1453244277886_0001_m_000000_0 TaskAttempt Transitioned from
>> RUNNING
>> > to FAIL_CONTAINER_CLEANUP
>> > 2016-01-19 20:04:18,280 INFO [AsyncDispatcher event handler]
>> > org.apache.hadoop.mapreduce.v2.app.job.impl.TaskAttemptImpl: Diagnostics
>> > report from attempt_1453244277886_0001_m_000000_0: Exception from
>> > container-launch: org.apache.hadoop.util.Shell$ExitCodeException:
>> > org.apache.hadoop.util.Shell$ExitCodeException:
>> >     at org.apache.hadoop.util.Shell.runCommand(Shell.java:505)
>> >     at org.apache.hadoop.util.Shell.run(Shell.java:418)
>> >     at
>> >
>> org.apache.hadoop.util.Shell$ShellCommandExecutor.execute(Shell.java:650)
>> >     at
>> >
>> org.apache.hadoop.yarn.server.nodemanager.DefaultContainerExecutor.launchContainer(DefaultContainerExecutor.java:195)
>> >     at
>> >
>> org.apache.hadoop.yarn.server.nodemanager.containermanager.launcher.ContainerLaunch.call(ContainerLaunch.java:300)
>> >     at
>> >
>> org.apache.hadoop.yarn.server.nodemanager.containermanager.launcher.ContainerLaunch.call(ContainerLaunch.java:81)
>> >     at java.util.concurrent.FutureTask.run(FutureTask.java:262)
>> >     at
>> >
>> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
>> >     at
>> >
>> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
>> >     at java.lang.Thread.run(Thread.java:745)
>> >
>> >
>> > Container exited with a non-zero exit code 1
>> >
>> > 2016-01-19 20:04:18,281 INFO [ContainerLauncher #1]
>> > org.apache.hadoop.mapreduce.v2.app.launcher.ContainerLauncherImpl:
>> > Processing the event EventType: CONTAINER_REMOTE_CLEANUP for container
>> > container_1453244277886_0001_01_000002 taskAttempt
>> > attempt_1453244277886_0001_m_000000_0
>> > 2016-01-19 20:04:18,281 INFO [ContainerLauncher #1]
>> > org.apache.hadoop.mapreduce.v2.app.launcher.ContainerLauncherImpl:
>> KILLING
>> > attempt_1453244277886_0001_m_000000_0
>> > 2016-01-19 20:04:18,299 INFO [AsyncDispatcher event handler]
>> > org.apache.hadoop.mapreduce.v2.app.job.impl.TaskAttemptImpl:
>> > attempt_1453244277886_0001_m_000000_0 TaskAttempt Transitioned from
>> > FAIL_CONTAINER_CLEANUP to FAIL_TASK_CLEANUP
>> > 2016-01-19 20:04:18,300 INFO [CommitterEvent Processor #1]
>> > org.apache.hadoop.mapreduce.v2.app.commit.CommitterEventHandler:
>> Processing
>> > the event EventType: TASK_ABORT
>> > 2016-01-19 20:04:18,313 WARN [CommitterEvent Processor #1]
>> > org.apache.hadoop.mapreduce.lib.output.FileOutputCommitter: Could not
>> delete
>> >
>> hdfs://hdnode01:54310/user/hduser/output/wordcount/_temporary/1/_temporary/attempt_1453244277886_0001_m_000000_0
>> > 2016-01-19 20:04:18,317 INFO [AsyncDispatcher event handler]
>> > org.apache.hadoop.mapreduce.v2.app.job.impl.TaskAttemptImpl:
>> > attempt_1453244277886_0001_m_000000_0 TaskAttempt Transitioned from
>> > FAIL_TASK_CLEANUP to FAILED
>> > 2016-01-19 20:04:18,326 INFO [AsyncDispatcher event handler]
>> > org.apache.hadoop.yarn.util.RackResolver: Resolved localhost to
>> > /default-rack
>> > 2016-01-19 20:04:18,327 INFO [Thread-50]
>> > org.apache.hadoop.mapreduce.v2.app.rm.RMContainerRequestor: 1 failures
>> on
>> > node localhost
>> > 2016-01-19 20:04:18,329 INFO [AsyncDispatcher event handler]
>> > org.apache.hadoop.mapreduce.v2.app.job.impl.TaskAttemptImpl:
>> > attempt_1453244277886_0001_m_000000_1 TaskAttempt Transitioned from NEW
>> to
>> > UNASSIGNED
>> > 2016-01-19 20:04:18,329 INFO [Thread-50]
>> > org.apache.hadoop.mapreduce.v2.app.rm.RMContainerAllocator: Added
>> > attempt_1453244277886_0001_m_000000_1 to list of failed maps
>> > 2016-01-19 20:04:19,270 INFO [RMCommunicator Allocator]
>> > org.apache.hadoop.mapreduce.v2.app.rm.RMContainerAllocator: Before
>> > Scheduling: PendingReds:1 ScheduledMaps:1 ScheduledReds:0 AssignedMaps:0
>> > AssignedReds:0 CompletedMaps:0 CompletedReds:0 ContAlloc:1 ContRel:0
>> > HostLocal:1 RackLocal:0
>> > 2016-01-19 20:04:19,277 INFO [RMCommunicator Allocator]
>> > org.apache.hadoop.mapreduce.v2.app.rm.RMContainerRequestor:
>> getResources()
>> > for application_1453244277886_0001: ask=1 release= 0 newContainers=0
>> > finishedContainers=0 resourcelimit=<memory:768, vCores:-1> knownNMs=1
>> > 2016-01-19 20:04:19,277 INFO [RMCommunicator Allocator]
>> > org.apache.hadoop.mapreduce.v2.app.rm.RMContainerAllocator:
>> Recalculating
>> > schedule, headroom=768
>> > 2016-01-19 20:04:19,278 INFO [RMCommunicator Allocator]
>> > org.apache.hadoop.mapreduce.v2.app.rm.RMContainerAllocator: Reduce slow
>> > start threshold not met. completedMapsForReduceSlowstart 1
>> > 2016-01-19 20:04:20,285 INFO [RMCommunicator Allocator]
>> > org.apache.hadoop.mapreduce.v2.app.rm.RMContainerAllocator: Got
>> allocated
>> > containers 1
>> > 2016-01-19 20:04:20,285 INFO [RMCommunicator Allocator]
>> > org.apache.hadoop.mapreduce.v2.app.rm.RMContainerAllocator: Assigning
>> > container Container: [ContainerId:
>> container_1453244277886_0001_01_000003,
>> > NodeId: localhost:35711, NodeHttpAddress: localhost:8042, Resource:
>> > <memory:512, vCores:1>, Priority: 5, Token: Token { kind:
>> ContainerToken,
>> > service: 127.0.0.1:35711 }, ] to fast fail map
>> > 2016-01-19 20:04:20,286 INFO [RMCommunicator Allocator]
>> > org.apache.hadoop.mapreduce.v2.app.rm.RMContainerAllocator: Assigned
>> from
>> > earlierFailedMaps
>> > 2016-01-19 20:04:20,286 INFO [RMCommunicator Allocator]
>> > org.apache.hadoop.mapreduce.v2.app.rm.RMContainerAllocator: Assigned
>> > container container_1453244277886_0001_01_000003 to
>> > attempt_1453244277886_0001_m_000000_1
>> > 2016-01-19 20:04:20,287 INFO [RMCommunicator Allocator]
>> > org.apache.hadoop.mapreduce.v2.app.rm.RMContainerAllocator:
>> Recalculating
>> > schedule, headroom=768
>> > 2016-01-19 20:04:20,287 INFO [RMCommunicator Allocator]
>> > org.apache.hadoop.mapreduce.v2.app.rm.RMContainerAllocator: Reduce slow
>> > start threshold not met. completedMapsForReduceSlowstart 1
>> > 2016-01-19 20:04:20,287 INFO [RMCommunicator Allocator]
>> > org.apache.hadoop.mapreduce.v2.app.rm.RMContainerAllocator: After
>> > Scheduling: PendingReds:1 ScheduledMaps:0 ScheduledReds:0 AssignedMaps:1
>> > AssignedReds:0 CompletedMaps:0 CompletedReds:0 ContAlloc:2 ContRel:0
>> > HostLocal:1 RackLocal:0
>> > 2016-01-19 20:04:20,287 INFO [AsyncDispatcher event handler]
>> > org.apache.hadoop.yarn.util.RackResolver: Resolved localhost to
>> > /default-rack
>> > 2016-01-19 20:04:20,289 INFO [AsyncDispatcher event handler]
>> > org.apache.hadoop.mapreduce.v2.app.job.impl.TaskAttemptImpl:
>> > attempt_1453244277886_0001_m_000000_1 TaskAttempt Transitioned from
>> > UNASSIGNED to ASSIGNED
>> > 2016-01-19 20:04:20,292 INFO [ContainerLauncher #2]
>> > org.apache.hadoop.mapreduce.v2.app.launcher.ContainerLauncherImpl:
>> > Processing the event EventType: CONTAINER_REMOTE_LAUNCH for container
>> > container_1453244277886_0001_01_000003 taskAttempt
>> > attempt_1453244277886_0001_m_000000_1
>> > 2016-01-19 20:04:20,292 INFO [ContainerLauncher #2]
>> > org.apache.hadoop.mapreduce.v2.app.launcher.ContainerLauncherImpl:
>> Launching
>> > attempt_1453244277886_0001_m_000000_1
>> > 2016-01-19 20:04:20,301 INFO [ContainerLauncher #2]
>> > org.apache.hadoop.mapreduce.v2.app.launcher.ContainerLauncherImpl:
>> Shuffle
>> > port returned by ContainerManager for
>> attempt_1453244277886_0001_m_000000_1
>> > : 13562
>> > 2016-01-19 20:04:20,302 INFO [AsyncDispatcher event handler]
>> > org.apache.hadoop.mapreduce.v2.app.job.impl.TaskAttemptImpl:
>> TaskAttempt:
>> > [attempt_1453244277886_0001_m_000000_1] using containerId:
>> > [container_1453244277886_0001_01_000003 on NM: [localhost:35711]
>> > 2016-01-19 20:04:20,303 INFO [AsyncDispatcher event handler]
>> > org.apache.hadoop.mapreduce.v2.app.job.impl.TaskAttemptImpl:
>> > attempt_1453244277886_0001_m_000000_1 TaskAttempt Transitioned from
>> ASSIGNED
>> > to RUNNING
>> > 2016-01-19 20:04:20,304 INFO [AsyncDispatcher event handler]
>> > org.apache.hadoop.mapreduce.v2.app.speculate.DefaultSpeculator:
>> > ATTEMPT_START task_1453244277886_0001_m_000000
>> > 2016-01-19 20:04:21,295 INFO [RMCommunicator Allocator]
>> > org.apache.hadoop.mapreduce.v2.app.rm.RMContainerRequestor:
>> getResources()
>> > for application_1453244277886_0001: ask=1 release= 0 newContainers=0
>> > finishedContainers=1 resourcelimit=<memory:768, vCores:-1> knownNMs=1
>> > 2016-01-19 20:04:21,296 INFO [RMCommunicator Allocator]
>> > org.apache.hadoop.mapreduce.v2.app.rm.RMContainerAllocator: Received
>> > completed container container_1453244277886_0001_01_000003
>> > 2016-01-19 20:04:21,296 INFO [RMCommunicator Allocator]
>> > org.apache.hadoop.mapreduce.v2.app.rm.RMContainerAllocator:
>> Recalculating
>> > schedule, headroom=768
>> > 2016-01-19 20:04:21,296 INFO [RMCommunicator Allocator]
>> > org.apache.hadoop.mapreduce.v2.app.rm.RMContainerAllocator: Reduce slow
>> > start threshold not met. completedMapsForReduceSlowstart 1
>> > 2016-01-19 20:04:21,297 INFO [RMCommunicator Allocator]
>> > org.apache.hadoop.mapreduce.v2.app.rm.RMContainerAllocator: After
>> > Scheduling: PendingReds:1 ScheduledMaps:0 ScheduledReds:0 AssignedMaps:0
>> > AssignedReds:0 CompletedMaps:0 CompletedReds:0 ContAlloc:2 ContRel:0
>> > HostLocal:1 RackLocal:0
>> > 2016-01-19 20:04:21,297 INFO [AsyncDispatcher event handler]
>> > org.apache.hadoop.mapreduce.v2.app.job.impl.TaskAttemptImpl:
>> > attempt_1453244277886_0001_m_000000_1 TaskAttempt Transitioned from
>> RUNNING
>> > to FAIL_CONTAINER_CLEANUP
>> > 2016-01-19 20:04:21,298 INFO [AsyncDispatcher event handler]
>> > org.apache.hadoop.mapreduce.v2.app.job.impl.TaskAttemptImpl: Diagnostics
>> > report from attempt_1453244277886_0001_m_000000_1: Exception from
>> > container-launch: org.apache.hadoop.util.Shell$ExitCodeException:
>> > org.apache.hadoop.util.Shell$ExitCodeException:
>> >     at org.apache.hadoop.util.Shell.runCommand(Shell.java:505)
>> >     at org.apache.hadoop.util.Shell.run(Shell.java:418)
>> >     at
>> >
>> org.apache.hadoop.util.Shell$ShellCommandExecutor.execute(Shell.java:650)
>> >     at
>> >
>> org.apache.hadoop.yarn.server.nodemanager.DefaultContainerExecutor.launchContainer(DefaultContainerExecutor.java:195)
>> >     at
>> >
>> org.apache.hadoop.yarn.server.nodemanager.containermanager.launcher.ContainerLaunch.call(ContainerLaunch.java:300)
>> >     at
>> >
>> org.apache.hadoop.yarn.server.nodemanager.containermanager.launcher.ContainerLaunch.call(ContainerLaunch.java:81)
>> >     at java.util.concurrent.FutureTask.run(FutureTask.java:262)
>> >     at
>> >
>> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
>> >     at
>> >
>> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
>> >     at java.lang.Thread.run(Thread.java:745)
>> >
>> >
>> > Container exited with a non-zero exit code 1
>> >
>> > 2016-01-19 20:04:21,300 INFO [ContainerLauncher #3]
>> > org.apache.hadoop.mapreduce.v2.app.launcher.ContainerLauncherImpl:
>> > Processing the event EventType: CONTAINER_REMOTE_CLEANUP for container
>> > container_1453244277886_0001_01_000003 taskAttempt
>> > attempt_1453244277886_0001_m_000000_1
>> > 2016-01-19 20:04:21,301 INFO [ContainerLauncher #3]
>> > org.apache.hadoop.mapreduce.v2.app.launcher.ContainerLauncherImpl:
>> KILLING
>> > attempt_1453244277886_0001_m_000000_1
>> > 2016-01-19 20:04:21,307 INFO [AsyncDispatcher event handler]
>> > org.apache.hadoop.mapreduce.v2.app.job.impl.TaskAttemptImpl:
>> > attempt_1453244277886_0001_m_000000_1 TaskAttempt Transitioned from
>> > FAIL_CONTAINER_CLEANUP to FAIL_TASK_CLEANUP
>> > 2016-01-19 20:04:21,308 INFO [CommitterEvent Processor #2]
>> > org.apache.hadoop.mapreduce.v2.app.commit.CommitterEventHandler:
>> Processing
>> > the event EventType: TASK_ABORT
>> > 2016-01-19 20:04:21,312 WARN [CommitterEvent Processor #2]
>> > org.apache.hadoop.mapreduce.lib.output.FileOutputCommitter: Could not
>> delete
>> >
>> hdfs://hdnode01:54310/user/hduser/output/wordcount/_temporary/1/_temporary/attempt_1453244277886_0001_m_000000_1
>> > 2016-01-19 20:04:21,312 INFO [AsyncDispatcher event handler]
>> > org.apache.hadoop.mapreduce.v2.app.job.impl.TaskAttemptImpl:
>> > attempt_1453244277886_0001_m_000000_1 TaskAttempt Transitioned from
>> > FAIL_TASK_CLEANUP to FAILED
>> > 2016-01-19 20:04:21,313 INFO [AsyncDispatcher event handler]
>> > org.apache.hadoop.yarn.util.RackResolver: Resolved localhost to
>> > /default-rack
>> > 2016-01-19 20:04:21,313 INFO [Thread-50]
>> > org.apache.hadoop.mapreduce.v2.app.rm.RMContainerRequestor: 2 failures
>> on
>> > node localhost
>> > 2016-01-19 20:04:21,313 INFO [AsyncDispatcher event handler]
>> > org.apache.hadoop.mapreduce.v2.app.job.impl.TaskAttemptImpl:
>> > attempt_1453244277886_0001_m_000000_2 TaskAttempt Transitioned from NEW
>> to
>> > UNASSIGNED
>> > 2016-01-19 20:04:21,314 INFO [Thread-50]
>> > org.apache.hadoop.mapreduce.v2.app.rm.RMContainerAllocator: Added
>> > attempt_1453244277886_0001_m_000000_2 to list of failed maps
>> > 2016-01-19 20:04:22,297 INFO [RMCommunicator Allocator]
>> > org.apache.hadoop.mapreduce.v2.app.rm.RMContainerAllocator: Before
>> > Scheduling: PendingReds:1 ScheduledMaps:1 ScheduledReds:0 AssignedMaps:0
>> > AssignedReds:0 CompletedMaps:0 CompletedReds:0 ContAlloc:2 ContRel:0
>> > HostLocal:1 RackLocal:0
>> > 2016-01-19 20:04:22,304 INFO [RMCommunicator Allocator]
>> > org.apache.hadoop.mapreduce.v2.app.rm.RMContainerRequestor:
>> getResources()
>> > for application_1453244277886_0001: ask=1 release= 0 newContainers=0
>> > finishedContainers=0 resourcelimit=<memory:768, vCores:-1> knownNMs=1
>> > 2016-01-19 20:04:22,305 INFO [RMCommunicator Allocator]
>> > org.apache.hadoop.mapreduce.v2.app.rm.RMContainerAllocator:
>> Recalculating
>> > schedule, headroom=768
>> > 2016-01-19 20:04:22,305 INFO [RMCommunicator Allocator]
>> > org.apache.hadoop.mapreduce.v2.app.rm.RMContainerAllocator: Reduce slow
>> > start threshold not met. completedMapsForReduceSlowstart 1
>> > 2016-01-19 20:04:23,316 INFO [RMCommunicator Allocator]
>> > org.apache.hadoop.mapreduce.v2.app.rm.RMContainerAllocator: Got
>> allocated
>> > containers 1
>> > 2016-01-19 20:04:23,316 INFO [RMCommunicator Allocator]
>> > org.apache.hadoop.mapreduce.v2.app.rm.RMContainerAllocator: Assigning
>> > container Container: [ContainerId:
>> container_1453244277886_0001_01_000004,
>> > NodeId: localhost:35711, NodeHttpAddress: localhost:8042, Resource:
>> > <memory:512, vCores:1>, Priority: 5, Token: Token { kind:
>> ContainerToken,
>> > service: 127.0.0.1:35711 }, ] to fast fail map
>> > 2016-01-19 20:04:23,317 INFO [RMCommunicator Allocator]
>> > org.apache.hadoop.mapreduce.v2.app.rm.RMContainerAllocator: Assigned
>> from
>> > earlierFailedMaps
>> > 2016-01-19 20:04:23,317 INFO [RMCommunicator Allocator]
>> > org.apache.hadoop.mapreduce.v2.app.rm.RMContainerAllocator: Assigned
>> > container container_1453244277886_0001_01_000004 to
>> > attempt_1453244277886_0001_m_000000_2
>> > 2016-01-19 20:04:23,318 INFO [RMCommunicator Allocator]
>> > org.apache.hadoop.mapreduce.v2.app.rm.RMContainerAllocator:
>> Recalculating
>> > schedule, headroom=768
>> > 2016-01-19 20:04:23,318 INFO [RMCommunicator Allocator]
>> > org.apache.hadoop.mapreduce.v2.app.rm.RMContainerAllocator: Reduce slow
>> > start threshold not met. completedMapsForReduceSlowstart 1
>> > 2016-01-19 20:04:23,318 INFO [RMCommunicator Allocator]
>> > org.apache.hadoop.mapreduce.v2.app.rm.RMContainerAllocator: After
>> > Scheduling: PendingReds:1 ScheduledMaps:0 ScheduledReds:0 AssignedMaps:1
>> > AssignedReds:0 CompletedMaps:0 CompletedReds:0 ContAlloc:3 ContRel:0
>> > HostLocal:1 RackLocal:0
>> > 2016-01-19 20:04:23,318 INFO [AsyncDispatcher event handler]
>> > org.apache.hadoop.yarn.util.RackResolver: Resolved localhost to
>> > /default-rack
>> > 2016-01-19 20:04:23,320 INFO [AsyncDispatcher event handler]
>> > org.apache.hadoop.mapreduce.v2.app.job.impl.TaskAttemptImpl:
>> > attempt_1453244277886_0001_m_000000_2 TaskAttempt Transitioned from
>> > UNASSIGNED to ASSIGNED
>> > 2016-01-19 20:04:23,323 INFO [ContainerLauncher #4]
>> > org.apache.hadoop.mapreduce.v2.app.launcher.ContainerLauncherImpl:
>> > Processing the event EventType: CONTAINER_REMOTE_LAUNCH for container
>> > container_1453244277886_0001_01_000004 taskAttempt
>> > attempt_1453244277886_0001_m_000000_2
>> > 2016-01-19 20:04:23,323 INFO [ContainerLauncher #4]
>> > org.apache.hadoop.mapreduce.v2.app.launcher.ContainerLauncherImpl:
>> Launching
>> > attempt_1453244277886_0001_m_000000_2
>> > 2016-01-19 20:04:23,334 INFO [ContainerLauncher #4]
>> > org.apache.hadoop.mapreduce.v2.app.launcher.ContainerLauncherImpl:
>> Shuffle
>> > port returned by ContainerManager for
>> attempt_1453244277886_0001_m_000000_2
>> > : 13562
>> > 2016-01-19 20:04:23,335 INFO [AsyncDispatcher event handler]
>> > org.apache.hadoop.mapreduce.v2.app.job.impl.TaskAttemptImpl:
>> TaskAttempt:
>> > [attempt_1453244277886_0001_m_000000_2] using containerId:
>> > [container_1453244277886_0001_01_000004 on NM: [localhost:35711]
>> > 2016-01-19 20:04:23,335 INFO [AsyncDispatcher event handler]
>> > org.apache.hadoop.mapreduce.v2.app.job.impl.TaskAttemptImpl:
>> > attempt_1453244277886_0001_m_000000_2 TaskAttempt Transitioned from
>> ASSIGNED
>> > to RUNNING
>> > 2016-01-19 20:04:23,335 INFO [AsyncDispatcher event handler]
>> > org.apache.hadoop.mapreduce.v2.app.speculate.DefaultSpeculator:
>> > ATTEMPT_START task_1453244277886_0001_m_000000
>> > 2016-01-19 20:04:24,326 INFO [RMCommunicator Allocator]
>> > org.apache.hadoop.mapreduce.v2.app.rm.RMContainerRequestor:
>> getResources()
>> > for application_1453244277886_0001: ask=1 release= 0 newContainers=0
>> > finishedContainers=1 resourcelimit=<memory:768, vCores:-1> knownNMs=1
>> > 2016-01-19 20:04:24,327 INFO [RMCommunicator Allocator]
>> > org.apache.hadoop.mapreduce.v2.app.rm.RMContainerAllocator: Received
>> > completed container container_1453244277886_0001_01_000004
>> > 2016-01-19 20:04:24,327 INFO [RMCommunicator Allocator]
>> > org.apache.hadoop.mapreduce.v2.app.rm.RMContainerAllocator:
>> Recalculating
>> > schedule, headroom=768
>> > 2016-01-19 20:04:24,327 INFO [RMCommunicator Allocator]
>> > org.apache.hadoop.mapreduce.v2.app.rm.RMContainerAllocator: Reduce slow
>> > start threshold not met. completedMapsForReduceSlowstart 1
>> > 2016-01-19 20:04:24,328 INFO [RMCommunicator Allocator]
>> > org.apache.hadoop.mapreduce.v2.app.rm.RMContainerAllocator: After
>> > Scheduling: PendingReds:1 ScheduledMaps:0 ScheduledReds:0 AssignedMaps:0
>> > AssignedReds:0 CompletedMaps:0 CompletedReds:0 ContAlloc:3 ContRel:0
>> > HostLocal:1 RackLocal:0
>> > 2016-01-19 20:04:24,328 INFO [AsyncDispatcher event handler]
>> > org.apache.hadoop.mapreduce.v2.app.job.impl.TaskAttemptImpl:
>> > attempt_1453244277886_0001_m_000000_2 TaskAttempt Transitioned from
>> RUNNING
>> > to FAIL_CONTAINER_CLEANUP
>> > 2016-01-19 20:04:24,328 INFO [AsyncDispatcher event handler]
>> > org.apache.hadoop.mapreduce.v2.app.job.impl.TaskAttemptImpl: Diagnostics
>> > report from attempt_1453244277886_0001_m_000000_2: Exception from
>> > container-launch: org.apache.hadoop.util.Shell$ExitCodeException:
>> > org.apache.hadoop.util.Shell$ExitCodeException:
>> >     at org.apache.hadoop.util.Shell.runCommand(Shell.java:505)
>> >     at org.apache.hadoop.util.Shell.run(Shell.java:418)
>> >     at
>> >
>> org.apache.hadoop.util.Shell$ShellCommandExecutor.execute(Shell.java:650)
>> >     at
>> >
>> org.apache.hadoop.yarn.server.nodemanager.DefaultContainerExecutor.launchContainer(DefaultContainerExecutor.java:195)
>> >     at
>> >
>> org.apache.hadoop.yarn.server.nodemanager.containermanager.launcher.ContainerLaunch.call(ContainerLaunch.java:300)
>> >     at
>> >
>> org.apache.hadoop.yarn.server.nodemanager.containermanager.launcher.ContainerLaunch.call(ContainerLaunch.java:81)
>> >     at java.util.concurrent.FutureTask.run(FutureTask.java:262)
>> >     at
>> >
>> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
>> >     at
>> >
>> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
>> >     at java.lang.Thread.run(Thread.java:745)
>> >
>> >
>> > Container exited with a non-zero exit code 1
>> >
>> > 2016-01-19 20:04:24,331 INFO [ContainerLauncher #5]
>> > org.apache.hadoop.mapreduce.v2.app.launcher.ContainerLauncherImpl:
>> > Processing the event EventType: CONTAINER_REMOTE_CLEANUP for container
>> > container_1453244277886_0001_01_000004 taskAttempt
>> > attempt_1453244277886_0001_m_000000_2
>> > 2016-01-19 20:04:24,331 INFO [ContainerLauncher #5]
>> > org.apache.hadoop.mapreduce.v2.app.launcher.ContainerLauncherImpl:
>> KILLING
>> > attempt_1453244277886_0001_m_000000_2
>> > 2016-01-19 20:04:24,335 INFO [AsyncDispatcher event handler]
>> > org.apache.hadoop.mapreduce.v2.app.job.impl.TaskAttemptImpl:
>> > attempt_1453244277886_0001_m_000000_2 TaskAttempt Transitioned from
>> > FAIL_CONTAINER_CLEANUP to FAIL_TASK_CLEANUP
>> > 2016-01-19 20:04:24,336 INFO [CommitterEvent Processor #3]
>> > org.apache.hadoop.mapreduce.v2.app.commit.CommitterEventHandler:
>> Processing
>> > the event EventType: TASK_ABORT
>> > 2016-01-19 20:04:24,340 WARN [CommitterEvent Processor #3]
>> > org.apache.hadoop.mapreduce.lib.output.FileOutputCommitter: Could not
>> delete
>> >
>> hdfs://hdnode01:54310/user/hduser/output/wordcount/_temporary/1/_temporary/attempt_1453244277886_0001_m_000000_2
>> > 2016-01-19 20:04:24,341 INFO [AsyncDispatcher event handler]
>> > org.apache.hadoop.mapreduce.v2.app.job.impl.TaskAttemptImpl:
>> > attempt_1453244277886_0001_m_000000_2 TaskAttempt Transitioned from
>> > FAIL_TASK_CLEANUP to FAILED
>> > 2016-01-19 20:04:24,341 INFO [AsyncDispatcher event handler]
>> > org.apache.hadoop.yarn.util.RackResolver: Resolved localhost to
>> > /default-rack
>> > 2016-01-19 20:04:24,342 INFO [Thread-50]
>> > org.apache.hadoop.mapreduce.v2.app.rm.RMContainerRequestor: 3 failures
>> on
>> > node localhost
>> > 2016-01-19 20:04:24,342 INFO [Thread-50]
>> > org.apache.hadoop.mapreduce.v2.app.rm.RMContainerRequestor: Blacklisted
>> host
>> > localhost
>> > 2016-01-19 20:04:24,342 INFO [AsyncDispatcher event handler]
>> > org.apache.hadoop.mapreduce.v2.app.job.impl.TaskAttemptImpl:
>> > attempt_1453244277886_0001_m_000000_3 TaskAttempt Transitioned from NEW
>> to
>> > UNASSIGNED
>> > 2016-01-19 20:04:24,343 INFO [Thread-50]
>> > org.apache.hadoop.mapreduce.v2.app.rm.RMContainerAllocator: Added
>> > attempt_1453244277886_0001_m_000000_3 to list of failed maps
>> > 2016-01-19 20:04:25,328 INFO [RMCommunicator Allocator]
>> > org.apache.hadoop.mapreduce.v2.app.rm.RMContainerAllocator: Before
>> > Scheduling: PendingReds:1 ScheduledMaps:1 ScheduledReds:0 AssignedMaps:0
>> > AssignedReds:0 CompletedMaps:0 CompletedReds:0 ContAlloc:3 ContRel:0
>> > HostLocal:1 RackLocal:0
>> > 2016-01-19 20:04:25,336 INFO [RMCommunicator Allocator]
>> > org.apache.hadoop.mapreduce.v2.app.rm.RMContainerRequestor:
>> getResources()
>> > for application_1453244277886_0001: ask=1 release= 0 newContainers=0
>> > finishedContainers=0 resourcelimit=<memory:768, vCores:-1> knownNMs=1
>> > 2016-01-19 20:04:25,336 INFO [RMCommunicator Allocator]
>> > org.apache.hadoop.mapreduce.v2.app.rm.RMContainerRequestor: Update the
>> > blacklist for application_1453244277886_0001: blacklistAdditions=1
>> > blacklistRemovals=0
>> > 2016-01-19 20:04:25,337 INFO [RMCommunicator Allocator]
>> > org.apache.hadoop.mapreduce.v2.app.rm.RMContainerRequestor: Ignore
>> > blacklisting set to true. Known: 1, Blacklisted: 1, 100%
>> > 2016-01-19 20:04:25,337 INFO [RMCommunicator Allocator]
>> > org.apache.hadoop.mapreduce.v2.app.rm.RMContainerAllocator:
>> Recalculating
>> > schedule, headroom=768
>> > 2016-01-19 20:04:25,337 INFO [RMCommunicator Allocator]
>> > org.apache.hadoop.mapreduce.v2.app.rm.RMContainerAllocator: Reduce slow
>> > start threshold not met. completedMapsForReduceSlowstart 1
>> > 2016-01-19 20:04:26,342 INFO [RMCommunicator Allocator]
>> > org.apache.hadoop.mapreduce.v2.app.rm.RMContainerRequestor: Update the
>> > blacklist for application_1453244277886_0001: blacklistAdditions=0
>> > blacklistRemovals=1
>> > 2016-01-19 20:04:26,342 INFO [RMCommunicator Allocator]
>> > org.apache.hadoop.mapreduce.v2.app.rm.RMContainerAllocator:
>> Recalculating
>> > schedule, headroom=768
>> > 2016-01-19 20:04:26,342 INFO [RMCommunicator Allocator]
>> > org.apache.hadoop.mapreduce.v2.app.rm.RMContainerAllocator: Reduce slow
>> > start threshold not met. completedMapsForReduceSlowstart 1
>> > 2016-01-19 20:04:27,351 INFO [RMCommunicator Allocator]
>> > org.apache.hadoop.mapreduce.v2.app.rm.RMContainerAllocator: Got
>> allocated
>> > containers 1
>> > 2016-01-19 20:04:27,352 INFO [RMCommunicator Allocator]
>> > org.apache.hadoop.mapreduce.v2.app.rm.RMContainerAllocator: Assigning
>> > container Container: [ContainerId:
>> container_1453244277886_0001_01_000005,
>> > NodeId: localhost:35711, NodeHttpAddress: localhost:8042, Resource:
>> > <memory:512, vCores:1>, Priority: 5, Token: Token { kind:
>> ContainerToken,
>> > service: 127.0.0.1:35711 }, ] to fast fail map
>> > 2016-01-19 20:04:27,352 INFO [RMCommunicator Allocator]
>> > org.apache.hadoop.mapreduce.v2.app.rm.RMContainerAllocator: Assigned
>> from
>> > earlierFailedMaps
>> > 2016-01-19 20:04:27,352 INFO [RMCommunicator Allocator]
>> > org.apache.hadoop.mapreduce.v2.app.rm.RMContainerAllocator: Assigned
>> > container container_1453244277886_0001_01_000005 to
>> > attempt_1453244277886_0001_m_000000_3
>> > 2016-01-19 20:04:27,353 INFO [RMCommunicator Allocator]
>> > org.apache.hadoop.mapreduce.v2.app.rm.RMContainerAllocator:
>> Recalculating
>> > schedule, headroom=768
>> > 2016-01-19 20:04:27,353 INFO [RMCommunicator Allocator]
>> > org.apache.hadoop.mapreduce.v2.app.rm.RMContainerAllocator: Reduce slow
>> > start threshold not met. completedMapsForReduceSlowstart 1
>> > 2016-01-19 20:04:27,353 INFO [AsyncDispatcher event handler]
>> > org.apache.hadoop.yarn.util.RackResolver: Resolved localhost to
>> > /default-rack
>> > 2016-01-19 20:04:27,353 INFO [RMCommunicator Allocator]
>> > org.apache.hadoop.mapreduce.v2.app.rm.RMContainerAllocator: After
>> > Scheduling: PendingReds:1 ScheduledMaps:0 ScheduledReds:0 AssignedMaps:1
>> > AssignedReds:0 CompletedMaps:0 CompletedReds:0 ContAlloc:4 ContRel:0
>> > HostLocal:1 RackLocal:0
>> > 2016-01-19 20:04:27,354 INFO [AsyncDispatcher event handler]
>> > org.apache.hadoop.mapreduce.v2.app.job.impl.TaskAttemptImpl:
>> > attempt_1453244277886_0001_m_000000_3 TaskAttempt Transitioned from
>> > UNASSIGNED to ASSIGNED
>> > 2016-01-19 20:04:27,355 INFO [ContainerLauncher #6]
>> > org.apache.hadoop.mapreduce.v2.app.launcher.ContainerLauncherImpl:
>> > Processing the event EventType: CONTAINER_REMOTE_LAUNCH for container
>> > container_1453244277886_0001_01_000005 taskAttempt
>> > attempt_1453244277886_0001_m_000000_3
>> > 2016-01-19 20:04:27,355 INFO [ContainerLauncher #6]
>> > org.apache.hadoop.mapreduce.v2.app.launcher.ContainerLauncherImpl:
>> Launching
>> > attempt_1453244277886_0001_m_000000_3
>> > 2016-01-19 20:04:27,365 INFO [ContainerLauncher #6]
>> > org.apache.hadoop.mapreduce.v2.app.launcher.ContainerLauncherImpl:
>> Shuffle
>> > port returned by ContainerManager for
>> attempt_1453244277886_0001_m_000000_3
>> > : 13562
>> > 2016-01-19 20:04:27,365 INFO [AsyncDispatcher event handler]
>> > org.apache.hadoop.mapreduce.v2.app.job.impl.TaskAttemptImpl:
>> TaskAttempt:
>> > [attempt_1453244277886_0001_m_000000_3] using containerId:
>> > [container_1453244277886_0001_01_000005 on NM: [localhost:35711]
>> > 2016-01-19 20:04:27,366 INFO [AsyncDispatcher event handler]
>> > org.apache.hadoop.mapreduce.v2.app.job.impl.TaskAttemptImpl:
>> > attempt_1453244277886_0001_m_000000_3 TaskAttempt Transitioned from
>> ASSIGNED
>> > to RUNNING
>> > 2016-01-19 20:04:27,366 INFO [AsyncDispatcher event handler]
>> > org.apache.hadoop.mapreduce.v2.app.speculate.DefaultSpeculator:
>> > ATTEMPT_START task_1453244277886_0001_m_000000
>> > 2016-01-19 20:04:28,361 INFO [RMCommunicator Allocator]
>> > org.apache.hadoop.mapreduce.v2.app.rm.RMContainerRequestor:
>> getResources()
>> > for application_1453244277886_0001: ask=1 release= 0 newContainers=0
>> > finishedContainers=1 resourcelimit=<memory:768, vCores:-1> knownNMs=1
>> > 2016-01-19 20:04:28,361 INFO [RMCommunicator Allocator]
>> > org.apache.hadoop.mapreduce.v2.app.rm.RMContainerAllocator: Received
>> > completed container container_1453244277886_0001_01_000005
>> > 2016-01-19 20:04:28,361 INFO [RMCommunicator Allocator]
>> > org.apache.hadoop.mapreduce.v2.app.rm.RMContainerAllocator:
>> Recalculating
>> > schedule, headroom=768
>> > 2016-01-19 20:04:28,361 INFO [RMCommunicator Allocator]
>> > org.apache.hadoop.mapreduce.v2.app.rm.RMContainerAllocator: Reduce slow
>> > start threshold not met. completedMapsForReduceSlowstart 1
>> > 2016-01-19 20:04:28,362 INFO [RMCommunicator Allocator]
>> > org.apache.hadoop.mapreduce.v2.app.rm.RMContainerAllocator: After
>> > Scheduling: PendingReds:1 ScheduledMaps:0 ScheduledReds:0 AssignedMaps:0
>> > AssignedReds:0 CompletedMaps:0 CompletedReds:0 ContAlloc:4 ContRel:0
>> > HostLocal:1 RackLocal:0
>> > 2016-01-19 20:04:28,362 INFO [AsyncDispatcher event handler]
>> > org.apache.hadoop.mapreduce.v2.app.job.impl.TaskAttemptImpl:
>> > attempt_1453244277886_0001_m_000000_3 TaskAttempt Transitioned from
>> RUNNING
>> > to FAIL_CONTAINER_CLEANUP
>> > 2016-01-19 20:04:28,362 INFO [AsyncDispatcher event handler]
>> > org.apache.hadoop.mapreduce.v2.app.job.impl.TaskAttemptImpl: Diagnostics
>> > report from attempt_1453244277886_0001_m_000000_3: Exception from
>> > container-launch: org.apache.hadoop.util.Shell$ExitCodeException:
>> > org.apache.hadoop.util.Shell$ExitCodeException:
>> >     at org.apache.hadoop.util.Shell.runCommand(Shell.java:505)
>> >     at org.apache.hadoop.util.Shell.run(Shell.java:418)
>> >     at
>> >
>> org.apache.hadoop.util.Shell$ShellCommandExecutor.execute(Shell.java:650)
>> >     at
>> >
>> org.apache.hadoop.yarn.server.nodemanager.DefaultContainerExecutor.launchContainer(DefaultContainerExecutor.java:195)
>> >     at
>> >
>> org.apache.hadoop.yarn.server.nodemanager.containermanager.launcher.ContainerLaunch.call(ContainerLaunch.java:300)
>> >     at
>> >
>> org.apache.hadoop.yarn.server.nodemanager.containermanager.launcher.ContainerLaunch.call(ContainerLaunch.java:81)
>> >     at java.util.concurrent.FutureTask.run(FutureTask.java:262)
>> >     at
>> >
>> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
>> >     at
>> >
>> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
>> >     at java.lang.Thread.run(Thread.java:745)
>> >
>> >
>> > Container exited with a non-zero exit code 1
>> >
>> > 2016-01-19 20:04:28,364 INFO [ContainerLauncher #7]
>> > org.apache.hadoop.mapreduce.v2.app.launcher.ContainerLauncherImpl:
>> > Processing the event EventType: CONTAINER_REMOTE_CLEANUP for container
>> > container_1453244277886_0001_01_000005 taskAttempt
>> > attempt_1453244277886_0001_m_000000_3
>> > 2016-01-19 20:04:28,365 INFO [ContainerLauncher #7]
>> > org.apache.hadoop.mapreduce.v2.app.launcher.ContainerLauncherImpl:
>> KILLING
>> > attempt_1453244277886_0001_m_000000_3
>> > 2016-01-19 20:04:28,373 INFO [AsyncDispatcher event handler]
>> > org.apache.hadoop.mapreduce.v2.app.job.impl.TaskAttemptImpl:
>> > attempt_1453244277886_0001_m_000000_3 TaskAttempt Transitioned from
>> > FAIL_CONTAINER_CLEANUP to FAIL_TASK_CLEANUP
>> > 2016-01-19 20:04:28,374 INFO [CommitterEvent Processor #4]
>> > org.apache.hadoop.mapreduce.v2.app.commit.CommitterEventHandler:
>> Processing
>> > the event EventType: TASK_ABORT
>> > 2016-01-19 20:04:28,377 WARN [CommitterEvent Processor #4]
>> > org.apache.hadoop.mapreduce.lib.output.FileOutputCommitter: Could not
>> delete
>> >
>> hdfs://hdnode01:54310/user/hduser/output/wordcount/_temporary/1/_temporary/attempt_1453244277886_0001_m_000000_3
>> > 2016-01-19 20:04:28,378 INFO [AsyncDispatcher event handler]
>> > org.apache.hadoop.mapreduce.v2.app.job.impl.TaskAttemptImpl:
>> > attempt_1453244277886_0001_m_000000_3 TaskAttempt Transitioned from
>> > FAIL_TASK_CLEANUP to FAILED
>> > 2016-01-19 20:04:28,380 INFO [AsyncDispatcher event handler]
>> > org.apache.hadoop.mapreduce.v2.app.job.impl.TaskImpl:
>> > task_1453244277886_0001_m_000000 Task Transitioned from RUNNING to
>> FAILED
>> > 2016-01-19 20:04:28,381 INFO [AsyncDispatcher event handler]
>> > org.apache.hadoop.mapreduce.v2.app.job.impl.JobImpl: Num completed
>> Tasks: 1
>> > 2016-01-19 20:04:28,381 INFO [AsyncDispatcher event handler]
>> > org.apache.hadoop.mapreduce.v2.app.job.impl.JobImpl: Job failed as tasks
>> > failed. failedMaps:1 failedReduces:0
>> > 2016-01-19 20:04:28,382 INFO [AsyncDispatcher event handler]
>> > org.apache.hadoop.mapreduce.v2.app.job.impl.JobImpl:
>> > job_1453244277886_0001Job Transitioned from RUNNING to FAIL_WAIT
>> > 2016-01-19 20:04:28,383 INFO [AsyncDispatcher event handler]
>> > org.apache.hadoop.mapreduce.v2.app.job.impl.TaskImpl:
>> > task_1453244277886_0001_r_000000 Task Transitioned from SCHEDULED to
>> > KILL_WAIT
>> > 2016-01-19 20:04:28,383 INFO [AsyncDispatcher event handler]
>> > org.apache.hadoop.mapreduce.v2.app.job.impl.TaskAttemptImpl:
>> > attempt_1453244277886_0001_r_000000_0 TaskAttempt Transitioned from
>> > UNASSIGNED to KILLED
>> > 2016-01-19 20:04:28,383 INFO [Thread-50]
>> > org.apache.hadoop.mapreduce.v2.app.rm.RMContainerAllocator: Processing
>> the
>> > event EventType: CONTAINER_DEALLOCATE
>> > 2016-01-19 20:04:28,383 ERROR [Thread-50]
>> > org.apache.hadoop.mapreduce.v2.app.rm.RMContainerAllocator: Could not
>> > deallocate container for task attemptId
>> > attempt_1453244277886_0001_r_000000_0
>> > 2016-01-19 20:04:28,383 INFO [AsyncDispatcher event handler]
>> > org.apache.hadoop.mapreduce.v2.app.job.impl.TaskImpl:
>> > task_1453244277886_0001_r_000000 Task Transitioned from KILL_WAIT to
>> KILLED
>> > 2016-01-19 20:04:28,384 INFO [AsyncDispatcher event handler]
>> > org.apache.hadoop.mapreduce.v2.app.job.impl.JobImpl:
>> > job_1453244277886_0001Job Transitioned from FAIL_WAIT to FAIL_ABORT
>> > 2016-01-19 20:04:28,390 INFO [CommitterEvent Processor #0]
>> > org.apache.hadoop.mapreduce.v2.app.commit.CommitterEventHandler:
>> Processing
>> > the event EventType: JOB_ABORT
>> > 2016-01-19 20:04:28,432 INFO [AsyncDispatcher event handler]
>> > org.apache.hadoop.mapreduce.v2.app.job.impl.JobImpl:
>> > job_1453244277886_0001Job Transitioned from FAIL_ABORT to FAILED
>> > 2016-01-19 20:04:28,433 INFO [Thread-61]
>> > org.apache.hadoop.mapreduce.v2.app.MRAppMaster: We are finishing
>> cleanly so
>> > this is the last retry
>> > 2016-01-19 20:04:28,433 INFO [Thread-61]
>> > org.apache.hadoop.mapreduce.v2.app.MRAppMaster: Notify RMCommunicator
>> > isAMLastRetry: true
>> > 2016-01-19 20:04:28,433 INFO [Thread-61]
>> > org.apache.hadoop.mapreduce.v2.app.rm.RMContainerAllocator:
>> RMCommunicator
>> > notified that shouldUnregistered is: true
>> > 2016-01-19 20:04:28,433 INFO [Thread-61]
>> > org.apache.hadoop.mapreduce.v2.app.MRAppMaster: Notify JHEH
>> isAMLastRetry:
>> > true
>> > 2016-01-19 20:04:28,433 INFO [Thread-61]
>> > org.apache.hadoop.mapreduce.jobhistory.JobHistoryEventHandler:
>> > JobHistoryEventHandler notified that forceJobCompletion is true
>> > 2016-01-19 20:04:28,434 INFO [Thread-61]
>> > org.apache.hadoop.mapreduce.v2.app.MRAppMaster: Calling stop for all the
>> > services
>> > 2016-01-19 20:04:28,435 INFO [Thread-61]
>> > org.apache.hadoop.mapreduce.jobhistory.JobHistoryEventHandler: Stopping
>> > JobHistoryEventHandler. Size of the outstanding queue size is 0
>> > 2016-01-19 20:04:29,362 INFO [RMCommunicator Allocator]
>> > org.apache.hadoop.mapreduce.v2.app.rm.RMContainerAllocator: Before
>> > Scheduling: PendingReds:1 ScheduledMaps:0 ScheduledReds:0 AssignedMaps:0
>> > AssignedReds:0 CompletedMaps:1 CompletedReds:0 ContAlloc:4 ContRel:0
>> > HostLocal:1 RackLocal:0
>> > 2016-01-19 20:04:29,362 INFO [eventHandlingThread]
>> > org.apache.hadoop.mapreduce.jobhistory.JobHistoryEventHandler: Copying
>> >
>> hdfs://hdnode01:54310/tmp/hadoop-yarn/staging/hduser/.staging/job_1453244277886_0001/job_1453244277886_0001_1.jhist
>> > to
>> >
>> hdfs://hdnode01:54310/tmp/hadoop-yarn/staging/history/done_intermediate/hduser/job_1453244277886_0001-1453244648033-hduser-word+count-1453244668381-0-0-FAILED-default-1453244654166.jhist_tmp
>> > 2016-01-19 20:04:29,366 INFO [RMCommunicator Allocator]
>> > org.apache.hadoop.mapreduce.v2.app.rm.RMContainerAllocator:
>> Recalculating
>> > schedule, headroom=768
>> > 2016-01-19 20:04:29,366 INFO [RMCommunicator Allocator]
>> > org.apache.hadoop.mapreduce.v2.app.rm.RMContainerAllocator: Reduce slow
>> > start threshold reached. Scheduling reduces.
>> > 2016-01-19 20:04:29,366 INFO [RMCommunicator Allocator]
>> > org.apache.hadoop.mapreduce.v2.app.rm.RMContainerAllocator: All maps
>> > assigned. Ramping up all remaining reduces:1
>> > 2016-01-19 20:04:29,366 INFO [RMCommunicator Allocator]
>> > org.apache.hadoop.mapreduce.v2.app.rm.RMContainerAllocator: After
>> > Scheduling: PendingReds:0 ScheduledMaps:0 ScheduledReds:1 AssignedMaps:0
>> > AssignedReds:0 CompletedMaps:1 CompletedReds:0 ContAlloc:4 ContRel:0
>> > HostLocal:1 RackLocal:0
>> > 2016-01-19 20:04:29,544 INFO [eventHandlingThread]
>> > org.apache.hadoop.mapreduce.jobhistory.JobHistoryEventHandler: Copied to
>> > done location:
>> >
>> hdfs://hdnode01:54310/tmp/hadoop-yarn/staging/history/done_intermediate/hduser/job_1453244277886_0001-1453244648033-hduser-word+count-1453244668381-0-0-FAILED-default-1453244654166.jhist_tmp
>> > 2016-01-19 20:04:29,598 INFO [eventHandlingThread]
>> > org.apache.hadoop.mapreduce.jobhistory.JobHistoryEventHandler: Copying
>> >
>> hdfs://hdnode01:54310/tmp/hadoop-yarn/staging/hduser/.staging/job_1453244277886_0001/job_1453244277886_0001_1_conf.xml
>> > to
>> >
>> hdfs://hdnode01:54310/tmp/hadoop-yarn/staging/history/done_intermediate/hduser/job_1453244277886_0001_conf.xml_tmp
>> > 2016-01-19 20:04:29,801 INFO [eventHandlingThread]
>> > org.apache.hadoop.mapreduce.jobhistory.JobHistoryEventHandler: Copied to
>> > done location:
>> >
>> hdfs://hdnode01:54310/tmp/hadoop-yarn/staging/history/done_intermediate/hduser/job_1453244277886_0001_conf.xml_tmp
>> > 2016-01-19 20:04:29,907 INFO [eventHandlingThread]
>> > org.apache.hadoop.mapreduce.jobhistory.JobHistoryEventHandler: Moved
>> tmp to
>> > done:
>> >
>> hdfs://hdnode01:54310/tmp/hadoop-yarn/staging/history/done_intermediate/hduser/job_1453244277886_0001.summary_tmp
>> > to
>> >
>> hdfs://hdnode01:54310/tmp/hadoop-yarn/staging/history/done_intermediate/hduser/job_1453244277886_0001.summary
>> > 2016-01-19 20:04:30,008 INFO [eventHandlingThread]
>> > org.apache.hadoop.mapreduce.jobhistory.JobHistoryEventHandler: Moved
>> tmp to
>> > done:
>> >
>> hdfs://hdnode01:54310/tmp/hadoop-yarn/staging/history/done_intermediate/hduser/job_1453244277886_0001_conf.xml_tmp
>> > to
>> >
>> hdfs://hdnode01:54310/tmp/hadoop-yarn/staging/history/done_intermediate/hduser/job_1453244277886_0001_conf.xml
>> > 2016-01-19 20:04:30,070 INFO [eventHandlingThread]
>> > org.apache.hadoop.mapreduce.jobhistory.JobHistoryEventHandler: Moved
>> tmp to
>> > done:
>> >
>> hdfs://hdnode01:54310/tmp/hadoop-yarn/staging/history/done_intermediate/hduser/job_1453244277886_0001-1453244648033-hduser-word+count-1453244668381-0-0-FAILED-default-1453244654166.jhist_tmp
>> > to
>> >
>> hdfs://hdnode01:54310/tmp/hadoop-yarn/staging/history/done_intermediate/hduser/job_1453244277886_0001-1453244648033-hduser-word+count-1453244668381-0-0-FAILED-default-1453244654166.jhist
>> > 2016-01-19 20:04:30,071 INFO [Thread-61]
>> > org.apache.hadoop.mapreduce.jobhistory.JobHistoryEventHandler: Stopped
>> > JobHistoryEventHandler. super.stop()
>> > 2016-01-19 20:04:30,078 INFO [Thread-61]
>> > org.apache.hadoop.mapreduce.v2.app.rm.RMContainerAllocator: Setting job
>> > diagnostics to Task failed task_1453244277886_0001_m_000000
>> > Job failed as tasks failed. failedMaps:1 failedReduces:0
>> >
>> > 2016-01-19 20:04:30,080 INFO [Thread-61]
>> > org.apache.hadoop.mapreduce.v2.app.rm.RMContainerAllocator: History url
>> is
>> > http://localhost:19888/jobhistory/job/job_1453244277886_0001
>> > 2016-01-19 20:04:30,094 INFO [Thread-61]
>> > org.apache.hadoop.mapreduce.v2.app.rm.RMContainerAllocator: Waiting for
>> > application to be successfully unregistered.
>> > 2016-01-19 20:04:31,099 INFO [Thread-61]
>> > org.apache.hadoop.mapreduce.v2.app.rm.RMContainerAllocator: Final Stats:
>> > PendingReds:0 ScheduledMaps:0 ScheduledReds:1 AssignedMaps:0
>> AssignedReds:0
>> > CompletedMaps:1 CompletedReds:0 ContAlloc:4 ContRel:0 HostLocal:1
>> > RackLocal:0
>> > 2016-01-19 20:04:31,104 INFO [Thread-61]
>> > org.apache.hadoop.mapreduce.v2.app.MRAppMaster: Deleting staging
>> directory
>> > hdfs://hdnode01:54310
>> > /tmp/hadoop-yarn/staging/hduser/.staging/job_1453244277886_0001
>> > 2016-01-19 20:04:31,133 INFO [Thread-61] org.apache.hadoop.ipc.Server:
>> > Stopping server on 45584
>> > 2016-01-19 20:04:31,135 INFO [IPC Server listener on 45584]
>> > org.apache.hadoop.ipc.Server: Stopping IPC Server listener on 45584
>> > 2016-01-19 20:04:31,135 INFO [IPC Server Responder]
>> > org.apache.hadoop.ipc.Server: Stopping IPC Server Responder
>> > 2016-01-19 20:04:31,136 INFO [TaskHeartbeatHandler PingChecker]
>> > org.apache.hadoop.mapreduce.v2.app.TaskHeartbeatHandler:
>> > TaskHeartbeatHandler thread interrupted
>> >
>> >
>> > Jps results; I believe everything is OK, right?:
>> > 21267 DataNode
>> > 21609 ResourceManager
>> > 21974 JobHistoryServer
>> > 21735 NodeManager
>> > 24546 Jps
>> > 16532 org.eclipse.equinox.launcher_1.3.0.v20140415-2008.jar
>> > 21121 NameNode
>> > 22098 QuorumPeerMain
>> > 21456 SecondaryNameNode
>> >
>> >
>>
>
>

Re: Can't run hadoop examples with YARN Single node cluster

Posted by José Luis Larroque <la...@gmail.com>.
Hi again guys, I finally found what the issue was!!!

This is my mapred-site.xml; the problem is in here:
<?xml version="1.0"?>
<?xml-stylesheet type="text/xsl" href="configuration.xsl"?>

<configuration>
<property>
<name>mapred.job.tracker</name>
<!--
<value>local</value> for debugging
<value>hdnode01:54311</value> for real runs
-->
<value>hdnode01:54311</value>
</property>

<property>
<name>mapred.tasktracker.map.tasks.maximum</name>
<value>4</value>
</property>

<property>
<name>mapreduce.job.maps</name>
<value>4</value>
</property>

<property>
<name>mapreduce.framework.name</name>
<value>yarn</value>
</property>

<property>
<name>mapreduce.map.memory.mb</name>
<value>512</value>
</property>

<property>
<name>mapreduce.reduce.memory.mb</name>
<value>512</value>
</property>

<property>
<name>mapreduce.map.java.opts</name>
<value>256</value>
</property>

<property>
<name>mapreduce.reduce.java.opts</name>
<value>256</value>
</property>
</configuration>

If I remove the last two properties (mapreduce.map.java.opts,
mapreduce.reduce.java.opts), wordcount works!

I remember adding those last two properties because of a memory issue of some
kind, but maybe for some reason they clash with the other two
(mapreduce.map.memory.mb, mapreduce.reduce.memory.mb)?
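
My guess is that those two properties expect JVM flags rather than a bare
number, so the container gets launched with something like "java ... 256" and
256 is treated as the main class name, which would explain the "Could not
find or load main class 256" error. This is roughly what I think I should
have written instead (just a sketch, I haven't re-tested it yet):

<!-- sketch: heap flags instead of bare numbers, so the 256 MB heap
     fits inside the 512 MB containers set by mapreduce.*.memory.mb -->
<property>
<name>mapreduce.map.java.opts</name>
<value>-Xmx256m</value>
</property>

<property>
<name>mapreduce.reduce.java.opts</name>
<value>-Xmx256m</value>
</property>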

It would be great if someone could give me a short explanation so I can
better understand memory management in a YARN cluster.


PS: Thanks again to Namikaze and Gaurav for their help!!

Bye!
Jose

2016-01-25 21:19 GMT-03:00 José Luis Larroque <la...@gmail.com>:

> Thanks Namikaze for keeping at it, don't give up!! :D
>
> - I have these lines in *$HOME/.bashrc*
>
>
> export HADOOP_PREFIX=/usr/local/hadoop
>
> # Others variables
>
> export HADOOP_COMMON_HOME=${HADOOP_PREFIX}
>
> export HADOOP_MAPRED_HOME=${HADOOP_PREFIX}
>
> export HADOOP_HDFS_HOME=${HADOOP_PREFIX}
>
> export HADOOP_YARN_HOME=${HADOOP_PREFIX}
>
>
>   - In *hadoop-env.sh* I have:
>
> export HADOOP_CONF_DIR=${HADOOP_CONF_DIR:-"/usr/local/hadoop/etc/hadoop"}
>
>
>   - I read that SO question and all the answers to it. The only useful answer,
> in my opinion, was checking the yarn classpath. The following line appears
> three times in it:
>
> /usr/local/hadoop/etc/hadoop:
>
>
> I put yarn.application.classpath in yarn-site.xml because I don't know any
> other way to fix it, using the default value recommended in this page
> <https://hadoop.apache.org/docs/r2.4.1/hadoop-yarn/hadoop-yarn-common/yarn-default.xml>
> (see yarn.application.classpath):
>
>
> $HADOOP_CONF_DIR, $HADOOP_COMMON_HOME/share/hadoop/common/*,
> $HADOOP_COMMON_HOME/share/hadoop/common/lib/*, $HADOOP_HDFS_HOME/share/hadoop/hdfs/*,
> $HADOOP_HDFS_HOME/share/hadoop/hdfs/lib/*, $HADOOP_YARN_HOME/share/hadoop/yarn/*,
> $HADOOP_YARN_HOME/share/hadoop/yarn/lib/*
>
>
> But the classpath remains the same, and I can't find any other way to fix
> it. Maybe this is the problem?
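>
> For completeness, this is roughly how I declared it in yarn-site.xml, just
> wrapping the default value from that page (a sketch of what I added, nothing
> more):
>
> <property>
> <name>yarn.application.classpath</name>
> <!-- sketch: default classpath from yarn-default.xml as one comma-separated list -->
> <value>
> $HADOOP_CONF_DIR,
> $HADOOP_COMMON_HOME/share/hadoop/common/*,
> $HADOOP_COMMON_HOME/share/hadoop/common/lib/*,
> $HADOOP_HDFS_HOME/share/hadoop/hdfs/*,
> $HADOOP_HDFS_HOME/share/hadoop/hdfs/lib/*,
> $HADOOP_YARN_HOME/share/hadoop/yarn/*,
> $HADOOP_YARN_HOME/share/hadoop/yarn/lib/*
> </value>
> </property>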
>
>
>  - yarn.log-aggregation-enable was always set to true. I couldn't find
> anything in the *datanode logs*; here they are:
>
> 2016-01-25 21:13:07,006 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: STARTUP_MSG:
> /************************************************************
> STARTUP_MSG: Starting DataNode
> STARTUP_MSG:   host = jose-ubuntu/127.0.0.1
> STARTUP_MSG:   args = []
> STARTUP_MSG:   version = 2.4.0
> STARTUP_MSG:   classpath = /usr/local/hadoop/etc/hadoop:/usr/local/hadoop/share/hadoop/common/lib/jackson-jaxrs-1.8.8.jar:/usr/local/hadoop/share/hadoop/common/lib/jersey-json-1.9.jar:/usr/local/hadoop/share/hadoop/common/lib/commons-compress-1.4.1.jar:/usr/local/hadoop/share/hadoop/common/lib/httpcore-4.2.5.jar:/usr/local/hadoop/share/hadoop/common/lib/httpclient-4.2.5.jar:/usr/local/hadoop/share/hadoop/common/lib/commons-beanutils-1.7.0.jar:/usr/local/hadoop/share/hadoop/common/lib/jettison-1.1.jar:/usr/local/hadoop/share/hadoop/common/lib/jaxb-impl-2.2.3-1.jar:/usr/local/hadoop/share/hadoop/common/lib/activation-1.1.jar:/usr/local/hadoop/share/hadoop/common/lib/jackson-core-asl-1.8.8.jar:/usr/local/hadoop/share/hadoop/common/lib/commons-net-3.1.jar:/usr/local/hadoop/share/hadoop/common/lib/commons-cli-1.2.jar:/usr/local/hadoop/share/hadoop/common/lib/commons-codec-1.4.jar:/usr/local/hadoop/share/hadoop/common/lib/junit-4.8.2.jar:/usr/local/hadoop/share/hadoop/common/lib/paranamer-2.3.jar:/usr/local/hadoop/share/hadoop/common/lib/jasper-compiler-5.5.23.jar:/usr/local/hadoop/share/hadoop/common/lib/protobuf-java-2.5.0.jar:/usr/local/hadoop/share/hadoop/common/lib/commons-digester-1.8.jar:/usr/local/hadoop/share/hadoop/common/lib/jets3t-0.9.0.jar:/usr/local/hadoop/share/hadoop/common/lib/jackson-xc-1.8.8.jar:/usr/local/hadoop/share/hadoop/common/lib/jersey-server-1.9.jar:/usr/local/hadoop/share/hadoop/common/lib/jetty-util-6.1.26.jar:/usr/local/hadoop/share/hadoop/common/lib/mockito-all-1.8.5.jar:/usr/local/hadoop/share/hadoop/common/lib/xmlenc-0.52.jar:/usr/local/hadoop/share/hadoop/common/lib/slf4j-api-1.7.5.jar:/usr/local/hadoop/share/hadoop/common/lib/stax-api-1.0-2.jar:/usr/local/hadoop/share/hadoop/common/lib/avro-1.7.4.jar:/usr/local/hadoop/share/hadoop/common/lib/hadoop-annotations-2.4.0.jar:/usr/local/hadoop/share/hadoop/common/lib/hadoop-auth-2.4.0.jar:/usr/local/hadoop/share/hadoop/common/lib/netty-3.6.2.Final.jar:/usr/local/hadoop/share/hadoop/common/lib/jasper-runtime-5.5.23.jar:/usr/local/hadoop/share/hadoop/common/lib/slf4j-log4j12-1.7.5.jar:/usr/local/hadoop/share/hadoop/common/lib/commons-logging-1.1.3.jar:/usr/local/hadoop/share/hadoop/common/lib/guava-11.0.2.jar:/usr/local/hadoop/share/hadoop/common/lib/jackson-mapper-asl-1.8.8.jar:/usr/local/hadoop/share/hadoop/common/lib/jsp-api-2.1.jar:/usr/local/hadoop/share/hadoop/common/lib/commons-configuration-1.6.jar:/usr/local/hadoop/share/hadoop/common/lib/snappy-java-1.0.4.1.jar:/usr/local/hadoop/share/hadoop/common/lib/log4j-1.2.17.jar:/usr/local/hadoop/share/hadoop/common/lib/jsch-0.1.42.jar:/usr/local/hadoop/share/hadoop/common/lib/commons-io-2.4.jar:/usr/local/hadoop/share/hadoop/common/lib/commons-beanutils-core-1.8.0.jar:/usr/local/hadoop/share/hadoop/common/lib/jsr305-1.3.9.jar:/usr/local/hadoop/share/hadoop/common/lib/jersey-core-1.9.jar:/usr/local/hadoop/share/hadoop/common/lib/asm-3.2.jar:/usr/local/hadoop/share/hadoop/common/lib/servlet-api-2.5.jar:/usr/local/hadoop/share/hadoop/common/lib/jaxb-api-2.2.2.jar:/usr/local/hadoop/share/hadoop/common/lib/jetty-6.1.26.jar:/usr/local/hadoop/share/hadoop/common/lib/commons-lang-2.6.jar:/usr/local/hadoop/share/hadoop/common/lib/java-xmlbuilder-0.4.jar:/usr/local/hadoop/share/hadoop/common/lib/commons-collections-3.2.1.jar:/usr/local/hadoop/share/hadoop/common/lib/commons-el-1.0.jar:/usr/local/hadoop/share/hadoop/common/lib/zookeeper-3.4.5.jar:/usr/local/hadoop/share/hadoop/common/lib/commons-httpclient-3.1.jar:/usr/local/hadoop/share/hadoop/common/lib/xz-1.0.jar:/usr/local
/hadoop/share/hadoop/common/lib/commons-math3-3.1.1.jar:/usr/local/hadoop/share/hadoop/common/hadoop-common-2.4.0-tests.jar:/usr/local/hadoop/share/hadoop/common/hadoop-nfs-2.4.0.jar:/usr/local/hadoop/share/hadoop/common/hadoop-common-2.4.0.jar:/usr/local/hadoop/share/hadoop/hdfs:/usr/local/hadoop/share/hadoop/hdfs/lib/jackson-core-asl-1.8.8.jar:/usr/local/hadoop/share/hadoop/hdfs/lib/commons-cli-1.2.jar:/usr/local/hadoop/share/hadoop/hdfs/lib/commons-codec-1.4.jar:/usr/local/hadoop/share/hadoop/hdfs/lib/protobuf-java-2.5.0.jar:/usr/local/hadoop/share/hadoop/hdfs/lib/jersey-server-1.9.jar:/usr/local/hadoop/share/hadoop/hdfs/lib/jetty-util-6.1.26.jar:/usr/local/hadoop/share/hadoop/hdfs/lib/xmlenc-0.52.jar:/usr/local/hadoop/share/hadoop/hdfs/lib/commons-daemon-1.0.13.jar:/usr/local/hadoop/share/hadoop/hdfs/lib/netty-3.6.2.Final.jar:/usr/local/hadoop/share/hadoop/hdfs/lib/jasper-runtime-5.5.23.jar:/usr/local/hadoop/share/hadoop/hdfs/lib/commons-logging-1.1.3.jar:/usr/local/hadoop/share/hadoop/hdfs/lib/guava-11.0.2.jar:/usr/local/hadoop/share/hadoop/hdfs/lib/jackson-mapper-asl-1.8.8.jar:/usr/local/hadoop/share/hadoop/hdfs/lib/jsp-api-2.1.jar:/usr/local/hadoop/share/hadoop/hdfs/lib/log4j-1.2.17.jar:/usr/local/hadoop/share/hadoop/hdfs/lib/commons-io-2.4.jar:/usr/local/hadoop/share/hadoop/hdfs/lib/jsr305-1.3.9.jar:/usr/local/hadoop/share/hadoop/hdfs/lib/jersey-core-1.9.jar:/usr/local/hadoop/share/hadoop/hdfs/lib/asm-3.2.jar:/usr/local/hadoop/share/hadoop/hdfs/lib/servlet-api-2.5.jar:/usr/local/hadoop/share/hadoop/hdfs/lib/jetty-6.1.26.jar:/usr/local/hadoop/share/hadoop/hdfs/lib/commons-lang-2.6.jar:/usr/local/hadoop/share/hadoop/hdfs/lib/commons-el-1.0.jar:/usr/local/hadoop/share/hadoop/hdfs/hadoop-hdfs-2.4.0-tests.jar:/usr/local/hadoop/share/hadoop/hdfs/hadoop-hdfs-2.4.0.jar:/usr/local/hadoop/share/hadoop/hdfs/hadoop-hdfs-nfs-2.4.0.jar:/usr/local/hadoop/share/hadoop/yarn/lib/jackson-jaxrs-1.8.8.jar:/usr/local/hadoop/share/hadoop/yarn/lib/jersey-json-1.9.jar:/usr/local/hadoop/share/hadoop/yarn/lib/commons-compress-1.4.1.jar:/usr/local/hadoop/share/hadoop/yarn/lib/jersey-client-1.9.jar:/usr/local/hadoop/share/hadoop/yarn/lib/guice-servlet-3.0.jar:/usr/local/hadoop/share/hadoop/yarn/lib/giraph-examples-1.1.0-for-hadoop-2.4.0-jar-with-dependencies.jar:/usr/local/hadoop/share/hadoop/yarn/lib/hadoop-mapreduce-examples-2.4.0.jar:/usr/local/hadoop/share/hadoop/yarn/lib/jettison-1.1.jar:/usr/local/hadoop/share/hadoop/yarn/lib/jaxb-impl-2.2.3-1.jar:/usr/local/hadoop/share/hadoop/yarn/lib/activation-1.1.jar:/usr/local/hadoop/share/hadoop/yarn/lib/jackson-core-asl-1.8.8.jar:/usr/local/hadoop/share/hadoop/yarn/lib/commons-cli-1.2.jar:/usr/local/hadoop/share/hadoop/yarn/lib/commons-codec-1.4.jar:/usr/local/hadoop/share/hadoop/yarn/lib/aopalliance-1.0.jar:/usr/local/hadoop/share/hadoop/yarn/lib/leveldbjni-all-1.8.jar:/usr/local/hadoop/share/hadoop/yarn/lib/protobuf-java-2.5.0.jar:/usr/local/hadoop/share/hadoop/yarn/lib/jackson-xc-1.8.8.jar:/usr/local/hadoop/share/hadoop/yarn/lib/jersey-server-1.9.jar:/usr/local/hadoop/share/hadoop/yarn/lib/jetty-util-6.1.26.jar:/usr/local/hadoop/share/hadoop/yarn/lib/jline-0.9.94.jar:/usr/local/hadoop/share/hadoop/yarn/lib/stax-api-1.0-2.jar:/usr/local/hadoop/share/hadoop/yarn/lib/guice-3.0.jar:/usr/local/hadoop/share/hadoop/yarn/lib/commons-logging-1.1.3.jar:/usr/local/hadoop/share/hadoop/yarn/lib/guava-11.0.2.jar:/usr/local/hadoop/share/hadoop/yarn/lib/jackson-mapper-asl-1.8.8.jar:/usr/local/hadoop/share/hadoop/yarn/lib/log4j-1.2.17.jar:/usr/local/hadoop/share/hadoop/yarn/lib
/commons-io-2.4.jar:/usr/local/hadoop/share/hadoop/yarn/lib/jsr305-1.3.9.jar:/usr/local/hadoop/share/hadoop/yarn/lib/jersey-core-1.9.jar:/usr/local/hadoop/share/hadoop/yarn/lib/asm-3.2.jar:/usr/local/hadoop/share/hadoop/yarn/lib/servlet-api-2.5.jar:/usr/local/hadoop/share/hadoop/yarn/lib/jaxb-api-2.2.2.jar:/usr/local/hadoop/share/hadoop/yarn/lib/jetty-6.1.26.jar:/usr/local/hadoop/share/hadoop/yarn/lib/commons-lang-2.6.jar:/usr/local/hadoop/share/hadoop/yarn/lib/commons-collections-3.2.1.jar:/usr/local/hadoop/share/hadoop/yarn/lib/jersey-guice-1.9.jar:/usr/local/hadoop/share/hadoop/yarn/lib/zookeeper-3.4.5.jar:/usr/local/hadoop/share/hadoop/yarn/lib/commons-httpclient-3.1.jar:/usr/local/hadoop/share/hadoop/yarn/lib/javax.inject-1.jar:/usr/local/hadoop/share/hadoop/yarn/lib/xz-1.0.jar:/usr/local/hadoop/share/hadoop/yarn/hadoop-yarn-server-common-2.4.0.jar:/usr/local/hadoop/share/hadoop/yarn/hadoop-yarn-client-2.4.0.jar:/usr/local/hadoop/share/hadoop/yarn/hadoop-yarn-api-2.4.0.jar:/usr/local/hadoop/share/hadoop/yarn/hadoop-yarn-applications-unmanaged-am-launcher-2.4.0.jar:/usr/local/hadoop/share/hadoop/yarn/hadoop-yarn-server-nodemanager-2.4.0.jar:/usr/local/hadoop/share/hadoop/yarn/hadoop-yarn-server-tests-2.4.0.jar:/usr/local/hadoop/share/hadoop/yarn/hadoop-yarn-server-resourcemanager-2.4.0.jar:/usr/local/hadoop/share/hadoop/yarn/hadoop-yarn-server-web-proxy-2.4.0.jar:/usr/local/hadoop/share/hadoop/yarn/hadoop-yarn-applications-distributedshell-2.4.0.jar:/usr/local/hadoop/share/hadoop/yarn/hadoop-yarn-server-applicationhistoryservice-2.4.0.jar:/usr/local/hadoop/share/hadoop/yarn/hadoop-yarn-common-2.4.0.jar:/usr/local/hadoop/share/hadoop/mapreduce/lib/commons-compress-1.4.1.jar:/usr/local/hadoop/share/hadoop/mapreduce/lib/guice-servlet-3.0.jar:/usr/local/hadoop/share/hadoop/mapreduce/lib/jackson-core-asl-1.8.8.jar:/usr/local/hadoop/share/hadoop/mapreduce/lib/hamcrest-core-1.1.jar:/usr/local/hadoop/share/hadoop/mapreduce/lib/paranamer-2.3.jar:/usr/local/hadoop/share/hadoop/mapreduce/lib/aopalliance-1.0.jar:/usr/local/hadoop/share/hadoop/mapreduce/lib/protobuf-java-2.5.0.jar:/usr/local/hadoop/share/hadoop/mapreduce/lib/jersey-server-1.9.jar:/usr/local/hadoop/share/hadoop/mapreduce/lib/avro-1.7.4.jar:/usr/local/hadoop/share/hadoop/mapreduce/lib/hadoop-annotations-2.4.0.jar:/usr/local/hadoop/share/hadoop/mapreduce/lib/guice-3.0.jar:/usr/local/hadoop/share/hadoop/mapreduce/lib/junit-4.10.jar:/usr/local/hadoop/share/hadoop/mapreduce/lib/netty-3.6.2.Final.jar:/usr/local/hadoop/share/hadoop/mapreduce/lib/jackson-mapper-asl-1.8.8.jar:/usr/local/hadoop/share/hadoop/mapreduce/lib/snappy-java-1.0.4.1.jar:/usr/local/hadoop/share/hadoop/mapreduce/lib/log4j-1.2.17.jar:/usr/local/hadoop/share/hadoop/mapreduce/lib/commons-io-2.4.jar:/usr/local/hadoop/share/hadoop/mapreduce/lib/jersey-core-1.9.jar:/usr/local/hadoop/share/hadoop/mapreduce/lib/asm-3.2.jar:/usr/local/hadoop/share/hadoop/mapreduce/lib/jersey-guice-1.9.jar:/usr/local/hadoop/share/hadoop/mapreduce/lib/javax.inject-1.jar:/usr/local/hadoop/share/hadoop/mapreduce/lib/xz-1.0.jar:/usr/local/hadoop/share/hadoop/mapreduce/giraph-examples-1.1.0-for-hadoop-2.4.0-jar-with-dependencies.jar:/usr/local/hadoop/share/hadoop/mapreduce/hadoop-mapreduce-examples-2.4.0.jar:/usr/local/hadoop/share/hadoop/mapreduce/procesamiento_salida_grafo_caminos_navegacionales-0.0.1.jar:/usr/local/hadoop/share/hadoop/mapreduce/zookeeper-3.4.6.jar:/usr/local/hadoop/share/hadoop/mapreduce/hadoop-mapreduce-client-shuffle-2.4.0.jar:/usr/local/hadoop/share/hadoop/mapreduce/procesamiento
_grafo_wikiquote-0.0.1.jar:/usr/local/hadoop/share/hadoop/mapreduce/hadoop-mapreduce-client-app-2.4.0.jar:/usr/local/hadoop/share/hadoop/mapreduce/hadoop-mapreduce-client-common-2.4.0.jar:/usr/local/hadoop/share/hadoop/mapreduce/hadoop-mapreduce-client-core-2.4.0.jar:/usr/local/hadoop/share/hadoop/mapreduce/generacion_grafo_wikiquote-0.0.1-SNAPSHOT.jar:/usr/local/hadoop/share/hadoop/mapreduce/hadoop-mapreduce-client-hs-plugins-2.4.0.jar:/usr/local/hadoop/share/hadoop/mapreduce/hadoop-mapreduce-client-jobclient-2.4.0.jar:/usr/local/hadoop/share/hadoop/mapreduce/hadoop-mapreduce-client-hs-2.4.0.jar:/usr/local/hadoop/share/hadoop/mapreduce/hadoop-mapreduce-client-jobclient-2.4.0-tests.jar:/usr/local/hadoop/contrib/capacity-scheduler/*.jar:/usr/local/hadoop/contrib/capacity-scheduler/*.jar:/usr/local/hadoop/contrib/capacity-scheduler/*.jar
> STARTUP_MSG:   build = http://svn.apache.org/repos/asf/hadoop/common -r 1583262; compiled by 'jenkins' on 2014-03-31T08:29Z
> STARTUP_MSG:   java = 1.7.0_79
> ************************************************************/
> 2016-01-25 21:13:07,015 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: registered UNIX signal handlers for [TERM, HUP, INT]
> 2016-01-25 21:13:07,188 WARN org.apache.hadoop.hdfs.server.common.Util: Path /usr/local/hadoop/dfs/name/data should be specified as a URI in configuration files. Please update hdfs configuration.
> 2016-01-25 21:13:07,356 WARN org.apache.hadoop.util.NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
> 2016-01-25 21:13:07,648 INFO org.apache.hadoop.metrics2.impl.MetricsConfig: loaded properties from hadoop-metrics2.properties
> 2016-01-25 21:13:07,723 INFO org.apache.hadoop.metrics2.impl.MetricsSystemImpl: Scheduled snapshot period at 10 second(s).
> 2016-01-25 21:13:07,723 INFO org.apache.hadoop.metrics2.impl.MetricsSystemImpl: DataNode metrics system started
> 2016-01-25 21:13:07,727 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: Configured hostname is localhost
> 2016-01-25 21:13:07,728 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: Starting DataNode with maxLockedMemory = 0
> 2016-01-25 21:13:07,757 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: Opened streaming server at /0.0.0.0:50010
> 2016-01-25 21:13:07,760 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: Balancing bandwith is 1048576 bytes/s
> 2016-01-25 21:13:07,839 INFO org.mortbay.log: Logging to org.slf4j.impl.Log4jLoggerAdapter(org.mortbay.log) via org.mortbay.log.Slf4jLog
> 2016-01-25 21:13:07,843 INFO org.apache.hadoop.http.HttpRequestLog: Http request log for http.requests.datanode is not defined
> 2016-01-25 21:13:07,853 INFO org.apache.hadoop.http.HttpServer2: Added global filter 'safety' (class=org.apache.hadoop.http.HttpServer2$QuotingInputFilter)
> 2016-01-25 21:13:07,856 INFO org.apache.hadoop.http.HttpServer2: Added filter static_user_filter (class=org.apache.hadoop.http.lib.StaticUserWebFilter$StaticUserFilter) to context datanode
> 2016-01-25 21:13:07,856 INFO org.apache.hadoop.http.HttpServer2: Added filter static_user_filter (class=org.apache.hadoop.http.lib.StaticUserWebFilter$StaticUserFilter) to context logs
> 2016-01-25 21:13:07,856 INFO org.apache.hadoop.http.HttpServer2: Added filter static_user_filter (class=org.apache.hadoop.http.lib.StaticUserWebFilter$StaticUserFilter) to context static
> 2016-01-25 21:13:07,872 INFO org.apache.hadoop.http.HttpServer2: addJerseyResourcePackage: packageName=org.apache.hadoop.hdfs.server.datanode.web.resources;org.apache.hadoop.hdfs.web.resources, pathSpec=/webhdfs/v1/*
> 2016-01-25 21:13:07,875 INFO org.apache.hadoop.http.HttpServer2: Jetty bound to port 50075
> 2016-01-25 21:13:07,875 INFO org.mortbay.log: jetty-6.1.26
> 2016-01-25 21:13:08,137 INFO org.mortbay.log: Started SelectChannelConnector@0.0.0.0:50075
> 2016-01-25 21:13:08,225 INFO org.apache.hadoop.ipc.CallQueueManager: Using callQueue class java.util.concurrent.LinkedBlockingQueue
> 2016-01-25 21:13:08,239 INFO org.apache.hadoop.ipc.Server: Starting Socket Reader #1 for port 50020
> 2016-01-25 21:13:08,288 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: Opened IPC server at /0.0.0.0:50020
> 2016-01-25 21:13:08,300 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: Refresh request received for nameservices: null
> 2016-01-25 21:13:08,316 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: Starting BPOfferServices for nameservices: <default>
> 2016-01-25 21:13:08,321 WARN org.apache.hadoop.hdfs.server.common.Util: Path /usr/local/hadoop/dfs/name/data should be specified as a URI in configuration files. Please update hdfs configuration.
> 2016-01-25 21:13:08,325 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: Block pool <registering> (Datanode Uuid unassigned) service to hdnode01/192.168.0.10:54310 starting to offer service
> 2016-01-25 21:13:08,326 INFO org.apache.hadoop.ipc.Server: IPC Server Responder: starting
> 2016-01-25 21:13:08,326 INFO org.apache.hadoop.ipc.Server: IPC Server listener on 50020: starting
> 2016-01-25 21:13:08,719 INFO org.apache.hadoop.hdfs.server.common.Storage: Data-node version: -55 and name-node layout version: -56
> 2016-01-25 21:13:08,828 INFO org.apache.hadoop.hdfs.server.common.Storage: Lock on /usr/local/hadoop/dfs/name/data/in_use.lock acquired by nodename 10365@jose-ubuntu
> 2016-01-25 21:13:08,833 INFO org.apache.hadoop.hdfs.server.common.Storage: Storage directory /usr/local/hadoop/dfs/name/data is not formatted
> 2016-01-25 21:13:08,833 INFO org.apache.hadoop.hdfs.server.common.Storage: Formatting ...
> 2016-01-25 21:13:09,017 INFO org.apache.hadoop.hdfs.server.common.Storage: Analyzing storage directories for bpid BP-216406264-127.0.0.1-1453767164845
> 2016-01-25 21:13:09,017 INFO org.apache.hadoop.hdfs.server.common.Storage: Locking is disabled
> 2016-01-25 21:13:09,018 INFO org.apache.hadoop.hdfs.server.common.Storage: Storage directory /usr/local/hadoop/dfs/name/data/current/BP-216406264-127.0.0.1-1453767164845 is not formatted.
> 2016-01-25 21:13:09,018 INFO org.apache.hadoop.hdfs.server.common.Storage: Formatting ...
> 2016-01-25 21:13:09,018 INFO org.apache.hadoop.hdfs.server.common.Storage: Formatting block pool BP-216406264-127.0.0.1-1453767164845 directory /usr/local/hadoop/dfs/name/data/current/BP-216406264-127.0.0.1-1453767164845/current
> 2016-01-25 21:13:09,072 INFO org.apache.hadoop.hdfs.server.common.Storage: Restored 0 block files from trash.
> 2016-01-25 21:13:09,198 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: Setting up storage: nsid=1479061672;bpid=BP-216406264-127.0.0.1-1453767164845;lv=-55;nsInfo=lv=-56;cid=CID-8fa0e75b-6942-452a-a4e6-8cd0c24de652;nsid=1479061672;c=0;bpid=BP-216406264-127.0.0.1-1453767164845;dnuuid=null
> 2016-01-25 21:13:09,248 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: Generated and persisted new Datanode UUID 6b4236c8-2183-49ba-84d7-a273298ba37a
> 2016-01-25 21:13:09,268 WARN org.apache.hadoop.hdfs.server.common.Util: Path /usr/local/hadoop/dfs/name/data should be specified as a URI in configuration files. Please update hdfs configuration.
> 2016-01-25 21:13:09,270 INFO org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsDatasetImpl: Added volume - /usr/local/hadoop/dfs/name/data/current, StorageType: DISK
> 2016-01-25 21:13:09,279 INFO org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsDatasetImpl: Registered FSDatasetState MBean
> 2016-01-25 21:13:09,282 INFO org.apache.hadoop.hdfs.server.datanode.DirectoryScanner: Periodic Directory Tree Verification scan starting at 1453784080282 with interval 21600000
> 2016-01-25 21:13:09,283 INFO org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsDatasetImpl: Adding block pool BP-216406264-127.0.0.1-1453767164845
> 2016-01-25 21:13:09,284 INFO org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsDatasetImpl: Scanning block pool BP-216406264-127.0.0.1-1453767164845 on volume /usr/local/hadoop/dfs/name/data/current...
> 2016-01-25 21:13:09,299 INFO org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsDatasetImpl: Time taken to scan block pool BP-216406264-127.0.0.1-1453767164845 on /usr/local/hadoop/dfs/name/data/current: 15ms
> 2016-01-25 21:13:09,300 INFO org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsDatasetImpl: Total time to scan all replicas for block pool BP-216406264-127.0.0.1-1453767164845: 17ms
> 2016-01-25 21:13:09,301 INFO org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsDatasetImpl: Adding replicas to map for block pool BP-216406264-127.0.0.1-1453767164845 on volume /usr/local/hadoop/dfs/name/data/current...
> 2016-01-25 21:13:09,301 INFO org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsDatasetImpl: Time to add replicas to map for block pool BP-216406264-127.0.0.1-1453767164845 on volume /usr/local/hadoop/dfs/name/data/current: 0ms
> 2016-01-25 21:13:09,301 INFO org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsDatasetImpl: Total time to add all replicas to map: 1ms
> 2016-01-25 21:13:09,305 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: Block pool BP-216406264-127.0.0.1-1453767164845 (Datanode Uuid null) service to hdnode01/192.168.0.10:54310 beginning handshake with NN
> 2016-01-25 21:13:09,355 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: Block pool Block pool BP-216406264-127.0.0.1-1453767164845 (Datanode Uuid null) service to hdnode01/192.168.0.10:54310 successfully registered with NN
> 2016-01-25 21:13:09,356 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: For namenode hdnode01/192.168.0.10:54310 using DELETEREPORT_INTERVAL of 300000 msec  BLOCKREPORT_INTERVAL of 21600000msec CACHEREPORT_INTERVAL of 10000msec Initial delay: 0msec; heartBeatInterval=3000
> 2016-01-25 21:13:09,444 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: Namenode Block pool BP-216406264-127.0.0.1-1453767164845 (Datanode Uuid 6b4236c8-2183-49ba-84d7-a273298ba37a) service to hdnode01/192.168.0.10:54310 trying to claim ACTIVE state with txid=1
> 2016-01-25 21:13:09,444 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: Acknowledging ACTIVE Namenode Block pool BP-216406264-127.0.0.1-1453767164845 (Datanode Uuid 6b4236c8-2183-49ba-84d7-a273298ba37a) service to hdnode01/192.168.0.10:54310
> 2016-01-25 21:13:09,487 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: Sent 1 blockreports 0 blocks total. Took 1 msec to generate and 42 msecs for RPC and NN processing.  Got back commands none
> 2016-01-25 21:13:09,492 INFO org.apache.hadoop.util.GSet: Computing capacity for map BlockMap
> 2016-01-25 21:13:09,493 INFO org.apache.hadoop.util.GSet: VM type       = 64-bit
> 2016-01-25 21:13:09,494 INFO org.apache.hadoop.util.GSet: 0.5% max memory 1.8 GB = 9.1 MB
> 2016-01-25 21:13:09,494 INFO org.apache.hadoop.util.GSet: capacity      = 2^20 = 1048576 entries
> 2016-01-25 21:13:09,495 INFO org.apache.hadoop.hdfs.server.datanode.BlockPoolSliceScanner: Periodic Block Verification Scanner initialized with interval 504 hours for block pool BP-216406264-127.0.0.1-1453767164845
> 2016-01-25 21:13:09,499 INFO org.apache.hadoop.hdfs.server.datanode.DataBlockScanner: Added bpid=BP-216406264-127.0.0.1-1453767164845 to blockPoolScannerMap, new size=1
> 2016-01-25 21:13:32,355 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: Receiving BP-216406264-127.0.0.1-1453767164845:blk_1073741825_1001 src: /192.168.0.10:58649 dest: /192.168.0.10:50010
> 2016-01-25 21:13:32,482 INFO org.apache.hadoop.hdfs.server.datanode.DataNode.clienttrace: src: /192.168.0.10:58649, dest: /192.168.0.10:50010, bytes: 343691, op: HDFS_WRITE, cliID: DFSClient_NONMAPREDUCE_538002429_1, offset: 0, srvID: 6b4236c8-2183-49ba-84d7-a273298ba37a, blockid: BP-216406264-127.0.0.1-1453767164845:blk_1073741825_1001, duration: 98632367
> 2016-01-25 21:13:32,482 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: PacketResponder: BP-216406264-127.0.0.1-1453767164845:blk_1073741825_1001, type=LAST_IN_PIPELINE, downstreams=0:[] terminating
> 2016-01-25 21:13:34,291 INFO org.apache.hadoop.hdfs.server.datanode.BlockPoolSliceScanner: Verification succeeded for BP-216406264-127.0.0.1-1453767164845:blk_1073741825_1001
> 2016-01-25 21:14:10,176 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: Receiving BP-216406264-127.0.0.1-1453767164845:blk_1073741826_1002 src: /192.168.0.10:58663 dest: /192.168.0.10:50010
> 2016-01-25 21:14:10,220 INFO org.apache.hadoop.hdfs.server.datanode.DataNode.clienttrace: src: /192.168.0.10:58663, dest: /192.168.0.10:50010, bytes: 270263, op: HDFS_WRITE, cliID: DFSClient_NONMAPREDUCE_342504113_1, offset: 0, srvID: 6b4236c8-2183-49ba-84d7-a273298ba37a, blockid: BP-216406264-127.0.0.1-1453767164845:blk_1073741826_1002, duration: 42378742
> 2016-01-25 21:14:10,221 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: PacketResponder: BP-216406264-127.0.0.1-1453767164845:blk_1073741826_1002, type=LAST_IN_PIPELINE, downstreams=0:[] terminating
> 2016-01-25 21:14:10,714 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: Receiving BP-216406264-127.0.0.1-1453767164845:blk_1073741827_1003 src: /192.168.0.10:58664 dest: /192.168.0.10:50010
> 2016-01-25 21:14:10,721 INFO org.apache.hadoop.hdfs.server.datanode.DataNode.clienttrace: src: /192.168.0.10:58664, dest: /192.168.0.10:50010, bytes: 121, op: HDFS_WRITE, cliID: DFSClient_NONMAPREDUCE_342504113_1, offset: 0, srvID: 6b4236c8-2183-49ba-84d7-a273298ba37a, blockid: BP-216406264-127.0.0.1-1453767164845:blk_1073741827_1003, duration: 2656758
> 2016-01-25 21:14:10,721 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: PacketResponder: BP-216406264-127.0.0.1-1453767164845:blk_1073741827_1003, type=LAST_IN_PIPELINE, downstreams=0:[] terminating
> 2016-01-25 21:14:10,853 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: Receiving BP-216406264-127.0.0.1-1453767164845:blk_1073741828_1004 src: /192.168.0.10:58665 dest: /192.168.0.10:50010
> 2016-01-25 21:14:10,860 INFO org.apache.hadoop.hdfs.server.datanode.DataNode.clienttrace: src: /192.168.0.10:58665, dest: /192.168.0.10:50010, bytes: 26, op: HDFS_WRITE, cliID: DFSClient_NONMAPREDUCE_342504113_1, offset: 0, srvID: 6b4236c8-2183-49ba-84d7-a273298ba37a, blockid: BP-216406264-127.0.0.1-1453767164845:blk_1073741828_1004, duration: 3257396
> 2016-01-25 21:14:10,861 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: PacketResponder: BP-216406264-127.0.0.1-1453767164845:blk_1073741828_1004, type=LAST_IN_PIPELINE, downstreams=0:[] terminating
> 2016-01-25 21:14:11,717 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: Receiving BP-216406264-127.0.0.1-1453767164845:blk_1073741829_1005 src: /192.168.0.10:58666 dest: /192.168.0.10:50010
> 2016-01-25 21:14:11,726 INFO org.apache.hadoop.hdfs.server.datanode.DataNode.clienttrace: src: /192.168.0.10:58666, dest: /192.168.0.10:50010, bytes: 77957, op: HDFS_WRITE, cliID: DFSClient_NONMAPREDUCE_342504113_1, offset: 0, srvID: 6b4236c8-2183-49ba-84d7-a273298ba37a, blockid: BP-216406264-127.0.0.1-1453767164845:blk_1073741829_1005, duration: 6180229
> 2016-01-25 21:14:11,727 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: PacketResponder: BP-216406264-127.0.0.1-1453767164845:blk_1073741829_1005, type=LAST_IN_PIPELINE, downstreams=0:[] terminating
> 2016-01-25 21:14:14,298 INFO org.apache.hadoop.hdfs.server.datanode.BlockPoolSliceScanner: Verification succeeded for BP-216406264-127.0.0.1-1453767164845:blk_1073741829_1005
> 2016-01-25 21:14:14,299 INFO org.apache.hadoop.hdfs.server.datanode.BlockPoolSliceScanner: Verification succeeded for BP-216406264-127.0.0.1-1453767164845:blk_1073741827_1003
> 2016-01-25 21:14:14,305 INFO org.apache.hadoop.hdfs.server.datanode.BlockPoolSliceScanner: Verification succeeded for BP-216406264-127.0.0.1-1453767164845:blk_1073741826_1002
> 2016-01-25 21:14:14,305 INFO org.apache.hadoop.hdfs.server.datanode.BlockPoolSliceScanner: Verification succeeded for BP-216406264-127.0.0.1-1453767164845:blk_1073741828_1004
> 2016-01-25 21:14:16,099 INFO org.apache.hadoop.hdfs.server.datanode.DataNode.clienttrace: src: /192.168.0.10:50010, dest: /192.168.0.10:58674, bytes: 272375, op: HDFS_READ, cliID: DFSClient_NONMAPREDUCE_76231625_102, offset: 0, srvID: 6b4236c8-2183-49ba-84d7-a273298ba37a, blockid: BP-216406264-127.0.0.1-1453767164845:blk_1073741826_1002, duration: 2878920
> 2016-01-25 21:14:16,253 INFO org.apache.hadoop.hdfs.server.datanode.DataNode.clienttrace: src: /192.168.0.10:50010, dest: /192.168.0.10:58674, bytes: 30, op: HDFS_READ, cliID: DFSClient_NONMAPREDUCE_76231625_102, offset: 0, srvID: 6b4236c8-2183-49ba-84d7-a273298ba37a, blockid: BP-216406264-127.0.0.1-1453767164845:blk_1073741828_1004, duration: 236423
> 2016-01-25 21:14:16,312 INFO org.apache.hadoop.hdfs.server.datanode.DataNode.clienttrace: src: /192.168.0.10:50010, dest: /192.168.0.10:58674, bytes: 125, op: HDFS_READ, cliID: DFSClient_NONMAPREDUCE_76231625_102, offset: 0, srvID: 6b4236c8-2183-49ba-84d7-a273298ba37a, blockid: BP-216406264-127.0.0.1-1453767164845:blk_1073741827_1003, duration: 909236
> 2016-01-25 21:14:16,364 INFO org.apache.hadoop.hdfs.server.datanode.DataNode.clienttrace: src: /192.168.0.10:50010, dest: /192.168.0.10:58674, bytes: 78569, op: HDFS_READ, cliID: DFSClient_NONMAPREDUCE_76231625_102, offset: 0, srvID: 6b4236c8-2183-49ba-84d7-a273298ba37a, blockid: BP-216406264-127.0.0.1-1453767164845:blk_1073741829_1005, duration: 1489437
> 2016-01-25 21:14:20,174 INFO org.apache.hadoop.hdfs.server.datanode.DataNode.clienttrace: src: /192.168.0.10:50010, dest: /192.168.0.10:58676, bytes: 30, op: HDFS_READ, cliID: DFSClient_NONMAPREDUCE_694066886_1, offset: 0, srvID: 6b4236c8-2183-49ba-84d7-a273298ba37a, blockid: BP-216406264-127.0.0.1-1453767164845:blk_1073741828_1004, duration: 899980
> 2016-01-25 21:14:22,692 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: Receiving BP-216406264-127.0.0.1-1453767164845:blk_1073741830_1006 src: /192.168.0.10:58679 dest: /192.168.0.10:50010
> 2016-01-25 21:14:22,754 INFO org.apache.hadoop.hdfs.server.datanode.DataNode.clienttrace: src: /192.168.0.10:58679, dest: /192.168.0.10:50010, bytes: 92684, op: HDFS_WRITE, cliID: DFSClient_NONMAPREDUCE_694066886_1, offset: 0, srvID: 6b4236c8-2183-49ba-84d7-a273298ba37a, blockid: BP-216406264-127.0.0.1-1453767164845:blk_1073741830_1006, duration: 60114851
> 2016-01-25 21:14:22,754 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: PacketResponder: BP-216406264-127.0.0.1-1453767164845:blk_1073741830_1006, type=LAST_IN_PIPELINE, downstreams=0:[] terminating
> 2016-01-25 21:14:24,319 INFO org.apache.hadoop.hdfs.server.datanode.BlockPoolSliceScanner: Verification succeeded for BP-216406264-127.0.0.1-1453767164845:blk_1073741830_1006
> 2016-01-25 21:14:25,808 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: Receiving BP-216406264-127.0.0.1-1453767164845:blk_1073741831_1007 src: /192.168.0.10:58681 dest: /192.168.0.10:50010
> 2016-01-25 21:14:35,846 INFO org.apache.hadoop.hdfs.server.datanode.DataNode.clienttrace: src: /192.168.0.10:58681, dest: /192.168.0.10:50010, bytes: 21176, op: HDFS_WRITE, cliID: DFSClient_NONMAPREDUCE_694066886_1, offset: 0, srvID: 6b4236c8-2183-49ba-84d7-a273298ba37a, blockid: BP-216406264-127.0.0.1-1453767164845:blk_1073741831_1007, duration: 9975409048
> 2016-01-25 21:14:35,846 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: PacketResponder: BP-216406264-127.0.0.1-1453767164845:blk_1073741831_1007, type=LAST_IN_PIPELINE, downstreams=0:[] terminating
> 2016-01-25 21:14:36,066 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: Receiving BP-216406264-127.0.0.1-1453767164845:blk_1073741832_1008 src: /192.168.0.10:58682 dest: /192.168.0.10:50010
> 2016-01-25 21:14:36,075 INFO org.apache.hadoop.hdfs.server.datanode.DataNode.clienttrace: src: /192.168.0.10:58682, dest: /192.168.0.10:50010, bytes: 332, op: HDFS_WRITE, cliID: DFSClient_NONMAPREDUCE_694066886_1, offset: 0, srvID: 6b4236c8-2183-49ba-84d7-a273298ba37a, blockid: BP-216406264-127.0.0.1-1453767164845:blk_1073741832_1008, duration: 4992595
> 2016-01-25 21:14:36,075 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: PacketResponder: BP-216406264-127.0.0.1-1453767164845:blk_1073741832_1008, type=LAST_IN_PIPELINE, downstreams=0:[] terminating
> 2016-01-25 21:14:36,548 INFO org.apache.hadoop.hdfs.server.datanode.DataNode.clienttrace: src: /192.168.0.10:50010, dest: /192.168.0.10:58683, bytes: 21344, op: HDFS_READ, cliID: DFSClient_NONMAPREDUCE_694066886_1, offset: 0, srvID: 6b4236c8-2183-49ba-84d7-a273298ba37a, blockid: BP-216406264-127.0.0.1-1453767164845:blk_1073741831_1007, duration: 497225
> 2016-01-25 21:14:36,564 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: Receiving BP-216406264-127.0.0.1-1453767164845:blk_1073741833_1009 src: /192.168.0.10:58684 dest: /192.168.0.10:50010
> 2016-01-25 21:14:36,572 INFO org.apache.hadoop.hdfs.server.datanode.DataNode.clienttrace: src: /192.168.0.10:58684, dest: /192.168.0.10:50010, bytes: 21176, op: HDFS_WRITE, cliID: DFSClient_NONMAPREDUCE_694066886_1, offset: 0, srvID: 6b4236c8-2183-49ba-84d7-a273298ba37a, blockid: BP-216406264-127.0.0.1-1453767164845:blk_1073741833_1009, duration: 2649337
> 2016-01-25 21:14:36,573 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: PacketResponder: BP-216406264-127.0.0.1-1453767164845:blk_1073741833_1009, type=LAST_IN_PIPELINE, downstreams=0:[] terminating
> 2016-01-25 21:14:36,622 INFO org.apache.hadoop.hdfs.server.datanode.DataNode.clienttrace: src: /192.168.0.10:50010, dest: /192.168.0.10:58683, bytes: 93412, op: HDFS_READ, cliID: DFSClient_NONMAPREDUCE_694066886_1, offset: 0, srvID: 6b4236c8-2183-49ba-84d7-a273298ba37a, blockid: BP-216406264-127.0.0.1-1453767164845:blk_1073741830_1006, duration: 379439
> 2016-01-25 21:14:36,638 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: Receiving BP-216406264-127.0.0.1-1453767164845:blk_1073741834_1010 src: /192.168.0.10:58685 dest: /192.168.0.10:50010
> 2016-01-25 21:14:36,646 INFO org.apache.hadoop.hdfs.server.datanode.DataNode.clienttrace: src: /192.168.0.10:58685, dest: /192.168.0.10:50010, bytes: 92684, op: HDFS_WRITE, cliID: DFSClient_NONMAPREDUCE_694066886_1, offset: 0, srvID: 6b4236c8-2183-49ba-84d7-a273298ba37a, blockid: BP-216406264-127.0.0.1-1453767164845:blk_1073741834_1010, duration: 3135698
> 2016-01-25 21:14:36,646 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: PacketResponder: BP-216406264-127.0.0.1-1453767164845:blk_1073741834_1010, type=LAST_IN_PIPELINE, downstreams=0:[] terminating
> 2016-01-25 21:14:39,335 INFO org.apache.hadoop.hdfs.server.datanode.BlockPoolSliceScanner: Verification succeeded for BP-216406264-127.0.0.1-1453767164845:blk_1073741833_1009
> 2016-01-25 21:14:39,336 INFO org.apache.hadoop.hdfs.server.datanode.BlockPoolSliceScanner: Verification succeeded for BP-216406264-127.0.0.1-1453767164845:blk_1073741831_1007
> 2016-01-25 21:14:39,337 INFO org.apache.hadoop.hdfs.server.datanode.BlockPoolSliceScanner: Verification succeeded for BP-216406264-127.0.0.1-1453767164845:blk_1073741832_1008
> 2016-01-25 21:14:39,338 INFO org.apache.hadoop.hdfs.server.datanode.BlockPoolSliceScanner: Verification succeeded for BP-216406264-127.0.0.1-1453767164845:blk_1073741834_1010
> 2016-01-25 21:14:39,376 INFO org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsDatasetAsyncDiskService: Scheduling blk_1073741826_1002 file /usr/local/hadoop/dfs/name/data/current/BP-216406264-127.0.0.1-1453767164845/current/finalized/blk_1073741826 for deletion
> 2016-01-25 21:14:39,379 INFO org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsDatasetAsyncDiskService: Scheduling blk_1073741827_1003 file /usr/local/hadoop/dfs/name/data/current/BP-216406264-127.0.0.1-1453767164845/current/finalized/blk_1073741827 for deletion
> 2016-01-25 21:14:39,379 INFO org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsDatasetAsyncDiskService: Deleted BP-216406264-127.0.0.1-1453767164845 blk_1073741826_1002 file /usr/local/hadoop/dfs/name/data/current/BP-216406264-127.0.0.1-1453767164845/current/finalized/blk_1073741826
> 2016-01-25 21:14:39,379 INFO org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsDatasetAsyncDiskService: Scheduling blk_1073741828_1004 file /usr/local/hadoop/dfs/name/data/current/BP-216406264-127.0.0.1-1453767164845/current/finalized/blk_1073741828 for deletion
> 2016-01-25 21:14:39,380 INFO org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsDatasetAsyncDiskService: Deleted BP-216406264-127.0.0.1-1453767164845 blk_1073741827_1003 file /usr/local/hadoop/dfs/name/data/current/BP-216406264-127.0.0.1-1453767164845/current/finalized/blk_1073741827
> 2016-01-25 21:14:39,380 INFO org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsDatasetAsyncDiskService: Scheduling blk_1073741829_1005 file /usr/local/hadoop/dfs/name/data/current/BP-216406264-127.0.0.1-1453767164845/current/finalized/blk_1073741829 for deletion
> 2016-01-25 21:14:39,380 INFO org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsDatasetAsyncDiskService: Deleted BP-216406264-127.0.0.1-1453767164845 blk_1073741828_1004 file /usr/local/hadoop/dfs/name/data/current/BP-216406264-127.0.0.1-1453767164845/current/finalized/blk_1073741828
> 2016-01-25 21:14:39,380 INFO org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsDatasetAsyncDiskService: Scheduling blk_1073741830_1006 file /usr/local/hadoop/dfs/name/data/current/BP-216406264-127.0.0.1-1453767164845/current/finalized/blk_1073741830 for deletion
> 2016-01-25 21:14:39,381 INFO org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsDatasetAsyncDiskService: Deleted BP-216406264-127.0.0.1-1453767164845 blk_1073741829_1005 file /usr/local/hadoop/dfs/name/data/current/BP-216406264-127.0.0.1-1453767164845/current/finalized/blk_1073741829
> 2016-01-25 21:14:39,381 INFO org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsDatasetAsyncDiskService: Scheduling blk_1073741831_1007 file /usr/local/hadoop/dfs/name/data/current/BP-216406264-127.0.0.1-1453767164845/current/finalized/blk_1073741831 for deletion
> 2016-01-25 21:14:39,382 INFO org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsDatasetAsyncDiskService: Deleted BP-216406264-127.0.0.1-1453767164845 blk_1073741830_1006 file /usr/local/hadoop/dfs/name/data/current/BP-216406264-127.0.0.1-1453767164845/current/finalized/blk_1073741830
> 2016-01-25 21:14:39,382 INFO org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsDatasetAsyncDiskService: Deleted BP-216406264-127.0.0.1-1453767164845 blk_1073741831_1007 file /usr/local/hadoop/dfs/name/data/current/BP-216406264-127.0.0.1-1453767164845/current/finalized/blk_1073741831
> 2016-01-25 21:14:44,797 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: Receiving BP-216406264-127.0.0.1-1453767164845:blk_1073741835_1011 src: /192.168.0.10:58688 dest: /192.168.0.10:50010
> 2016-01-25 21:14:44,834 INFO org.apache.hadoop.hdfs.server.datanode.DataNode.clienttrace: src: /192.168.0.10:58688, dest: /192.168.0.10:50010, bytes: 57450, op: HDFS_WRITE, cliID: DFSClient_NONMAPREDUCE_-433405715_88, offset: 0, srvID: 6b4236c8-2183-49ba-84d7-a273298ba37a, blockid: BP-216406264-127.0.0.1-1453767164845:blk_1073741835_1011, duration: 34522284
> 2016-01-25 21:14:44,834 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: PacketResponder: BP-216406264-127.0.0.1-1453767164845:blk_1073741835_1011, type=LAST_IN_PIPELINE, downstreams=0:[] terminating
> 2016-01-25 21:14:49,343 INFO org.apache.hadoop.hdfs.server.datanode.BlockPoolSliceScanner: Verification succeeded for BP-216406264-127.0.0.1-1453767164845:blk_1073741835_1011
> 2016-01-25 21:16:33,785 INFO org.apache.hadoop.hdfs.server.datanode.DataNode.clienttrace: src: /192.168.0.10:50010, dest: /192.168.0.10:58694, bytes: 336, op: HDFS_READ, cliID: DFSClient_NONMAPREDUCE_-1832227986_1, offset: 0, srvID: 6b4236c8-2183-49ba-84d7-a273298ba37a, blockid: BP-216406264-127.0.0.1-1453767164845:blk_1073741832_1008, duration: 284719
> 2016-01-25 21:16:36,371 INFO org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsDatasetAsyncDiskService: Scheduling blk_1073741832_1008 file /usr/local/hadoop/dfs/name/data/current/BP-216406264-127.0.0.1-1453767164845/current/finalized/blk_1073741832 for deletion
> 2016-01-25 21:16:36,372 INFO org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsDatasetAsyncDiskService: Deleted BP-216406264-127.0.0.1-1453767164845 blk_1073741832_1008 file /usr/local/hadoop/dfs/name/data/current/BP-216406264-127.0.0.1-1453767164845/current/finalized/blk_1073741832
>
>
>
>
> 2016-01-21 18:52 GMT-03:00 Namikaze Minato <ll...@gmail.com>:
>
>> It could be a classpath issue (see
>> http://stackoverflow.com/a/25090151/4486184); I strongly suspect that is
>> the case.
>>
>> You could drill down to the exact root cause by looking at the
>> datanode logs (see
>>
>> http://mail-archives.apache.org/mod_mbox/hadoop-user/201410.mbox/%3CCAEMetGubzq12LXbLRk6N4ejOoKse9dLEWMW8_WE6aRj=+RQtVw@mail.gmail.com%3E
>> )
>> But I'm not sure we would get a different error from the one we already have...
>>
>> Check if your application has the correct values for the following
>> variables:
>> HADOOP_CONF_DIR
>> HADOOP_COMMON_HOME
>> HADOOP_HDFS_HOME
>> HADOOP_MAPRED_HOME
>> HADOOP_YARN_HOME
>>
>> I'm afraid I can't help you much more than this myself, sorry...
>>
>> LLoyd
>>
>> On 20 January 2016 at 02:08, José Luis Larroque <la...@gmail.com>
>> wrote:
>> > Hi guys, thanks for your answers.
>> >
>> > Wordcount logs:
>> >
>> > 16/01/19 21:58:32 INFO client.RMProxy: Connecting to ResourceManager at
>> > hdnode01/192.168.0.10:8050
>> > SLF4J: Class path contains multiple SLF4J bindings.
>> > SLF4J: Found binding in
>> >
>> [jar:file:/usr/local/hadoop/share/hadoop/common/lib/slf4j-log4j12-1.7.5.jar!/org/slf4j/impl/StaticLoggerBinder.class]
>> > SLF4J: Found binding in
>> >
>> [jar:file:/usr/local/hadoop/share/hadoop/yarn/lib/giraph-examples-1.1.0-for-hadoop-2.4.0-jar-with-dependencies.jar!/org/slf4j/impl/StaticLoggerBinder.class]
>> > SLF4J: Found binding in
>> >
>> [jar:file:/usr/local/hadoop/share/hadoop/mapreduce/giraph-examples-1.1.0-for-hadoop-2.4.0-jar-with-dependencies.jar!/org/slf4j/impl/StaticLoggerBinder.class]
>> > SLF4J: See http://www.slf4j.org/codes.html#multiple_bindings for an
>> > explanation.
>> > SLF4J: Actual binding is of type [org.slf4j.impl.Log4jLoggerFactory]
>> > 16/01/19 21:58:32 WARN util.NativeCodeLoader: Unable to load
>> native-hadoop
>> > library for your platform... using builtin-java classes where applicable
>> > hduser@jose-ubuntu:/usr/local/hadoop$ nano
>> >
>> /home/hduser/Desktop/Tesina/casos_de_prueba/resultados/resultado_cluster_modo_yarn_wordcount
>> > hduser@jose-ubuntu:/usr/local/hadoop$ nano
>> >
>> /home/hduser/Desktop/Tesina/casos_de_prueba/resultados/resultado_cluster_modo_yarn_wordcount
>> > hduser@jose-ubuntu:/usr/local/hadoop$ cat
>> >
>> /home/hduser/Desktop/Tesina/casos_de_prueba/resultados/resultado_cluster_modo_yarn_wordcount
>> >
>> >
>> > Container: container_1453244277886_0001_01_000002 on localhost_35711
>> > ======================================================================
>> > LogType: stderr
>> > LogLength: 45
>> > Log Contents:
>> > Error: Could not find or load main class 256
>> >
>> > LogType: stdout
>> > LogLength: 0
>> > Log Contents:
>> >
>> >
>> >
>> > Container: container_1453244277886_0001_01_000003 on localhost_35711
>> > ======================================================================
>> > LogType: stderr
>> > LogLength: 45
>> > Log Contents:
>> > Error: Could not find or load main class 256
>> >
>> > LogType: stdout
>> > LogLength: 0
>> > Log Contents:
>> >
>> >
>> >
>> > Container: container_1453244277886_0001_01_000004 on localhost_35711
>> > ======================================================================
>> > LogType: stderr
>> > LogLength: 45
>> > Log Contents:
>> > Error: Could not find or load main class 256
>> >
>> > LogType: stdout
>> > LogLength: 0
>> > Log Contents:
>> >
>> >
>> >
>> > Container: container_1453244277886_0001_01_000005 on localhost_35711
>> > ======================================================================
>> > LogType: stderr
>> > LogLength: 45
>> > Log Contents:
>> > Error: Could not find or load main class 256
>> >
>> > LogType: stdout
>> > LogLength: 0
>> > Log Contents:
>> >
>> >
>> >
>> > Container: container_1453244277886_0001_01_000001 on localhost_35711
>> > ======================================================================
>> > LogType: stderr
>> > LogLength: 929
>> > Log Contents:
>> > SLF4J: Class path contains multiple SLF4J bindings.
>> > SLF4J: Found binding in
>> >
>> [jar:file:/usr/local/hadoop/share/hadoop/common/lib/slf4j-log4j12-1.7.5.jar!/org/slf4j/impl/StaticLoggerBinder.class]
>> > SLF4J: Found binding in
>> >
>> [jar:file:/usr/local/hadoop/share/hadoop/yarn/lib/giraph-examples-1.1.0-for-hadoop-2.4.0-jar-with-dependencies.jar!/org/slf4j/impl/StaticLoggerBinder.class]
>> > SLF4J: Found binding in
>> >
>> [jar:file:/usr/local/hadoop/share/hadoop/mapreduce/giraph-examples-1.1.0-for-hadoop-2.4.0-jar-with-dependencies.jar!/org/slf4j/impl/StaticLoggerBinder.class]
>> > SLF4J: See http://www.slf4j.org/codes.html#multiple_bindings for an
>> > explanation.
>> > SLF4J: Actual binding is of type [org.slf4j.impl.Log4jLoggerFactory]
>> > log4j:WARN No appenders could be found for logger
>> > (org.apache.hadoop.ipc.Server).
>> > log4j:WARN Please initialize the log4j system properly.
>> > log4j:WARN See http://logging.apache.org/log4j/1.2/faq.html#noconfig
>> for
>> > more info.
>> >
>> > LogType: stdout
>> > LogLength: 0
>> > Log Contents:
>> >
>> > LogType: syslog
>> > LogLength: 56780
>> > Log Contents:
>> > 2016-01-19 20:04:11,329 INFO [main]
>> > org.apache.hadoop.mapreduce.v2.app.MRAppMaster: Created MRAppMaster for
>> > application appattempt_1453244277886_0001_000001
>> > 2016-01-19 20:04:11,657 WARN [main]
>> org.apache.hadoop.conf.Configuration:
>> > job.xml:an attempt to override final parameter:
>> > mapreduce.job.end-notification.max.retry.interval;  Ignoring.
>> > 2016-01-19 20:04:11,674 WARN [main]
>> org.apache.hadoop.conf.Configuration:
>> > job.xml:an attempt to override final parameter:
>> > mapreduce.job.end-notification.max.attempts;  Ignoring.
>> > 2016-01-19 20:04:11,765 WARN [main]
>> org.apache.hadoop.util.NativeCodeLoader:
>> > Unable to load native-hadoop library for your platform... using
>> builtin-java
>> > classes where applicable
>> > 2016-01-19 20:04:11,776 INFO [main]
>> > org.apache.hadoop.mapreduce.v2.app.MRAppMaster: Executing with tokens:
>> > 2016-01-19 20:04:11,776 INFO [main]
>> > org.apache.hadoop.mapreduce.v2.app.MRAppMaster: Kind: YARN_AM_RM_TOKEN,
>> > Service: , Ident:
>> > (org.apache.hadoop.yarn.security.AMRMTokenIdentifier@73e8f4b9)
>> > 2016-01-19 20:04:11,801 INFO [main]
>> > org.apache.hadoop.mapreduce.v2.app.MRAppMaster: The specific max
>> attempts: 2
>> > for application: 1. Attempt num: 1 is last retry: false
>> > 2016-01-19 20:04:11,806 INFO [main]
>> > org.apache.hadoop.mapreduce.v2.app.MRAppMaster: Using mapred
>> > newApiCommitter.
>> > 2016-01-19 20:04:11,934 WARN [main]
>> org.apache.hadoop.conf.Configuration:
>> > job.xml:an attempt to override final parameter: dfs.datanode.data.dir;
>> > Ignoring.
>> > 2016-01-19 20:04:11,939 WARN [main]
>> org.apache.hadoop.conf.Configuration:
>> > job.xml:an attempt to override final parameter:
>> > mapreduce.job.end-notification.max.retry.interval;  Ignoring.
>> > 2016-01-19 20:04:11,948 WARN [main]
>> org.apache.hadoop.conf.Configuration:
>> > job.xml:an attempt to override final parameter: dfs.namenode.name.dir;
>> > Ignoring.
>> > 2016-01-19 20:04:11,953 WARN [main]
>> org.apache.hadoop.conf.Configuration:
>> > job.xml:an attempt to override final parameter:
>> > mapreduce.job.end-notification.max.attempts;  Ignoring.
>> > 2016-01-19 20:04:12,464 INFO [main]
>> > org.apache.hadoop.mapreduce.v2.app.MRAppMaster: OutputCommitter set in
>> > config null
>> > 2016-01-19 20:04:12,526 INFO [main]
>> > org.apache.hadoop.mapreduce.v2.app.MRAppMaster: OutputCommitter is
>> > org.apache.hadoop.mapreduce.lib.output.FileOutputCommitter
>> > 2016-01-19 20:04:12,548 INFO [main]
>> > org.apache.hadoop.yarn.event.AsyncDispatcher: Registering class
>> > org.apache.hadoop.mapreduce.jobhistory.EventType for class
>> > org.apache.hadoop.mapreduce.jobhistory.JobHistoryEventHandler
>> > 2016-01-19 20:04:12,549 INFO [main]
>> > org.apache.hadoop.yarn.event.AsyncDispatcher: Registering class
>> > org.apache.hadoop.mapreduce.v2.app.job.event.JobEventType for class
>> > org.apache.hadoop.mapreduce.v2.app.MRAppMaster$JobEventDispatcher
>> > 2016-01-19 20:04:12,550 INFO [main]
>> > org.apache.hadoop.yarn.event.AsyncDispatcher: Registering class
>> > org.apache.hadoop.mapreduce.v2.app.job.event.TaskEventType for class
>> > org.apache.hadoop.mapreduce.v2.app.MRAppMaster$TaskEventDispatcher
>> > 2016-01-19 20:04:12,551 INFO [main]
>> > org.apache.hadoop.yarn.event.AsyncDispatcher: Registering class
>> > org.apache.hadoop.mapreduce.v2.app.job.event.TaskAttemptEventType for
>> class
>> >
>> org.apache.hadoop.mapreduce.v2.app.MRAppMaster$TaskAttemptEventDispatcher
>> > 2016-01-19 20:04:12,552 INFO [main]
>> > org.apache.hadoop.yarn.event.AsyncDispatcher: Registering class
>> > org.apache.hadoop.mapreduce.v2.app.commit.CommitterEventType for class
>> > org.apache.hadoop.mapreduce.v2.app.commit.CommitterEventHandler
>> > 2016-01-19 20:04:12,557 INFO [main]
>> > org.apache.hadoop.yarn.event.AsyncDispatcher: Registering class
>> > org.apache.hadoop.mapreduce.v2.app.speculate.Speculator$EventType for
>> class
>> > org.apache.hadoop.mapreduce.v2.app.MRAppMaster$SpeculatorEventDispatcher
>> > 2016-01-19 20:04:12,558 INFO [main]
>> > org.apache.hadoop.yarn.event.AsyncDispatcher: Registering class
>> > org.apache.hadoop.mapreduce.v2.app.rm.ContainerAllocator$EventType for
>> class
>> > org.apache.hadoop.mapreduce.v2.app.MRAppMaster$ContainerAllocatorRouter
>> > 2016-01-19 20:04:12,559 INFO [main]
>> > org.apache.hadoop.yarn.event.AsyncDispatcher: Registering class
>> > org.apache.hadoop.mapreduce.v2.app.launcher.ContainerLauncher$EventType
>> for
>> > class
>> org.apache.hadoop.mapreduce.v2.app.MRAppMaster$ContainerLauncherRouter
>> > 2016-01-19 20:04:12,615 INFO [main]
>> > org.apache.hadoop.mapreduce.jobhistory.JobHistoryEventHandler: Perms
>> after
>> > creating 488, Expected: 504
>> > 2016-01-19 20:04:12,615 INFO [main]
>> > org.apache.hadoop.mapreduce.jobhistory.JobHistoryEventHandler:
>> Explicitly
>> > setting permissions to : 504, rwxrwx---
>> > 2016-01-19 20:04:12,731 INFO [main]
>> > org.apache.hadoop.yarn.event.AsyncDispatcher: Registering class
>> > org.apache.hadoop.mapreduce.v2.app.job.event.JobFinishEvent$Type for
>> class
>> > org.apache.hadoop.mapreduce.v2.app.MRAppMaster$JobFinishEventHandler
>> > 2016-01-19 20:04:12,956 INFO [main]
>> > org.apache.hadoop.metrics2.impl.MetricsConfig: loaded properties from
>> > hadoop-metrics2.properties
>> > 2016-01-19 20:04:13,018 INFO [main]
>> > org.apache.hadoop.metrics2.impl.MetricsSystemImpl: Scheduled snapshot
>> period
>> > at 10 second(s).
>> > 2016-01-19 20:04:13,018 INFO [main]
>> > org.apache.hadoop.metrics2.impl.MetricsSystemImpl: MRAppMaster metrics
>> > system started
>> > 2016-01-19 20:04:13,026 INFO [main]
>> > org.apache.hadoop.mapreduce.v2.app.job.impl.JobImpl: Adding job token
>> for
>> > job_1453244277886_0001 to jobTokenSecretManager
>> > 2016-01-19 20:04:13,139 INFO [main]
>> > org.apache.hadoop.mapreduce.v2.app.job.impl.JobImpl: Not uberizing
>> > job_1453244277886_0001 because: not enabled;
>> > 2016-01-19 20:04:13,154 INFO [main]
>> > org.apache.hadoop.mapreduce.v2.app.job.impl.JobImpl: Input size for job
>> > job_1453244277886_0001 = 343691. Number of splits = 1
>> > 2016-01-19 20:04:13,156 INFO [main]
>> > org.apache.hadoop.mapreduce.v2.app.job.impl.JobImpl: Number of reduces
>> for
>> > job job_1453244277886_0001 = 1
>> > 2016-01-19 20:04:13,156 INFO [main]
>> > org.apache.hadoop.mapreduce.v2.app.job.impl.JobImpl:
>> > job_1453244277886_0001Job Transitioned from NEW to INITED
>> > 2016-01-19 20:04:13,157 INFO [main]
>> > org.apache.hadoop.mapreduce.v2.app.MRAppMaster: MRAppMaster launching
>> > normal, non-uberized, multi-container job job_1453244277886_0001.
>> > 2016-01-19 20:04:13,186 INFO [main]
>> org.apache.hadoop.ipc.CallQueueManager:
>> > Using callQueue class java.util.concurrent.LinkedBlockingQueue
>> > 2016-01-19 20:04:13,195 INFO [Socket Reader #1 for port 56461]
>> > org.apache.hadoop.ipc.Server: Starting Socket Reader #1 for port 56461
>> > 2016-01-19 20:04:13,237 INFO [main]
>> > org.apache.hadoop.yarn.factories.impl.pb.RpcServerFactoryPBImpl: Adding
>> > protocol org.apache.hadoop.mapreduce.v2.api.MRClientProtocolPB to the
>> server
>> > 2016-01-19 20:04:13,238 INFO [IPC Server Responder]
>> > org.apache.hadoop.ipc.Server: IPC Server Responder: starting
>> > 2016-01-19 20:04:13,238 INFO [IPC Server listener on 56461]
>> > org.apache.hadoop.ipc.Server: IPC Server listener on 56461: starting
>> > 2016-01-19 20:04:13,239 INFO [main]
>> > org.apache.hadoop.mapreduce.v2.app.client.MRClientService: Instantiated
>> > MRClientService at jose-ubuntu/127.0.0.1:56461
>> > 2016-01-19 20:04:13,300 INFO [main] org.mortbay.log: Logging to
>> > org.slf4j.impl.Log4jLoggerAdapter(org.mortbay.log) via
>> > org.mortbay.log.Slf4jLog
>> > 2016-01-19 20:04:13,304 INFO [main]
>> org.apache.hadoop.http.HttpRequestLog:
>> > Http request log for http.requests.mapreduce is not defined
>> > 2016-01-19 20:04:13,315 INFO [main] org.apache.hadoop.http.HttpServer2:
>> > Added global filter 'safety'
>> > (class=org.apache.hadoop.http.HttpServer2$QuotingInputFilter)
>> > 2016-01-19 20:04:13,320 INFO [main] org.apache.hadoop.http.HttpServer2:
>> > Added filter AM_PROXY_FILTER
>> > (class=org.apache.hadoop.yarn.server.webproxy.amfilter.AmIpFilter) to
>> > context mapreduce
>> > 2016-01-19 20:04:13,321 INFO [main] org.apache.hadoop.http.HttpServer2:
>> > Added filter AM_PROXY_FILTER
>> > (class=org.apache.hadoop.yarn.server.webproxy.amfilter.AmIpFilter) to
>> > context static
>> > 2016-01-19 20:04:13,324 INFO [main] org.apache.hadoop.http.HttpServer2:
>> > adding path spec: /mapreduce/*
>> > 2016-01-19 20:04:13,324 INFO [main] org.apache.hadoop.http.HttpServer2:
>> > adding path spec: /ws/*
>> > 2016-01-19 20:04:13,335 INFO [main] org.apache.hadoop.http.HttpServer2:
>> > Jetty bound to port 44070
>> > 2016-01-19 20:04:13,335 INFO [main] org.mortbay.log: jetty-6.1.26
>> > 2016-01-19 20:04:13,370 INFO [main] org.mortbay.log: Extract
>> >
>> jar:file:/usr/local/hadoop/share/hadoop/yarn/hadoop-yarn-common-2.4.0.jar!/webapps/mapreduce
>> > to /tmp/Jetty_0_0_0_0_44070_mapreduce____rdpvio/webapp
>> > 2016-01-19 20:04:13,647 INFO [main] org.mortbay.log: Started
>> > SelectChannelConnector@0.0.0.0:44070
>> > 2016-01-19 20:04:13,647 INFO [main]
>> org.apache.hadoop.yarn.webapp.WebApps:
>> > Web app /mapreduce started at 44070
>> > 2016-01-19 20:04:13,956 INFO [main]
>> org.apache.hadoop.yarn.webapp.WebApps:
>> > Registered webapp guice modules
>> > 2016-01-19 20:04:13,960 INFO [AsyncDispatcher event handler]
>> > org.apache.hadoop.mapreduce.v2.app.speculate.DefaultSpeculator:
>> JOB_CREATE
>> > job_1453244277886_0001
>> > 2016-01-19 20:04:13,961 INFO [main]
>> org.apache.hadoop.ipc.CallQueueManager:
>> > Using callQueue class java.util.concurrent.LinkedBlockingQueue
>> > 2016-01-19 20:04:13,961 INFO [Socket Reader #1 for port 45584]
>> > org.apache.hadoop.ipc.Server: Starting Socket Reader #1 for port 45584
>> > 2016-01-19 20:04:13,966 INFO [IPC Server Responder]
>> > org.apache.hadoop.ipc.Server: IPC Server Responder: starting
>> > 2016-01-19 20:04:13,966 INFO [IPC Server listener on 45584]
>> > org.apache.hadoop.ipc.Server: IPC Server listener on 45584: starting
>> > 2016-01-19 20:04:13,987 INFO [main]
>> > org.apache.hadoop.mapreduce.v2.app.rm.RMContainerRequestor:
>> > nodeBlacklistingEnabled:true
>> > 2016-01-19 20:04:13,987 INFO [main]
>> > org.apache.hadoop.mapreduce.v2.app.rm.RMContainerRequestor:
>> > maxTaskFailuresPerNode is 3
>> > 2016-01-19 20:04:13,988 INFO [main]
>> > org.apache.hadoop.mapreduce.v2.app.rm.RMContainerRequestor:
>> > blacklistDisablePercent is 33
>> > 2016-01-19 20:04:14,052 WARN [main]
>> org.apache.hadoop.conf.Configuration:
>> > job.xml:an attempt to override final parameter: dfs.datanode.data.dir;
>> > Ignoring.
>> > 2016-01-19 20:04:14,054 WARN [main]
>> org.apache.hadoop.conf.Configuration:
>> > job.xml:an attempt to override final parameter:
>> > mapreduce.job.end-notification.max.retry.interval;  Ignoring.
>> > 2016-01-19 20:04:14,057 WARN [main]
>> org.apache.hadoop.conf.Configuration:
>> > job.xml:an attempt to override final parameter: dfs.namenode.name.dir;
>> > Ignoring.
>> > 2016-01-19 20:04:14,059 WARN [main]
>> org.apache.hadoop.conf.Configuration:
>> > job.xml:an attempt to override final parameter:
>> > mapreduce.job.end-notification.max.attempts;  Ignoring.
>> > 2016-01-19 20:04:14,062 INFO [main]
>> org.apache.hadoop.yarn.client.RMProxy:
>> > Connecting to ResourceManager at hdnode01/192.168.0.10:8030
>> > 2016-01-19 20:04:14,158 INFO [main]
>> > org.apache.hadoop.mapreduce.v2.app.rm.RMContainerAllocator:
>> > maxContainerCapability: 2000
>> > 2016-01-19 20:04:14,158 INFO [main]
>> > org.apache.hadoop.mapreduce.v2.app.rm.RMContainerAllocator: queue:
>> default
>> > 2016-01-19 20:04:14,162 INFO [main]
>> > org.apache.hadoop.mapreduce.v2.app.launcher.ContainerLauncherImpl: Upper
>> > limit on the thread pool size is 500
>> > 2016-01-19 20:04:14,164 INFO [main]
>> > org.apache.hadoop.yarn.client.api.impl.ContainerManagementProtocolProxy:
>> > yarn.client.max-nodemanagers-proxies : 500
>> > 2016-01-19 20:04:14,172 INFO [AsyncDispatcher event handler]
>> > org.apache.hadoop.mapreduce.v2.app.job.impl.JobImpl:
>> > job_1453244277886_0001Job Transitioned from INITED to SETUP
>> > 2016-01-19 20:04:14,174 INFO [CommitterEvent Processor #0]
>> > org.apache.hadoop.mapreduce.v2.app.commit.CommitterEventHandler:
>> Processing
>> > the event EventType: JOB_SETUP
>> > 2016-01-19 20:04:14,210 INFO [AsyncDispatcher event handler]
>> > org.apache.hadoop.mapreduce.v2.app.job.impl.JobImpl:
>> > job_1453244277886_0001Job Transitioned from SETUP to RUNNING
>> > 2016-01-19 20:04:14,227 INFO [AsyncDispatcher event handler]
>> > org.apache.hadoop.yarn.util.RackResolver: Resolved localhost to
>> > /default-rack
>> > 2016-01-19 20:04:14,230 INFO [AsyncDispatcher event handler]
>> > org.apache.hadoop.mapreduce.v2.app.job.impl.TaskImpl:
>> > task_1453244277886_0001_m_000000 Task Transitioned from NEW to SCHEDULED
>> > 2016-01-19 20:04:14,230 INFO [AsyncDispatcher event handler]
>> > org.apache.hadoop.mapreduce.v2.app.job.impl.TaskImpl:
>> > task_1453244277886_0001_r_000000 Task Transitioned from NEW to SCHEDULED
>> > 2016-01-19 20:04:14,232 INFO [AsyncDispatcher event handler]
>> > org.apache.hadoop.mapreduce.v2.app.job.impl.TaskAttemptImpl:
>> > attempt_1453244277886_0001_m_000000_0 TaskAttempt Transitioned from NEW
>> to
>> > UNASSIGNED
>> > 2016-01-19 20:04:14,232 INFO [AsyncDispatcher event handler]
>> > org.apache.hadoop.mapreduce.v2.app.job.impl.TaskAttemptImpl:
>> > attempt_1453244277886_0001_r_000000_0 TaskAttempt Transitioned from NEW
>> to
>> > UNASSIGNED
>> > 2016-01-19 20:04:14,233 INFO [Thread-50]
>> > org.apache.hadoop.mapreduce.v2.app.rm.RMContainerAllocator:
>> > mapResourceReqt:512
>> > 2016-01-19 20:04:14,245 INFO [Thread-50]
>> > org.apache.hadoop.mapreduce.v2.app.rm.RMContainerAllocator:
>> > reduceResourceReqt:512
>> > 2016-01-19 20:04:14,324 INFO [eventHandlingThread]
>> > org.apache.hadoop.mapreduce.jobhistory.JobHistoryEventHandler: Event
>> Writer
>> > setup for JobId: job_1453244277886_0001, File:
>> >
>> hdfs://hdnode01:54310/tmp/hadoop-yarn/staging/hduser/.staging/job_1453244277886_0001/job_1453244277886_0001_1.jhist
>> > 2016-01-19 20:04:15,162 INFO [RMCommunicator Allocator]
>> > org.apache.hadoop.mapreduce.v2.app.rm.RMContainerAllocator: Before
>> > Scheduling: PendingReds:1 ScheduledMaps:1 ScheduledReds:0 AssignedMaps:0
>> > AssignedReds:0 CompletedMaps:0 CompletedReds:0 ContAlloc:0 ContRel:0
>> > HostLocal:0 RackLocal:0
>> > 2016-01-19 20:04:15,217 INFO [RMCommunicator Allocator]
>> > org.apache.hadoop.mapreduce.v2.app.rm.RMContainerRequestor:
>> getResources()
>> > for application_1453244277886_0001: ask=3 release= 0 newContainers=0
>> > finishedContainers=0 resourcelimit=<memory:1280, vCores:0> knownNMs=1
>> > 2016-01-19 20:04:15,217 INFO [RMCommunicator Allocator]
>> > org.apache.hadoop.mapreduce.v2.app.rm.RMContainerAllocator:
>> Recalculating
>> > schedule, headroom=1280
>> > 2016-01-19 20:04:15,217 INFO [RMCommunicator Allocator]
>> > org.apache.hadoop.mapreduce.v2.app.rm.RMContainerAllocator: Reduce slow
>> > start threshold not met. completedMapsForReduceSlowstart 1
>> > 2016-01-19 20:04:16,240 INFO [RMCommunicator Allocator]
>> > org.apache.hadoop.mapreduce.v2.app.rm.RMContainerAllocator: Got
>> allocated
>> > containers 1
>> > 2016-01-19 20:04:16,241 INFO [RMCommunicator Allocator]
>> > org.apache.hadoop.mapreduce.v2.app.rm.RMContainerAllocator: Assigned
>> > container container_1453244277886_0001_01_000002 to
>> > attempt_1453244277886_0001_m_000000_0
>> > 2016-01-19 20:04:16,243 INFO [RMCommunicator Allocator]
>> > org.apache.hadoop.mapreduce.v2.app.rm.RMContainerAllocator:
>> Recalculating
>> > schedule, headroom=768
>> > 2016-01-19 20:04:16,243 INFO [RMCommunicator Allocator]
>> > org.apache.hadoop.mapreduce.v2.app.rm.RMContainerAllocator: Reduce slow
>> > start threshold not met. completedMapsForReduceSlowstart 1
>> > 2016-01-19 20:04:16,243 INFO [RMCommunicator Allocator]
>> > org.apache.hadoop.mapreduce.v2.app.rm.RMContainerAllocator: After
>> > Scheduling: PendingReds:1 ScheduledMaps:0 ScheduledReds:0 AssignedMaps:1
>> > AssignedReds:0 CompletedMaps:0 CompletedReds:0 ContAlloc:1 ContRel:0
>> > HostLocal:1 RackLocal:0
>> > 2016-01-19 20:04:16,291 INFO [AsyncDispatcher event handler]
>> > org.apache.hadoop.yarn.util.RackResolver: Resolved localhost to
>> > /default-rack
>> > 2016-01-19 20:04:16,316 INFO [AsyncDispatcher event handler]
>> > org.apache.hadoop.mapreduce.v2.app.job.impl.TaskAttemptImpl: The job-jar
>> > file on the remote FS is
>> >
>> hdfs://hdnode01:54310/tmp/hadoop-yarn/staging/hduser/.staging/job_1453244277886_0001/job.jar
>> > 2016-01-19 20:04:16,322 INFO [AsyncDispatcher event handler]
>> > org.apache.hadoop.mapreduce.v2.app.job.impl.TaskAttemptImpl: The
>> job-conf
>> > file on the remote FS is
>> > /tmp/hadoop-yarn/staging/hduser/.staging/job_1453244277886_0001/job.xml
>> > 2016-01-19 20:04:16,325 INFO [AsyncDispatcher event handler]
>> > org.apache.hadoop.mapreduce.v2.app.job.impl.TaskAttemptImpl: Adding #0
>> > tokens and #1 secret keys for NM use for launching container
>> > 2016-01-19 20:04:16,325 INFO [AsyncDispatcher event handler]
>> > org.apache.hadoop.mapreduce.v2.app.job.impl.TaskAttemptImpl: Size of
>> > containertokens_dob is 1
>> > 2016-01-19 20:04:16,325 INFO [AsyncDispatcher event handler]
>> > org.apache.hadoop.mapreduce.v2.app.job.impl.TaskAttemptImpl: Putting
>> shuffle
>> > token in serviceData
>> > 2016-01-19 20:04:16,350 INFO [AsyncDispatcher event handler]
>> > org.apache.hadoop.mapreduce.v2.app.job.impl.TaskAttemptImpl:
>> > attempt_1453244277886_0001_m_000000_0 TaskAttempt Transitioned from
>> > UNASSIGNED to ASSIGNED
>> > 2016-01-19 20:04:16,354 INFO [ContainerLauncher #0]
>> > org.apache.hadoop.mapreduce.v2.app.launcher.ContainerLauncherImpl:
>> > Processing the event EventType: CONTAINER_REMOTE_LAUNCH for container
>> > container_1453244277886_0001_01_000002 taskAttempt
>> > attempt_1453244277886_0001_m_000000_0
>> > 2016-01-19 20:04:16,356 INFO [ContainerLauncher #0]
>> > org.apache.hadoop.mapreduce.v2.app.launcher.ContainerLauncherImpl:
>> Launching
>> > attempt_1453244277886_0001_m_000000_0
>> > 2016-01-19 20:04:16,357 INFO [ContainerLauncher #0]
>> > org.apache.hadoop.yarn.client.api.impl.ContainerManagementProtocolProxy:
>> > Opening proxy : localhost:35711
>> > 2016-01-19 20:04:16,411 INFO [ContainerLauncher #0]
>> > org.apache.hadoop.mapreduce.v2.app.launcher.ContainerLauncherImpl:
>> Shuffle
>> > port returned by ContainerManager for
>> attempt_1453244277886_0001_m_000000_0
>> > : 13562
>> > 2016-01-19 20:04:16,413 INFO [AsyncDispatcher event handler]
>> > org.apache.hadoop.mapreduce.v2.app.job.impl.TaskAttemptImpl:
>> TaskAttempt:
>> > [attempt_1453244277886_0001_m_000000_0] using containerId:
>> > [container_1453244277886_0001_01_000002 on NM: [localhost:35711]
>> > 2016-01-19 20:04:16,418 INFO [AsyncDispatcher event handler]
>> > org.apache.hadoop.mapreduce.v2.app.job.impl.TaskAttemptImpl:
>> > attempt_1453244277886_0001_m_000000_0 TaskAttempt Transitioned from
>> ASSIGNED
>> > to RUNNING
>> > 2016-01-19 20:04:16,419 INFO [AsyncDispatcher event handler]
>> > org.apache.hadoop.mapreduce.v2.app.speculate.DefaultSpeculator:
>> > ATTEMPT_START task_1453244277886_0001_m_000000
>> > 2016-01-19 20:04:16,419 INFO [AsyncDispatcher event handler]
>> > org.apache.hadoop.mapreduce.v2.app.job.impl.TaskImpl:
>> > task_1453244277886_0001_m_000000 Task Transitioned from SCHEDULED to
>> RUNNING
>> > 2016-01-19 20:04:17,251 INFO [RMCommunicator Allocator]
>> > org.apache.hadoop.mapreduce.v2.app.rm.RMContainerRequestor:
>> getResources()
>> > for application_1453244277886_0001: ask=3 release= 0 newContainers=0
>> > finishedContainers=0 resourcelimit=<memory:768, vCores:-1> knownNMs=1
>> > 2016-01-19 20:04:18,269 INFO [RMCommunicator Allocator]
>> > org.apache.hadoop.mapreduce.v2.app.rm.RMContainerAllocator: Received
>> > completed container container_1453244277886_0001_01_000002
>> > 2016-01-19 20:04:18,269 INFO [RMCommunicator Allocator]
>> > org.apache.hadoop.mapreduce.v2.app.rm.RMContainerAllocator:
>> Recalculating
>> > schedule, headroom=768
>> > 2016-01-19 20:04:18,269 INFO [RMCommunicator Allocator]
>> > org.apache.hadoop.mapreduce.v2.app.rm.RMContainerAllocator: Reduce slow
>> > start threshold not met. completedMapsForReduceSlowstart 1
>> > 2016-01-19 20:04:18,270 INFO [RMCommunicator Allocator]
>> > org.apache.hadoop.mapreduce.v2.app.rm.RMContainerAllocator: After
>> > Scheduling: PendingReds:1 ScheduledMaps:0 ScheduledReds:0 AssignedMaps:0
>> > AssignedReds:0 CompletedMaps:0 CompletedReds:0 ContAlloc:1 ContRel:0
>> > HostLocal:1 RackLocal:0
>> > 2016-01-19 20:04:18,280 INFO [AsyncDispatcher event handler]
>> > org.apache.hadoop.mapreduce.v2.app.job.impl.TaskAttemptImpl:
>> > attempt_1453244277886_0001_m_000000_0 TaskAttempt Transitioned from
>> RUNNING
>> > to FAIL_CONTAINER_CLEANUP
>> > 2016-01-19 20:04:18,280 INFO [AsyncDispatcher event handler]
>> > org.apache.hadoop.mapreduce.v2.app.job.impl.TaskAttemptImpl: Diagnostics
>> > report from attempt_1453244277886_0001_m_000000_0: Exception from
>> > container-launch: org.apache.hadoop.util.Shell$ExitCodeException:
>> > org.apache.hadoop.util.Shell$ExitCodeException:
>> >     at org.apache.hadoop.util.Shell.runCommand(Shell.java:505)
>> >     at org.apache.hadoop.util.Shell.run(Shell.java:418)
>> >     at
>> >
>> org.apache.hadoop.util.Shell$ShellCommandExecutor.execute(Shell.java:650)
>> >     at
>> >
>> org.apache.hadoop.yarn.server.nodemanager.DefaultContainerExecutor.launchContainer(DefaultContainerExecutor.java:195)
>> >     at
>> >
>> org.apache.hadoop.yarn.server.nodemanager.containermanager.launcher.ContainerLaunch.call(ContainerLaunch.java:300)
>> >     at
>> >
>> org.apache.hadoop.yarn.server.nodemanager.containermanager.launcher.ContainerLaunch.call(ContainerLaunch.java:81)
>> >     at java.util.concurrent.FutureTask.run(FutureTask.java:262)
>> >     at
>> >
>> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
>> >     at
>> >
>> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
>> >     at java.lang.Thread.run(Thread.java:745)
>> >
>> >
>> > Container exited with a non-zero exit code 1
>> >
>> > 2016-01-19 20:04:18,281 INFO [ContainerLauncher #1]
>> > org.apache.hadoop.mapreduce.v2.app.launcher.ContainerLauncherImpl:
>> > Processing the event EventType: CONTAINER_REMOTE_CLEANUP for container
>> > container_1453244277886_0001_01_000002 taskAttempt
>> > attempt_1453244277886_0001_m_000000_0
>> > 2016-01-19 20:04:18,281 INFO [ContainerLauncher #1]
>> > org.apache.hadoop.mapreduce.v2.app.launcher.ContainerLauncherImpl:
>> KILLING
>> > attempt_1453244277886_0001_m_000000_0
>> > 2016-01-19 20:04:18,299 INFO [AsyncDispatcher event handler]
>> > org.apache.hadoop.mapreduce.v2.app.job.impl.TaskAttemptImpl:
>> > attempt_1453244277886_0001_m_000000_0 TaskAttempt Transitioned from
>> > FAIL_CONTAINER_CLEANUP to FAIL_TASK_CLEANUP
>> > 2016-01-19 20:04:18,300 INFO [CommitterEvent Processor #1]
>> > org.apache.hadoop.mapreduce.v2.app.commit.CommitterEventHandler:
>> Processing
>> > the event EventType: TASK_ABORT
>> > 2016-01-19 20:04:18,313 WARN [CommitterEvent Processor #1]
>> > org.apache.hadoop.mapreduce.lib.output.FileOutputCommitter: Could not
>> delete
>> >
>> hdfs://hdnode01:54310/user/hduser/output/wordcount/_temporary/1/_temporary/attempt_1453244277886_0001_m_000000_0
>> > 2016-01-19 20:04:18,317 INFO [AsyncDispatcher event handler]
>> > org.apache.hadoop.mapreduce.v2.app.job.impl.TaskAttemptImpl:
>> > attempt_1453244277886_0001_m_000000_0 TaskAttempt Transitioned from
>> > FAIL_TASK_CLEANUP to FAILED
>> > 2016-01-19 20:04:18,326 INFO [AsyncDispatcher event handler]
>> > org.apache.hadoop.yarn.util.RackResolver: Resolved localhost to
>> > /default-rack
>> > 2016-01-19 20:04:18,327 INFO [Thread-50]
>> > org.apache.hadoop.mapreduce.v2.app.rm.RMContainerRequestor: 1 failures
>> on
>> > node localhost
>> > 2016-01-19 20:04:18,329 INFO [AsyncDispatcher event handler]
>> > org.apache.hadoop.mapreduce.v2.app.job.impl.TaskAttemptImpl:
>> > attempt_1453244277886_0001_m_000000_1 TaskAttempt Transitioned from NEW
>> to
>> > UNASSIGNED
>> > 2016-01-19 20:04:18,329 INFO [Thread-50]
>> > org.apache.hadoop.mapreduce.v2.app.rm.RMContainerAllocator: Added
>> > attempt_1453244277886_0001_m_000000_1 to list of failed maps
>> > 2016-01-19 20:04:19,270 INFO [RMCommunicator Allocator]
>> > org.apache.hadoop.mapreduce.v2.app.rm.RMContainerAllocator: Before
>> > Scheduling: PendingReds:1 ScheduledMaps:1 ScheduledReds:0 AssignedMaps:0
>> > AssignedReds:0 CompletedMaps:0 CompletedReds:0 ContAlloc:1 ContRel:0
>> > HostLocal:1 RackLocal:0
>> > 2016-01-19 20:04:19,277 INFO [RMCommunicator Allocator]
>> > org.apache.hadoop.mapreduce.v2.app.rm.RMContainerRequestor:
>> getResources()
>> > for application_1453244277886_0001: ask=1 release= 0 newContainers=0
>> > finishedContainers=0 resourcelimit=<memory:768, vCores:-1> knownNMs=1
>> > 2016-01-19 20:04:19,277 INFO [RMCommunicator Allocator]
>> > org.apache.hadoop.mapreduce.v2.app.rm.RMContainerAllocator:
>> Recalculating
>> > schedule, headroom=768
>> > 2016-01-19 20:04:19,278 INFO [RMCommunicator Allocator]
>> > org.apache.hadoop.mapreduce.v2.app.rm.RMContainerAllocator: Reduce slow
>> > start threshold not met. completedMapsForReduceSlowstart 1
>> > 2016-01-19 20:04:20,285 INFO [RMCommunicator Allocator]
>> > org.apache.hadoop.mapreduce.v2.app.rm.RMContainerAllocator: Got
>> allocated
>> > containers 1
>> > 2016-01-19 20:04:20,285 INFO [RMCommunicator Allocator]
>> > org.apache.hadoop.mapreduce.v2.app.rm.RMContainerAllocator: Assigning
>> > container Container: [ContainerId:
>> container_1453244277886_0001_01_000003,
>> > NodeId: localhost:35711, NodeHttpAddress: localhost:8042, Resource:
>> > <memory:512, vCores:1>, Priority: 5, Token: Token { kind:
>> ContainerToken,
>> > service: 127.0.0.1:35711 }, ] to fast fail map
>> > 2016-01-19 20:04:20,286 INFO [RMCommunicator Allocator]
>> > org.apache.hadoop.mapreduce.v2.app.rm.RMContainerAllocator: Assigned
>> from
>> > earlierFailedMaps
>> > 2016-01-19 20:04:20,286 INFO [RMCommunicator Allocator]
>> > org.apache.hadoop.mapreduce.v2.app.rm.RMContainerAllocator: Assigned
>> > container container_1453244277886_0001_01_000003 to
>> > attempt_1453244277886_0001_m_000000_1
>> > 2016-01-19 20:04:20,287 INFO [RMCommunicator Allocator]
>> > org.apache.hadoop.mapreduce.v2.app.rm.RMContainerAllocator:
>> Recalculating
>> > schedule, headroom=768
>> > 2016-01-19 20:04:20,287 INFO [RMCommunicator Allocator]
>> > org.apache.hadoop.mapreduce.v2.app.rm.RMContainerAllocator: Reduce slow
>> > start threshold not met. completedMapsForReduceSlowstart 1
>> > 2016-01-19 20:04:20,287 INFO [RMCommunicator Allocator]
>> > org.apache.hadoop.mapreduce.v2.app.rm.RMContainerAllocator: After
>> > Scheduling: PendingReds:1 ScheduledMaps:0 ScheduledReds:0 AssignedMaps:1
>> > AssignedReds:0 CompletedMaps:0 CompletedReds:0 ContAlloc:2 ContRel:0
>> > HostLocal:1 RackLocal:0
>> > 2016-01-19 20:04:20,287 INFO [AsyncDispatcher event handler]
>> > org.apache.hadoop.yarn.util.RackResolver: Resolved localhost to
>> > /default-rack
>> > 2016-01-19 20:04:20,289 INFO [AsyncDispatcher event handler]
>> > org.apache.hadoop.mapreduce.v2.app.job.impl.TaskAttemptImpl:
>> > attempt_1453244277886_0001_m_000000_1 TaskAttempt Transitioned from
>> > UNASSIGNED to ASSIGNED
>> > 2016-01-19 20:04:20,292 INFO [ContainerLauncher #2]
>> > org.apache.hadoop.mapreduce.v2.app.launcher.ContainerLauncherImpl:
>> > Processing the event EventType: CONTAINER_REMOTE_LAUNCH for container
>> > container_1453244277886_0001_01_000003 taskAttempt
>> > attempt_1453244277886_0001_m_000000_1
>> > 2016-01-19 20:04:20,292 INFO [ContainerLauncher #2]
>> > org.apache.hadoop.mapreduce.v2.app.launcher.ContainerLauncherImpl:
>> Launching
>> > attempt_1453244277886_0001_m_000000_1
>> > 2016-01-19 20:04:20,301 INFO [ContainerLauncher #2]
>> > org.apache.hadoop.mapreduce.v2.app.launcher.ContainerLauncherImpl:
>> Shuffle
>> > port returned by ContainerManager for
>> attempt_1453244277886_0001_m_000000_1
>> > : 13562
>> > 2016-01-19 20:04:20,302 INFO [AsyncDispatcher event handler]
>> > org.apache.hadoop.mapreduce.v2.app.job.impl.TaskAttemptImpl:
>> TaskAttempt:
>> > [attempt_1453244277886_0001_m_000000_1] using containerId:
>> > [container_1453244277886_0001_01_000003 on NM: [localhost:35711]
>> > 2016-01-19 20:04:20,303 INFO [AsyncDispatcher event handler]
>> > org.apache.hadoop.mapreduce.v2.app.job.impl.TaskAttemptImpl:
>> > attempt_1453244277886_0001_m_000000_1 TaskAttempt Transitioned from
>> ASSIGNED
>> > to RUNNING
>> > 2016-01-19 20:04:20,304 INFO [AsyncDispatcher event handler]
>> > org.apache.hadoop.mapreduce.v2.app.speculate.DefaultSpeculator:
>> > ATTEMPT_START task_1453244277886_0001_m_000000
>> > 2016-01-19 20:04:21,295 INFO [RMCommunicator Allocator]
>> > org.apache.hadoop.mapreduce.v2.app.rm.RMContainerRequestor:
>> getResources()
>> > for application_1453244277886_0001: ask=1 release= 0 newContainers=0
>> > finishedContainers=1 resourcelimit=<memory:768, vCores:-1> knownNMs=1
>> > 2016-01-19 20:04:21,296 INFO [RMCommunicator Allocator]
>> > org.apache.hadoop.mapreduce.v2.app.rm.RMContainerAllocator: Received
>> > completed container container_1453244277886_0001_01_000003
>> > 2016-01-19 20:04:21,296 INFO [RMCommunicator Allocator]
>> > org.apache.hadoop.mapreduce.v2.app.rm.RMContainerAllocator:
>> Recalculating
>> > schedule, headroom=768
>> > 2016-01-19 20:04:21,296 INFO [RMCommunicator Allocator]
>> > org.apache.hadoop.mapreduce.v2.app.rm.RMContainerAllocator: Reduce slow
>> > start threshold not met. completedMapsForReduceSlowstart 1
>> > 2016-01-19 20:04:21,297 INFO [RMCommunicator Allocator]
>> > org.apache.hadoop.mapreduce.v2.app.rm.RMContainerAllocator: After
>> > Scheduling: PendingReds:1 ScheduledMaps:0 ScheduledReds:0 AssignedMaps:0
>> > AssignedReds:0 CompletedMaps:0 CompletedReds:0 ContAlloc:2 ContRel:0
>> > HostLocal:1 RackLocal:0
>> > 2016-01-19 20:04:21,297 INFO [AsyncDispatcher event handler]
>> > org.apache.hadoop.mapreduce.v2.app.job.impl.TaskAttemptImpl:
>> > attempt_1453244277886_0001_m_000000_1 TaskAttempt Transitioned from
>> RUNNING
>> > to FAIL_CONTAINER_CLEANUP
>> > 2016-01-19 20:04:21,298 INFO [AsyncDispatcher event handler]
>> > org.apache.hadoop.mapreduce.v2.app.job.impl.TaskAttemptImpl: Diagnostics
>> > report from attempt_1453244277886_0001_m_000000_1: Exception from
>> > container-launch: org.apache.hadoop.util.Shell$ExitCodeException:
>> > org.apache.hadoop.util.Shell$ExitCodeException:
>> >     at org.apache.hadoop.util.Shell.runCommand(Shell.java:505)
>> >     at org.apache.hadoop.util.Shell.run(Shell.java:418)
>> >     at
>> >
>> org.apache.hadoop.util.Shell$ShellCommandExecutor.execute(Shell.java:650)
>> >     at
>> >
>> org.apache.hadoop.yarn.server.nodemanager.DefaultContainerExecutor.launchContainer(DefaultContainerExecutor.java:195)
>> >     at
>> >
>> org.apache.hadoop.yarn.server.nodemanager.containermanager.launcher.ContainerLaunch.call(ContainerLaunch.java:300)
>> >     at
>> >
>> org.apache.hadoop.yarn.server.nodemanager.containermanager.launcher.ContainerLaunch.call(ContainerLaunch.java:81)
>> >     at java.util.concurrent.FutureTask.run(FutureTask.java:262)
>> >     at
>> >
>> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
>> >     at
>> >
>> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
>> >     at java.lang.Thread.run(Thread.java:745)
>> >
>> >
>> > Container exited with a non-zero exit code 1
>> >
>> > 2016-01-19 20:04:21,300 INFO [ContainerLauncher #3]
>> > org.apache.hadoop.mapreduce.v2.app.launcher.ContainerLauncherImpl:
>> > Processing the event EventType: CONTAINER_REMOTE_CLEANUP for container
>> > container_1453244277886_0001_01_000003 taskAttempt
>> > attempt_1453244277886_0001_m_000000_1
>> > 2016-01-19 20:04:21,301 INFO [ContainerLauncher #3]
>> > org.apache.hadoop.mapreduce.v2.app.launcher.ContainerLauncherImpl:
>> KILLING
>> > attempt_1453244277886_0001_m_000000_1
>> > 2016-01-19 20:04:21,307 INFO [AsyncDispatcher event handler]
>> > org.apache.hadoop.mapreduce.v2.app.job.impl.TaskAttemptImpl:
>> > attempt_1453244277886_0001_m_000000_1 TaskAttempt Transitioned from
>> > FAIL_CONTAINER_CLEANUP to FAIL_TASK_CLEANUP
>> > 2016-01-19 20:04:21,308 INFO [CommitterEvent Processor #2]
>> > org.apache.hadoop.mapreduce.v2.app.commit.CommitterEventHandler:
>> Processing
>> > the event EventType: TASK_ABORT
>> > 2016-01-19 20:04:21,312 WARN [CommitterEvent Processor #2]
>> > org.apache.hadoop.mapreduce.lib.output.FileOutputCommitter: Could not
>> delete
>> >
>> hdfs://hdnode01:54310/user/hduser/output/wordcount/_temporary/1/_temporary/attempt_1453244277886_0001_m_000000_1
>> > 2016-01-19 20:04:21,312 INFO [AsyncDispatcher event handler]
>> > org.apache.hadoop.mapreduce.v2.app.job.impl.TaskAttemptImpl:
>> > attempt_1453244277886_0001_m_000000_1 TaskAttempt Transitioned from
>> > FAIL_TASK_CLEANUP to FAILED
>> > 2016-01-19 20:04:21,313 INFO [AsyncDispatcher event handler]
>> > org.apache.hadoop.yarn.util.RackResolver: Resolved localhost to
>> > /default-rack
>> > 2016-01-19 20:04:21,313 INFO [Thread-50]
>> > org.apache.hadoop.mapreduce.v2.app.rm.RMContainerRequestor: 2 failures
>> on
>> > node localhost
>> > 2016-01-19 20:04:21,313 INFO [AsyncDispatcher event handler]
>> > org.apache.hadoop.mapreduce.v2.app.job.impl.TaskAttemptImpl:
>> > attempt_1453244277886_0001_m_000000_2 TaskAttempt Transitioned from NEW
>> to
>> > UNASSIGNED
>> > 2016-01-19 20:04:21,314 INFO [Thread-50]
>> > org.apache.hadoop.mapreduce.v2.app.rm.RMContainerAllocator: Added
>> > attempt_1453244277886_0001_m_000000_2 to list of failed maps
>> > 2016-01-19 20:04:22,297 INFO [RMCommunicator Allocator]
>> > org.apache.hadoop.mapreduce.v2.app.rm.RMContainerAllocator: Before
>> > Scheduling: PendingReds:1 ScheduledMaps:1 ScheduledReds:0 AssignedMaps:0
>> > AssignedReds:0 CompletedMaps:0 CompletedReds:0 ContAlloc:2 ContRel:0
>> > HostLocal:1 RackLocal:0
>> > 2016-01-19 20:04:22,304 INFO [RMCommunicator Allocator]
>> > org.apache.hadoop.mapreduce.v2.app.rm.RMContainerRequestor:
>> getResources()
>> > for application_1453244277886_0001: ask=1 release= 0 newContainers=0
>> > finishedContainers=0 resourcelimit=<memory:768, vCores:-1> knownNMs=1
>> > 2016-01-19 20:04:22,305 INFO [RMCommunicator Allocator]
>> > org.apache.hadoop.mapreduce.v2.app.rm.RMContainerAllocator:
>> Recalculating
>> > schedule, headroom=768
>> > 2016-01-19 20:04:22,305 INFO [RMCommunicator Allocator]
>> > org.apache.hadoop.mapreduce.v2.app.rm.RMContainerAllocator: Reduce slow
>> > start threshold not met. completedMapsForReduceSlowstart 1
>> > 2016-01-19 20:04:23,316 INFO [RMCommunicator Allocator]
>> > org.apache.hadoop.mapreduce.v2.app.rm.RMContainerAllocator: Got
>> allocated
>> > containers 1
>> > 2016-01-19 20:04:23,316 INFO [RMCommunicator Allocator]
>> > org.apache.hadoop.mapreduce.v2.app.rm.RMContainerAllocator: Assigning
>> > container Container: [ContainerId:
>> container_1453244277886_0001_01_000004,
>> > NodeId: localhost:35711, NodeHttpAddress: localhost:8042, Resource:
>> > <memory:512, vCores:1>, Priority: 5, Token: Token { kind:
>> ContainerToken,
>> > service: 127.0.0.1:35711 }, ] to fast fail map
>> > 2016-01-19 20:04:23,317 INFO [RMCommunicator Allocator]
>> > org.apache.hadoop.mapreduce.v2.app.rm.RMContainerAllocator: Assigned
>> from
>> > earlierFailedMaps
>> > 2016-01-19 20:04:23,317 INFO [RMCommunicator Allocator]
>> > org.apache.hadoop.mapreduce.v2.app.rm.RMContainerAllocator: Assigned
>> > container container_1453244277886_0001_01_000004 to
>> > attempt_1453244277886_0001_m_000000_2
>> > 2016-01-19 20:04:23,318 INFO [RMCommunicator Allocator]
>> > org.apache.hadoop.mapreduce.v2.app.rm.RMContainerAllocator:
>> Recalculating
>> > schedule, headroom=768
>> > 2016-01-19 20:04:23,318 INFO [RMCommunicator Allocator]
>> > org.apache.hadoop.mapreduce.v2.app.rm.RMContainerAllocator: Reduce slow
>> > start threshold not met. completedMapsForReduceSlowstart 1
>> > 2016-01-19 20:04:23,318 INFO [RMCommunicator Allocator]
>> > org.apache.hadoop.mapreduce.v2.app.rm.RMContainerAllocator: After
>> > Scheduling: PendingReds:1 ScheduledMaps:0 ScheduledReds:0 AssignedMaps:1
>> > AssignedReds:0 CompletedMaps:0 CompletedReds:0 ContAlloc:3 ContRel:0
>> > HostLocal:1 RackLocal:0
>> > 2016-01-19 20:04:23,318 INFO [AsyncDispatcher event handler]
>> > org.apache.hadoop.yarn.util.RackResolver: Resolved localhost to
>> > /default-rack
>> > 2016-01-19 20:04:23,320 INFO [AsyncDispatcher event handler]
>> > org.apache.hadoop.mapreduce.v2.app.job.impl.TaskAttemptImpl:
>> > attempt_1453244277886_0001_m_000000_2 TaskAttempt Transitioned from
>> > UNASSIGNED to ASSIGNED
>> > 2016-01-19 20:04:23,323 INFO [ContainerLauncher #4]
>> > org.apache.hadoop.mapreduce.v2.app.launcher.ContainerLauncherImpl:
>> > Processing the event EventType: CONTAINER_REMOTE_LAUNCH for container
>> > container_1453244277886_0001_01_000004 taskAttempt
>> > attempt_1453244277886_0001_m_000000_2
>> > 2016-01-19 20:04:23,323 INFO [ContainerLauncher #4]
>> > org.apache.hadoop.mapreduce.v2.app.launcher.ContainerLauncherImpl:
>> Launching
>> > attempt_1453244277886_0001_m_000000_2
>> > 2016-01-19 20:04:23,334 INFO [ContainerLauncher #4]
>> > org.apache.hadoop.mapreduce.v2.app.launcher.ContainerLauncherImpl:
>> Shuffle
>> > port returned by ContainerManager for
>> attempt_1453244277886_0001_m_000000_2
>> > : 13562
>> > 2016-01-19 20:04:23,335 INFO [AsyncDispatcher event handler]
>> > org.apache.hadoop.mapreduce.v2.app.job.impl.TaskAttemptImpl:
>> TaskAttempt:
>> > [attempt_1453244277886_0001_m_000000_2] using containerId:
>> > [container_1453244277886_0001_01_000004 on NM: [localhost:35711]
>> > 2016-01-19 20:04:23,335 INFO [AsyncDispatcher event handler]
>> > org.apache.hadoop.mapreduce.v2.app.job.impl.TaskAttemptImpl:
>> > attempt_1453244277886_0001_m_000000_2 TaskAttempt Transitioned from
>> ASSIGNED
>> > to RUNNING
>> > 2016-01-19 20:04:23,335 INFO [AsyncDispatcher event handler]
>> > org.apache.hadoop.mapreduce.v2.app.speculate.DefaultSpeculator:
>> > ATTEMPT_START task_1453244277886_0001_m_000000
>> > 2016-01-19 20:04:24,326 INFO [RMCommunicator Allocator]
>> > org.apache.hadoop.mapreduce.v2.app.rm.RMContainerRequestor:
>> getResources()
>> > for application_1453244277886_0001: ask=1 release= 0 newContainers=0
>> > finishedContainers=1 resourcelimit=<memory:768, vCores:-1> knownNMs=1
>> > 2016-01-19 20:04:24,327 INFO [RMCommunicator Allocator]
>> > org.apache.hadoop.mapreduce.v2.app.rm.RMContainerAllocator: Received
>> > completed container container_1453244277886_0001_01_000004
>> > 2016-01-19 20:04:24,327 INFO [RMCommunicator Allocator]
>> > org.apache.hadoop.mapreduce.v2.app.rm.RMContainerAllocator:
>> Recalculating
>> > schedule, headroom=768
>> > 2016-01-19 20:04:24,327 INFO [RMCommunicator Allocator]
>> > org.apache.hadoop.mapreduce.v2.app.rm.RMContainerAllocator: Reduce slow
>> > start threshold not met. completedMapsForReduceSlowstart 1
>> > 2016-01-19 20:04:24,328 INFO [RMCommunicator Allocator]
>> > org.apache.hadoop.mapreduce.v2.app.rm.RMContainerAllocator: After
>> > Scheduling: PendingReds:1 ScheduledMaps:0 ScheduledReds:0 AssignedMaps:0
>> > AssignedReds:0 CompletedMaps:0 CompletedReds:0 ContAlloc:3 ContRel:0
>> > HostLocal:1 RackLocal:0
>> > 2016-01-19 20:04:24,328 INFO [AsyncDispatcher event handler]
>> > org.apache.hadoop.mapreduce.v2.app.job.impl.TaskAttemptImpl:
>> > attempt_1453244277886_0001_m_000000_2 TaskAttempt Transitioned from
>> RUNNING
>> > to FAIL_CONTAINER_CLEANUP
>> > 2016-01-19 20:04:24,328 INFO [AsyncDispatcher event handler]
>> > org.apache.hadoop.mapreduce.v2.app.job.impl.TaskAttemptImpl: Diagnostics
>> > report from attempt_1453244277886_0001_m_000000_2: Exception from
>> > container-launch: org.apache.hadoop.util.Shell$ExitCodeException:
>> > org.apache.hadoop.util.Shell$ExitCodeException:
>> >     at org.apache.hadoop.util.Shell.runCommand(Shell.java:505)
>> >     at org.apache.hadoop.util.Shell.run(Shell.java:418)
>> >     at
>> >
>> org.apache.hadoop.util.Shell$ShellCommandExecutor.execute(Shell.java:650)
>> >     at
>> >
>> org.apache.hadoop.yarn.server.nodemanager.DefaultContainerExecutor.launchContainer(DefaultContainerExecutor.java:195)
>> >     at
>> >
>> org.apache.hadoop.yarn.server.nodemanager.containermanager.launcher.ContainerLaunch.call(ContainerLaunch.java:300)
>> >     at
>> >
>> org.apache.hadoop.yarn.server.nodemanager.containermanager.launcher.ContainerLaunch.call(ContainerLaunch.java:81)
>> >     at java.util.concurrent.FutureTask.run(FutureTask.java:262)
>> >     at
>> >
>> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
>> >     at
>> >
>> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
>> >     at java.lang.Thread.run(Thread.java:745)
>> >
>> >
>> > Container exited with a non-zero exit code 1
>> >
>> > 2016-01-19 20:04:24,331 INFO [ContainerLauncher #5]
>> > org.apache.hadoop.mapreduce.v2.app.launcher.ContainerLauncherImpl:
>> > Processing the event EventType: CONTAINER_REMOTE_CLEANUP for container
>> > container_1453244277886_0001_01_000004 taskAttempt
>> > attempt_1453244277886_0001_m_000000_2
>> > 2016-01-19 20:04:24,331 INFO [ContainerLauncher #5]
>> > org.apache.hadoop.mapreduce.v2.app.launcher.ContainerLauncherImpl:
>> KILLING
>> > attempt_1453244277886_0001_m_000000_2
>> > 2016-01-19 20:04:24,335 INFO [AsyncDispatcher event handler]
>> > org.apache.hadoop.mapreduce.v2.app.job.impl.TaskAttemptImpl:
>> > attempt_1453244277886_0001_m_000000_2 TaskAttempt Transitioned from
>> > FAIL_CONTAINER_CLEANUP to FAIL_TASK_CLEANUP
>> > 2016-01-19 20:04:24,336 INFO [CommitterEvent Processor #3]
>> > org.apache.hadoop.mapreduce.v2.app.commit.CommitterEventHandler:
>> Processing
>> > the event EventType: TASK_ABORT
>> > 2016-01-19 20:04:24,340 WARN [CommitterEvent Processor #3]
>> > org.apache.hadoop.mapreduce.lib.output.FileOutputCommitter: Could not
>> delete
>> >
>> hdfs://hdnode01:54310/user/hduser/output/wordcount/_temporary/1/_temporary/attempt_1453244277886_0001_m_000000_2
>> > 2016-01-19 20:04:24,341 INFO [AsyncDispatcher event handler]
>> > org.apache.hadoop.mapreduce.v2.app.job.impl.TaskAttemptImpl:
>> > attempt_1453244277886_0001_m_000000_2 TaskAttempt Transitioned from
>> > FAIL_TASK_CLEANUP to FAILED
>> > 2016-01-19 20:04:24,341 INFO [AsyncDispatcher event handler]
>> > org.apache.hadoop.yarn.util.RackResolver: Resolved localhost to
>> > /default-rack
>> > 2016-01-19 20:04:24,342 INFO [Thread-50]
>> > org.apache.hadoop.mapreduce.v2.app.rm.RMContainerRequestor: 3 failures
>> on
>> > node localhost
>> > 2016-01-19 20:04:24,342 INFO [Thread-50]
>> > org.apache.hadoop.mapreduce.v2.app.rm.RMContainerRequestor: Blacklisted
>> host
>> > localhost
>> > 2016-01-19 20:04:24,342 INFO [AsyncDispatcher event handler]
>> > org.apache.hadoop.mapreduce.v2.app.job.impl.TaskAttemptImpl:
>> > attempt_1453244277886_0001_m_000000_3 TaskAttempt Transitioned from NEW
>> to
>> > UNASSIGNED
>> > 2016-01-19 20:04:24,343 INFO [Thread-50]
>> > org.apache.hadoop.mapreduce.v2.app.rm.RMContainerAllocator: Added
>> > attempt_1453244277886_0001_m_000000_3 to list of failed maps
>> > 2016-01-19 20:04:25,328 INFO [RMCommunicator Allocator]
>> > org.apache.hadoop.mapreduce.v2.app.rm.RMContainerAllocator: Before
>> > Scheduling: PendingReds:1 ScheduledMaps:1 ScheduledReds:0 AssignedMaps:0
>> > AssignedReds:0 CompletedMaps:0 CompletedReds:0 ContAlloc:3 ContRel:0
>> > HostLocal:1 RackLocal:0
>> > 2016-01-19 20:04:25,336 INFO [RMCommunicator Allocator]
>> > org.apache.hadoop.mapreduce.v2.app.rm.RMContainerRequestor:
>> getResources()
>> > for application_1453244277886_0001: ask=1 release= 0 newContainers=0
>> > finishedContainers=0 resourcelimit=<memory:768, vCores:-1> knownNMs=1
>> > 2016-01-19 20:04:25,336 INFO [RMCommunicator Allocator]
>> > org.apache.hadoop.mapreduce.v2.app.rm.RMContainerRequestor: Update the
>> > blacklist for application_1453244277886_0001: blacklistAdditions=1
>> > blacklistRemovals=0
>> > 2016-01-19 20:04:25,337 INFO [RMCommunicator Allocator]
>> > org.apache.hadoop.mapreduce.v2.app.rm.RMContainerRequestor: Ignore
>> > blacklisting set to true. Known: 1, Blacklisted: 1, 100%
>> > 2016-01-19 20:04:25,337 INFO [RMCommunicator Allocator]
>> > org.apache.hadoop.mapreduce.v2.app.rm.RMContainerAllocator:
>> Recalculating
>> > schedule, headroom=768
>> > 2016-01-19 20:04:25,337 INFO [RMCommunicator Allocator]
>> > org.apache.hadoop.mapreduce.v2.app.rm.RMContainerAllocator: Reduce slow
>> > start threshold not met. completedMapsForReduceSlowstart 1
>> > 2016-01-19 20:04:26,342 INFO [RMCommunicator Allocator]
>> > org.apache.hadoop.mapreduce.v2.app.rm.RMContainerRequestor: Update the
>> > blacklist for application_1453244277886_0001: blacklistAdditions=0
>> > blacklistRemovals=1
>> > 2016-01-19 20:04:26,342 INFO [RMCommunicator Allocator]
>> > org.apache.hadoop.mapreduce.v2.app.rm.RMContainerAllocator:
>> Recalculating
>> > schedule, headroom=768
>> > 2016-01-19 20:04:26,342 INFO [RMCommunicator Allocator]
>> > org.apache.hadoop.mapreduce.v2.app.rm.RMContainerAllocator: Reduce slow
>> > start threshold not met. completedMapsForReduceSlowstart 1
>> > 2016-01-19 20:04:27,351 INFO [RMCommunicator Allocator]
>> > org.apache.hadoop.mapreduce.v2.app.rm.RMContainerAllocator: Got
>> allocated
>> > containers 1
>> > 2016-01-19 20:04:27,352 INFO [RMCommunicator Allocator]
>> > org.apache.hadoop.mapreduce.v2.app.rm.RMContainerAllocator: Assigning
>> > container Container: [ContainerId:
>> container_1453244277886_0001_01_000005,
>> > NodeId: localhost:35711, NodeHttpAddress: localhost:8042, Resource:
>> > <memory:512, vCores:1>, Priority: 5, Token: Token { kind:
>> ContainerToken,
>> > service: 127.0.0.1:35711 }, ] to fast fail map
>> > 2016-01-19 20:04:27,352 INFO [RMCommunicator Allocator]
>> > org.apache.hadoop.mapreduce.v2.app.rm.RMContainerAllocator: Assigned
>> from
>> > earlierFailedMaps
>> > 2016-01-19 20:04:27,352 INFO [RMCommunicator Allocator]
>> > org.apache.hadoop.mapreduce.v2.app.rm.RMContainerAllocator: Assigned
>> > container container_1453244277886_0001_01_000005 to
>> > attempt_1453244277886_0001_m_000000_3
>> > 2016-01-19 20:04:27,353 INFO [RMCommunicator Allocator]
>> > org.apache.hadoop.mapreduce.v2.app.rm.RMContainerAllocator:
>> Recalculating
>> > schedule, headroom=768
>> > 2016-01-19 20:04:27,353 INFO [RMCommunicator Allocator]
>> > org.apache.hadoop.mapreduce.v2.app.rm.RMContainerAllocator: Reduce slow
>> > start threshold not met. completedMapsForReduceSlowstart 1
>> > 2016-01-19 20:04:27,353 INFO [AsyncDispatcher event handler]
>> > org.apache.hadoop.yarn.util.RackResolver: Resolved localhost to
>> > /default-rack
>> > 2016-01-19 20:04:27,353 INFO [RMCommunicator Allocator]
>> > org.apache.hadoop.mapreduce.v2.app.rm.RMContainerAllocator: After
>> > Scheduling: PendingReds:1 ScheduledMaps:0 ScheduledReds:0 AssignedMaps:1
>> > AssignedReds:0 CompletedMaps:0 CompletedReds:0 ContAlloc:4 ContRel:0
>> > HostLocal:1 RackLocal:0
>> > 2016-01-19 20:04:27,354 INFO [AsyncDispatcher event handler]
>> > org.apache.hadoop.mapreduce.v2.app.job.impl.TaskAttemptImpl:
>> > attempt_1453244277886_0001_m_000000_3 TaskAttempt Transitioned from
>> > UNASSIGNED to ASSIGNED
>> > 2016-01-19 20:04:27,355 INFO [ContainerLauncher #6]
>> > org.apache.hadoop.mapreduce.v2.app.launcher.ContainerLauncherImpl:
>> > Processing the event EventType: CONTAINER_REMOTE_LAUNCH for container
>> > container_1453244277886_0001_01_000005 taskAttempt
>> > attempt_1453244277886_0001_m_000000_3
>> > 2016-01-19 20:04:27,355 INFO [ContainerLauncher #6]
>> > org.apache.hadoop.mapreduce.v2.app.launcher.ContainerLauncherImpl:
>> Launching
>> > attempt_1453244277886_0001_m_000000_3
>> > 2016-01-19 20:04:27,365 INFO [ContainerLauncher #6]
>> > org.apache.hadoop.mapreduce.v2.app.launcher.ContainerLauncherImpl:
>> Shuffle
>> > port returned by ContainerManager for
>> attempt_1453244277886_0001_m_000000_3
>> > : 13562
>> > 2016-01-19 20:04:27,365 INFO [AsyncDispatcher event handler]
>> > org.apache.hadoop.mapreduce.v2.app.job.impl.TaskAttemptImpl:
>> TaskAttempt:
>> > [attempt_1453244277886_0001_m_000000_3] using containerId:
>> > [container_1453244277886_0001_01_000005 on NM: [localhost:35711]
>> > 2016-01-19 20:04:27,366 INFO [AsyncDispatcher event handler]
>> > org.apache.hadoop.mapreduce.v2.app.job.impl.TaskAttemptImpl:
>> > attempt_1453244277886_0001_m_000000_3 TaskAttempt Transitioned from
>> ASSIGNED
>> > to RUNNING
>> > 2016-01-19 20:04:27,366 INFO [AsyncDispatcher event handler]
>> > org.apache.hadoop.mapreduce.v2.app.speculate.DefaultSpeculator:
>> > ATTEMPT_START task_1453244277886_0001_m_000000
>> > 2016-01-19 20:04:28,361 INFO [RMCommunicator Allocator]
>> > org.apache.hadoop.mapreduce.v2.app.rm.RMContainerRequestor:
>> getResources()
>> > for application_1453244277886_0001: ask=1 release= 0 newContainers=0
>> > finishedContainers=1 resourcelimit=<memory:768, vCores:-1> knownNMs=1
>> > 2016-01-19 20:04:28,361 INFO [RMCommunicator Allocator]
>> > org.apache.hadoop.mapreduce.v2.app.rm.RMContainerAllocator: Received
>> > completed container container_1453244277886_0001_01_000005
>> > 2016-01-19 20:04:28,361 INFO [RMCommunicator Allocator]
>> > org.apache.hadoop.mapreduce.v2.app.rm.RMContainerAllocator:
>> Recalculating
>> > schedule, headroom=768
>> > 2016-01-19 20:04:28,361 INFO [RMCommunicator Allocator]
>> > org.apache.hadoop.mapreduce.v2.app.rm.RMContainerAllocator: Reduce slow
>> > start threshold not met. completedMapsForReduceSlowstart 1
>> > 2016-01-19 20:04:28,362 INFO [RMCommunicator Allocator]
>> > org.apache.hadoop.mapreduce.v2.app.rm.RMContainerAllocator: After
>> > Scheduling: PendingReds:1 ScheduledMaps:0 ScheduledReds:0 AssignedMaps:0
>> > AssignedReds:0 CompletedMaps:0 CompletedReds:0 ContAlloc:4 ContRel:0
>> > HostLocal:1 RackLocal:0
>> > 2016-01-19 20:04:28,362 INFO [AsyncDispatcher event handler]
>> > org.apache.hadoop.mapreduce.v2.app.job.impl.TaskAttemptImpl:
>> > attempt_1453244277886_0001_m_000000_3 TaskAttempt Transitioned from
>> RUNNING
>> > to FAIL_CONTAINER_CLEANUP
>> > 2016-01-19 20:04:28,362 INFO [AsyncDispatcher event handler]
>> > org.apache.hadoop.mapreduce.v2.app.job.impl.TaskAttemptImpl: Diagnostics
>> > report from attempt_1453244277886_0001_m_000000_3: Exception from
>> > container-launch: org.apache.hadoop.util.Shell$ExitCodeException:
>> > org.apache.hadoop.util.Shell$ExitCodeException:
>> >     at org.apache.hadoop.util.Shell.runCommand(Shell.java:505)
>> >     at org.apache.hadoop.util.Shell.run(Shell.java:418)
>> >     at
>> >
>> org.apache.hadoop.util.Shell$ShellCommandExecutor.execute(Shell.java:650)
>> >     at
>> >
>> org.apache.hadoop.yarn.server.nodemanager.DefaultContainerExecutor.launchContainer(DefaultContainerExecutor.java:195)
>> >     at
>> >
>> org.apache.hadoop.yarn.server.nodemanager.containermanager.launcher.ContainerLaunch.call(ContainerLaunch.java:300)
>> >     at
>> >
>> org.apache.hadoop.yarn.server.nodemanager.containermanager.launcher.ContainerLaunch.call(ContainerLaunch.java:81)
>> >     at java.util.concurrent.FutureTask.run(FutureTask.java:262)
>> >     at
>> >
>> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
>> >     at
>> >
>> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
>> >     at java.lang.Thread.run(Thread.java:745)
>> >
>> >
>> > Container exited with a non-zero exit code 1
>> >
>> > 2016-01-19 20:04:28,364 INFO [ContainerLauncher #7]
>> > org.apache.hadoop.mapreduce.v2.app.launcher.ContainerLauncherImpl:
>> > Processing the event EventType: CONTAINER_REMOTE_CLEANUP for container
>> > container_1453244277886_0001_01_000005 taskAttempt
>> > attempt_1453244277886_0001_m_000000_3
>> > 2016-01-19 20:04:28,365 INFO [ContainerLauncher #7]
>> > org.apache.hadoop.mapreduce.v2.app.launcher.ContainerLauncherImpl:
>> KILLING
>> > attempt_1453244277886_0001_m_000000_3
>> > 2016-01-19 20:04:28,373 INFO [AsyncDispatcher event handler]
>> > org.apache.hadoop.mapreduce.v2.app.job.impl.TaskAttemptImpl:
>> > attempt_1453244277886_0001_m_000000_3 TaskAttempt Transitioned from
>> > FAIL_CONTAINER_CLEANUP to FAIL_TASK_CLEANUP
>> > 2016-01-19 20:04:28,374 INFO [CommitterEvent Processor #4]
>> > org.apache.hadoop.mapreduce.v2.app.commit.CommitterEventHandler:
>> Processing
>> > the event EventType: TASK_ABORT
>> > 2016-01-19 20:04:28,377 WARN [CommitterEvent Processor #4]
>> > org.apache.hadoop.mapreduce.lib.output.FileOutputCommitter: Could not
>> delete
>> >
>> hdfs://hdnode01:54310/user/hduser/output/wordcount/_temporary/1/_temporary/attempt_1453244277886_0001_m_000000_3
>> > 2016-01-19 20:04:28,378 INFO [AsyncDispatcher event handler]
>> > org.apache.hadoop.mapreduce.v2.app.job.impl.TaskAttemptImpl:
>> > attempt_1453244277886_0001_m_000000_3 TaskAttempt Transitioned from
>> > FAIL_TASK_CLEANUP to FAILED
>> > 2016-01-19 20:04:28,380 INFO [AsyncDispatcher event handler]
>> > org.apache.hadoop.mapreduce.v2.app.job.impl.TaskImpl:
>> > task_1453244277886_0001_m_000000 Task Transitioned from RUNNING to
>> FAILED
>> > 2016-01-19 20:04:28,381 INFO [AsyncDispatcher event handler]
>> > org.apache.hadoop.mapreduce.v2.app.job.impl.JobImpl: Num completed
>> Tasks: 1
>> > 2016-01-19 20:04:28,381 INFO [AsyncDispatcher event handler]
>> > org.apache.hadoop.mapreduce.v2.app.job.impl.JobImpl: Job failed as tasks
>> > failed. failedMaps:1 failedReduces:0
>> > 2016-01-19 20:04:28,382 INFO [AsyncDispatcher event handler]
>> > org.apache.hadoop.mapreduce.v2.app.job.impl.JobImpl:
>> > job_1453244277886_0001Job Transitioned from RUNNING to FAIL_WAIT
>> > 2016-01-19 20:04:28,383 INFO [AsyncDispatcher event handler]
>> > org.apache.hadoop.mapreduce.v2.app.job.impl.TaskImpl:
>> > task_1453244277886_0001_r_000000 Task Transitioned from SCHEDULED to
>> > KILL_WAIT
>> > 2016-01-19 20:04:28,383 INFO [AsyncDispatcher event handler]
>> > org.apache.hadoop.mapreduce.v2.app.job.impl.TaskAttemptImpl:
>> > attempt_1453244277886_0001_r_000000_0 TaskAttempt Transitioned from
>> > UNASSIGNED to KILLED
>> > 2016-01-19 20:04:28,383 INFO [Thread-50]
>> > org.apache.hadoop.mapreduce.v2.app.rm.RMContainerAllocator: Processing
>> the
>> > event EventType: CONTAINER_DEALLOCATE
>> > 2016-01-19 20:04:28,383 ERROR [Thread-50]
>> > org.apache.hadoop.mapreduce.v2.app.rm.RMContainerAllocator: Could not
>> > deallocate container for task attemptId
>> > attempt_1453244277886_0001_r_000000_0
>> > 2016-01-19 20:04:28,383 INFO [AsyncDispatcher event handler]
>> > org.apache.hadoop.mapreduce.v2.app.job.impl.TaskImpl:
>> > task_1453244277886_0001_r_000000 Task Transitioned from KILL_WAIT to
>> KILLED
>> > 2016-01-19 20:04:28,384 INFO [AsyncDispatcher event handler]
>> > org.apache.hadoop.mapreduce.v2.app.job.impl.JobImpl:
>> > job_1453244277886_0001Job Transitioned from FAIL_WAIT to FAIL_ABORT
>> > 2016-01-19 20:04:28,390 INFO [CommitterEvent Processor #0]
>> > org.apache.hadoop.mapreduce.v2.app.commit.CommitterEventHandler:
>> Processing
>> > the event EventType: JOB_ABORT
>> > 2016-01-19 20:04:28,432 INFO [AsyncDispatcher event handler]
>> > org.apache.hadoop.mapreduce.v2.app.job.impl.JobImpl:
>> > job_1453244277886_0001Job Transitioned from FAIL_ABORT to FAILED
>> > 2016-01-19 20:04:28,433 INFO [Thread-61]
>> > org.apache.hadoop.mapreduce.v2.app.MRAppMaster: We are finishing
>> cleanly so
>> > this is the last retry
>> > 2016-01-19 20:04:28,433 INFO [Thread-61]
>> > org.apache.hadoop.mapreduce.v2.app.MRAppMaster: Notify RMCommunicator
>> > isAMLastRetry: true
>> > 2016-01-19 20:04:28,433 INFO [Thread-61]
>> > org.apache.hadoop.mapreduce.v2.app.rm.RMContainerAllocator:
>> RMCommunicator
>> > notified that shouldUnregistered is: true
>> > 2016-01-19 20:04:28,433 INFO [Thread-61]
>> > org.apache.hadoop.mapreduce.v2.app.MRAppMaster: Notify JHEH
>> isAMLastRetry:
>> > true
>> > 2016-01-19 20:04:28,433 INFO [Thread-61]
>> > org.apache.hadoop.mapreduce.jobhistory.JobHistoryEventHandler:
>> > JobHistoryEventHandler notified that forceJobCompletion is true
>> > 2016-01-19 20:04:28,434 INFO [Thread-61]
>> > org.apache.hadoop.mapreduce.v2.app.MRAppMaster: Calling stop for all the
>> > services
>> > 2016-01-19 20:04:28,435 INFO [Thread-61]
>> > org.apache.hadoop.mapreduce.jobhistory.JobHistoryEventHandler: Stopping
>> > JobHistoryEventHandler. Size of the outstanding queue size is 0
>> > 2016-01-19 20:04:29,362 INFO [RMCommunicator Allocator]
>> > org.apache.hadoop.mapreduce.v2.app.rm.RMContainerAllocator: Before
>> > Scheduling: PendingReds:1 ScheduledMaps:0 ScheduledReds:0 AssignedMaps:0
>> > AssignedReds:0 CompletedMaps:1 CompletedReds:0 ContAlloc:4 ContRel:0
>> > HostLocal:1 RackLocal:0
>> > 2016-01-19 20:04:29,362 INFO [eventHandlingThread]
>> > org.apache.hadoop.mapreduce.jobhistory.JobHistoryEventHandler: Copying
>> >
>> hdfs://hdnode01:54310/tmp/hadoop-yarn/staging/hduser/.staging/job_1453244277886_0001/job_1453244277886_0001_1.jhist
>> > to
>> >
>> hdfs://hdnode01:54310/tmp/hadoop-yarn/staging/history/done_intermediate/hduser/job_1453244277886_0001-1453244648033-hduser-word+count-1453244668381-0-0-FAILED-default-1453244654166.jhist_tmp
>> > 2016-01-19 20:04:29,366 INFO [RMCommunicator Allocator]
>> > org.apache.hadoop.mapreduce.v2.app.rm.RMContainerAllocator:
>> Recalculating
>> > schedule, headroom=768
>> > 2016-01-19 20:04:29,366 INFO [RMCommunicator Allocator]
>> > org.apache.hadoop.mapreduce.v2.app.rm.RMContainerAllocator: Reduce slow
>> > start threshold reached. Scheduling reduces.
>> > 2016-01-19 20:04:29,366 INFO [RMCommunicator Allocator]
>> > org.apache.hadoop.mapreduce.v2.app.rm.RMContainerAllocator: All maps
>> > assigned. Ramping up all remaining reduces:1
>> > 2016-01-19 20:04:29,366 INFO [RMCommunicator Allocator]
>> > org.apache.hadoop.mapreduce.v2.app.rm.RMContainerAllocator: After
>> > Scheduling: PendingReds:0 ScheduledMaps:0 ScheduledReds:1 AssignedMaps:0
>> > AssignedReds:0 CompletedMaps:1 CompletedReds:0 ContAlloc:4 ContRel:0
>> > HostLocal:1 RackLocal:0
>> > 2016-01-19 20:04:29,544 INFO [eventHandlingThread]
>> > org.apache.hadoop.mapreduce.jobhistory.JobHistoryEventHandler: Copied to
>> > done location:
>> >
>> hdfs://hdnode01:54310/tmp/hadoop-yarn/staging/history/done_intermediate/hduser/job_1453244277886_0001-1453244648033-hduser-word+count-1453244668381-0-0-FAILED-default-1453244654166.jhist_tmp
>> > 2016-01-19 20:04:29,598 INFO [eventHandlingThread]
>> > org.apache.hadoop.mapreduce.jobhistory.JobHistoryEventHandler: Copying
>> >
>> hdfs://hdnode01:54310/tmp/hadoop-yarn/staging/hduser/.staging/job_1453244277886_0001/job_1453244277886_0001_1_conf.xml
>> > to
>> >
>> hdfs://hdnode01:54310/tmp/hadoop-yarn/staging/history/done_intermediate/hduser/job_1453244277886_0001_conf.xml_tmp
>> > 2016-01-19 20:04:29,801 INFO [eventHandlingThread]
>> > org.apache.hadoop.mapreduce.jobhistory.JobHistoryEventHandler: Copied to
>> > done location:
>> >
>> hdfs://hdnode01:54310/tmp/hadoop-yarn/staging/history/done_intermediate/hduser/job_1453244277886_0001_conf.xml_tmp
>> > 2016-01-19 20:04:29,907 INFO [eventHandlingThread]
>> > org.apache.hadoop.mapreduce.jobhistory.JobHistoryEventHandler: Moved
>> tmp to
>> > done:
>> >
>> hdfs://hdnode01:54310/tmp/hadoop-yarn/staging/history/done_intermediate/hduser/job_1453244277886_0001.summary_tmp
>> > to
>> >
>> hdfs://hdnode01:54310/tmp/hadoop-yarn/staging/history/done_intermediate/hduser/job_1453244277886_0001.summary
>> > 2016-01-19 20:04:30,008 INFO [eventHandlingThread]
>> > org.apache.hadoop.mapreduce.jobhistory.JobHistoryEventHandler: Moved
>> tmp to
>> > done:
>> >
>> hdfs://hdnode01:54310/tmp/hadoop-yarn/staging/history/done_intermediate/hduser/job_1453244277886_0001_conf.xml_tmp
>> > to
>> >
>> hdfs://hdnode01:54310/tmp/hadoop-yarn/staging/history/done_intermediate/hduser/job_1453244277886_0001_conf.xml
>> > 2016-01-19 20:04:30,070 INFO [eventHandlingThread]
>> > org.apache.hadoop.mapreduce.jobhistory.JobHistoryEventHandler: Moved
>> tmp to
>> > done:
>> >
>> hdfs://hdnode01:54310/tmp/hadoop-yarn/staging/history/done_intermediate/hduser/job_1453244277886_0001-1453244648033-hduser-word+count-1453244668381-0-0-FAILED-default-1453244654166.jhist_tmp
>> > to
>> >
>> hdfs://hdnode01:54310/tmp/hadoop-yarn/staging/history/done_intermediate/hduser/job_1453244277886_0001-1453244648033-hduser-word+count-1453244668381-0-0-FAILED-default-1453244654166.jhist
>> > 2016-01-19 20:04:30,071 INFO [Thread-61]
>> > org.apache.hadoop.mapreduce.jobhistory.JobHistoryEventHandler: Stopped
>> > JobHistoryEventHandler. super.stop()
>> > 2016-01-19 20:04:30,078 INFO [Thread-61]
>> > org.apache.hadoop.mapreduce.v2.app.rm.RMContainerAllocator: Setting job
>> > diagnostics to Task failed task_1453244277886_0001_m_000000
>> > Job failed as tasks failed. failedMaps:1 failedReduces:0
>> >
>> > 2016-01-19 20:04:30,080 INFO [Thread-61]
>> > org.apache.hadoop.mapreduce.v2.app.rm.RMContainerAllocator: History url
>> is
>> > http://localhost:19888/jobhistory/job/job_1453244277886_0001
>> > 2016-01-19 20:04:30,094 INFO [Thread-61]
>> > org.apache.hadoop.mapreduce.v2.app.rm.RMContainerAllocator: Waiting for
>> > application to be successfully unregistered.
>> > 2016-01-19 20:04:31,099 INFO [Thread-61]
>> > org.apache.hadoop.mapreduce.v2.app.rm.RMContainerAllocator: Final Stats:
>> > PendingReds:0 ScheduledMaps:0 ScheduledReds:1 AssignedMaps:0
>> AssignedReds:0
>> > CompletedMaps:1 CompletedReds:0 ContAlloc:4 ContRel:0 HostLocal:1
>> > RackLocal:0
>> > 2016-01-19 20:04:31,104 INFO [Thread-61]
>> > org.apache.hadoop.mapreduce.v2.app.MRAppMaster: Deleting staging
>> directory
>> > hdfs://hdnode01:54310
>> > /tmp/hadoop-yarn/staging/hduser/.staging/job_1453244277886_0001
>> > 2016-01-19 20:04:31,133 INFO [Thread-61] org.apache.hadoop.ipc.Server:
>> > Stopping server on 45584
>> > 2016-01-19 20:04:31,135 INFO [IPC Server listener on 45584]
>> > org.apache.hadoop.ipc.Server: Stopping IPC Server listener on 45584
>> > 2016-01-19 20:04:31,135 INFO [IPC Server Responder]
>> > org.apache.hadoop.ipc.Server: Stopping IPC Server Responder
>> > 2016-01-19 20:04:31,136 INFO [TaskHeartbeatHandler PingChecker]
>> > org.apache.hadoop.mapreduce.v2.app.TaskHeartbeatHandler:
>> > TaskHeartbeatHandler thread interrupted
>> >
>> >
>> > Jps results, I believe that everything is ok, right?:
>> > 21267 DataNode
>> > 21609 ResourceManager
>> > 21974 JobHistoryServer
>> > 21735 NodeManager
>> > 24546 Jps
>> > 16532 org.eclipse.equinox.launcher_1.3.0.v20140415-2008.jar
>> > 21121 NameNode
>> > 22098 QuorumPeerMain
>> > 21456 SecondaryNameNode
>> >
>> >
>>
>
>

Re: Can't run hadoop examples with YARN Single node cluster

Posted by José Luis Larroque <la...@gmail.com>.
Hi again guys, I could finally find what the issue was!

This is my mapred-site.xml; here is the problem:
<?xml version="1.0"?>
<?xml-stylesheet type="text/xsl" href="configuration.xsl"?>

<configuration>
  <property>
    <name>mapred.job.tracker</name>
    <!--
    <value>local</value>           for debugging
    <value>hdnode01:54311</value>  for the real runs
    -->
    <value>hdnode01:54311</value>
  </property>

  <property>
    <name>mapred.tasktracker.map.tasks.maximum</name>
    <value>4</value>
  </property>

  <property>
    <name>mapreduce.job.maps</name>
    <value>4</value>
  </property>

  <property>
    <name>mapreduce.framework.name</name>
    <value>yarn</value>
  </property>

  <property>
    <name>mapreduce.map.memory.mb</name>
    <value>512</value>
  </property>

  <property>
    <name>mapreduce.reduce.memory.mb</name>
    <value>512</value>
  </property>

  <!-- The two properties below turned out to be the problem -->
  <property>
    <name>mapreduce.map.java.opts</name>
    <value>256</value>
  </property>

  <property>
    <name>mapreduce.reduce.java.opts</name>
    <value>256</value>
  </property>
</configuration>

If I remove the last two properties (mapreduce.map.java.opts and
mapreduce.reduce.java.opts), wordcount works!

I remember adding those last two properties because of some kind of memory
issue, but maybe they somehow clash with the other two
(mapreduce.map.memory.mb, mapreduce.reduce.memory.mb)?

It would be great if someone could give me a short explanation so I can
better understand memory management in a YARN cluster.
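
My guess, for anyone who hits the same "Could not find or load main class
256" error: mapreduce.map.java.opts and mapreduce.reduce.java.opts seem to
be appended as-is to the java command line of each task JVM, so they have
to contain JVM flags rather than a bare number; a value of "256" then gets
interpreted as the class to launch. Assuming the intent was a 256 MB heap
inside the 512 MB containers, a sketch of what those two properties
probably should have looked like:

<property>
  <name>mapreduce.map.java.opts</name>
  <!-- JVM flags for the map task JVM; heap must fit inside mapreduce.map.memory.mb (512) -->
  <value>-Xmx256m</value>
</property>

<property>
  <name>mapreduce.reduce.java.opts</name>
  <!-- JVM flags for the reduce task JVM; heap must fit inside mapreduce.reduce.memory.mb (512) -->
  <value>-Xmx256m</value>
</property>

If that is right, the *.memory.mb values are the container sizes that YARN
enforces, and the -Xmx heap simply has to stay below them.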


PS: Thanks again Namikaze and Gaurav for your help!!

Bye!
Jose

2016-01-25 21:19 GMT-03:00 José Luis Larroque <la...@gmail.com>:

> Thanks Namikaze for keeping at it, don't give up!! :D
>
> - I have these lines in *$HOME/.bashrc*
>
>
> export HADOOP_PREFIX=/usr/local/hadoop
>
> # Others variables
>
> export HADOOP_COMMON_HOME=${HADOOP_PREFIX}
>
> export HADOOP_MAPRED_HOME=${HADOOP_PREFIX}
>
> export HADOOP_HDFS_HOME=${HADOOP_PREFIX}
>
> export HADOOP_YARN_HOME=${HADOOP_PREFIX}
>
>
>   - in *hadoop-env.sh* i have:
>
> export HADOOP_CONF_DIR=${HADOOP_CONF_DIR:-"/usr/local/hadoop/etc/hadoop"}
>
>
>   - I read that SO question and all the answers to it. The only useful answer,
> in my opinion, was the one about checking the yarn classpath. The following
> line appears three times in it:
>
> /usr/local/hadoop/etc/hadoop:
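>
> (For anyone who wants to check the same thing, one quick way to count
> repeated classpath entries is something like the following; just a sketch,
> any way of splitting the output of yarn classpath on ':' works:
>
> $ yarn classpath | tr ':' '\n' | sort | uniq -c | sort -rn | head
> )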
>
>
> I put yarn.application.classpath in yarn-site.xml because I don't know any
> other way to fix it, using the default value recommended in
> <https://hadoop.apache.org/docs/r2.4.1/hadoop-yarn/hadoop-yarn-common/yarn-default.xml>
> (see yarn.application.classpath):
>
>
> $HADOOP_CONF_DIR, $HADOOP_COMMON_HOME/share/hadoop/common/*,
> $HADOOP_COMMON_HOME/share/hadoop/common/lib/*, $HADOOP_HDFS_HOME/share/hadoop/hdfs/*,
> $HADOOP_HDFS_HOME/share/hadoop/hdfs/lib/*, $HADOOP_YARN_HOME/share/hadoop/yarn/*,
> $HADOOP_YARN_HOME/share/hadoop/yarn/lib/*
>
>
> But the classpath remains the same, and I can't find any other way to fix
> it. Maybe this is the problem?
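>
> (Just to be explicit about what I mean: the property block in yarn-site.xml
> looks roughly like this, with the value being the full default string quoted
> above, abbreviated here:
>
> <property>
>   <name>yarn.application.classpath</name>
>   <value>$HADOOP_CONF_DIR,$HADOOP_COMMON_HOME/share/hadoop/common/*, ...</value>
> </property>
> )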
>
>
>  - yarn.log-aggregation-enable was always set to true. I couldn't find
> anything in the *datanode logs*; here they are:
>
> 2016-01-25 21:13:07,006 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: STARTUP_MSG:
> /************************************************************
> STARTUP_MSG: Starting DataNode
> STARTUP_MSG:   host = jose-ubuntu/127.0.0.1
> STARTUP_MSG:   args = []
> STARTUP_MSG:   version = 2.4.0
> STARTUP_MSG:   classpath = /usr/local/hadoop/etc/hadoop:/usr/local/hadoop/share/hadoop/common/lib/jackson-jaxrs-1.8.8.jar:/usr/local/hadoop/share/hadoop/common/lib/jersey-json-1.9.jar:/usr/local/hadoop/share/hadoop/common/lib/commons-compress-1.4.1.jar:/usr/local/hadoop/share/hadoop/common/lib/httpcore-4.2.5.jar:/usr/local/hadoop/share/hadoop/common/lib/httpclient-4.2.5.jar:/usr/local/hadoop/share/hadoop/common/lib/commons-beanutils-1.7.0.jar:/usr/local/hadoop/share/hadoop/common/lib/jettison-1.1.jar:/usr/local/hadoop/share/hadoop/common/lib/jaxb-impl-2.2.3-1.jar:/usr/local/hadoop/share/hadoop/common/lib/activation-1.1.jar:/usr/local/hadoop/share/hadoop/common/lib/jackson-core-asl-1.8.8.jar:/usr/local/hadoop/share/hadoop/common/lib/commons-net-3.1.jar:/usr/local/hadoop/share/hadoop/common/lib/commons-cli-1.2.jar:/usr/local/hadoop/share/hadoop/common/lib/commons-codec-1.4.jar:/usr/local/hadoop/share/hadoop/common/lib/junit-4.8.2.jar:/usr/local/hadoop/share/hadoop/common/lib/paranamer-2.3.jar:/usr/local/hadoop/share/hadoop/common/lib/jasper-compiler-5.5.23.jar:/usr/local/hadoop/share/hadoop/common/lib/protobuf-java-2.5.0.jar:/usr/local/hadoop/share/hadoop/common/lib/commons-digester-1.8.jar:/usr/local/hadoop/share/hadoop/common/lib/jets3t-0.9.0.jar:/usr/local/hadoop/share/hadoop/common/lib/jackson-xc-1.8.8.jar:/usr/local/hadoop/share/hadoop/common/lib/jersey-server-1.9.jar:/usr/local/hadoop/share/hadoop/common/lib/jetty-util-6.1.26.jar:/usr/local/hadoop/share/hadoop/common/lib/mockito-all-1.8.5.jar:/usr/local/hadoop/share/hadoop/common/lib/xmlenc-0.52.jar:/usr/local/hadoop/share/hadoop/common/lib/slf4j-api-1.7.5.jar:/usr/local/hadoop/share/hadoop/common/lib/stax-api-1.0-2.jar:/usr/local/hadoop/share/hadoop/common/lib/avro-1.7.4.jar:/usr/local/hadoop/share/hadoop/common/lib/hadoop-annotations-2.4.0.jar:/usr/local/hadoop/share/hadoop/common/lib/hadoop-auth-2.4.0.jar:/usr/local/hadoop/share/hadoop/common/lib/netty-3.6.2.Final.jar:/usr/local/hadoop/share/hadoop/common/lib/jasper-runtime-5.5.23.jar:/usr/local/hadoop/share/hadoop/common/lib/slf4j-log4j12-1.7.5.jar:/usr/local/hadoop/share/hadoop/common/lib/commons-logging-1.1.3.jar:/usr/local/hadoop/share/hadoop/common/lib/guava-11.0.2.jar:/usr/local/hadoop/share/hadoop/common/lib/jackson-mapper-asl-1.8.8.jar:/usr/local/hadoop/share/hadoop/common/lib/jsp-api-2.1.jar:/usr/local/hadoop/share/hadoop/common/lib/commons-configuration-1.6.jar:/usr/local/hadoop/share/hadoop/common/lib/snappy-java-1.0.4.1.jar:/usr/local/hadoop/share/hadoop/common/lib/log4j-1.2.17.jar:/usr/local/hadoop/share/hadoop/common/lib/jsch-0.1.42.jar:/usr/local/hadoop/share/hadoop/common/lib/commons-io-2.4.jar:/usr/local/hadoop/share/hadoop/common/lib/commons-beanutils-core-1.8.0.jar:/usr/local/hadoop/share/hadoop/common/lib/jsr305-1.3.9.jar:/usr/local/hadoop/share/hadoop/common/lib/jersey-core-1.9.jar:/usr/local/hadoop/share/hadoop/common/lib/asm-3.2.jar:/usr/local/hadoop/share/hadoop/common/lib/servlet-api-2.5.jar:/usr/local/hadoop/share/hadoop/common/lib/jaxb-api-2.2.2.jar:/usr/local/hadoop/share/hadoop/common/lib/jetty-6.1.26.jar:/usr/local/hadoop/share/hadoop/common/lib/commons-lang-2.6.jar:/usr/local/hadoop/share/hadoop/common/lib/java-xmlbuilder-0.4.jar:/usr/local/hadoop/share/hadoop/common/lib/commons-collections-3.2.1.jar:/usr/local/hadoop/share/hadoop/common/lib/commons-el-1.0.jar:/usr/local/hadoop/share/hadoop/common/lib/zookeeper-3.4.5.jar:/usr/local/hadoop/share/hadoop/common/lib/commons-httpclient-3.1.jar:/usr/local/hadoop/share/hadoop/common/lib/xz-1.0.jar:/usr/local
/hadoop/share/hadoop/common/lib/commons-math3-3.1.1.jar:/usr/local/hadoop/share/hadoop/common/hadoop-common-2.4.0-tests.jar:/usr/local/hadoop/share/hadoop/common/hadoop-nfs-2.4.0.jar:/usr/local/hadoop/share/hadoop/common/hadoop-common-2.4.0.jar:/usr/local/hadoop/share/hadoop/hdfs:/usr/local/hadoop/share/hadoop/hdfs/lib/jackson-core-asl-1.8.8.jar:/usr/local/hadoop/share/hadoop/hdfs/lib/commons-cli-1.2.jar:/usr/local/hadoop/share/hadoop/hdfs/lib/commons-codec-1.4.jar:/usr/local/hadoop/share/hadoop/hdfs/lib/protobuf-java-2.5.0.jar:/usr/local/hadoop/share/hadoop/hdfs/lib/jersey-server-1.9.jar:/usr/local/hadoop/share/hadoop/hdfs/lib/jetty-util-6.1.26.jar:/usr/local/hadoop/share/hadoop/hdfs/lib/xmlenc-0.52.jar:/usr/local/hadoop/share/hadoop/hdfs/lib/commons-daemon-1.0.13.jar:/usr/local/hadoop/share/hadoop/hdfs/lib/netty-3.6.2.Final.jar:/usr/local/hadoop/share/hadoop/hdfs/lib/jasper-runtime-5.5.23.jar:/usr/local/hadoop/share/hadoop/hdfs/lib/commons-logging-1.1.3.jar:/usr/local/hadoop/share/hadoop/hdfs/lib/guava-11.0.2.jar:/usr/local/hadoop/share/hadoop/hdfs/lib/jackson-mapper-asl-1.8.8.jar:/usr/local/hadoop/share/hadoop/hdfs/lib/jsp-api-2.1.jar:/usr/local/hadoop/share/hadoop/hdfs/lib/log4j-1.2.17.jar:/usr/local/hadoop/share/hadoop/hdfs/lib/commons-io-2.4.jar:/usr/local/hadoop/share/hadoop/hdfs/lib/jsr305-1.3.9.jar:/usr/local/hadoop/share/hadoop/hdfs/lib/jersey-core-1.9.jar:/usr/local/hadoop/share/hadoop/hdfs/lib/asm-3.2.jar:/usr/local/hadoop/share/hadoop/hdfs/lib/servlet-api-2.5.jar:/usr/local/hadoop/share/hadoop/hdfs/lib/jetty-6.1.26.jar:/usr/local/hadoop/share/hadoop/hdfs/lib/commons-lang-2.6.jar:/usr/local/hadoop/share/hadoop/hdfs/lib/commons-el-1.0.jar:/usr/local/hadoop/share/hadoop/hdfs/hadoop-hdfs-2.4.0-tests.jar:/usr/local/hadoop/share/hadoop/hdfs/hadoop-hdfs-2.4.0.jar:/usr/local/hadoop/share/hadoop/hdfs/hadoop-hdfs-nfs-2.4.0.jar:/usr/local/hadoop/share/hadoop/yarn/lib/jackson-jaxrs-1.8.8.jar:/usr/local/hadoop/share/hadoop/yarn/lib/jersey-json-1.9.jar:/usr/local/hadoop/share/hadoop/yarn/lib/commons-compress-1.4.1.jar:/usr/local/hadoop/share/hadoop/yarn/lib/jersey-client-1.9.jar:/usr/local/hadoop/share/hadoop/yarn/lib/guice-servlet-3.0.jar:/usr/local/hadoop/share/hadoop/yarn/lib/giraph-examples-1.1.0-for-hadoop-2.4.0-jar-with-dependencies.jar:/usr/local/hadoop/share/hadoop/yarn/lib/hadoop-mapreduce-examples-2.4.0.jar:/usr/local/hadoop/share/hadoop/yarn/lib/jettison-1.1.jar:/usr/local/hadoop/share/hadoop/yarn/lib/jaxb-impl-2.2.3-1.jar:/usr/local/hadoop/share/hadoop/yarn/lib/activation-1.1.jar:/usr/local/hadoop/share/hadoop/yarn/lib/jackson-core-asl-1.8.8.jar:/usr/local/hadoop/share/hadoop/yarn/lib/commons-cli-1.2.jar:/usr/local/hadoop/share/hadoop/yarn/lib/commons-codec-1.4.jar:/usr/local/hadoop/share/hadoop/yarn/lib/aopalliance-1.0.jar:/usr/local/hadoop/share/hadoop/yarn/lib/leveldbjni-all-1.8.jar:/usr/local/hadoop/share/hadoop/yarn/lib/protobuf-java-2.5.0.jar:/usr/local/hadoop/share/hadoop/yarn/lib/jackson-xc-1.8.8.jar:/usr/local/hadoop/share/hadoop/yarn/lib/jersey-server-1.9.jar:/usr/local/hadoop/share/hadoop/yarn/lib/jetty-util-6.1.26.jar:/usr/local/hadoop/share/hadoop/yarn/lib/jline-0.9.94.jar:/usr/local/hadoop/share/hadoop/yarn/lib/stax-api-1.0-2.jar:/usr/local/hadoop/share/hadoop/yarn/lib/guice-3.0.jar:/usr/local/hadoop/share/hadoop/yarn/lib/commons-logging-1.1.3.jar:/usr/local/hadoop/share/hadoop/yarn/lib/guava-11.0.2.jar:/usr/local/hadoop/share/hadoop/yarn/lib/jackson-mapper-asl-1.8.8.jar:/usr/local/hadoop/share/hadoop/yarn/lib/log4j-1.2.17.jar:/usr/local/hadoop/share/hadoop/yarn/lib
/commons-io-2.4.jar:/usr/local/hadoop/share/hadoop/yarn/lib/jsr305-1.3.9.jar:/usr/local/hadoop/share/hadoop/yarn/lib/jersey-core-1.9.jar:/usr/local/hadoop/share/hadoop/yarn/lib/asm-3.2.jar:/usr/local/hadoop/share/hadoop/yarn/lib/servlet-api-2.5.jar:/usr/local/hadoop/share/hadoop/yarn/lib/jaxb-api-2.2.2.jar:/usr/local/hadoop/share/hadoop/yarn/lib/jetty-6.1.26.jar:/usr/local/hadoop/share/hadoop/yarn/lib/commons-lang-2.6.jar:/usr/local/hadoop/share/hadoop/yarn/lib/commons-collections-3.2.1.jar:/usr/local/hadoop/share/hadoop/yarn/lib/jersey-guice-1.9.jar:/usr/local/hadoop/share/hadoop/yarn/lib/zookeeper-3.4.5.jar:/usr/local/hadoop/share/hadoop/yarn/lib/commons-httpclient-3.1.jar:/usr/local/hadoop/share/hadoop/yarn/lib/javax.inject-1.jar:/usr/local/hadoop/share/hadoop/yarn/lib/xz-1.0.jar:/usr/local/hadoop/share/hadoop/yarn/hadoop-yarn-server-common-2.4.0.jar:/usr/local/hadoop/share/hadoop/yarn/hadoop-yarn-client-2.4.0.jar:/usr/local/hadoop/share/hadoop/yarn/hadoop-yarn-api-2.4.0.jar:/usr/local/hadoop/share/hadoop/yarn/hadoop-yarn-applications-unmanaged-am-launcher-2.4.0.jar:/usr/local/hadoop/share/hadoop/yarn/hadoop-yarn-server-nodemanager-2.4.0.jar:/usr/local/hadoop/share/hadoop/yarn/hadoop-yarn-server-tests-2.4.0.jar:/usr/local/hadoop/share/hadoop/yarn/hadoop-yarn-server-resourcemanager-2.4.0.jar:/usr/local/hadoop/share/hadoop/yarn/hadoop-yarn-server-web-proxy-2.4.0.jar:/usr/local/hadoop/share/hadoop/yarn/hadoop-yarn-applications-distributedshell-2.4.0.jar:/usr/local/hadoop/share/hadoop/yarn/hadoop-yarn-server-applicationhistoryservice-2.4.0.jar:/usr/local/hadoop/share/hadoop/yarn/hadoop-yarn-common-2.4.0.jar:/usr/local/hadoop/share/hadoop/mapreduce/lib/commons-compress-1.4.1.jar:/usr/local/hadoop/share/hadoop/mapreduce/lib/guice-servlet-3.0.jar:/usr/local/hadoop/share/hadoop/mapreduce/lib/jackson-core-asl-1.8.8.jar:/usr/local/hadoop/share/hadoop/mapreduce/lib/hamcrest-core-1.1.jar:/usr/local/hadoop/share/hadoop/mapreduce/lib/paranamer-2.3.jar:/usr/local/hadoop/share/hadoop/mapreduce/lib/aopalliance-1.0.jar:/usr/local/hadoop/share/hadoop/mapreduce/lib/protobuf-java-2.5.0.jar:/usr/local/hadoop/share/hadoop/mapreduce/lib/jersey-server-1.9.jar:/usr/local/hadoop/share/hadoop/mapreduce/lib/avro-1.7.4.jar:/usr/local/hadoop/share/hadoop/mapreduce/lib/hadoop-annotations-2.4.0.jar:/usr/local/hadoop/share/hadoop/mapreduce/lib/guice-3.0.jar:/usr/local/hadoop/share/hadoop/mapreduce/lib/junit-4.10.jar:/usr/local/hadoop/share/hadoop/mapreduce/lib/netty-3.6.2.Final.jar:/usr/local/hadoop/share/hadoop/mapreduce/lib/jackson-mapper-asl-1.8.8.jar:/usr/local/hadoop/share/hadoop/mapreduce/lib/snappy-java-1.0.4.1.jar:/usr/local/hadoop/share/hadoop/mapreduce/lib/log4j-1.2.17.jar:/usr/local/hadoop/share/hadoop/mapreduce/lib/commons-io-2.4.jar:/usr/local/hadoop/share/hadoop/mapreduce/lib/jersey-core-1.9.jar:/usr/local/hadoop/share/hadoop/mapreduce/lib/asm-3.2.jar:/usr/local/hadoop/share/hadoop/mapreduce/lib/jersey-guice-1.9.jar:/usr/local/hadoop/share/hadoop/mapreduce/lib/javax.inject-1.jar:/usr/local/hadoop/share/hadoop/mapreduce/lib/xz-1.0.jar:/usr/local/hadoop/share/hadoop/mapreduce/giraph-examples-1.1.0-for-hadoop-2.4.0-jar-with-dependencies.jar:/usr/local/hadoop/share/hadoop/mapreduce/hadoop-mapreduce-examples-2.4.0.jar:/usr/local/hadoop/share/hadoop/mapreduce/procesamiento_salida_grafo_caminos_navegacionales-0.0.1.jar:/usr/local/hadoop/share/hadoop/mapreduce/zookeeper-3.4.6.jar:/usr/local/hadoop/share/hadoop/mapreduce/hadoop-mapreduce-client-shuffle-2.4.0.jar:/usr/local/hadoop/share/hadoop/mapreduce/procesamiento
_grafo_wikiquote-0.0.1.jar:/usr/local/hadoop/share/hadoop/mapreduce/hadoop-mapreduce-client-app-2.4.0.jar:/usr/local/hadoop/share/hadoop/mapreduce/hadoop-mapreduce-client-common-2.4.0.jar:/usr/local/hadoop/share/hadoop/mapreduce/hadoop-mapreduce-client-core-2.4.0.jar:/usr/local/hadoop/share/hadoop/mapreduce/generacion_grafo_wikiquote-0.0.1-SNAPSHOT.jar:/usr/local/hadoop/share/hadoop/mapreduce/hadoop-mapreduce-client-hs-plugins-2.4.0.jar:/usr/local/hadoop/share/hadoop/mapreduce/hadoop-mapreduce-client-jobclient-2.4.0.jar:/usr/local/hadoop/share/hadoop/mapreduce/hadoop-mapreduce-client-hs-2.4.0.jar:/usr/local/hadoop/share/hadoop/mapreduce/hadoop-mapreduce-client-jobclient-2.4.0-tests.jar:/usr/local/hadoop/contrib/capacity-scheduler/*.jar:/usr/local/hadoop/contrib/capacity-scheduler/*.jar:/usr/local/hadoop/contrib/capacity-scheduler/*.jar
> STARTUP_MSG:   build = http://svn.apache.org/repos/asf/hadoop/common -r 1583262; compiled by 'jenkins' on 2014-03-31T08:29Z
> STARTUP_MSG:   java = 1.7.0_79
> ************************************************************/
> 2016-01-25 21:13:07,015 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: registered UNIX signal handlers for [TERM, HUP, INT]
> 2016-01-25 21:13:07,188 WARN org.apache.hadoop.hdfs.server.common.Util: Path /usr/local/hadoop/dfs/name/data should be specified as a URI in configuration files. Please update hdfs configuration.
> 2016-01-25 21:13:07,356 WARN org.apache.hadoop.util.NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
> 2016-01-25 21:13:07,648 INFO org.apache.hadoop.metrics2.impl.MetricsConfig: loaded properties from hadoop-metrics2.properties
> 2016-01-25 21:13:07,723 INFO org.apache.hadoop.metrics2.impl.MetricsSystemImpl: Scheduled snapshot period at 10 second(s).
> 2016-01-25 21:13:07,723 INFO org.apache.hadoop.metrics2.impl.MetricsSystemImpl: DataNode metrics system started
> 2016-01-25 21:13:07,727 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: Configured hostname is localhost
> 2016-01-25 21:13:07,728 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: Starting DataNode with maxLockedMemory = 0
> 2016-01-25 21:13:07,757 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: Opened streaming server at /0.0.0.0:50010
> 2016-01-25 21:13:07,760 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: Balancing bandwith is 1048576 bytes/s
> 2016-01-25 21:13:07,839 INFO org.mortbay.log: Logging to org.slf4j.impl.Log4jLoggerAdapter(org.mortbay.log) via org.mortbay.log.Slf4jLog
> 2016-01-25 21:13:07,843 INFO org.apache.hadoop.http.HttpRequestLog: Http request log for http.requests.datanode is not defined
> 2016-01-25 21:13:07,853 INFO org.apache.hadoop.http.HttpServer2: Added global filter 'safety' (class=org.apache.hadoop.http.HttpServer2$QuotingInputFilter)
> 2016-01-25 21:13:07,856 INFO org.apache.hadoop.http.HttpServer2: Added filter static_user_filter (class=org.apache.hadoop.http.lib.StaticUserWebFilter$StaticUserFilter) to context datanode
> 2016-01-25 21:13:07,856 INFO org.apache.hadoop.http.HttpServer2: Added filter static_user_filter (class=org.apache.hadoop.http.lib.StaticUserWebFilter$StaticUserFilter) to context logs
> 2016-01-25 21:13:07,856 INFO org.apache.hadoop.http.HttpServer2: Added filter static_user_filter (class=org.apache.hadoop.http.lib.StaticUserWebFilter$StaticUserFilter) to context static
> 2016-01-25 21:13:07,872 INFO org.apache.hadoop.http.HttpServer2: addJerseyResourcePackage: packageName=org.apache.hadoop.hdfs.server.datanode.web.resources;org.apache.hadoop.hdfs.web.resources, pathSpec=/webhdfs/v1/*
> 2016-01-25 21:13:07,875 INFO org.apache.hadoop.http.HttpServer2: Jetty bound to port 50075
> 2016-01-25 21:13:07,875 INFO org.mortbay.log: jetty-6.1.26
> 2016-01-25 21:13:08,137 INFO org.mortbay.log: Started SelectChannelConnector@0.0.0.0:50075
> 2016-01-25 21:13:08,225 INFO org.apache.hadoop.ipc.CallQueueManager: Using callQueue class java.util.concurrent.LinkedBlockingQueue
> 2016-01-25 21:13:08,239 INFO org.apache.hadoop.ipc.Server: Starting Socket Reader #1 for port 50020
> 2016-01-25 21:13:08,288 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: Opened IPC server at /0.0.0.0:50020
> 2016-01-25 21:13:08,300 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: Refresh request received for nameservices: null
> 2016-01-25 21:13:08,316 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: Starting BPOfferServices for nameservices: <default>
> 2016-01-25 21:13:08,321 WARN org.apache.hadoop.hdfs.server.common.Util: Path /usr/local/hadoop/dfs/name/data should be specified as a URI in configuration files. Please update hdfs configuration.
> 2016-01-25 21:13:08,325 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: Block pool <registering> (Datanode Uuid unassigned) service to hdnode01/192.168.0.10:54310 starting to offer service
> 2016-01-25 21:13:08,326 INFO org.apache.hadoop.ipc.Server: IPC Server Responder: starting
> 2016-01-25 21:13:08,326 INFO org.apache.hadoop.ipc.Server: IPC Server listener on 50020: starting
> 2016-01-25 21:13:08,719 INFO org.apache.hadoop.hdfs.server.common.Storage: Data-node version: -55 and name-node layout version: -56
> 2016-01-25 21:13:08,828 INFO org.apache.hadoop.hdfs.server.common.Storage: Lock on /usr/local/hadoop/dfs/name/data/in_use.lock acquired by nodename 10365@jose-ubuntu
> 2016-01-25 21:13:08,833 INFO org.apache.hadoop.hdfs.server.common.Storage: Storage directory /usr/local/hadoop/dfs/name/data is not formatted
> 2016-01-25 21:13:08,833 INFO org.apache.hadoop.hdfs.server.common.Storage: Formatting ...
> 2016-01-25 21:13:09,017 INFO org.apache.hadoop.hdfs.server.common.Storage: Analyzing storage directories for bpid BP-216406264-127.0.0.1-1453767164845
> 2016-01-25 21:13:09,017 INFO org.apache.hadoop.hdfs.server.common.Storage: Locking is disabled
> 2016-01-25 21:13:09,018 INFO org.apache.hadoop.hdfs.server.common.Storage: Storage directory /usr/local/hadoop/dfs/name/data/current/BP-216406264-127.0.0.1-1453767164845 is not formatted.
> 2016-01-25 21:13:09,018 INFO org.apache.hadoop.hdfs.server.common.Storage: Formatting ...
> 2016-01-25 21:13:09,018 INFO org.apache.hadoop.hdfs.server.common.Storage: Formatting block pool BP-216406264-127.0.0.1-1453767164845 directory /usr/local/hadoop/dfs/name/data/current/BP-216406264-127.0.0.1-1453767164845/current
> 2016-01-25 21:13:09,072 INFO org.apache.hadoop.hdfs.server.common.Storage: Restored 0 block files from trash.
> 2016-01-25 21:13:09,198 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: Setting up storage: nsid=1479061672;bpid=BP-216406264-127.0.0.1-1453767164845;lv=-55;nsInfo=lv=-56;cid=CID-8fa0e75b-6942-452a-a4e6-8cd0c24de652;nsid=1479061672;c=0;bpid=BP-216406264-127.0.0.1-1453767164845;dnuuid=null
> 2016-01-25 21:13:09,248 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: Generated and persisted new Datanode UUID 6b4236c8-2183-49ba-84d7-a273298ba37a
> 2016-01-25 21:13:09,268 WARN org.apache.hadoop.hdfs.server.common.Util: Path /usr/local/hadoop/dfs/name/data should be specified as a URI in configuration files. Please update hdfs configuration.
> 2016-01-25 21:13:09,270 INFO org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsDatasetImpl: Added volume - /usr/local/hadoop/dfs/name/data/current, StorageType: DISK
> 2016-01-25 21:13:09,279 INFO org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsDatasetImpl: Registered FSDatasetState MBean
> 2016-01-25 21:13:09,282 INFO org.apache.hadoop.hdfs.server.datanode.DirectoryScanner: Periodic Directory Tree Verification scan starting at 1453784080282 with interval 21600000
> 2016-01-25 21:13:09,283 INFO org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsDatasetImpl: Adding block pool BP-216406264-127.0.0.1-1453767164845
> 2016-01-25 21:13:09,284 INFO org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsDatasetImpl: Scanning block pool BP-216406264-127.0.0.1-1453767164845 on volume /usr/local/hadoop/dfs/name/data/current...
> 2016-01-25 21:13:09,299 INFO org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsDatasetImpl: Time taken to scan block pool BP-216406264-127.0.0.1-1453767164845 on /usr/local/hadoop/dfs/name/data/current: 15ms
> 2016-01-25 21:13:09,300 INFO org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsDatasetImpl: Total time to scan all replicas for block pool BP-216406264-127.0.0.1-1453767164845: 17ms
> 2016-01-25 21:13:09,301 INFO org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsDatasetImpl: Adding replicas to map for block pool BP-216406264-127.0.0.1-1453767164845 on volume /usr/local/hadoop/dfs/name/data/current...
> 2016-01-25 21:13:09,301 INFO org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsDatasetImpl: Time to add replicas to map for block pool BP-216406264-127.0.0.1-1453767164845 on volume /usr/local/hadoop/dfs/name/data/current: 0ms
> 2016-01-25 21:13:09,301 INFO org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsDatasetImpl: Total time to add all replicas to map: 1ms
> 2016-01-25 21:13:09,305 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: Block pool BP-216406264-127.0.0.1-1453767164845 (Datanode Uuid null) service to hdnode01/192.168.0.10:54310 beginning handshake with NN
> 2016-01-25 21:13:09,355 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: Block pool Block pool BP-216406264-127.0.0.1-1453767164845 (Datanode Uuid null) service to hdnode01/192.168.0.10:54310 successfully registered with NN
> 2016-01-25 21:13:09,356 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: For namenode hdnode01/192.168.0.10:54310 using DELETEREPORT_INTERVAL of 300000 msec  BLOCKREPORT_INTERVAL of 21600000msec CACHEREPORT_INTERVAL of 10000msec Initial delay: 0msec; heartBeatInterval=3000
> 2016-01-25 21:13:09,444 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: Namenode Block pool BP-216406264-127.0.0.1-1453767164845 (Datanode Uuid 6b4236c8-2183-49ba-84d7-a273298ba37a) service to hdnode01/192.168.0.10:54310 trying to claim ACTIVE state with txid=1
> 2016-01-25 21:13:09,444 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: Acknowledging ACTIVE Namenode Block pool BP-216406264-127.0.0.1-1453767164845 (Datanode Uuid 6b4236c8-2183-49ba-84d7-a273298ba37a) service to hdnode01/192.168.0.10:54310
> 2016-01-25 21:13:09,487 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: Sent 1 blockreports 0 blocks total. Took 1 msec to generate and 42 msecs for RPC and NN processing.  Got back commands none
> 2016-01-25 21:13:09,492 INFO org.apache.hadoop.util.GSet: Computing capacity for map BlockMap
> 2016-01-25 21:13:09,493 INFO org.apache.hadoop.util.GSet: VM type       = 64-bit
> 2016-01-25 21:13:09,494 INFO org.apache.hadoop.util.GSet: 0.5% max memory 1.8 GB = 9.1 MB
> 2016-01-25 21:13:09,494 INFO org.apache.hadoop.util.GSet: capacity      = 2^20 = 1048576 entries
> 2016-01-25 21:13:09,495 INFO org.apache.hadoop.hdfs.server.datanode.BlockPoolSliceScanner: Periodic Block Verification Scanner initialized with interval 504 hours for block pool BP-216406264-127.0.0.1-1453767164845
> 2016-01-25 21:13:09,499 INFO org.apache.hadoop.hdfs.server.datanode.DataBlockScanner: Added bpid=BP-216406264-127.0.0.1-1453767164845 to blockPoolScannerMap, new size=1
> 2016-01-25 21:13:32,355 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: Receiving BP-216406264-127.0.0.1-1453767164845:blk_1073741825_1001 src: /192.168.0.10:58649 dest: /192.168.0.10:50010
> 2016-01-25 21:13:32,482 INFO org.apache.hadoop.hdfs.server.datanode.DataNode.clienttrace: src: /192.168.0.10:58649, dest: /192.168.0.10:50010, bytes: 343691, op: HDFS_WRITE, cliID: DFSClient_NONMAPREDUCE_538002429_1, offset: 0, srvID: 6b4236c8-2183-49ba-84d7-a273298ba37a, blockid: BP-216406264-127.0.0.1-1453767164845:blk_1073741825_1001, duration: 98632367
> 2016-01-25 21:13:32,482 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: PacketResponder: BP-216406264-127.0.0.1-1453767164845:blk_1073741825_1001, type=LAST_IN_PIPELINE, downstreams=0:[] terminating
> 2016-01-25 21:13:34,291 INFO org.apache.hadoop.hdfs.server.datanode.BlockPoolSliceScanner: Verification succeeded for BP-216406264-127.0.0.1-1453767164845:blk_1073741825_1001
> 2016-01-25 21:14:10,176 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: Receiving BP-216406264-127.0.0.1-1453767164845:blk_1073741826_1002 src: /192.168.0.10:58663 dest: /192.168.0.10:50010
> 2016-01-25 21:14:10,220 INFO org.apache.hadoop.hdfs.server.datanode.DataNode.clienttrace: src: /192.168.0.10:58663, dest: /192.168.0.10:50010, bytes: 270263, op: HDFS_WRITE, cliID: DFSClient_NONMAPREDUCE_342504113_1, offset: 0, srvID: 6b4236c8-2183-49ba-84d7-a273298ba37a, blockid: BP-216406264-127.0.0.1-1453767164845:blk_1073741826_1002, duration: 42378742
> 2016-01-25 21:14:10,221 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: PacketResponder: BP-216406264-127.0.0.1-1453767164845:blk_1073741826_1002, type=LAST_IN_PIPELINE, downstreams=0:[] terminating
> 2016-01-25 21:14:10,714 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: Receiving BP-216406264-127.0.0.1-1453767164845:blk_1073741827_1003 src: /192.168.0.10:58664 dest: /192.168.0.10:50010
> 2016-01-25 21:14:10,721 INFO org.apache.hadoop.hdfs.server.datanode.DataNode.clienttrace: src: /192.168.0.10:58664, dest: /192.168.0.10:50010, bytes: 121, op: HDFS_WRITE, cliID: DFSClient_NONMAPREDUCE_342504113_1, offset: 0, srvID: 6b4236c8-2183-49ba-84d7-a273298ba37a, blockid: BP-216406264-127.0.0.1-1453767164845:blk_1073741827_1003, duration: 2656758
> 2016-01-25 21:14:10,721 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: PacketResponder: BP-216406264-127.0.0.1-1453767164845:blk_1073741827_1003, type=LAST_IN_PIPELINE, downstreams=0:[] terminating
> 2016-01-25 21:14:10,853 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: Receiving BP-216406264-127.0.0.1-1453767164845:blk_1073741828_1004 src: /192.168.0.10:58665 dest: /192.168.0.10:50010
> 2016-01-25 21:14:10,860 INFO org.apache.hadoop.hdfs.server.datanode.DataNode.clienttrace: src: /192.168.0.10:58665, dest: /192.168.0.10:50010, bytes: 26, op: HDFS_WRITE, cliID: DFSClient_NONMAPREDUCE_342504113_1, offset: 0, srvID: 6b4236c8-2183-49ba-84d7-a273298ba37a, blockid: BP-216406264-127.0.0.1-1453767164845:blk_1073741828_1004, duration: 3257396
> 2016-01-25 21:14:10,861 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: PacketResponder: BP-216406264-127.0.0.1-1453767164845:blk_1073741828_1004, type=LAST_IN_PIPELINE, downstreams=0:[] terminating
> 2016-01-25 21:14:11,717 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: Receiving BP-216406264-127.0.0.1-1453767164845:blk_1073741829_1005 src: /192.168.0.10:58666 dest: /192.168.0.10:50010
> 2016-01-25 21:14:11,726 INFO org.apache.hadoop.hdfs.server.datanode.DataNode.clienttrace: src: /192.168.0.10:58666, dest: /192.168.0.10:50010, bytes: 77957, op: HDFS_WRITE, cliID: DFSClient_NONMAPREDUCE_342504113_1, offset: 0, srvID: 6b4236c8-2183-49ba-84d7-a273298ba37a, blockid: BP-216406264-127.0.0.1-1453767164845:blk_1073741829_1005, duration: 6180229
> 2016-01-25 21:14:11,727 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: PacketResponder: BP-216406264-127.0.0.1-1453767164845:blk_1073741829_1005, type=LAST_IN_PIPELINE, downstreams=0:[] terminating
> 2016-01-25 21:14:14,298 INFO org.apache.hadoop.hdfs.server.datanode.BlockPoolSliceScanner: Verification succeeded for BP-216406264-127.0.0.1-1453767164845:blk_1073741829_1005
> 2016-01-25 21:14:14,299 INFO org.apache.hadoop.hdfs.server.datanode.BlockPoolSliceScanner: Verification succeeded for BP-216406264-127.0.0.1-1453767164845:blk_1073741827_1003
> 2016-01-25 21:14:14,305 INFO org.apache.hadoop.hdfs.server.datanode.BlockPoolSliceScanner: Verification succeeded for BP-216406264-127.0.0.1-1453767164845:blk_1073741826_1002
> 2016-01-25 21:14:14,305 INFO org.apache.hadoop.hdfs.server.datanode.BlockPoolSliceScanner: Verification succeeded for BP-216406264-127.0.0.1-1453767164845:blk_1073741828_1004
> 2016-01-25 21:14:16,099 INFO org.apache.hadoop.hdfs.server.datanode.DataNode.clienttrace: src: /192.168.0.10:50010, dest: /192.168.0.10:58674, bytes: 272375, op: HDFS_READ, cliID: DFSClient_NONMAPREDUCE_76231625_102, offset: 0, srvID: 6b4236c8-2183-49ba-84d7-a273298ba37a, blockid: BP-216406264-127.0.0.1-1453767164845:blk_1073741826_1002, duration: 2878920
> 2016-01-25 21:14:16,253 INFO org.apache.hadoop.hdfs.server.datanode.DataNode.clienttrace: src: /192.168.0.10:50010, dest: /192.168.0.10:58674, bytes: 30, op: HDFS_READ, cliID: DFSClient_NONMAPREDUCE_76231625_102, offset: 0, srvID: 6b4236c8-2183-49ba-84d7-a273298ba37a, blockid: BP-216406264-127.0.0.1-1453767164845:blk_1073741828_1004, duration: 236423
> 2016-01-25 21:14:16,312 INFO org.apache.hadoop.hdfs.server.datanode.DataNode.clienttrace: src: /192.168.0.10:50010, dest: /192.168.0.10:58674, bytes: 125, op: HDFS_READ, cliID: DFSClient_NONMAPREDUCE_76231625_102, offset: 0, srvID: 6b4236c8-2183-49ba-84d7-a273298ba37a, blockid: BP-216406264-127.0.0.1-1453767164845:blk_1073741827_1003, duration: 909236
> 2016-01-25 21:14:16,364 INFO org.apache.hadoop.hdfs.server.datanode.DataNode.clienttrace: src: /192.168.0.10:50010, dest: /192.168.0.10:58674, bytes: 78569, op: HDFS_READ, cliID: DFSClient_NONMAPREDUCE_76231625_102, offset: 0, srvID: 6b4236c8-2183-49ba-84d7-a273298ba37a, blockid: BP-216406264-127.0.0.1-1453767164845:blk_1073741829_1005, duration: 1489437
> 2016-01-25 21:14:20,174 INFO org.apache.hadoop.hdfs.server.datanode.DataNode.clienttrace: src: /192.168.0.10:50010, dest: /192.168.0.10:58676, bytes: 30, op: HDFS_READ, cliID: DFSClient_NONMAPREDUCE_694066886_1, offset: 0, srvID: 6b4236c8-2183-49ba-84d7-a273298ba37a, blockid: BP-216406264-127.0.0.1-1453767164845:blk_1073741828_1004, duration: 899980
> 2016-01-25 21:14:22,692 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: Receiving BP-216406264-127.0.0.1-1453767164845:blk_1073741830_1006 src: /192.168.0.10:58679 dest: /192.168.0.10:50010
> 2016-01-25 21:14:22,754 INFO org.apache.hadoop.hdfs.server.datanode.DataNode.clienttrace: src: /192.168.0.10:58679, dest: /192.168.0.10:50010, bytes: 92684, op: HDFS_WRITE, cliID: DFSClient_NONMAPREDUCE_694066886_1, offset: 0, srvID: 6b4236c8-2183-49ba-84d7-a273298ba37a, blockid: BP-216406264-127.0.0.1-1453767164845:blk_1073741830_1006, duration: 60114851
> 2016-01-25 21:14:22,754 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: PacketResponder: BP-216406264-127.0.0.1-1453767164845:blk_1073741830_1006, type=LAST_IN_PIPELINE, downstreams=0:[] terminating
> 2016-01-25 21:14:24,319 INFO org.apache.hadoop.hdfs.server.datanode.BlockPoolSliceScanner: Verification succeeded for BP-216406264-127.0.0.1-1453767164845:blk_1073741830_1006
> 2016-01-25 21:14:25,808 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: Receiving BP-216406264-127.0.0.1-1453767164845:blk_1073741831_1007 src: /192.168.0.10:58681 dest: /192.168.0.10:50010
> 2016-01-25 21:14:35,846 INFO org.apache.hadoop.hdfs.server.datanode.DataNode.clienttrace: src: /192.168.0.10:58681, dest: /192.168.0.10:50010, bytes: 21176, op: HDFS_WRITE, cliID: DFSClient_NONMAPREDUCE_694066886_1, offset: 0, srvID: 6b4236c8-2183-49ba-84d7-a273298ba37a, blockid: BP-216406264-127.0.0.1-1453767164845:blk_1073741831_1007, duration: 9975409048
> 2016-01-25 21:14:35,846 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: PacketResponder: BP-216406264-127.0.0.1-1453767164845:blk_1073741831_1007, type=LAST_IN_PIPELINE, downstreams=0:[] terminating
> 2016-01-25 21:14:36,066 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: Receiving BP-216406264-127.0.0.1-1453767164845:blk_1073741832_1008 src: /192.168.0.10:58682 dest: /192.168.0.10:50010
> 2016-01-25 21:14:36,075 INFO org.apache.hadoop.hdfs.server.datanode.DataNode.clienttrace: src: /192.168.0.10:58682, dest: /192.168.0.10:50010, bytes: 332, op: HDFS_WRITE, cliID: DFSClient_NONMAPREDUCE_694066886_1, offset: 0, srvID: 6b4236c8-2183-49ba-84d7-a273298ba37a, blockid: BP-216406264-127.0.0.1-1453767164845:blk_1073741832_1008, duration: 4992595
> 2016-01-25 21:14:36,075 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: PacketResponder: BP-216406264-127.0.0.1-1453767164845:blk_1073741832_1008, type=LAST_IN_PIPELINE, downstreams=0:[] terminating
> 2016-01-25 21:14:36,548 INFO org.apache.hadoop.hdfs.server.datanode.DataNode.clienttrace: src: /192.168.0.10:50010, dest: /192.168.0.10:58683, bytes: 21344, op: HDFS_READ, cliID: DFSClient_NONMAPREDUCE_694066886_1, offset: 0, srvID: 6b4236c8-2183-49ba-84d7-a273298ba37a, blockid: BP-216406264-127.0.0.1-1453767164845:blk_1073741831_1007, duration: 497225
> 2016-01-25 21:14:36,564 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: Receiving BP-216406264-127.0.0.1-1453767164845:blk_1073741833_1009 src: /192.168.0.10:58684 dest: /192.168.0.10:50010
> 2016-01-25 21:14:36,572 INFO org.apache.hadoop.hdfs.server.datanode.DataNode.clienttrace: src: /192.168.0.10:58684, dest: /192.168.0.10:50010, bytes: 21176, op: HDFS_WRITE, cliID: DFSClient_NONMAPREDUCE_694066886_1, offset: 0, srvID: 6b4236c8-2183-49ba-84d7-a273298ba37a, blockid: BP-216406264-127.0.0.1-1453767164845:blk_1073741833_1009, duration: 2649337
> 2016-01-25 21:14:36,573 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: PacketResponder: BP-216406264-127.0.0.1-1453767164845:blk_1073741833_1009, type=LAST_IN_PIPELINE, downstreams=0:[] terminating
> 2016-01-25 21:14:36,622 INFO org.apache.hadoop.hdfs.server.datanode.DataNode.clienttrace: src: /192.168.0.10:50010, dest: /192.168.0.10:58683, bytes: 93412, op: HDFS_READ, cliID: DFSClient_NONMAPREDUCE_694066886_1, offset: 0, srvID: 6b4236c8-2183-49ba-84d7-a273298ba37a, blockid: BP-216406264-127.0.0.1-1453767164845:blk_1073741830_1006, duration: 379439
> 2016-01-25 21:14:36,638 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: Receiving BP-216406264-127.0.0.1-1453767164845:blk_1073741834_1010 src: /192.168.0.10:58685 dest: /192.168.0.10:50010
> 2016-01-25 21:14:36,646 INFO org.apache.hadoop.hdfs.server.datanode.DataNode.clienttrace: src: /192.168.0.10:58685, dest: /192.168.0.10:50010, bytes: 92684, op: HDFS_WRITE, cliID: DFSClient_NONMAPREDUCE_694066886_1, offset: 0, srvID: 6b4236c8-2183-49ba-84d7-a273298ba37a, blockid: BP-216406264-127.0.0.1-1453767164845:blk_1073741834_1010, duration: 3135698
> 2016-01-25 21:14:36,646 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: PacketResponder: BP-216406264-127.0.0.1-1453767164845:blk_1073741834_1010, type=LAST_IN_PIPELINE, downstreams=0:[] terminating
> 2016-01-25 21:14:39,335 INFO org.apache.hadoop.hdfs.server.datanode.BlockPoolSliceScanner: Verification succeeded for BP-216406264-127.0.0.1-1453767164845:blk_1073741833_1009
> 2016-01-25 21:14:39,336 INFO org.apache.hadoop.hdfs.server.datanode.BlockPoolSliceScanner: Verification succeeded for BP-216406264-127.0.0.1-1453767164845:blk_1073741831_1007
> 2016-01-25 21:14:39,337 INFO org.apache.hadoop.hdfs.server.datanode.BlockPoolSliceScanner: Verification succeeded for BP-216406264-127.0.0.1-1453767164845:blk_1073741832_1008
> 2016-01-25 21:14:39,338 INFO org.apache.hadoop.hdfs.server.datanode.BlockPoolSliceScanner: Verification succeeded for BP-216406264-127.0.0.1-1453767164845:blk_1073741834_1010
> 2016-01-25 21:14:39,376 INFO org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsDatasetAsyncDiskService: Scheduling blk_1073741826_1002 file /usr/local/hadoop/dfs/name/data/current/BP-216406264-127.0.0.1-1453767164845/current/finalized/blk_1073741826 for deletion
> 2016-01-25 21:14:39,379 INFO org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsDatasetAsyncDiskService: Scheduling blk_1073741827_1003 file /usr/local/hadoop/dfs/name/data/current/BP-216406264-127.0.0.1-1453767164845/current/finalized/blk_1073741827 for deletion
> 2016-01-25 21:14:39,379 INFO org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsDatasetAsyncDiskService: Deleted BP-216406264-127.0.0.1-1453767164845 blk_1073741826_1002 file /usr/local/hadoop/dfs/name/data/current/BP-216406264-127.0.0.1-1453767164845/current/finalized/blk_1073741826
> 2016-01-25 21:14:39,379 INFO org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsDatasetAsyncDiskService: Scheduling blk_1073741828_1004 file /usr/local/hadoop/dfs/name/data/current/BP-216406264-127.0.0.1-1453767164845/current/finalized/blk_1073741828 for deletion
> 2016-01-25 21:14:39,380 INFO org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsDatasetAsyncDiskService: Deleted BP-216406264-127.0.0.1-1453767164845 blk_1073741827_1003 file /usr/local/hadoop/dfs/name/data/current/BP-216406264-127.0.0.1-1453767164845/current/finalized/blk_1073741827
> 2016-01-25 21:14:39,380 INFO org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsDatasetAsyncDiskService: Scheduling blk_1073741829_1005 file /usr/local/hadoop/dfs/name/data/current/BP-216406264-127.0.0.1-1453767164845/current/finalized/blk_1073741829 for deletion
> 2016-01-25 21:14:39,380 INFO org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsDatasetAsyncDiskService: Deleted BP-216406264-127.0.0.1-1453767164845 blk_1073741828_1004 file /usr/local/hadoop/dfs/name/data/current/BP-216406264-127.0.0.1-1453767164845/current/finalized/blk_1073741828
> 2016-01-25 21:14:39,380 INFO org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsDatasetAsyncDiskService: Scheduling blk_1073741830_1006 file /usr/local/hadoop/dfs/name/data/current/BP-216406264-127.0.0.1-1453767164845/current/finalized/blk_1073741830 for deletion
> 2016-01-25 21:14:39,381 INFO org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsDatasetAsyncDiskService: Deleted BP-216406264-127.0.0.1-1453767164845 blk_1073741829_1005 file /usr/local/hadoop/dfs/name/data/current/BP-216406264-127.0.0.1-1453767164845/current/finalized/blk_1073741829
> 2016-01-25 21:14:39,381 INFO org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsDatasetAsyncDiskService: Scheduling blk_1073741831_1007 file /usr/local/hadoop/dfs/name/data/current/BP-216406264-127.0.0.1-1453767164845/current/finalized/blk_1073741831 for deletion
> 2016-01-25 21:14:39,382 INFO org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsDatasetAsyncDiskService: Deleted BP-216406264-127.0.0.1-1453767164845 blk_1073741830_1006 file /usr/local/hadoop/dfs/name/data/current/BP-216406264-127.0.0.1-1453767164845/current/finalized/blk_1073741830
> 2016-01-25 21:14:39,382 INFO org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsDatasetAsyncDiskService: Deleted BP-216406264-127.0.0.1-1453767164845 blk_1073741831_1007 file /usr/local/hadoop/dfs/name/data/current/BP-216406264-127.0.0.1-1453767164845/current/finalized/blk_1073741831
> 2016-01-25 21:14:44,797 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: Receiving BP-216406264-127.0.0.1-1453767164845:blk_1073741835_1011 src: /192.168.0.10:58688 dest: /192.168.0.10:50010
> 2016-01-25 21:14:44,834 INFO org.apache.hadoop.hdfs.server.datanode.DataNode.clienttrace: src: /192.168.0.10:58688, dest: /192.168.0.10:50010, bytes: 57450, op: HDFS_WRITE, cliID: DFSClient_NONMAPREDUCE_-433405715_88, offset: 0, srvID: 6b4236c8-2183-49ba-84d7-a273298ba37a, blockid: BP-216406264-127.0.0.1-1453767164845:blk_1073741835_1011, duration: 34522284
> 2016-01-25 21:14:44,834 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: PacketResponder: BP-216406264-127.0.0.1-1453767164845:blk_1073741835_1011, type=LAST_IN_PIPELINE, downstreams=0:[] terminating
> 2016-01-25 21:14:49,343 INFO org.apache.hadoop.hdfs.server.datanode.BlockPoolSliceScanner: Verification succeeded for BP-216406264-127.0.0.1-1453767164845:blk_1073741835_1011
> 2016-01-25 21:16:33,785 INFO org.apache.hadoop.hdfs.server.datanode.DataNode.clienttrace: src: /192.168.0.10:50010, dest: /192.168.0.10:58694, bytes: 336, op: HDFS_READ, cliID: DFSClient_NONMAPREDUCE_-1832227986_1, offset: 0, srvID: 6b4236c8-2183-49ba-84d7-a273298ba37a, blockid: BP-216406264-127.0.0.1-1453767164845:blk_1073741832_1008, duration: 284719
> 2016-01-25 21:16:36,371 INFO org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsDatasetAsyncDiskService: Scheduling blk_1073741832_1008 file /usr/local/hadoop/dfs/name/data/current/BP-216406264-127.0.0.1-1453767164845/current/finalized/blk_1073741832 for deletion
> 2016-01-25 21:16:36,372 INFO org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsDatasetAsyncDiskService: Deleted BP-216406264-127.0.0.1-1453767164845 blk_1073741832_1008 file /usr/local/hadoop/dfs/name/data/current/BP-216406264-127.0.0.1-1453767164845/current/finalized/blk_1073741832
>
>
>
>
> 2016-01-21 18:52 GMT-03:00 Namikaze Minato <ll...@gmail.com>:
>
>> It could be a classpath issue (see
>> http://stackoverflow.com/a/25090151/4486184); I strongly suspect
>> this is the case.
>>
>> You could drill down to the exact root cause by looking at the
>> datanode logs (see
>>
>> http://mail-archives.apache.org/mod_mbox/hadoop-user/201410.mbox/%3CCAEMetGubzq12LXbLRk6N4ejOoKse9dLEWMW8_WE6aRj=+RQtVw@mail.gmail.com%3E
>> )
>> But I'm not sure we would get a different error from the one we already had...
>>
>> Check if your application has the correct values for the following
>> variables:
>> HADOOP_CONF_DIR
>> HADOOP_COMMON_HOME
>> HADOOP_HDFS_HOME
>> HADOOP_MAPRED_HOME
>> HADOOP_YARN_HOME
>>
>> I'm afraid I can't help you much more than this myself, sorry...
>>
>> LLoyd
>>
>> On 20 January 2016 at 02:08, José Luis Larroque <la...@gmail.com>
>> wrote:
>> > Hi guys, thanks for your answers.
>> >
>> > Wordcount logs:
>> >
>> > 16/01/19 21:58:32 INFO client.RMProxy: Connecting to ResourceManager at
>> > hdnode01/192.168.0.10:8050
>> > SLF4J: Class path contains multiple SLF4J bindings.
>> > SLF4J: Found binding in
>> >
>> [jar:file:/usr/local/hadoop/share/hadoop/common/lib/slf4j-log4j12-1.7.5.jar!/org/slf4j/impl/StaticLoggerBinder.class]
>> > SLF4J: Found binding in
>> >
>> [jar:file:/usr/local/hadoop/share/hadoop/yarn/lib/giraph-examples-1.1.0-for-hadoop-2.4.0-jar-with-dependencies.jar!/org/slf4j/impl/StaticLoggerBinder.class]
>> > SLF4J: Found binding in
>> >
>> [jar:file:/usr/local/hadoop/share/hadoop/mapreduce/giraph-examples-1.1.0-for-hadoop-2.4.0-jar-with-dependencies.jar!/org/slf4j/impl/StaticLoggerBinder.class]
>> > SLF4J: See http://www.slf4j.org/codes.html#multiple_bindings for an
>> > explanation.
>> > SLF4J: Actual binding is of type [org.slf4j.impl.Log4jLoggerFactory]
>> > 16/01/19 21:58:32 WARN util.NativeCodeLoader: Unable to load
>> native-hadoop
>> > library for your platform... using builtin-java classes where applicable
>> > hduser@jose-ubuntu:/usr/local/hadoop$ nano
>> >
>> /home/hduser/Desktop/Tesina/casos_de_prueba/resultados/resultado_cluster_modo_yarn_wordcount
>> > hduser@jose-ubuntu:/usr/local/hadoop$ nano
>> >
>> /home/hduser/Desktop/Tesina/casos_de_prueba/resultados/resultado_cluster_modo_yarn_wordcount
>> > hduser@jose-ubuntu:/usr/local/hadoop$ cat
>> >
>> /home/hduser/Desktop/Tesina/casos_de_prueba/resultados/resultado_cluster_modo_yarn_wordcount
>> >
>> >
>> > Container: container_1453244277886_0001_01_000002 on localhost_35711
>> > ======================================================================
>> > LogType: stderr
>> > LogLength: 45
>> > Log Contents:
>> > Error: Could not find or load main class 256
>> >
>> > LogType: stdout
>> > LogLength: 0
>> > Log Contents:
>> >
>> >
>> >
>> > Container: container_1453244277886_0001_01_000003 on localhost_35711
>> > ======================================================================
>> > LogType: stderr
>> > LogLength: 45
>> > Log Contents:
>> > Error: Could not find or load main class 256
>> >
>> > LogType: stdout
>> > LogLength: 0
>> > Log Contents:
>> >
>> >
>> >
>> > Container: container_1453244277886_0001_01_000004 on localhost_35711
>> > ======================================================================
>> > LogType: stderr
>> > LogLength: 45
>> > Log Contents:
>> > Error: Could not find or load main class 256
>> >
>> > LogType: stdout
>> > LogLength: 0
>> > Log Contents:
>> >
>> >
>> >
>> > Container: container_1453244277886_0001_01_000005 on localhost_35711
>> > ======================================================================
>> > LogType: stderr
>> > LogLength: 45
>> > Log Contents:
>> > Error: Could not find or load main class 256
>> >
>> > LogType: stdout
>> > LogLength: 0
>> > Log Contents:
>> >
>> >
>> >
>> > Container: container_1453244277886_0001_01_000001 on localhost_35711
>> > ======================================================================
>> > LogType: stderr
>> > LogLength: 929
>> > Log Contents:
>> > SLF4J: Class path contains multiple SLF4J bindings.
>> > SLF4J: Found binding in
>> >
>> [jar:file:/usr/local/hadoop/share/hadoop/common/lib/slf4j-log4j12-1.7.5.jar!/org/slf4j/impl/StaticLoggerBinder.class]
>> > SLF4J: Found binding in
>> >
>> [jar:file:/usr/local/hadoop/share/hadoop/yarn/lib/giraph-examples-1.1.0-for-hadoop-2.4.0-jar-with-dependencies.jar!/org/slf4j/impl/StaticLoggerBinder.class]
>> > SLF4J: Found binding in
>> >
>> [jar:file:/usr/local/hadoop/share/hadoop/mapreduce/giraph-examples-1.1.0-for-hadoop-2.4.0-jar-with-dependencies.jar!/org/slf4j/impl/StaticLoggerBinder.class]
>> > SLF4J: See http://www.slf4j.org/codes.html#multiple_bindings for an
>> > explanation.
>> > SLF4J: Actual binding is of type [org.slf4j.impl.Log4jLoggerFactory]
>> > log4j:WARN No appenders could be found for logger
>> > (org.apache.hadoop.ipc.Server).
>> > log4j:WARN Please initialize the log4j system properly.
>> > log4j:WARN See http://logging.apache.org/log4j/1.2/faq.html#noconfig
>> for
>> > more info.
>> >
>> > LogType: stdout
>> > LogLength: 0
>> > Log Contents:
>> >
>> > LogType: syslog
>> > LogLength: 56780
>> > Log Contents:
>> > 2016-01-19 20:04:11,329 INFO [main]
>> > org.apache.hadoop.mapreduce.v2.app.MRAppMaster: Created MRAppMaster for
>> > application appattempt_1453244277886_0001_000001
>> > 2016-01-19 20:04:11,657 WARN [main]
>> org.apache.hadoop.conf.Configuration:
>> > job.xml:an attempt to override final parameter:
>> > mapreduce.job.end-notification.max.retry.interval;  Ignoring.
>> > 2016-01-19 20:04:11,674 WARN [main]
>> org.apache.hadoop.conf.Configuration:
>> > job.xml:an attempt to override final parameter:
>> > mapreduce.job.end-notification.max.attempts;  Ignoring.
>> > 2016-01-19 20:04:11,765 WARN [main]
>> org.apache.hadoop.util.NativeCodeLoader:
>> > Unable to load native-hadoop library for your platform... using
>> builtin-java
>> > classes where applicable
>> > 2016-01-19 20:04:11,776 INFO [main]
>> > org.apache.hadoop.mapreduce.v2.app.MRAppMaster: Executing with tokens:
>> > 2016-01-19 20:04:11,776 INFO [main]
>> > org.apache.hadoop.mapreduce.v2.app.MRAppMaster: Kind: YARN_AM_RM_TOKEN,
>> > Service: , Ident:
>> > (org.apache.hadoop.yarn.security.AMRMTokenIdentifier@73e8f4b9)
>> > 2016-01-19 20:04:11,801 INFO [main]
>> > org.apache.hadoop.mapreduce.v2.app.MRAppMaster: The specific max
>> attempts: 2
>> > for application: 1. Attempt num: 1 is last retry: false
>> > 2016-01-19 20:04:11,806 INFO [main]
>> > org.apache.hadoop.mapreduce.v2.app.MRAppMaster: Using mapred
>> > newApiCommitter.
>> > 2016-01-19 20:04:11,934 WARN [main]
>> org.apache.hadoop.conf.Configuration:
>> > job.xml:an attempt to override final parameter: dfs.datanode.data.dir;
>> > Ignoring.
>> > 2016-01-19 20:04:11,939 WARN [main]
>> org.apache.hadoop.conf.Configuration:
>> > job.xml:an attempt to override final parameter:
>> > mapreduce.job.end-notification.max.retry.interval;  Ignoring.
>> > 2016-01-19 20:04:11,948 WARN [main]
>> org.apache.hadoop.conf.Configuration:
>> > job.xml:an attempt to override final parameter: dfs.namenode.name.dir;
>> > Ignoring.
>> > 2016-01-19 20:04:11,953 WARN [main]
>> org.apache.hadoop.conf.Configuration:
>> > job.xml:an attempt to override final parameter:
>> > mapreduce.job.end-notification.max.attempts;  Ignoring.
>> > 2016-01-19 20:04:12,464 INFO [main]
>> > org.apache.hadoop.mapreduce.v2.app.MRAppMaster: OutputCommitter set in
>> > config null
>> > 2016-01-19 20:04:12,526 INFO [main]
>> > org.apache.hadoop.mapreduce.v2.app.MRAppMaster: OutputCommitter is
>> > org.apache.hadoop.mapreduce.lib.output.FileOutputCommitter
>> > 2016-01-19 20:04:12,548 INFO [main]
>> > org.apache.hadoop.yarn.event.AsyncDispatcher: Registering class
>> > org.apache.hadoop.mapreduce.jobhistory.EventType for class
>> > org.apache.hadoop.mapreduce.jobhistory.JobHistoryEventHandler
>> > 2016-01-19 20:04:12,549 INFO [main]
>> > org.apache.hadoop.yarn.event.AsyncDispatcher: Registering class
>> > org.apache.hadoop.mapreduce.v2.app.job.event.JobEventType for class
>> > org.apache.hadoop.mapreduce.v2.app.MRAppMaster$JobEventDispatcher
>> > 2016-01-19 20:04:12,550 INFO [main]
>> > org.apache.hadoop.yarn.event.AsyncDispatcher: Registering class
>> > org.apache.hadoop.mapreduce.v2.app.job.event.TaskEventType for class
>> > org.apache.hadoop.mapreduce.v2.app.MRAppMaster$TaskEventDispatcher
>> > 2016-01-19 20:04:12,551 INFO [main]
>> > org.apache.hadoop.yarn.event.AsyncDispatcher: Registering class
>> > org.apache.hadoop.mapreduce.v2.app.job.event.TaskAttemptEventType for
>> class
>> >
>> org.apache.hadoop.mapreduce.v2.app.MRAppMaster$TaskAttemptEventDispatcher
>> > 2016-01-19 20:04:12,552 INFO [main]
>> > org.apache.hadoop.yarn.event.AsyncDispatcher: Registering class
>> > org.apache.hadoop.mapreduce.v2.app.commit.CommitterEventType for class
>> > org.apache.hadoop.mapreduce.v2.app.commit.CommitterEventHandler
>> > 2016-01-19 20:04:12,557 INFO [main]
>> > org.apache.hadoop.yarn.event.AsyncDispatcher: Registering class
>> > org.apache.hadoop.mapreduce.v2.app.speculate.Speculator$EventType for
>> class
>> > org.apache.hadoop.mapreduce.v2.app.MRAppMaster$SpeculatorEventDispatcher
>> > 2016-01-19 20:04:12,558 INFO [main]
>> > org.apache.hadoop.yarn.event.AsyncDispatcher: Registering class
>> > org.apache.hadoop.mapreduce.v2.app.rm.ContainerAllocator$EventType for
>> class
>> > org.apache.hadoop.mapreduce.v2.app.MRAppMaster$ContainerAllocatorRouter
>> > 2016-01-19 20:04:12,559 INFO [main]
>> > org.apache.hadoop.yarn.event.AsyncDispatcher: Registering class
>> > org.apache.hadoop.mapreduce.v2.app.launcher.ContainerLauncher$EventType
>> for
>> > class
>> org.apache.hadoop.mapreduce.v2.app.MRAppMaster$ContainerLauncherRouter
>> > 2016-01-19 20:04:12,615 INFO [main]
>> > org.apache.hadoop.mapreduce.jobhistory.JobHistoryEventHandler: Perms
>> after
>> > creating 488, Expected: 504
>> > 2016-01-19 20:04:12,615 INFO [main]
>> > org.apache.hadoop.mapreduce.jobhistory.JobHistoryEventHandler:
>> Explicitly
>> > setting permissions to : 504, rwxrwx---
>> > 2016-01-19 20:04:12,731 INFO [main]
>> > org.apache.hadoop.yarn.event.AsyncDispatcher: Registering class
>> > org.apache.hadoop.mapreduce.v2.app.job.event.JobFinishEvent$Type for
>> class
>> > org.apache.hadoop.mapreduce.v2.app.MRAppMaster$JobFinishEventHandler
>> > 2016-01-19 20:04:12,956 INFO [main]
>> > org.apache.hadoop.metrics2.impl.MetricsConfig: loaded properties from
>> > hadoop-metrics2.properties
>> > 2016-01-19 20:04:13,018 INFO [main]
>> > org.apache.hadoop.metrics2.impl.MetricsSystemImpl: Scheduled snapshot
>> period
>> > at 10 second(s).
>> > 2016-01-19 20:04:13,018 INFO [main]
>> > org.apache.hadoop.metrics2.impl.MetricsSystemImpl: MRAppMaster metrics
>> > system started
>> > 2016-01-19 20:04:13,026 INFO [main]
>> > org.apache.hadoop.mapreduce.v2.app.job.impl.JobImpl: Adding job token
>> for
>> > job_1453244277886_0001 to jobTokenSecretManager
>> > 2016-01-19 20:04:13,139 INFO [main]
>> > org.apache.hadoop.mapreduce.v2.app.job.impl.JobImpl: Not uberizing
>> > job_1453244277886_0001 because: not enabled;
>> > 2016-01-19 20:04:13,154 INFO [main]
>> > org.apache.hadoop.mapreduce.v2.app.job.impl.JobImpl: Input size for job
>> > job_1453244277886_0001 = 343691. Number of splits = 1
>> > 2016-01-19 20:04:13,156 INFO [main]
>> > org.apache.hadoop.mapreduce.v2.app.job.impl.JobImpl: Number of reduces
>> for
>> > job job_1453244277886_0001 = 1
>> > 2016-01-19 20:04:13,156 INFO [main]
>> > org.apache.hadoop.mapreduce.v2.app.job.impl.JobImpl:
>> > job_1453244277886_0001Job Transitioned from NEW to INITED
>> > 2016-01-19 20:04:13,157 INFO [main]
>> > org.apache.hadoop.mapreduce.v2.app.MRAppMaster: MRAppMaster launching
>> > normal, non-uberized, multi-container job job_1453244277886_0001.
>> > 2016-01-19 20:04:13,186 INFO [main]
>> org.apache.hadoop.ipc.CallQueueManager:
>> > Using callQueue class java.util.concurrent.LinkedBlockingQueue
>> > 2016-01-19 20:04:13,195 INFO [Socket Reader #1 for port 56461]
>> > org.apache.hadoop.ipc.Server: Starting Socket Reader #1 for port 56461
>> > 2016-01-19 20:04:13,237 INFO [main]
>> > org.apache.hadoop.yarn.factories.impl.pb.RpcServerFactoryPBImpl: Adding
>> > protocol org.apache.hadoop.mapreduce.v2.api.MRClientProtocolPB to the
>> server
>> > 2016-01-19 20:04:13,238 INFO [IPC Server Responder]
>> > org.apache.hadoop.ipc.Server: IPC Server Responder: starting
>> > 2016-01-19 20:04:13,238 INFO [IPC Server listener on 56461]
>> > org.apache.hadoop.ipc.Server: IPC Server listener on 56461: starting
>> > 2016-01-19 20:04:13,239 INFO [main]
>> > org.apache.hadoop.mapreduce.v2.app.client.MRClientService: Instantiated
>> > MRClientService at jose-ubuntu/127.0.0.1:56461
>> > 2016-01-19 20:04:13,300 INFO [main] org.mortbay.log: Logging to
>> > org.slf4j.impl.Log4jLoggerAdapter(org.mortbay.log) via
>> > org.mortbay.log.Slf4jLog
>> > 2016-01-19 20:04:13,304 INFO [main]
>> org.apache.hadoop.http.HttpRequestLog:
>> > Http request log for http.requests.mapreduce is not defined
>> > 2016-01-19 20:04:13,315 INFO [main] org.apache.hadoop.http.HttpServer2:
>> > Added global filter 'safety'
>> > (class=org.apache.hadoop.http.HttpServer2$QuotingInputFilter)
>> > 2016-01-19 20:04:13,320 INFO [main] org.apache.hadoop.http.HttpServer2:
>> > Added filter AM_PROXY_FILTER
>> > (class=org.apache.hadoop.yarn.server.webproxy.amfilter.AmIpFilter) to
>> > context mapreduce
>> > 2016-01-19 20:04:13,321 INFO [main] org.apache.hadoop.http.HttpServer2:
>> > Added filter AM_PROXY_FILTER
>> > (class=org.apache.hadoop.yarn.server.webproxy.amfilter.AmIpFilter) to
>> > context static
>> > 2016-01-19 20:04:13,324 INFO [main] org.apache.hadoop.http.HttpServer2:
>> > adding path spec: /mapreduce/*
>> > 2016-01-19 20:04:13,324 INFO [main] org.apache.hadoop.http.HttpServer2:
>> > adding path spec: /ws/*
>> > 2016-01-19 20:04:13,335 INFO [main] org.apache.hadoop.http.HttpServer2:
>> > Jetty bound to port 44070
>> > 2016-01-19 20:04:13,335 INFO [main] org.mortbay.log: jetty-6.1.26
>> > 2016-01-19 20:04:13,370 INFO [main] org.mortbay.log: Extract
>> >
>> jar:file:/usr/local/hadoop/share/hadoop/yarn/hadoop-yarn-common-2.4.0.jar!/webapps/mapreduce
>> > to /tmp/Jetty_0_0_0_0_44070_mapreduce____rdpvio/webapp
>> > 2016-01-19 20:04:13,647 INFO [main] org.mortbay.log: Started
>> > SelectChannelConnector@0.0.0.0:44070
>> > 2016-01-19 20:04:13,647 INFO [main]
>> org.apache.hadoop.yarn.webapp.WebApps:
>> > Web app /mapreduce started at 44070
>> > 2016-01-19 20:04:13,956 INFO [main]
>> org.apache.hadoop.yarn.webapp.WebApps:
>> > Registered webapp guice modules
>> > 2016-01-19 20:04:13,960 INFO [AsyncDispatcher event handler]
>> > org.apache.hadoop.mapreduce.v2.app.speculate.DefaultSpeculator:
>> JOB_CREATE
>> > job_1453244277886_0001
>> > 2016-01-19 20:04:13,961 INFO [main]
>> org.apache.hadoop.ipc.CallQueueManager:
>> > Using callQueue class java.util.concurrent.LinkedBlockingQueue
>> > 2016-01-19 20:04:13,961 INFO [Socket Reader #1 for port 45584]
>> > org.apache.hadoop.ipc.Server: Starting Socket Reader #1 for port 45584
>> > 2016-01-19 20:04:13,966 INFO [IPC Server Responder]
>> > org.apache.hadoop.ipc.Server: IPC Server Responder: starting
>> > 2016-01-19 20:04:13,966 INFO [IPC Server listener on 45584]
>> > org.apache.hadoop.ipc.Server: IPC Server listener on 45584: starting
>> > 2016-01-19 20:04:13,987 INFO [main]
>> > org.apache.hadoop.mapreduce.v2.app.rm.RMContainerRequestor:
>> > nodeBlacklistingEnabled:true
>> > 2016-01-19 20:04:13,987 INFO [main]
>> > org.apache.hadoop.mapreduce.v2.app.rm.RMContainerRequestor:
>> > maxTaskFailuresPerNode is 3
>> > 2016-01-19 20:04:13,988 INFO [main]
>> > org.apache.hadoop.mapreduce.v2.app.rm.RMContainerRequestor:
>> > blacklistDisablePercent is 33
>> > 2016-01-19 20:04:14,052 WARN [main]
>> org.apache.hadoop.conf.Configuration:
>> > job.xml:an attempt to override final parameter: dfs.datanode.data.dir;
>> > Ignoring.
>> > 2016-01-19 20:04:14,054 WARN [main]
>> org.apache.hadoop.conf.Configuration:
>> > job.xml:an attempt to override final parameter:
>> > mapreduce.job.end-notification.max.retry.interval;  Ignoring.
>> > 2016-01-19 20:04:14,057 WARN [main]
>> org.apache.hadoop.conf.Configuration:
>> > job.xml:an attempt to override final parameter: dfs.namenode.name.dir;
>> > Ignoring.
>> > 2016-01-19 20:04:14,059 WARN [main]
>> org.apache.hadoop.conf.Configuration:
>> > job.xml:an attempt to override final parameter:
>> > mapreduce.job.end-notification.max.attempts;  Ignoring.
>> > 2016-01-19 20:04:14,062 INFO [main]
>> org.apache.hadoop.yarn.client.RMProxy:
>> > Connecting to ResourceManager at hdnode01/192.168.0.10:8030
>> > 2016-01-19 20:04:14,158 INFO [main]
>> > org.apache.hadoop.mapreduce.v2.app.rm.RMContainerAllocator:
>> > maxContainerCapability: 2000
>> > 2016-01-19 20:04:14,158 INFO [main]
>> > org.apache.hadoop.mapreduce.v2.app.rm.RMContainerAllocator: queue:
>> default
>> > 2016-01-19 20:04:14,162 INFO [main]
>> > org.apache.hadoop.mapreduce.v2.app.launcher.ContainerLauncherImpl: Upper
>> > limit on the thread pool size is 500
>> > 2016-01-19 20:04:14,164 INFO [main]
>> > org.apache.hadoop.yarn.client.api.impl.ContainerManagementProtocolProxy:
>> > yarn.client.max-nodemanagers-proxies : 500
>> > 2016-01-19 20:04:14,172 INFO [AsyncDispatcher event handler]
>> > org.apache.hadoop.mapreduce.v2.app.job.impl.JobImpl:
>> > job_1453244277886_0001Job Transitioned from INITED to SETUP
>> > 2016-01-19 20:04:14,174 INFO [CommitterEvent Processor #0]
>> > org.apache.hadoop.mapreduce.v2.app.commit.CommitterEventHandler:
>> Processing
>> > the event EventType: JOB_SETUP
>> > 2016-01-19 20:04:14,210 INFO [AsyncDispatcher event handler]
>> > org.apache.hadoop.mapreduce.v2.app.job.impl.JobImpl:
>> > job_1453244277886_0001Job Transitioned from SETUP to RUNNING
>> > 2016-01-19 20:04:14,227 INFO [AsyncDispatcher event handler]
>> > org.apache.hadoop.yarn.util.RackResolver: Resolved localhost to
>> > /default-rack
>> > 2016-01-19 20:04:14,230 INFO [AsyncDispatcher event handler]
>> > org.apache.hadoop.mapreduce.v2.app.job.impl.TaskImpl:
>> > task_1453244277886_0001_m_000000 Task Transitioned from NEW to SCHEDULED
>> > 2016-01-19 20:04:14,230 INFO [AsyncDispatcher event handler]
>> > org.apache.hadoop.mapreduce.v2.app.job.impl.TaskImpl:
>> > task_1453244277886_0001_r_000000 Task Transitioned from NEW to SCHEDULED
>> > 2016-01-19 20:04:14,232 INFO [AsyncDispatcher event handler]
>> > org.apache.hadoop.mapreduce.v2.app.job.impl.TaskAttemptImpl:
>> > attempt_1453244277886_0001_m_000000_0 TaskAttempt Transitioned from NEW
>> to
>> > UNASSIGNED
>> > 2016-01-19 20:04:14,232 INFO [AsyncDispatcher event handler]
>> > org.apache.hadoop.mapreduce.v2.app.job.impl.TaskAttemptImpl:
>> > attempt_1453244277886_0001_r_000000_0 TaskAttempt Transitioned from NEW
>> to
>> > UNASSIGNED
>> > 2016-01-19 20:04:14,233 INFO [Thread-50]
>> > org.apache.hadoop.mapreduce.v2.app.rm.RMContainerAllocator:
>> > mapResourceReqt:512
>> > 2016-01-19 20:04:14,245 INFO [Thread-50]
>> > org.apache.hadoop.mapreduce.v2.app.rm.RMContainerAllocator:
>> > reduceResourceReqt:512
>> > 2016-01-19 20:04:14,324 INFO [eventHandlingThread]
>> > org.apache.hadoop.mapreduce.jobhistory.JobHistoryEventHandler: Event
>> Writer
>> > setup for JobId: job_1453244277886_0001, File:
>> >
>> hdfs://hdnode01:54310/tmp/hadoop-yarn/staging/hduser/.staging/job_1453244277886_0001/job_1453244277886_0001_1.jhist
>> > 2016-01-19 20:04:15,162 INFO [RMCommunicator Allocator]
>> > org.apache.hadoop.mapreduce.v2.app.rm.RMContainerAllocator: Before
>> > Scheduling: PendingReds:1 ScheduledMaps:1 ScheduledReds:0 AssignedMaps:0
>> > AssignedReds:0 CompletedMaps:0 CompletedReds:0 ContAlloc:0 ContRel:0
>> > HostLocal:0 RackLocal:0
>> > 2016-01-19 20:04:15,217 INFO [RMCommunicator Allocator]
>> > org.apache.hadoop.mapreduce.v2.app.rm.RMContainerRequestor:
>> getResources()
>> > for application_1453244277886_0001: ask=3 release= 0 newContainers=0
>> > finishedContainers=0 resourcelimit=<memory:1280, vCores:0> knownNMs=1
>> > 2016-01-19 20:04:15,217 INFO [RMCommunicator Allocator]
>> > org.apache.hadoop.mapreduce.v2.app.rm.RMContainerAllocator:
>> Recalculating
>> > schedule, headroom=1280
>> > 2016-01-19 20:04:15,217 INFO [RMCommunicator Allocator]
>> > org.apache.hadoop.mapreduce.v2.app.rm.RMContainerAllocator: Reduce slow
>> > start threshold not met. completedMapsForReduceSlowstart 1
>> > 2016-01-19 20:04:16,240 INFO [RMCommunicator Allocator]
>> > org.apache.hadoop.mapreduce.v2.app.rm.RMContainerAllocator: Got
>> allocated
>> > containers 1
>> > 2016-01-19 20:04:16,241 INFO [RMCommunicator Allocator]
>> > org.apache.hadoop.mapreduce.v2.app.rm.RMContainerAllocator: Assigned
>> > container container_1453244277886_0001_01_000002 to
>> > attempt_1453244277886_0001_m_000000_0
>> > 2016-01-19 20:04:16,243 INFO [RMCommunicator Allocator]
>> > org.apache.hadoop.mapreduce.v2.app.rm.RMContainerAllocator:
>> Recalculating
>> > schedule, headroom=768
>> > 2016-01-19 20:04:16,243 INFO [RMCommunicator Allocator]
>> > org.apache.hadoop.mapreduce.v2.app.rm.RMContainerAllocator: Reduce slow
>> > start threshold not met. completedMapsForReduceSlowstart 1
>> > 2016-01-19 20:04:16,243 INFO [RMCommunicator Allocator]
>> > org.apache.hadoop.mapreduce.v2.app.rm.RMContainerAllocator: After
>> > Scheduling: PendingReds:1 ScheduledMaps:0 ScheduledReds:0 AssignedMaps:1
>> > AssignedReds:0 CompletedMaps:0 CompletedReds:0 ContAlloc:1 ContRel:0
>> > HostLocal:1 RackLocal:0
>> > 2016-01-19 20:04:16,291 INFO [AsyncDispatcher event handler]
>> > org.apache.hadoop.yarn.util.RackResolver: Resolved localhost to
>> > /default-rack
>> > 2016-01-19 20:04:16,316 INFO [AsyncDispatcher event handler]
>> > org.apache.hadoop.mapreduce.v2.app.job.impl.TaskAttemptImpl: The job-jar
>> > file on the remote FS is
>> >
>> hdfs://hdnode01:54310/tmp/hadoop-yarn/staging/hduser/.staging/job_1453244277886_0001/job.jar
>> > 2016-01-19 20:04:16,322 INFO [AsyncDispatcher event handler]
>> > org.apache.hadoop.mapreduce.v2.app.job.impl.TaskAttemptImpl: The
>> job-conf
>> > file on the remote FS is
>> > /tmp/hadoop-yarn/staging/hduser/.staging/job_1453244277886_0001/job.xml
>> > 2016-01-19 20:04:16,325 INFO [AsyncDispatcher event handler]
>> > org.apache.hadoop.mapreduce.v2.app.job.impl.TaskAttemptImpl: Adding #0
>> > tokens and #1 secret keys for NM use for launching container
>> > 2016-01-19 20:04:16,325 INFO [AsyncDispatcher event handler]
>> > org.apache.hadoop.mapreduce.v2.app.job.impl.TaskAttemptImpl: Size of
>> > containertokens_dob is 1
>> > 2016-01-19 20:04:16,325 INFO [AsyncDispatcher event handler]
>> > org.apache.hadoop.mapreduce.v2.app.job.impl.TaskAttemptImpl: Putting
>> shuffle
>> > token in serviceData
>> > 2016-01-19 20:04:16,350 INFO [AsyncDispatcher event handler]
>> > org.apache.hadoop.mapreduce.v2.app.job.impl.TaskAttemptImpl:
>> > attempt_1453244277886_0001_m_000000_0 TaskAttempt Transitioned from
>> > UNASSIGNED to ASSIGNED
>> > 2016-01-19 20:04:16,354 INFO [ContainerLauncher #0]
>> > org.apache.hadoop.mapreduce.v2.app.launcher.ContainerLauncherImpl:
>> > Processing the event EventType: CONTAINER_REMOTE_LAUNCH for container
>> > container_1453244277886_0001_01_000002 taskAttempt
>> > attempt_1453244277886_0001_m_000000_0
>> > 2016-01-19 20:04:16,356 INFO [ContainerLauncher #0]
>> > org.apache.hadoop.mapreduce.v2.app.launcher.ContainerLauncherImpl:
>> Launching
>> > attempt_1453244277886_0001_m_000000_0
>> > 2016-01-19 20:04:16,357 INFO [ContainerLauncher #0]
>> > org.apache.hadoop.yarn.client.api.impl.ContainerManagementProtocolProxy:
>> > Opening proxy : localhost:35711
>> > 2016-01-19 20:04:16,411 INFO [ContainerLauncher #0]
>> > org.apache.hadoop.mapreduce.v2.app.launcher.ContainerLauncherImpl:
>> Shuffle
>> > port returned by ContainerManager for
>> attempt_1453244277886_0001_m_000000_0
>> > : 13562
>> > 2016-01-19 20:04:16,413 INFO [AsyncDispatcher event handler]
>> > org.apache.hadoop.mapreduce.v2.app.job.impl.TaskAttemptImpl:
>> TaskAttempt:
>> > [attempt_1453244277886_0001_m_000000_0] using containerId:
>> > [container_1453244277886_0001_01_000002 on NM: [localhost:35711]
>> > 2016-01-19 20:04:16,418 INFO [AsyncDispatcher event handler]
>> > org.apache.hadoop.mapreduce.v2.app.job.impl.TaskAttemptImpl:
>> > attempt_1453244277886_0001_m_000000_0 TaskAttempt Transitioned from
>> ASSIGNED
>> > to RUNNING
>> > 2016-01-19 20:04:16,419 INFO [AsyncDispatcher event handler]
>> > org.apache.hadoop.mapreduce.v2.app.speculate.DefaultSpeculator:
>> > ATTEMPT_START task_1453244277886_0001_m_000000
>> > 2016-01-19 20:04:16,419 INFO [AsyncDispatcher event handler]
>> > org.apache.hadoop.mapreduce.v2.app.job.impl.TaskImpl:
>> > task_1453244277886_0001_m_000000 Task Transitioned from SCHEDULED to
>> RUNNING
>> > 2016-01-19 20:04:17,251 INFO [RMCommunicator Allocator]
>> > org.apache.hadoop.mapreduce.v2.app.rm.RMContainerRequestor:
>> getResources()
>> > for application_1453244277886_0001: ask=3 release= 0 newContainers=0
>> > finishedContainers=0 resourcelimit=<memory:768, vCores:-1> knownNMs=1
>> > 2016-01-19 20:04:18,269 INFO [RMCommunicator Allocator]
>> > org.apache.hadoop.mapreduce.v2.app.rm.RMContainerAllocator: Received
>> > completed container container_1453244277886_0001_01_000002
>> > 2016-01-19 20:04:18,269 INFO [RMCommunicator Allocator]
>> > org.apache.hadoop.mapreduce.v2.app.rm.RMContainerAllocator:
>> Recalculating
>> > schedule, headroom=768
>> > 2016-01-19 20:04:18,269 INFO [RMCommunicator Allocator]
>> > org.apache.hadoop.mapreduce.v2.app.rm.RMContainerAllocator: Reduce slow
>> > start threshold not met. completedMapsForReduceSlowstart 1
>> > 2016-01-19 20:04:18,270 INFO [RMCommunicator Allocator]
>> > org.apache.hadoop.mapreduce.v2.app.rm.RMContainerAllocator: After
>> > Scheduling: PendingReds:1 ScheduledMaps:0 ScheduledReds:0 AssignedMaps:0
>> > AssignedReds:0 CompletedMaps:0 CompletedReds:0 ContAlloc:1 ContRel:0
>> > HostLocal:1 RackLocal:0
>> > 2016-01-19 20:04:18,280 INFO [AsyncDispatcher event handler]
>> > org.apache.hadoop.mapreduce.v2.app.job.impl.TaskAttemptImpl:
>> > attempt_1453244277886_0001_m_000000_0 TaskAttempt Transitioned from
>> RUNNING
>> > to FAIL_CONTAINER_CLEANUP
>> > 2016-01-19 20:04:18,280 INFO [AsyncDispatcher event handler]
>> > org.apache.hadoop.mapreduce.v2.app.job.impl.TaskAttemptImpl: Diagnostics
>> > report from attempt_1453244277886_0001_m_000000_0: Exception from
>> > container-launch: org.apache.hadoop.util.Shell$ExitCodeException:
>> > org.apache.hadoop.util.Shell$ExitCodeException:
>> >     at org.apache.hadoop.util.Shell.runCommand(Shell.java:505)
>> >     at org.apache.hadoop.util.Shell.run(Shell.java:418)
>> >     at
>> >
>> org.apache.hadoop.util.Shell$ShellCommandExecutor.execute(Shell.java:650)
>> >     at
>> >
>> org.apache.hadoop.yarn.server.nodemanager.DefaultContainerExecutor.launchContainer(DefaultContainerExecutor.java:195)
>> >     at
>> >
>> org.apache.hadoop.yarn.server.nodemanager.containermanager.launcher.ContainerLaunch.call(ContainerLaunch.java:300)
>> >     at
>> >
>> org.apache.hadoop.yarn.server.nodemanager.containermanager.launcher.ContainerLaunch.call(ContainerLaunch.java:81)
>> >     at java.util.concurrent.FutureTask.run(FutureTask.java:262)
>> >     at
>> >
>> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
>> >     at
>> >
>> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
>> >     at java.lang.Thread.run(Thread.java:745)
>> >
>> >
>> > Container exited with a non-zero exit code 1
>> >
>> > 2016-01-19 20:04:18,281 INFO [ContainerLauncher #1]
>> > org.apache.hadoop.mapreduce.v2.app.launcher.ContainerLauncherImpl:
>> > Processing the event EventType: CONTAINER_REMOTE_CLEANUP for container
>> > container_1453244277886_0001_01_000002 taskAttempt
>> > attempt_1453244277886_0001_m_000000_0
>> > 2016-01-19 20:04:18,281 INFO [ContainerLauncher #1]
>> > org.apache.hadoop.mapreduce.v2.app.launcher.ContainerLauncherImpl:
>> KILLING
>> > attempt_1453244277886_0001_m_000000_0
>> > 2016-01-19 20:04:18,299 INFO [AsyncDispatcher event handler]
>> > org.apache.hadoop.mapreduce.v2.app.job.impl.TaskAttemptImpl:
>> > attempt_1453244277886_0001_m_000000_0 TaskAttempt Transitioned from
>> > FAIL_CONTAINER_CLEANUP to FAIL_TASK_CLEANUP
>> > 2016-01-19 20:04:18,300 INFO [CommitterEvent Processor #1]
>> > org.apache.hadoop.mapreduce.v2.app.commit.CommitterEventHandler:
>> Processing
>> > the event EventType: TASK_ABORT
>> > 2016-01-19 20:04:18,313 WARN [CommitterEvent Processor #1]
>> > org.apache.hadoop.mapreduce.lib.output.FileOutputCommitter: Could not
>> delete
>> >
>> hdfs://hdnode01:54310/user/hduser/output/wordcount/_temporary/1/_temporary/attempt_1453244277886_0001_m_000000_0
>> > 2016-01-19 20:04:18,317 INFO [AsyncDispatcher event handler]
>> > org.apache.hadoop.mapreduce.v2.app.job.impl.TaskAttemptImpl:
>> > attempt_1453244277886_0001_m_000000_0 TaskAttempt Transitioned from
>> > FAIL_TASK_CLEANUP to FAILED
>> > 2016-01-19 20:04:18,326 INFO [AsyncDispatcher event handler]
>> > org.apache.hadoop.yarn.util.RackResolver: Resolved localhost to
>> > /default-rack
>> > 2016-01-19 20:04:18,327 INFO [Thread-50]
>> > org.apache.hadoop.mapreduce.v2.app.rm.RMContainerRequestor: 1 failures
>> on
>> > node localhost
>> > 2016-01-19 20:04:18,329 INFO [AsyncDispatcher event handler]
>> > org.apache.hadoop.mapreduce.v2.app.job.impl.TaskAttemptImpl:
>> > attempt_1453244277886_0001_m_000000_1 TaskAttempt Transitioned from NEW
>> to
>> > UNASSIGNED
>> > 2016-01-19 20:04:18,329 INFO [Thread-50]
>> > org.apache.hadoop.mapreduce.v2.app.rm.RMContainerAllocator: Added
>> > attempt_1453244277886_0001_m_000000_1 to list of failed maps
>> > 2016-01-19 20:04:19,270 INFO [RMCommunicator Allocator]
>> > org.apache.hadoop.mapreduce.v2.app.rm.RMContainerAllocator: Before
>> > Scheduling: PendingReds:1 ScheduledMaps:1 ScheduledReds:0 AssignedMaps:0
>> > AssignedReds:0 CompletedMaps:0 CompletedReds:0 ContAlloc:1 ContRel:0
>> > HostLocal:1 RackLocal:0
>> > 2016-01-19 20:04:19,277 INFO [RMCommunicator Allocator]
>> > org.apache.hadoop.mapreduce.v2.app.rm.RMContainerRequestor:
>> getResources()
>> > for application_1453244277886_0001: ask=1 release= 0 newContainers=0
>> > finishedContainers=0 resourcelimit=<memory:768, vCores:-1> knownNMs=1
>> > 2016-01-19 20:04:19,277 INFO [RMCommunicator Allocator]
>> > org.apache.hadoop.mapreduce.v2.app.rm.RMContainerAllocator:
>> Recalculating
>> > schedule, headroom=768
>> > 2016-01-19 20:04:19,278 INFO [RMCommunicator Allocator]
>> > org.apache.hadoop.mapreduce.v2.app.rm.RMContainerAllocator: Reduce slow
>> > start threshold not met. completedMapsForReduceSlowstart 1
>> > 2016-01-19 20:04:20,285 INFO [RMCommunicator Allocator]
>> > org.apache.hadoop.mapreduce.v2.app.rm.RMContainerAllocator: Got
>> allocated
>> > containers 1
>> > 2016-01-19 20:04:20,285 INFO [RMCommunicator Allocator]
>> > org.apache.hadoop.mapreduce.v2.app.rm.RMContainerAllocator: Assigning
>> > container Container: [ContainerId:
>> container_1453244277886_0001_01_000003,
>> > NodeId: localhost:35711, NodeHttpAddress: localhost:8042, Resource:
>> > <memory:512, vCores:1>, Priority: 5, Token: Token { kind:
>> ContainerToken,
>> > service: 127.0.0.1:35711 }, ] to fast fail map
>> > 2016-01-19 20:04:20,286 INFO [RMCommunicator Allocator]
>> > org.apache.hadoop.mapreduce.v2.app.rm.RMContainerAllocator: Assigned
>> from
>> > earlierFailedMaps
>> > 2016-01-19 20:04:20,286 INFO [RMCommunicator Allocator]
>> > org.apache.hadoop.mapreduce.v2.app.rm.RMContainerAllocator: Assigned
>> > container container_1453244277886_0001_01_000003 to
>> > attempt_1453244277886_0001_m_000000_1
>> > 2016-01-19 20:04:20,287 INFO [RMCommunicator Allocator]
>> > org.apache.hadoop.mapreduce.v2.app.rm.RMContainerAllocator:
>> Recalculating
>> > schedule, headroom=768
>> > 2016-01-19 20:04:20,287 INFO [RMCommunicator Allocator]
>> > org.apache.hadoop.mapreduce.v2.app.rm.RMContainerAllocator: Reduce slow
>> > start threshold not met. completedMapsForReduceSlowstart 1
>> > 2016-01-19 20:04:20,287 INFO [RMCommunicator Allocator]
>> > org.apache.hadoop.mapreduce.v2.app.rm.RMContainerAllocator: After
>> > Scheduling: PendingReds:1 ScheduledMaps:0 ScheduledReds:0 AssignedMaps:1
>> > AssignedReds:0 CompletedMaps:0 CompletedReds:0 ContAlloc:2 ContRel:0
>> > HostLocal:1 RackLocal:0
>> > 2016-01-19 20:04:20,287 INFO [AsyncDispatcher event handler]
>> > org.apache.hadoop.yarn.util.RackResolver: Resolved localhost to
>> > /default-rack
>> > 2016-01-19 20:04:20,289 INFO [AsyncDispatcher event handler]
>> > org.apache.hadoop.mapreduce.v2.app.job.impl.TaskAttemptImpl:
>> > attempt_1453244277886_0001_m_000000_1 TaskAttempt Transitioned from
>> > UNASSIGNED to ASSIGNED
>> > 2016-01-19 20:04:20,292 INFO [ContainerLauncher #2]
>> > org.apache.hadoop.mapreduce.v2.app.launcher.ContainerLauncherImpl:
>> > Processing the event EventType: CONTAINER_REMOTE_LAUNCH for container
>> > container_1453244277886_0001_01_000003 taskAttempt
>> > attempt_1453244277886_0001_m_000000_1
>> > 2016-01-19 20:04:20,292 INFO [ContainerLauncher #2]
>> > org.apache.hadoop.mapreduce.v2.app.launcher.ContainerLauncherImpl:
>> Launching
>> > attempt_1453244277886_0001_m_000000_1
>> > 2016-01-19 20:04:20,301 INFO [ContainerLauncher #2]
>> > org.apache.hadoop.mapreduce.v2.app.launcher.ContainerLauncherImpl:
>> Shuffle
>> > port returned by ContainerManager for
>> attempt_1453244277886_0001_m_000000_1
>> > : 13562
>> > 2016-01-19 20:04:20,302 INFO [AsyncDispatcher event handler]
>> > org.apache.hadoop.mapreduce.v2.app.job.impl.TaskAttemptImpl:
>> TaskAttempt:
>> > [attempt_1453244277886_0001_m_000000_1] using containerId:
>> > [container_1453244277886_0001_01_000003 on NM: [localhost:35711]
>> > 2016-01-19 20:04:20,303 INFO [AsyncDispatcher event handler]
>> > org.apache.hadoop.mapreduce.v2.app.job.impl.TaskAttemptImpl:
>> > attempt_1453244277886_0001_m_000000_1 TaskAttempt Transitioned from
>> ASSIGNED
>> > to RUNNING
>> > 2016-01-19 20:04:20,304 INFO [AsyncDispatcher event handler]
>> > org.apache.hadoop.mapreduce.v2.app.speculate.DefaultSpeculator:
>> > ATTEMPT_START task_1453244277886_0001_m_000000
>> > 2016-01-19 20:04:21,295 INFO [RMCommunicator Allocator]
>> > org.apache.hadoop.mapreduce.v2.app.rm.RMContainerRequestor:
>> getResources()
>> > for application_1453244277886_0001: ask=1 release= 0 newContainers=0
>> > finishedContainers=1 resourcelimit=<memory:768, vCores:-1> knownNMs=1
>> > 2016-01-19 20:04:21,296 INFO [RMCommunicator Allocator]
>> > org.apache.hadoop.mapreduce.v2.app.rm.RMContainerAllocator: Received
>> > completed container container_1453244277886_0001_01_000003
>> > 2016-01-19 20:04:21,296 INFO [RMCommunicator Allocator]
>> > org.apache.hadoop.mapreduce.v2.app.rm.RMContainerAllocator:
>> Recalculating
>> > schedule, headroom=768
>> > 2016-01-19 20:04:21,296 INFO [RMCommunicator Allocator]
>> > org.apache.hadoop.mapreduce.v2.app.rm.RMContainerAllocator: Reduce slow
>> > start threshold not met. completedMapsForReduceSlowstart 1
>> > 2016-01-19 20:04:21,297 INFO [RMCommunicator Allocator]
>> > org.apache.hadoop.mapreduce.v2.app.rm.RMContainerAllocator: After
>> > Scheduling: PendingReds:1 ScheduledMaps:0 ScheduledReds:0 AssignedMaps:0
>> > AssignedReds:0 CompletedMaps:0 CompletedReds:0 ContAlloc:2 ContRel:0
>> > HostLocal:1 RackLocal:0
>> > 2016-01-19 20:04:21,297 INFO [AsyncDispatcher event handler]
>> > org.apache.hadoop.mapreduce.v2.app.job.impl.TaskAttemptImpl:
>> > attempt_1453244277886_0001_m_000000_1 TaskAttempt Transitioned from
>> RUNNING
>> > to FAIL_CONTAINER_CLEANUP
>> > 2016-01-19 20:04:21,298 INFO [AsyncDispatcher event handler]
>> > org.apache.hadoop.mapreduce.v2.app.job.impl.TaskAttemptImpl: Diagnostics
>> > report from attempt_1453244277886_0001_m_000000_1: Exception from
>> > container-launch: org.apache.hadoop.util.Shell$ExitCodeException:
>> > org.apache.hadoop.util.Shell$ExitCodeException:
>> >     at org.apache.hadoop.util.Shell.runCommand(Shell.java:505)
>> >     at org.apache.hadoop.util.Shell.run(Shell.java:418)
>> >     at
>> >
>> org.apache.hadoop.util.Shell$ShellCommandExecutor.execute(Shell.java:650)
>> >     at
>> >
>> org.apache.hadoop.yarn.server.nodemanager.DefaultContainerExecutor.launchContainer(DefaultContainerExecutor.java:195)
>> >     at
>> >
>> org.apache.hadoop.yarn.server.nodemanager.containermanager.launcher.ContainerLaunch.call(ContainerLaunch.java:300)
>> >     at
>> >
>> org.apache.hadoop.yarn.server.nodemanager.containermanager.launcher.ContainerLaunch.call(ContainerLaunch.java:81)
>> >     at java.util.concurrent.FutureTask.run(FutureTask.java:262)
>> >     at
>> >
>> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
>> >     at
>> >
>> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
>> >     at java.lang.Thread.run(Thread.java:745)
>> >
>> >
>> > Container exited with a non-zero exit code 1
>> >
>> > 2016-01-19 20:04:21,300 INFO [ContainerLauncher #3]
>> > org.apache.hadoop.mapreduce.v2.app.launcher.ContainerLauncherImpl:
>> > Processing the event EventType: CONTAINER_REMOTE_CLEANUP for container
>> > container_1453244277886_0001_01_000003 taskAttempt
>> > attempt_1453244277886_0001_m_000000_1
>> > 2016-01-19 20:04:21,301 INFO [ContainerLauncher #3]
>> > org.apache.hadoop.mapreduce.v2.app.launcher.ContainerLauncherImpl:
>> KILLING
>> > attempt_1453244277886_0001_m_000000_1
>> > 2016-01-19 20:04:21,307 INFO [AsyncDispatcher event handler]
>> > org.apache.hadoop.mapreduce.v2.app.job.impl.TaskAttemptImpl:
>> > attempt_1453244277886_0001_m_000000_1 TaskAttempt Transitioned from
>> > FAIL_CONTAINER_CLEANUP to FAIL_TASK_CLEANUP
>> > 2016-01-19 20:04:21,308 INFO [CommitterEvent Processor #2]
>> > org.apache.hadoop.mapreduce.v2.app.commit.CommitterEventHandler:
>> Processing
>> > the event EventType: TASK_ABORT
>> > 2016-01-19 20:04:21,312 WARN [CommitterEvent Processor #2]
>> > org.apache.hadoop.mapreduce.lib.output.FileOutputCommitter: Could not
>> delete
>> >
>> hdfs://hdnode01:54310/user/hduser/output/wordcount/_temporary/1/_temporary/attempt_1453244277886_0001_m_000000_1
>> > 2016-01-19 20:04:21,312 INFO [AsyncDispatcher event handler]
>> > org.apache.hadoop.mapreduce.v2.app.job.impl.TaskAttemptImpl:
>> > attempt_1453244277886_0001_m_000000_1 TaskAttempt Transitioned from
>> > FAIL_TASK_CLEANUP to FAILED
>> > 2016-01-19 20:04:21,313 INFO [AsyncDispatcher event handler]
>> > org.apache.hadoop.yarn.util.RackResolver: Resolved localhost to
>> > /default-rack
>> > 2016-01-19 20:04:21,313 INFO [Thread-50]
>> > org.apache.hadoop.mapreduce.v2.app.rm.RMContainerRequestor: 2 failures
>> on
>> > node localhost
>> > 2016-01-19 20:04:21,313 INFO [AsyncDispatcher event handler]
>> > org.apache.hadoop.mapreduce.v2.app.job.impl.TaskAttemptImpl:
>> > attempt_1453244277886_0001_m_000000_2 TaskAttempt Transitioned from NEW
>> to
>> > UNASSIGNED
>> > 2016-01-19 20:04:21,314 INFO [Thread-50]
>> > org.apache.hadoop.mapreduce.v2.app.rm.RMContainerAllocator: Added
>> > attempt_1453244277886_0001_m_000000_2 to list of failed maps
>> > 2016-01-19 20:04:22,297 INFO [RMCommunicator Allocator]
>> > org.apache.hadoop.mapreduce.v2.app.rm.RMContainerAllocator: Before
>> > Scheduling: PendingReds:1 ScheduledMaps:1 ScheduledReds:0 AssignedMaps:0
>> > AssignedReds:0 CompletedMaps:0 CompletedReds:0 ContAlloc:2 ContRel:0
>> > HostLocal:1 RackLocal:0
>> > 2016-01-19 20:04:22,304 INFO [RMCommunicator Allocator]
>> > org.apache.hadoop.mapreduce.v2.app.rm.RMContainerRequestor:
>> getResources()
>> > for application_1453244277886_0001: ask=1 release= 0 newContainers=0
>> > finishedContainers=0 resourcelimit=<memory:768, vCores:-1> knownNMs=1
>> > 2016-01-19 20:04:22,305 INFO [RMCommunicator Allocator]
>> > org.apache.hadoop.mapreduce.v2.app.rm.RMContainerAllocator:
>> Recalculating
>> > schedule, headroom=768
>> > 2016-01-19 20:04:22,305 INFO [RMCommunicator Allocator]
>> > org.apache.hadoop.mapreduce.v2.app.rm.RMContainerAllocator: Reduce slow
>> > start threshold not met. completedMapsForReduceSlowstart 1
>> > 2016-01-19 20:04:23,316 INFO [RMCommunicator Allocator]
>> > org.apache.hadoop.mapreduce.v2.app.rm.RMContainerAllocator: Got
>> allocated
>> > containers 1
>> > 2016-01-19 20:04:23,316 INFO [RMCommunicator Allocator]
>> > org.apache.hadoop.mapreduce.v2.app.rm.RMContainerAllocator: Assigning
>> > container Container: [ContainerId:
>> container_1453244277886_0001_01_000004,
>> > NodeId: localhost:35711, NodeHttpAddress: localhost:8042, Resource:
>> > <memory:512, vCores:1>, Priority: 5, Token: Token { kind:
>> ContainerToken,
>> > service: 127.0.0.1:35711 }, ] to fast fail map
>> > 2016-01-19 20:04:23,317 INFO [RMCommunicator Allocator]
>> > org.apache.hadoop.mapreduce.v2.app.rm.RMContainerAllocator: Assigned
>> from
>> > earlierFailedMaps
>> > 2016-01-19 20:04:23,317 INFO [RMCommunicator Allocator]
>> > org.apache.hadoop.mapreduce.v2.app.rm.RMContainerAllocator: Assigned
>> > container container_1453244277886_0001_01_000004 to
>> > attempt_1453244277886_0001_m_000000_2
>> > 2016-01-19 20:04:23,318 INFO [RMCommunicator Allocator]
>> > org.apache.hadoop.mapreduce.v2.app.rm.RMContainerAllocator:
>> Recalculating
>> > schedule, headroom=768
>> > 2016-01-19 20:04:23,318 INFO [RMCommunicator Allocator]
>> > org.apache.hadoop.mapreduce.v2.app.rm.RMContainerAllocator: Reduce slow
>> > start threshold not met. completedMapsForReduceSlowstart 1
>> > 2016-01-19 20:04:23,318 INFO [RMCommunicator Allocator]
>> > org.apache.hadoop.mapreduce.v2.app.rm.RMContainerAllocator: After
>> > Scheduling: PendingReds:1 ScheduledMaps:0 ScheduledReds:0 AssignedMaps:1
>> > AssignedReds:0 CompletedMaps:0 CompletedReds:0 ContAlloc:3 ContRel:0
>> > HostLocal:1 RackLocal:0
>> > 2016-01-19 20:04:23,318 INFO [AsyncDispatcher event handler]
>> > org.apache.hadoop.yarn.util.RackResolver: Resolved localhost to
>> > /default-rack
>> > 2016-01-19 20:04:23,320 INFO [AsyncDispatcher event handler]
>> > org.apache.hadoop.mapreduce.v2.app.job.impl.TaskAttemptImpl:
>> > attempt_1453244277886_0001_m_000000_2 TaskAttempt Transitioned from
>> > UNASSIGNED to ASSIGNED
>> > 2016-01-19 20:04:23,323 INFO [ContainerLauncher #4]
>> > org.apache.hadoop.mapreduce.v2.app.launcher.ContainerLauncherImpl:
>> > Processing the event EventType: CONTAINER_REMOTE_LAUNCH for container
>> > container_1453244277886_0001_01_000004 taskAttempt
>> > attempt_1453244277886_0001_m_000000_2
>> > 2016-01-19 20:04:23,323 INFO [ContainerLauncher #4]
>> > org.apache.hadoop.mapreduce.v2.app.launcher.ContainerLauncherImpl:
>> Launching
>> > attempt_1453244277886_0001_m_000000_2
>> > 2016-01-19 20:04:23,334 INFO [ContainerLauncher #4]
>> > org.apache.hadoop.mapreduce.v2.app.launcher.ContainerLauncherImpl:
>> Shuffle
>> > port returned by ContainerManager for
>> attempt_1453244277886_0001_m_000000_2
>> > : 13562
>> > 2016-01-19 20:04:23,335 INFO [AsyncDispatcher event handler]
>> > org.apache.hadoop.mapreduce.v2.app.job.impl.TaskAttemptImpl:
>> TaskAttempt:
>> > [attempt_1453244277886_0001_m_000000_2] using containerId:
>> > [container_1453244277886_0001_01_000004 on NM: [localhost:35711]
>> > 2016-01-19 20:04:23,335 INFO [AsyncDispatcher event handler]
>> > org.apache.hadoop.mapreduce.v2.app.job.impl.TaskAttemptImpl:
>> > attempt_1453244277886_0001_m_000000_2 TaskAttempt Transitioned from
>> ASSIGNED
>> > to RUNNING
>> > 2016-01-19 20:04:23,335 INFO [AsyncDispatcher event handler]
>> > org.apache.hadoop.mapreduce.v2.app.speculate.DefaultSpeculator:
>> > ATTEMPT_START task_1453244277886_0001_m_000000
>> > 2016-01-19 20:04:24,326 INFO [RMCommunicator Allocator]
>> > org.apache.hadoop.mapreduce.v2.app.rm.RMContainerRequestor:
>> getResources()
>> > for application_1453244277886_0001: ask=1 release= 0 newContainers=0
>> > finishedContainers=1 resourcelimit=<memory:768, vCores:-1> knownNMs=1
>> > 2016-01-19 20:04:24,327 INFO [RMCommunicator Allocator]
>> > org.apache.hadoop.mapreduce.v2.app.rm.RMContainerAllocator: Received
>> > completed container container_1453244277886_0001_01_000004
>> > 2016-01-19 20:04:24,327 INFO [RMCommunicator Allocator]
>> > org.apache.hadoop.mapreduce.v2.app.rm.RMContainerAllocator:
>> Recalculating
>> > schedule, headroom=768
>> > 2016-01-19 20:04:24,327 INFO [RMCommunicator Allocator]
>> > org.apache.hadoop.mapreduce.v2.app.rm.RMContainerAllocator: Reduce slow
>> > start threshold not met. completedMapsForReduceSlowstart 1
>> > 2016-01-19 20:04:24,328 INFO [RMCommunicator Allocator]
>> > org.apache.hadoop.mapreduce.v2.app.rm.RMContainerAllocator: After
>> > Scheduling: PendingReds:1 ScheduledMaps:0 ScheduledReds:0 AssignedMaps:0
>> > AssignedReds:0 CompletedMaps:0 CompletedReds:0 ContAlloc:3 ContRel:0
>> > HostLocal:1 RackLocal:0
>> > 2016-01-19 20:04:24,328 INFO [AsyncDispatcher event handler]
>> > org.apache.hadoop.mapreduce.v2.app.job.impl.TaskAttemptImpl:
>> > attempt_1453244277886_0001_m_000000_2 TaskAttempt Transitioned from
>> RUNNING
>> > to FAIL_CONTAINER_CLEANUP
>> > 2016-01-19 20:04:24,328 INFO [AsyncDispatcher event handler]
>> > org.apache.hadoop.mapreduce.v2.app.job.impl.TaskAttemptImpl: Diagnostics
>> > report from attempt_1453244277886_0001_m_000000_2: Exception from
>> > container-launch: org.apache.hadoop.util.Shell$ExitCodeException:
>> > org.apache.hadoop.util.Shell$ExitCodeException:
>> >     at org.apache.hadoop.util.Shell.runCommand(Shell.java:505)
>> >     at org.apache.hadoop.util.Shell.run(Shell.java:418)
>> >     at
>> >
>> org.apache.hadoop.util.Shell$ShellCommandExecutor.execute(Shell.java:650)
>> >     at
>> >
>> org.apache.hadoop.yarn.server.nodemanager.DefaultContainerExecutor.launchContainer(DefaultContainerExecutor.java:195)
>> >     at
>> >
>> org.apache.hadoop.yarn.server.nodemanager.containermanager.launcher.ContainerLaunch.call(ContainerLaunch.java:300)
>> >     at
>> >
>> org.apache.hadoop.yarn.server.nodemanager.containermanager.launcher.ContainerLaunch.call(ContainerLaunch.java:81)
>> >     at java.util.concurrent.FutureTask.run(FutureTask.java:262)
>> >     at
>> >
>> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
>> >     at
>> >
>> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
>> >     at java.lang.Thread.run(Thread.java:745)
>> >
>> >
>> > Container exited with a non-zero exit code 1
>> >
>> > 2016-01-19 20:04:24,331 INFO [ContainerLauncher #5]
>> > org.apache.hadoop.mapreduce.v2.app.launcher.ContainerLauncherImpl:
>> > Processing the event EventType: CONTAINER_REMOTE_CLEANUP for container
>> > container_1453244277886_0001_01_000004 taskAttempt
>> > attempt_1453244277886_0001_m_000000_2
>> > 2016-01-19 20:04:24,331 INFO [ContainerLauncher #5]
>> > org.apache.hadoop.mapreduce.v2.app.launcher.ContainerLauncherImpl:
>> KILLING
>> > attempt_1453244277886_0001_m_000000_2
>> > 2016-01-19 20:04:24,335 INFO [AsyncDispatcher event handler]
>> > org.apache.hadoop.mapreduce.v2.app.job.impl.TaskAttemptImpl:
>> > attempt_1453244277886_0001_m_000000_2 TaskAttempt Transitioned from
>> > FAIL_CONTAINER_CLEANUP to FAIL_TASK_CLEANUP
>> > 2016-01-19 20:04:24,336 INFO [CommitterEvent Processor #3]
>> > org.apache.hadoop.mapreduce.v2.app.commit.CommitterEventHandler:
>> Processing
>> > the event EventType: TASK_ABORT
>> > 2016-01-19 20:04:24,340 WARN [CommitterEvent Processor #3]
>> > org.apache.hadoop.mapreduce.lib.output.FileOutputCommitter: Could not
>> delete
>> >
>> hdfs://hdnode01:54310/user/hduser/output/wordcount/_temporary/1/_temporary/attempt_1453244277886_0001_m_000000_2
>> > 2016-01-19 20:04:24,341 INFO [AsyncDispatcher event handler]
>> > org.apache.hadoop.mapreduce.v2.app.job.impl.TaskAttemptImpl:
>> > attempt_1453244277886_0001_m_000000_2 TaskAttempt Transitioned from
>> > FAIL_TASK_CLEANUP to FAILED
>> > 2016-01-19 20:04:24,341 INFO [AsyncDispatcher event handler]
>> > org.apache.hadoop.yarn.util.RackResolver: Resolved localhost to
>> > /default-rack
>> > 2016-01-19 20:04:24,342 INFO [Thread-50]
>> > org.apache.hadoop.mapreduce.v2.app.rm.RMContainerRequestor: 3 failures
>> on
>> > node localhost
>> > 2016-01-19 20:04:24,342 INFO [Thread-50]
>> > org.apache.hadoop.mapreduce.v2.app.rm.RMContainerRequestor: Blacklisted
>> host
>> > localhost
>> > 2016-01-19 20:04:24,342 INFO [AsyncDispatcher event handler]
>> > org.apache.hadoop.mapreduce.v2.app.job.impl.TaskAttemptImpl:
>> > attempt_1453244277886_0001_m_000000_3 TaskAttempt Transitioned from NEW
>> to
>> > UNASSIGNED
>> > 2016-01-19 20:04:24,343 INFO [Thread-50]
>> > org.apache.hadoop.mapreduce.v2.app.rm.RMContainerAllocator: Added
>> > attempt_1453244277886_0001_m_000000_3 to list of failed maps
>> > 2016-01-19 20:04:25,328 INFO [RMCommunicator Allocator]
>> > org.apache.hadoop.mapreduce.v2.app.rm.RMContainerAllocator: Before
>> > Scheduling: PendingReds:1 ScheduledMaps:1 ScheduledReds:0 AssignedMaps:0
>> > AssignedReds:0 CompletedMaps:0 CompletedReds:0 ContAlloc:3 ContRel:0
>> > HostLocal:1 RackLocal:0
>> > 2016-01-19 20:04:25,336 INFO [RMCommunicator Allocator]
>> > org.apache.hadoop.mapreduce.v2.app.rm.RMContainerRequestor:
>> getResources()
>> > for application_1453244277886_0001: ask=1 release= 0 newContainers=0
>> > finishedContainers=0 resourcelimit=<memory:768, vCores:-1> knownNMs=1
>> > 2016-01-19 20:04:25,336 INFO [RMCommunicator Allocator]
>> > org.apache.hadoop.mapreduce.v2.app.rm.RMContainerRequestor: Update the
>> > blacklist for application_1453244277886_0001: blacklistAdditions=1
>> > blacklistRemovals=0
>> > 2016-01-19 20:04:25,337 INFO [RMCommunicator Allocator]
>> > org.apache.hadoop.mapreduce.v2.app.rm.RMContainerRequestor: Ignore
>> > blacklisting set to true. Known: 1, Blacklisted: 1, 100%
>> > 2016-01-19 20:04:25,337 INFO [RMCommunicator Allocator]
>> > org.apache.hadoop.mapreduce.v2.app.rm.RMContainerAllocator:
>> Recalculating
>> > schedule, headroom=768
>> > 2016-01-19 20:04:25,337 INFO [RMCommunicator Allocator]
>> > org.apache.hadoop.mapreduce.v2.app.rm.RMContainerAllocator: Reduce slow
>> > start threshold not met. completedMapsForReduceSlowstart 1
>> > 2016-01-19 20:04:26,342 INFO [RMCommunicator Allocator]
>> > org.apache.hadoop.mapreduce.v2.app.rm.RMContainerRequestor: Update the
>> > blacklist for application_1453244277886_0001: blacklistAdditions=0
>> > blacklistRemovals=1
>> > 2016-01-19 20:04:26,342 INFO [RMCommunicator Allocator]
>> > org.apache.hadoop.mapreduce.v2.app.rm.RMContainerAllocator:
>> Recalculating
>> > schedule, headroom=768
>> > 2016-01-19 20:04:26,342 INFO [RMCommunicator Allocator]
>> > org.apache.hadoop.mapreduce.v2.app.rm.RMContainerAllocator: Reduce slow
>> > start threshold not met. completedMapsForReduceSlowstart 1
>> > 2016-01-19 20:04:27,351 INFO [RMCommunicator Allocator]
>> > org.apache.hadoop.mapreduce.v2.app.rm.RMContainerAllocator: Got
>> allocated
>> > containers 1
>> > 2016-01-19 20:04:27,352 INFO [RMCommunicator Allocator]
>> > org.apache.hadoop.mapreduce.v2.app.rm.RMContainerAllocator: Assigning
>> > container Container: [ContainerId:
>> container_1453244277886_0001_01_000005,
>> > NodeId: localhost:35711, NodeHttpAddress: localhost:8042, Resource:
>> > <memory:512, vCores:1>, Priority: 5, Token: Token { kind:
>> ContainerToken,
>> > service: 127.0.0.1:35711 }, ] to fast fail map
>> > 2016-01-19 20:04:27,352 INFO [RMCommunicator Allocator]
>> > org.apache.hadoop.mapreduce.v2.app.rm.RMContainerAllocator: Assigned
>> from
>> > earlierFailedMaps
>> > 2016-01-19 20:04:27,352 INFO [RMCommunicator Allocator]
>> > org.apache.hadoop.mapreduce.v2.app.rm.RMContainerAllocator: Assigned
>> > container container_1453244277886_0001_01_000005 to
>> > attempt_1453244277886_0001_m_000000_3
>> > 2016-01-19 20:04:27,353 INFO [RMCommunicator Allocator]
>> > org.apache.hadoop.mapreduce.v2.app.rm.RMContainerAllocator:
>> Recalculating
>> > schedule, headroom=768
>> > 2016-01-19 20:04:27,353 INFO [RMCommunicator Allocator]
>> > org.apache.hadoop.mapreduce.v2.app.rm.RMContainerAllocator: Reduce slow
>> > start threshold not met. completedMapsForReduceSlowstart 1
>> > 2016-01-19 20:04:27,353 INFO [AsyncDispatcher event handler]
>> > org.apache.hadoop.yarn.util.RackResolver: Resolved localhost to
>> > /default-rack
>> > 2016-01-19 20:04:27,353 INFO [RMCommunicator Allocator]
>> > org.apache.hadoop.mapreduce.v2.app.rm.RMContainerAllocator: After
>> > Scheduling: PendingReds:1 ScheduledMaps:0 ScheduledReds:0 AssignedMaps:1
>> > AssignedReds:0 CompletedMaps:0 CompletedReds:0 ContAlloc:4 ContRel:0
>> > HostLocal:1 RackLocal:0
>> > 2016-01-19 20:04:27,354 INFO [AsyncDispatcher event handler]
>> > org.apache.hadoop.mapreduce.v2.app.job.impl.TaskAttemptImpl:
>> > attempt_1453244277886_0001_m_000000_3 TaskAttempt Transitioned from
>> > UNASSIGNED to ASSIGNED
>> > 2016-01-19 20:04:27,355 INFO [ContainerLauncher #6]
>> > org.apache.hadoop.mapreduce.v2.app.launcher.ContainerLauncherImpl:
>> > Processing the event EventType: CONTAINER_REMOTE_LAUNCH for container
>> > container_1453244277886_0001_01_000005 taskAttempt
>> > attempt_1453244277886_0001_m_000000_3
>> > 2016-01-19 20:04:27,355 INFO [ContainerLauncher #6]
>> > org.apache.hadoop.mapreduce.v2.app.launcher.ContainerLauncherImpl:
>> Launching
>> > attempt_1453244277886_0001_m_000000_3
>> > 2016-01-19 20:04:27,365 INFO [ContainerLauncher #6]
>> > org.apache.hadoop.mapreduce.v2.app.launcher.ContainerLauncherImpl:
>> Shuffle
>> > port returned by ContainerManager for
>> attempt_1453244277886_0001_m_000000_3
>> > : 13562
>> > 2016-01-19 20:04:27,365 INFO [AsyncDispatcher event handler]
>> > org.apache.hadoop.mapreduce.v2.app.job.impl.TaskAttemptImpl:
>> TaskAttempt:
>> > [attempt_1453244277886_0001_m_000000_3] using containerId:
>> > [container_1453244277886_0001_01_000005 on NM: [localhost:35711]
>> > 2016-01-19 20:04:27,366 INFO [AsyncDispatcher event handler]
>> > org.apache.hadoop.mapreduce.v2.app.job.impl.TaskAttemptImpl:
>> > attempt_1453244277886_0001_m_000000_3 TaskAttempt Transitioned from
>> ASSIGNED
>> > to RUNNING
>> > 2016-01-19 20:04:27,366 INFO [AsyncDispatcher event handler]
>> > org.apache.hadoop.mapreduce.v2.app.speculate.DefaultSpeculator:
>> > ATTEMPT_START task_1453244277886_0001_m_000000
>> > 2016-01-19 20:04:28,361 INFO [RMCommunicator Allocator]
>> > org.apache.hadoop.mapreduce.v2.app.rm.RMContainerRequestor:
>> getResources()
>> > for application_1453244277886_0001: ask=1 release= 0 newContainers=0
>> > finishedContainers=1 resourcelimit=<memory:768, vCores:-1> knownNMs=1
>> > 2016-01-19 20:04:28,361 INFO [RMCommunicator Allocator]
>> > org.apache.hadoop.mapreduce.v2.app.rm.RMContainerAllocator: Received
>> > completed container container_1453244277886_0001_01_000005
>> > 2016-01-19 20:04:28,361 INFO [RMCommunicator Allocator]
>> > org.apache.hadoop.mapreduce.v2.app.rm.RMContainerAllocator:
>> Recalculating
>> > schedule, headroom=768
>> > 2016-01-19 20:04:28,361 INFO [RMCommunicator Allocator]
>> > org.apache.hadoop.mapreduce.v2.app.rm.RMContainerAllocator: Reduce slow
>> > start threshold not met. completedMapsForReduceSlowstart 1
>> > 2016-01-19 20:04:28,362 INFO [RMCommunicator Allocator]
>> > org.apache.hadoop.mapreduce.v2.app.rm.RMContainerAllocator: After
>> > Scheduling: PendingReds:1 ScheduledMaps:0 ScheduledReds:0 AssignedMaps:0
>> > AssignedReds:0 CompletedMaps:0 CompletedReds:0 ContAlloc:4 ContRel:0
>> > HostLocal:1 RackLocal:0
>> > 2016-01-19 20:04:28,362 INFO [AsyncDispatcher event handler]
>> > org.apache.hadoop.mapreduce.v2.app.job.impl.TaskAttemptImpl:
>> > attempt_1453244277886_0001_m_000000_3 TaskAttempt Transitioned from
>> RUNNING
>> > to FAIL_CONTAINER_CLEANUP
>> > 2016-01-19 20:04:28,362 INFO [AsyncDispatcher event handler]
>> > org.apache.hadoop.mapreduce.v2.app.job.impl.TaskAttemptImpl: Diagnostics
>> > report from attempt_1453244277886_0001_m_000000_3: Exception from
>> > container-launch: org.apache.hadoop.util.Shell$ExitCodeException:
>> > org.apache.hadoop.util.Shell$ExitCodeException:
>> >     at org.apache.hadoop.util.Shell.runCommand(Shell.java:505)
>> >     at org.apache.hadoop.util.Shell.run(Shell.java:418)
>> >     at
>> >
>> org.apache.hadoop.util.Shell$ShellCommandExecutor.execute(Shell.java:650)
>> >     at
>> >
>> org.apache.hadoop.yarn.server.nodemanager.DefaultContainerExecutor.launchContainer(DefaultContainerExecutor.java:195)
>> >     at
>> >
>> org.apache.hadoop.yarn.server.nodemanager.containermanager.launcher.ContainerLaunch.call(ContainerLaunch.java:300)
>> >     at
>> >
>> org.apache.hadoop.yarn.server.nodemanager.containermanager.launcher.ContainerLaunch.call(ContainerLaunch.java:81)
>> >     at java.util.concurrent.FutureTask.run(FutureTask.java:262)
>> >     at
>> >
>> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
>> >     at
>> >
>> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
>> >     at java.lang.Thread.run(Thread.java:745)
>> >
>> >
>> > Container exited with a non-zero exit code 1
>> >
>> > 2016-01-19 20:04:28,364 INFO [ContainerLauncher #7]
>> > org.apache.hadoop.mapreduce.v2.app.launcher.ContainerLauncherImpl:
>> > Processing the event EventType: CONTAINER_REMOTE_CLEANUP for container
>> > container_1453244277886_0001_01_000005 taskAttempt
>> > attempt_1453244277886_0001_m_000000_3
>> > 2016-01-19 20:04:28,365 INFO [ContainerLauncher #7]
>> > org.apache.hadoop.mapreduce.v2.app.launcher.ContainerLauncherImpl:
>> KILLING
>> > attempt_1453244277886_0001_m_000000_3
>> > 2016-01-19 20:04:28,373 INFO [AsyncDispatcher event handler]
>> > org.apache.hadoop.mapreduce.v2.app.job.impl.TaskAttemptImpl:
>> > attempt_1453244277886_0001_m_000000_3 TaskAttempt Transitioned from
>> > FAIL_CONTAINER_CLEANUP to FAIL_TASK_CLEANUP
>> > 2016-01-19 20:04:28,374 INFO [CommitterEvent Processor #4]
>> > org.apache.hadoop.mapreduce.v2.app.commit.CommitterEventHandler:
>> Processing
>> > the event EventType: TASK_ABORT
>> > 2016-01-19 20:04:28,377 WARN [CommitterEvent Processor #4]
>> > org.apache.hadoop.mapreduce.lib.output.FileOutputCommitter: Could not
>> delete
>> >
>> hdfs://hdnode01:54310/user/hduser/output/wordcount/_temporary/1/_temporary/attempt_1453244277886_0001_m_000000_3
>> > 2016-01-19 20:04:28,378 INFO [AsyncDispatcher event handler]
>> > org.apache.hadoop.mapreduce.v2.app.job.impl.TaskAttemptImpl:
>> > attempt_1453244277886_0001_m_000000_3 TaskAttempt Transitioned from
>> > FAIL_TASK_CLEANUP to FAILED
>> > 2016-01-19 20:04:28,380 INFO [AsyncDispatcher event handler]
>> > org.apache.hadoop.mapreduce.v2.app.job.impl.TaskImpl:
>> > task_1453244277886_0001_m_000000 Task Transitioned from RUNNING to
>> FAILED
>> > 2016-01-19 20:04:28,381 INFO [AsyncDispatcher event handler]
>> > org.apache.hadoop.mapreduce.v2.app.job.impl.JobImpl: Num completed
>> Tasks: 1
>> > 2016-01-19 20:04:28,381 INFO [AsyncDispatcher event handler]
>> > org.apache.hadoop.mapreduce.v2.app.job.impl.JobImpl: Job failed as tasks
>> > failed. failedMaps:1 failedReduces:0
>> > 2016-01-19 20:04:28,382 INFO [AsyncDispatcher event handler]
>> > org.apache.hadoop.mapreduce.v2.app.job.impl.JobImpl:
>> > job_1453244277886_0001Job Transitioned from RUNNING to FAIL_WAIT
>> > 2016-01-19 20:04:28,383 INFO [AsyncDispatcher event handler]
>> > org.apache.hadoop.mapreduce.v2.app.job.impl.TaskImpl:
>> > task_1453244277886_0001_r_000000 Task Transitioned from SCHEDULED to
>> > KILL_WAIT
>> > 2016-01-19 20:04:28,383 INFO [AsyncDispatcher event handler]
>> > org.apache.hadoop.mapreduce.v2.app.job.impl.TaskAttemptImpl:
>> > attempt_1453244277886_0001_r_000000_0 TaskAttempt Transitioned from
>> > UNASSIGNED to KILLED
>> > 2016-01-19 20:04:28,383 INFO [Thread-50]
>> > org.apache.hadoop.mapreduce.v2.app.rm.RMContainerAllocator: Processing
>> the
>> > event EventType: CONTAINER_DEALLOCATE
>> > 2016-01-19 20:04:28,383 ERROR [Thread-50]
>> > org.apache.hadoop.mapreduce.v2.app.rm.RMContainerAllocator: Could not
>> > deallocate container for task attemptId
>> > attempt_1453244277886_0001_r_000000_0
>> > 2016-01-19 20:04:28,383 INFO [AsyncDispatcher event handler]
>> > org.apache.hadoop.mapreduce.v2.app.job.impl.TaskImpl:
>> > task_1453244277886_0001_r_000000 Task Transitioned from KILL_WAIT to
>> KILLED
>> > 2016-01-19 20:04:28,384 INFO [AsyncDispatcher event handler]
>> > org.apache.hadoop.mapreduce.v2.app.job.impl.JobImpl:
>> > job_1453244277886_0001Job Transitioned from FAIL_WAIT to FAIL_ABORT
>> > 2016-01-19 20:04:28,390 INFO [CommitterEvent Processor #0]
>> > org.apache.hadoop.mapreduce.v2.app.commit.CommitterEventHandler:
>> Processing
>> > the event EventType: JOB_ABORT
>> > 2016-01-19 20:04:28,432 INFO [AsyncDispatcher event handler]
>> > org.apache.hadoop.mapreduce.v2.app.job.impl.JobImpl:
>> > job_1453244277886_0001Job Transitioned from FAIL_ABORT to FAILED
>> > 2016-01-19 20:04:28,433 INFO [Thread-61]
>> > org.apache.hadoop.mapreduce.v2.app.MRAppMaster: We are finishing
>> cleanly so
>> > this is the last retry
>> > 2016-01-19 20:04:28,433 INFO [Thread-61]
>> > org.apache.hadoop.mapreduce.v2.app.MRAppMaster: Notify RMCommunicator
>> > isAMLastRetry: true
>> > 2016-01-19 20:04:28,433 INFO [Thread-61]
>> > org.apache.hadoop.mapreduce.v2.app.rm.RMContainerAllocator:
>> RMCommunicator
>> > notified that shouldUnregistered is: true
>> > 2016-01-19 20:04:28,433 INFO [Thread-61]
>> > org.apache.hadoop.mapreduce.v2.app.MRAppMaster: Notify JHEH
>> isAMLastRetry:
>> > true
>> > 2016-01-19 20:04:28,433 INFO [Thread-61]
>> > org.apache.hadoop.mapreduce.jobhistory.JobHistoryEventHandler:
>> > JobHistoryEventHandler notified that forceJobCompletion is true
>> > 2016-01-19 20:04:28,434 INFO [Thread-61]
>> > org.apache.hadoop.mapreduce.v2.app.MRAppMaster: Calling stop for all the
>> > services
>> > 2016-01-19 20:04:28,435 INFO [Thread-61]
>> > org.apache.hadoop.mapreduce.jobhistory.JobHistoryEventHandler: Stopping
>> > JobHistoryEventHandler. Size of the outstanding queue size is 0
>> > 2016-01-19 20:04:29,362 INFO [RMCommunicator Allocator]
>> > org.apache.hadoop.mapreduce.v2.app.rm.RMContainerAllocator: Before
>> > Scheduling: PendingReds:1 ScheduledMaps:0 ScheduledReds:0 AssignedMaps:0
>> > AssignedReds:0 CompletedMaps:1 CompletedReds:0 ContAlloc:4 ContRel:0
>> > HostLocal:1 RackLocal:0
>> > 2016-01-19 20:04:29,362 INFO [eventHandlingThread]
>> > org.apache.hadoop.mapreduce.jobhistory.JobHistoryEventHandler: Copying
>> >
>> hdfs://hdnode01:54310/tmp/hadoop-yarn/staging/hduser/.staging/job_1453244277886_0001/job_1453244277886_0001_1.jhist
>> > to
>> >
>> hdfs://hdnode01:54310/tmp/hadoop-yarn/staging/history/done_intermediate/hduser/job_1453244277886_0001-1453244648033-hduser-word+count-1453244668381-0-0-FAILED-default-1453244654166.jhist_tmp
>> > 2016-01-19 20:04:29,366 INFO [RMCommunicator Allocator]
>> > org.apache.hadoop.mapreduce.v2.app.rm.RMContainerAllocator:
>> Recalculating
>> > schedule, headroom=768
>> > 2016-01-19 20:04:29,366 INFO [RMCommunicator Allocator]
>> > org.apache.hadoop.mapreduce.v2.app.rm.RMContainerAllocator: Reduce slow
>> > start threshold reached. Scheduling reduces.
>> > 2016-01-19 20:04:29,366 INFO [RMCommunicator Allocator]
>> > org.apache.hadoop.mapreduce.v2.app.rm.RMContainerAllocator: All maps
>> > assigned. Ramping up all remaining reduces:1
>> > 2016-01-19 20:04:29,366 INFO [RMCommunicator Allocator]
>> > org.apache.hadoop.mapreduce.v2.app.rm.RMContainerAllocator: After
>> > Scheduling: PendingReds:0 ScheduledMaps:0 ScheduledReds:1 AssignedMaps:0
>> > AssignedReds:0 CompletedMaps:1 CompletedReds:0 ContAlloc:4 ContRel:0
>> > HostLocal:1 RackLocal:0
>> > 2016-01-19 20:04:29,544 INFO [eventHandlingThread]
>> > org.apache.hadoop.mapreduce.jobhistory.JobHistoryEventHandler: Copied to
>> > done location:
>> >
>> hdfs://hdnode01:54310/tmp/hadoop-yarn/staging/history/done_intermediate/hduser/job_1453244277886_0001-1453244648033-hduser-word+count-1453244668381-0-0-FAILED-default-1453244654166.jhist_tmp
>> > 2016-01-19 20:04:29,598 INFO [eventHandlingThread]
>> > org.apache.hadoop.mapreduce.jobhistory.JobHistoryEventHandler: Copying
>> >
>> hdfs://hdnode01:54310/tmp/hadoop-yarn/staging/hduser/.staging/job_1453244277886_0001/job_1453244277886_0001_1_conf.xml
>> > to
>> >
>> hdfs://hdnode01:54310/tmp/hadoop-yarn/staging/history/done_intermediate/hduser/job_1453244277886_0001_conf.xml_tmp
>> > 2016-01-19 20:04:29,801 INFO [eventHandlingThread]
>> > org.apache.hadoop.mapreduce.jobhistory.JobHistoryEventHandler: Copied to
>> > done location:
>> >
>> hdfs://hdnode01:54310/tmp/hadoop-yarn/staging/history/done_intermediate/hduser/job_1453244277886_0001_conf.xml_tmp
>> > 2016-01-19 20:04:29,907 INFO [eventHandlingThread]
>> > org.apache.hadoop.mapreduce.jobhistory.JobHistoryEventHandler: Moved
>> tmp to
>> > done:
>> >
>> hdfs://hdnode01:54310/tmp/hadoop-yarn/staging/history/done_intermediate/hduser/job_1453244277886_0001.summary_tmp
>> > to
>> >
>> hdfs://hdnode01:54310/tmp/hadoop-yarn/staging/history/done_intermediate/hduser/job_1453244277886_0001.summary
>> > 2016-01-19 20:04:30,008 INFO [eventHandlingThread]
>> > org.apache.hadoop.mapreduce.jobhistory.JobHistoryEventHandler: Moved
>> tmp to
>> > done:
>> >
>> hdfs://hdnode01:54310/tmp/hadoop-yarn/staging/history/done_intermediate/hduser/job_1453244277886_0001_conf.xml_tmp
>> > to
>> >
>> hdfs://hdnode01:54310/tmp/hadoop-yarn/staging/history/done_intermediate/hduser/job_1453244277886_0001_conf.xml
>> > 2016-01-19 20:04:30,070 INFO [eventHandlingThread]
>> > org.apache.hadoop.mapreduce.jobhistory.JobHistoryEventHandler: Moved
>> tmp to
>> > done:
>> >
>> hdfs://hdnode01:54310/tmp/hadoop-yarn/staging/history/done_intermediate/hduser/job_1453244277886_0001-1453244648033-hduser-word+count-1453244668381-0-0-FAILED-default-1453244654166.jhist_tmp
>> > to
>> >
>> hdfs://hdnode01:54310/tmp/hadoop-yarn/staging/history/done_intermediate/hduser/job_1453244277886_0001-1453244648033-hduser-word+count-1453244668381-0-0-FAILED-default-1453244654166.jhist
>> > 2016-01-19 20:04:30,071 INFO [Thread-61]
>> > org.apache.hadoop.mapreduce.jobhistory.JobHistoryEventHandler: Stopped
>> > JobHistoryEventHandler. super.stop()
>> > 2016-01-19 20:04:30,078 INFO [Thread-61]
>> > org.apache.hadoop.mapreduce.v2.app.rm.RMContainerAllocator: Setting job
>> > diagnostics to Task failed task_1453244277886_0001_m_000000
>> > Job failed as tasks failed. failedMaps:1 failedReduces:0
>> >
>> > 2016-01-19 20:04:30,080 INFO [Thread-61]
>> > org.apache.hadoop.mapreduce.v2.app.rm.RMContainerAllocator: History url
>> is
>> > http://localhost:19888/jobhistory/job/job_1453244277886_0001
>> > 2016-01-19 20:04:30,094 INFO [Thread-61]
>> > org.apache.hadoop.mapreduce.v2.app.rm.RMContainerAllocator: Waiting for
>> > application to be successfully unregistered.
>> > 2016-01-19 20:04:31,099 INFO [Thread-61]
>> > org.apache.hadoop.mapreduce.v2.app.rm.RMContainerAllocator: Final Stats:
>> > PendingReds:0 ScheduledMaps:0 ScheduledReds:1 AssignedMaps:0
>> AssignedReds:0
>> > CompletedMaps:1 CompletedReds:0 ContAlloc:4 ContRel:0 HostLocal:1
>> > RackLocal:0
>> > 2016-01-19 20:04:31,104 INFO [Thread-61]
>> > org.apache.hadoop.mapreduce.v2.app.MRAppMaster: Deleting staging
>> directory
>> > hdfs://hdnode01:54310
>> > /tmp/hadoop-yarn/staging/hduser/.staging/job_1453244277886_0001
>> > 2016-01-19 20:04:31,133 INFO [Thread-61] org.apache.hadoop.ipc.Server:
>> > Stopping server on 45584
>> > 2016-01-19 20:04:31,135 INFO [IPC Server listener on 45584]
>> > org.apache.hadoop.ipc.Server: Stopping IPC Server listener on 45584
>> > 2016-01-19 20:04:31,135 INFO [IPC Server Responder]
>> > org.apache.hadoop.ipc.Server: Stopping IPC Server Responder
>> > 2016-01-19 20:04:31,136 INFO [TaskHeartbeatHandler PingChecker]
>> > org.apache.hadoop.mapreduce.v2.app.TaskHeartbeatHandler:
>> > TaskHeartbeatHandler thread interrupted
>> >
>> >
>> > Jps results; I believe everything is OK, right?:
>> > 21267 DataNode
>> > 21609 ResourceManager
>> > 21974 JobHistoryServer
>> > 21735 NodeManager
>> > 24546 Jps
>> > 16532 org.eclipse.equinox.launcher_1.3.0.v20140415-2008.jar
>> > 21121 NameNode
>> > 22098 QuorumPeerMain
>> > 21456 SecondaryNameNode
>> >
>> >
>>
>
>

Re: Can't run hadoop examples with YARN Single node cluster

Posted by José Luis Larroque <la...@gmail.com>.
Thanks Namikaze for keeping at it, don't give up!! :D

- I have these lines in *$HOME/.bashrc*


export HADOOP_PREFIX=/usr/local/hadoop

# Other variables

export HADOOP_COMMON_HOME=${HADOOP_PREFIX}

export HADOOP_MAPRED_HOME=${HADOOP_PREFIX}

export HADOOP_HDFS_HOME=${HADOOP_PREFIX}

export HADOOP_YARN_HOME=${HADOOP_PREFIX}


  - In *hadoop-env.sh* I have:

export HADOOP_CONF_DIR=${HADOOP_CONF_DIR:-"/usr/local/hadoop/etc/hadoop"}


  - I read that SO question and all the answers to it. The only useful answer,
in my opinion, was checking the yarn classpath (a quick way to inspect it is
sketched below the output). The following line appears three times in it:

/usr/local/hadoop/etc/hadoop:
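
For reference, a sketch of one way to break that classpath into one entry per
line and count duplicates (assuming $HADOOP_HOME points at /usr/local/hadoop):

# print each classpath entry on its own line and count how often it appears
$HADOOP_HOME/bin/yarn classpath | tr ':' '\n' | sort | uniq -c | sort -rn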


I set yarn.application.classpath in yarn-site.xml because I don't know any
other way to fix it, using the default value recommended in this
<https://hadoop.apache.org/docs/r2.4.1/hadoop-yarn/hadoop-yarn-common/yarn-default.xml>
(see yarn.application.classpath); the property entry is sketched right after
the value:


$HADOOP_CONF_DIR, $HADOOP_COMMON_HOME/share/hadoop/common/*,
$HADOOP_COMMON_HOME/share/hadoop/common/lib/*,
$HADOOP_HDFS_HOME/share/hadoop/hdfs/*,
$HADOOP_HDFS_HOME/share/hadoop/hdfs/lib/*,
$HADOOP_YARN_HOME/share/hadoop/yarn/*,
$HADOOP_YARN_HOME/share/hadoop/yarn/lib/*
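
In yarn-site.xml that value goes inside a property entry roughly like this (a
sketch of the entry, assuming the default value above):

<property>
  <name>yarn.application.classpath</name>
  <value>
    $HADOOP_CONF_DIR, $HADOOP_COMMON_HOME/share/hadoop/common/*,
    $HADOOP_COMMON_HOME/share/hadoop/common/lib/*,
    $HADOOP_HDFS_HOME/share/hadoop/hdfs/*,
    $HADOOP_HDFS_HOME/share/hadoop/hdfs/lib/*,
    $HADOOP_YARN_HOME/share/hadoop/yarn/*,
    $HADOOP_YARN_HOME/share/hadoop/yarn/lib/*
  </value>
</property>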


But the classpath remains the same, and I can't find any other way to fix it.
Maybe this is the problem?


 - yarn.log-aggregation-enable was always set to true. I couldn't find
anything in the *datanode logs*; here they are:

2016-01-25 21:13:07,006 INFO
org.apache.hadoop.hdfs.server.datanode.DataNode: STARTUP_MSG:
/************************************************************
STARTUP_MSG: Starting DataNode
STARTUP_MSG:   host = jose-ubuntu/127.0.0.1
STARTUP_MSG:   args = []
STARTUP_MSG:   version = 2.4.0
STARTUP_MSG:   classpath =
/usr/local/hadoop/etc/hadoop:/usr/local/hadoop/share/hadoop/common/lib/jackson-jaxrs-1.8.8.jar:/usr/local/hadoop/share/hadoop/common/lib/jersey-json-1.9.jar:/usr/local/hadoop/share/hadoop/common/lib/commons-compress-1.4.1.jar:/usr/local/hadoop/share/hadoop/common/lib/httpcore-4.2.5.jar:/usr/local/hadoop/share/hadoop/common/lib/httpclient-4.2.5.jar:/usr/local/hadoop/share/hadoop/common/lib/commons-beanutils-1.7.0.jar:/usr/local/hadoop/share/hadoop/common/lib/jettison-1.1.jar:/usr/local/hadoop/share/hadoop/common/lib/jaxb-impl-2.2.3-1.jar:/usr/local/hadoop/share/hadoop/common/lib/activation-1.1.jar:/usr/local/hadoop/share/hadoop/common/lib/jackson-core-asl-1.8.8.jar:/usr/local/hadoop/share/hadoop/common/lib/commons-net-3.1.jar:/usr/local/hadoop/share/hadoop/common/lib/commons-cli-1.2.jar:/usr/local/hadoop/share/hadoop/common/lib/commons-codec-1.4.jar:/usr/local/hadoop/share/hadoop/common/lib/junit-4.8.2.jar:/usr/local/hadoop/share/hadoop/common/lib/paranamer-2.3.jar:/usr/local/hadoop/share/hadoop/common/lib/jasper-compiler-5.5.23.jar:/usr/local/hadoop/share/hadoop/common/lib/protobuf-java-2.5.0.jar:/usr/local/hadoop/share/hadoop/common/lib/commons-digester-1.8.jar:/usr/local/hadoop/share/hadoop/common/lib/jets3t-0.9.0.jar:/usr/local/hadoop/share/hadoop/common/lib/jackson-xc-1.8.8.jar:/usr/local/hadoop/share/hadoop/common/lib/jersey-server-1.9.jar:/usr/local/hadoop/share/hadoop/common/lib/jetty-util-6.1.26.jar:/usr/local/hadoop/share/hadoop/common/lib/mockito-all-1.8.5.jar:/usr/local/hadoop/share/hadoop/common/lib/xmlenc-0.52.jar:/usr/local/hadoop/share/hadoop/common/lib/slf4j-api-1.7.5.jar:/usr/local/hadoop/share/hadoop/common/lib/stax-api-1.0-2.jar:/usr/local/hadoop/share/hadoop/common/lib/avro-1.7.4.jar:/usr/local/hadoop/share/hadoop/common/lib/hadoop-annotations-2.4.0.jar:/usr/local/hadoop/share/hadoop/common/lib/hadoop-auth-2.4.0.jar:/usr/local/hadoop/share/hadoop/common/lib/netty-3.6.2.Final.jar:/usr/local/hadoop/share/hadoop/common/lib/jasper-runtime-5.5.23.jar:/usr/local/hadoop/share/hadoop/common/lib/slf4j-log4j12-1.7.5.jar:/usr/local/hadoop/share/hadoop/common/lib/commons-logging-1.1.3.jar:/usr/local/hadoop/share/hadoop/common/lib/guava-11.0.2.jar:/usr/local/hadoop/share/hadoop/common/lib/jackson-mapper-asl-1.8.8.jar:/usr/local/hadoop/share/hadoop/common/lib/jsp-api-2.1.jar:/usr/local/hadoop/share/hadoop/common/lib/commons-configuration-1.6.jar:/usr/local/hadoop/share/hadoop/common/lib/snappy-java-1.0.4.1.jar:/usr/local/hadoop/share/hadoop/common/lib/log4j-1.2.17.jar:/usr/local/hadoop/share/hadoop/common/lib/jsch-0.1.42.jar:/usr/local/hadoop/share/hadoop/common/lib/commons-io-2.4.jar:/usr/local/hadoop/share/hadoop/common/lib/commons-beanutils-core-1.8.0.jar:/usr/local/hadoop/share/hadoop/common/lib/jsr305-1.3.9.jar:/usr/local/hadoop/share/hadoop/common/lib/jersey-core-1.9.jar:/usr/local/hadoop/share/hadoop/common/lib/asm-3.2.jar:/usr/local/hadoop/share/hadoop/common/lib/servlet-api-2.5.jar:/usr/local/hadoop/share/hadoop/common/lib/jaxb-api-2.2.2.jar:/usr/local/hadoop/share/hadoop/common/lib/jetty-6.1.26.jar:/usr/local/hadoop/share/hadoop/common/lib/commons-lang-2.6.jar:/usr/local/hadoop/share/hadoop/common/lib/java-xmlbuilder-0.4.jar:/usr/local/hadoop/share/hadoop/common/lib/commons-collections-3.2.1.jar:/usr/local/hadoop/share/hadoop/common/lib/commons-el-1.0.jar:/usr/local/hadoop/share/hadoop/common/lib/zookeeper-3.4.5.jar:/usr/local/hadoop/share/hadoop/common/lib/commons-httpclient-3.1.jar:/usr/local/hadoop/share/hadoop/common/lib/xz-1.0.jar:/usr/local/hadoop/share/hadoop/common/l
ib/commons-math3-3.1.1.jar:/usr/local/hadoop/share/hadoop/common/hadoop-common-2.4.0-tests.jar:/usr/local/hadoop/share/hadoop/common/hadoop-nfs-2.4.0.jar:/usr/local/hadoop/share/hadoop/common/hadoop-common-2.4.0.jar:/usr/local/hadoop/share/hadoop/hdfs:/usr/local/hadoop/share/hadoop/hdfs/lib/jackson-core-asl-1.8.8.jar:/usr/local/hadoop/share/hadoop/hdfs/lib/commons-cli-1.2.jar:/usr/local/hadoop/share/hadoop/hdfs/lib/commons-codec-1.4.jar:/usr/local/hadoop/share/hadoop/hdfs/lib/protobuf-java-2.5.0.jar:/usr/local/hadoop/share/hadoop/hdfs/lib/jersey-server-1.9.jar:/usr/local/hadoop/share/hadoop/hdfs/lib/jetty-util-6.1.26.jar:/usr/local/hadoop/share/hadoop/hdfs/lib/xmlenc-0.52.jar:/usr/local/hadoop/share/hadoop/hdfs/lib/commons-daemon-1.0.13.jar:/usr/local/hadoop/share/hadoop/hdfs/lib/netty-3.6.2.Final.jar:/usr/local/hadoop/share/hadoop/hdfs/lib/jasper-runtime-5.5.23.jar:/usr/local/hadoop/share/hadoop/hdfs/lib/commons-logging-1.1.3.jar:/usr/local/hadoop/share/hadoop/hdfs/lib/guava-11.0.2.jar:/usr/local/hadoop/share/hadoop/hdfs/lib/jackson-mapper-asl-1.8.8.jar:/usr/local/hadoop/share/hadoop/hdfs/lib/jsp-api-2.1.jar:/usr/local/hadoop/share/hadoop/hdfs/lib/log4j-1.2.17.jar:/usr/local/hadoop/share/hadoop/hdfs/lib/commons-io-2.4.jar:/usr/local/hadoop/share/hadoop/hdfs/lib/jsr305-1.3.9.jar:/usr/local/hadoop/share/hadoop/hdfs/lib/jersey-core-1.9.jar:/usr/local/hadoop/share/hadoop/hdfs/lib/asm-3.2.jar:/usr/local/hadoop/share/hadoop/hdfs/lib/servlet-api-2.5.jar:/usr/local/hadoop/share/hadoop/hdfs/lib/jetty-6.1.26.jar:/usr/local/hadoop/share/hadoop/hdfs/lib/commons-lang-2.6.jar:/usr/local/hadoop/share/hadoop/hdfs/lib/commons-el-1.0.jar:/usr/local/hadoop/share/hadoop/hdfs/hadoop-hdfs-2.4.0-tests.jar:/usr/local/hadoop/share/hadoop/hdfs/hadoop-hdfs-2.4.0.jar:/usr/local/hadoop/share/hadoop/hdfs/hadoop-hdfs-nfs-2.4.0.jar:/usr/local/hadoop/share/hadoop/yarn/lib/jackson-jaxrs-1.8.8.jar:/usr/local/hadoop/share/hadoop/yarn/lib/jersey-json-1.9.jar:/usr/local/hadoop/share/hadoop/yarn/lib/commons-compress-1.4.1.jar:/usr/local/hadoop/share/hadoop/yarn/lib/jersey-client-1.9.jar:/usr/local/hadoop/share/hadoop/yarn/lib/guice-servlet-3.0.jar:/usr/local/hadoop/share/hadoop/yarn/lib/giraph-examples-1.1.0-for-hadoop-2.4.0-jar-with-dependencies.jar:/usr/local/hadoop/share/hadoop/yarn/lib/hadoop-mapreduce-examples-2.4.0.jar:/usr/local/hadoop/share/hadoop/yarn/lib/jettison-1.1.jar:/usr/local/hadoop/share/hadoop/yarn/lib/jaxb-impl-2.2.3-1.jar:/usr/local/hadoop/share/hadoop/yarn/lib/activation-1.1.jar:/usr/local/hadoop/share/hadoop/yarn/lib/jackson-core-asl-1.8.8.jar:/usr/local/hadoop/share/hadoop/yarn/lib/commons-cli-1.2.jar:/usr/local/hadoop/share/hadoop/yarn/lib/commons-codec-1.4.jar:/usr/local/hadoop/share/hadoop/yarn/lib/aopalliance-1.0.jar:/usr/local/hadoop/share/hadoop/yarn/lib/leveldbjni-all-1.8.jar:/usr/local/hadoop/share/hadoop/yarn/lib/protobuf-java-2.5.0.jar:/usr/local/hadoop/share/hadoop/yarn/lib/jackson-xc-1.8.8.jar:/usr/local/hadoop/share/hadoop/yarn/lib/jersey-server-1.9.jar:/usr/local/hadoop/share/hadoop/yarn/lib/jetty-util-6.1.26.jar:/usr/local/hadoop/share/hadoop/yarn/lib/jline-0.9.94.jar:/usr/local/hadoop/share/hadoop/yarn/lib/stax-api-1.0-2.jar:/usr/local/hadoop/share/hadoop/yarn/lib/guice-3.0.jar:/usr/local/hadoop/share/hadoop/yarn/lib/commons-logging-1.1.3.jar:/usr/local/hadoop/share/hadoop/yarn/lib/guava-11.0.2.jar:/usr/local/hadoop/share/hadoop/yarn/lib/jackson-mapper-asl-1.8.8.jar:/usr/local/hadoop/share/hadoop/yarn/lib/log4j-1.2.17.jar:/usr/local/hadoop/share/hadoop/yarn/lib/commons-io-2.4.jar:/usr/loca
l/hadoop/share/hadoop/yarn/lib/jsr305-1.3.9.jar:/usr/local/hadoop/share/hadoop/yarn/lib/jersey-core-1.9.jar:/usr/local/hadoop/share/hadoop/yarn/lib/asm-3.2.jar:/usr/local/hadoop/share/hadoop/yarn/lib/servlet-api-2.5.jar:/usr/local/hadoop/share/hadoop/yarn/lib/jaxb-api-2.2.2.jar:/usr/local/hadoop/share/hadoop/yarn/lib/jetty-6.1.26.jar:/usr/local/hadoop/share/hadoop/yarn/lib/commons-lang-2.6.jar:/usr/local/hadoop/share/hadoop/yarn/lib/commons-collections-3.2.1.jar:/usr/local/hadoop/share/hadoop/yarn/lib/jersey-guice-1.9.jar:/usr/local/hadoop/share/hadoop/yarn/lib/zookeeper-3.4.5.jar:/usr/local/hadoop/share/hadoop/yarn/lib/commons-httpclient-3.1.jar:/usr/local/hadoop/share/hadoop/yarn/lib/javax.inject-1.jar:/usr/local/hadoop/share/hadoop/yarn/lib/xz-1.0.jar:/usr/local/hadoop/share/hadoop/yarn/hadoop-yarn-server-common-2.4.0.jar:/usr/local/hadoop/share/hadoop/yarn/hadoop-yarn-client-2.4.0.jar:/usr/local/hadoop/share/hadoop/yarn/hadoop-yarn-api-2.4.0.jar:/usr/local/hadoop/share/hadoop/yarn/hadoop-yarn-applications-unmanaged-am-launcher-2.4.0.jar:/usr/local/hadoop/share/hadoop/yarn/hadoop-yarn-server-nodemanager-2.4.0.jar:/usr/local/hadoop/share/hadoop/yarn/hadoop-yarn-server-tests-2.4.0.jar:/usr/local/hadoop/share/hadoop/yarn/hadoop-yarn-server-resourcemanager-2.4.0.jar:/usr/local/hadoop/share/hadoop/yarn/hadoop-yarn-server-web-proxy-2.4.0.jar:/usr/local/hadoop/share/hadoop/yarn/hadoop-yarn-applications-distributedshell-2.4.0.jar:/usr/local/hadoop/share/hadoop/yarn/hadoop-yarn-server-applicationhistoryservice-2.4.0.jar:/usr/local/hadoop/share/hadoop/yarn/hadoop-yarn-common-2.4.0.jar:/usr/local/hadoop/share/hadoop/mapreduce/lib/commons-compress-1.4.1.jar:/usr/local/hadoop/share/hadoop/mapreduce/lib/guice-servlet-3.0.jar:/usr/local/hadoop/share/hadoop/mapreduce/lib/jackson-core-asl-1.8.8.jar:/usr/local/hadoop/share/hadoop/mapreduce/lib/hamcrest-core-1.1.jar:/usr/local/hadoop/share/hadoop/mapreduce/lib/paranamer-2.3.jar:/usr/local/hadoop/share/hadoop/mapreduce/lib/aopalliance-1.0.jar:/usr/local/hadoop/share/hadoop/mapreduce/lib/protobuf-java-2.5.0.jar:/usr/local/hadoop/share/hadoop/mapreduce/lib/jersey-server-1.9.jar:/usr/local/hadoop/share/hadoop/mapreduce/lib/avro-1.7.4.jar:/usr/local/hadoop/share/hadoop/mapreduce/lib/hadoop-annotations-2.4.0.jar:/usr/local/hadoop/share/hadoop/mapreduce/lib/guice-3.0.jar:/usr/local/hadoop/share/hadoop/mapreduce/lib/junit-4.10.jar:/usr/local/hadoop/share/hadoop/mapreduce/lib/netty-3.6.2.Final.jar:/usr/local/hadoop/share/hadoop/mapreduce/lib/jackson-mapper-asl-1.8.8.jar:/usr/local/hadoop/share/hadoop/mapreduce/lib/snappy-java-1.0.4.1.jar:/usr/local/hadoop/share/hadoop/mapreduce/lib/log4j-1.2.17.jar:/usr/local/hadoop/share/hadoop/mapreduce/lib/commons-io-2.4.jar:/usr/local/hadoop/share/hadoop/mapreduce/lib/jersey-core-1.9.jar:/usr/local/hadoop/share/hadoop/mapreduce/lib/asm-3.2.jar:/usr/local/hadoop/share/hadoop/mapreduce/lib/jersey-guice-1.9.jar:/usr/local/hadoop/share/hadoop/mapreduce/lib/javax.inject-1.jar:/usr/local/hadoop/share/hadoop/mapreduce/lib/xz-1.0.jar:/usr/local/hadoop/share/hadoop/mapreduce/giraph-examples-1.1.0-for-hadoop-2.4.0-jar-with-dependencies.jar:/usr/local/hadoop/share/hadoop/mapreduce/hadoop-mapreduce-examples-2.4.0.jar:/usr/local/hadoop/share/hadoop/mapreduce/procesamiento_salida_grafo_caminos_navegacionales-0.0.1.jar:/usr/local/hadoop/share/hadoop/mapreduce/zookeeper-3.4.6.jar:/usr/local/hadoop/share/hadoop/mapreduce/hadoop-mapreduce-client-shuffle-2.4.0.jar:/usr/local/hadoop/share/hadoop/mapreduce/procesamiento_grafo_wikiquote-0.0.1.jar:/u
sr/local/hadoop/share/hadoop/mapreduce/hadoop-mapreduce-client-app-2.4.0.jar:/usr/local/hadoop/share/hadoop/mapreduce/hadoop-mapreduce-client-common-2.4.0.jar:/usr/local/hadoop/share/hadoop/mapreduce/hadoop-mapreduce-client-core-2.4.0.jar:/usr/local/hadoop/share/hadoop/mapreduce/generacion_grafo_wikiquote-0.0.1-SNAPSHOT.jar:/usr/local/hadoop/share/hadoop/mapreduce/hadoop-mapreduce-client-hs-plugins-2.4.0.jar:/usr/local/hadoop/share/hadoop/mapreduce/hadoop-mapreduce-client-jobclient-2.4.0.jar:/usr/local/hadoop/share/hadoop/mapreduce/hadoop-mapreduce-client-hs-2.4.0.jar:/usr/local/hadoop/share/hadoop/mapreduce/hadoop-mapreduce-client-jobclient-2.4.0-tests.jar:/usr/local/hadoop/contrib/capacity-scheduler/*.jar:/usr/local/hadoop/contrib/capacity-scheduler/*.jar:/usr/local/hadoop/contrib/capacity-scheduler/*.jar
STARTUP_MSG:   build = http://svn.apache.org/repos/asf/hadoop/common
-r 1583262; compiled by 'jenkins' on 2014-03-31T08:29Z
STARTUP_MSG:   java = 1.7.0_79
************************************************************/
2016-01-25 21:13:07,015 INFO
org.apache.hadoop.hdfs.server.datanode.DataNode: registered UNIX
signal handlers for [TERM, HUP, INT]
2016-01-25 21:13:07,188 WARN
org.apache.hadoop.hdfs.server.common.Util: Path
/usr/local/hadoop/dfs/name/data should be specified as a URI in
configuration files. Please update hdfs configuration.
2016-01-25 21:13:07,356 WARN org.apache.hadoop.util.NativeCodeLoader:
Unable to load native-hadoop library for your platform... using
builtin-java classes where applicable
2016-01-25 21:13:07,648 INFO
org.apache.hadoop.metrics2.impl.MetricsConfig: loaded properties from
hadoop-metrics2.properties
2016-01-25 21:13:07,723 INFO
org.apache.hadoop.metrics2.impl.MetricsSystemImpl: Scheduled snapshot
period at 10 second(s).
2016-01-25 21:13:07,723 INFO
org.apache.hadoop.metrics2.impl.MetricsSystemImpl: DataNode metrics
system started
2016-01-25 21:13:07,727 INFO
org.apache.hadoop.hdfs.server.datanode.DataNode: Configured hostname
is localhost
2016-01-25 21:13:07,728 INFO
org.apache.hadoop.hdfs.server.datanode.DataNode: Starting DataNode
with maxLockedMemory = 0
2016-01-25 21:13:07,757 INFO
org.apache.hadoop.hdfs.server.datanode.DataNode: Opened streaming
server at /0.0.0.0:50010
2016-01-25 21:13:07,760 INFO
org.apache.hadoop.hdfs.server.datanode.DataNode: Balancing bandwith is
1048576 bytes/s
2016-01-25 21:13:07,839 INFO org.mortbay.log: Logging to
org.slf4j.impl.Log4jLoggerAdapter(org.mortbay.log) via
org.mortbay.log.Slf4jLog
2016-01-25 21:13:07,843 INFO org.apache.hadoop.http.HttpRequestLog:
Http request log for http.requests.datanode is not defined
2016-01-25 21:13:07,853 INFO org.apache.hadoop.http.HttpServer2: Added
global filter 'safety'
(class=org.apache.hadoop.http.HttpServer2$QuotingInputFilter)
2016-01-25 21:13:07,856 INFO org.apache.hadoop.http.HttpServer2: Added
filter static_user_filter
(class=org.apache.hadoop.http.lib.StaticUserWebFilter$StaticUserFilter)
to context datanode
2016-01-25 21:13:07,856 INFO org.apache.hadoop.http.HttpServer2: Added
filter static_user_filter
(class=org.apache.hadoop.http.lib.StaticUserWebFilter$StaticUserFilter)
to context logs
2016-01-25 21:13:07,856 INFO org.apache.hadoop.http.HttpServer2: Added
filter static_user_filter
(class=org.apache.hadoop.http.lib.StaticUserWebFilter$StaticUserFilter)
to context static
2016-01-25 21:13:07,872 INFO org.apache.hadoop.http.HttpServer2:
addJerseyResourcePackage:
packageName=org.apache.hadoop.hdfs.server.datanode.web.resources;org.apache.hadoop.hdfs.web.resources,
pathSpec=/webhdfs/v1/*
2016-01-25 21:13:07,875 INFO org.apache.hadoop.http.HttpServer2: Jetty
bound to port 50075
2016-01-25 21:13:07,875 INFO org.mortbay.log: jetty-6.1.26
2016-01-25 21:13:08,137 INFO org.mortbay.log: Started
SelectChannelConnector@0.0.0.0:50075
2016-01-25 21:13:08,225 INFO org.apache.hadoop.ipc.CallQueueManager:
Using callQueue class java.util.concurrent.LinkedBlockingQueue
2016-01-25 21:13:08,239 INFO org.apache.hadoop.ipc.Server: Starting
Socket Reader #1 for port 50020
2016-01-25 21:13:08,288 INFO
org.apache.hadoop.hdfs.server.datanode.DataNode: Opened IPC server at
/0.0.0.0:50020
2016-01-25 21:13:08,300 INFO
org.apache.hadoop.hdfs.server.datanode.DataNode: Refresh request
received for nameservices: null
2016-01-25 21:13:08,316 INFO
org.apache.hadoop.hdfs.server.datanode.DataNode: Starting
BPOfferServices for nameservices: <default>
2016-01-25 21:13:08,321 WARN
org.apache.hadoop.hdfs.server.common.Util: Path
/usr/local/hadoop/dfs/name/data should be specified as a URI in
configuration files. Please update hdfs configuration.
2016-01-25 21:13:08,325 INFO
org.apache.hadoop.hdfs.server.datanode.DataNode: Block pool
<registering> (Datanode Uuid unassigned) service to
hdnode01/192.168.0.10:54310 starting to offer service
2016-01-25 21:13:08,326 INFO org.apache.hadoop.ipc.Server: IPC Server
Responder: starting
2016-01-25 21:13:08,326 INFO org.apache.hadoop.ipc.Server: IPC Server
listener on 50020: starting
2016-01-25 21:13:08,719 INFO
org.apache.hadoop.hdfs.server.common.Storage: Data-node version: -55
and name-node layout version: -56
2016-01-25 21:13:08,828 INFO
org.apache.hadoop.hdfs.server.common.Storage: Lock on
/usr/local/hadoop/dfs/name/data/in_use.lock acquired by nodename
10365@jose-ubuntu
2016-01-25 21:13:08,833 INFO
org.apache.hadoop.hdfs.server.common.Storage: Storage directory
/usr/local/hadoop/dfs/name/data is not formatted
2016-01-25 21:13:08,833 INFO
org.apache.hadoop.hdfs.server.common.Storage: Formatting ...
2016-01-25 21:13:09,017 INFO
org.apache.hadoop.hdfs.server.common.Storage: Analyzing storage
directories for bpid BP-216406264-127.0.0.1-1453767164845
2016-01-25 21:13:09,017 INFO
org.apache.hadoop.hdfs.server.common.Storage: Locking is disabled
2016-01-25 21:13:09,018 INFO
org.apache.hadoop.hdfs.server.common.Storage: Storage directory
/usr/local/hadoop/dfs/name/data/current/BP-216406264-127.0.0.1-1453767164845
is not formatted.
2016-01-25 21:13:09,018 INFO
org.apache.hadoop.hdfs.server.common.Storage: Formatting ...
2016-01-25 21:13:09,018 INFO
org.apache.hadoop.hdfs.server.common.Storage: Formatting block pool
BP-216406264-127.0.0.1-1453767164845 directory
/usr/local/hadoop/dfs/name/data/current/BP-216406264-127.0.0.1-1453767164845/current
2016-01-25 21:13:09,072 INFO
org.apache.hadoop.hdfs.server.common.Storage: Restored 0 block files
from trash.
2016-01-25 21:13:09,198 INFO
org.apache.hadoop.hdfs.server.datanode.DataNode: Setting up storage:
nsid=1479061672;bpid=BP-216406264-127.0.0.1-1453767164845;lv=-55;nsInfo=lv=-56;cid=CID-8fa0e75b-6942-452a-a4e6-8cd0c24de652;nsid=1479061672;c=0;bpid=BP-216406264-127.0.0.1-1453767164845;dnuuid=null
2016-01-25 21:13:09,248 INFO
org.apache.hadoop.hdfs.server.datanode.DataNode: Generated and
persisted new Datanode UUID 6b4236c8-2183-49ba-84d7-a273298ba37a
2016-01-25 21:13:09,268 WARN
org.apache.hadoop.hdfs.server.common.Util: Path
/usr/local/hadoop/dfs/name/data should be specified as a URI in
configuration files. Please update hdfs configuration.
2016-01-25 21:13:09,270 INFO
org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsDatasetImpl:
Added volume - /usr/local/hadoop/dfs/name/data/current, StorageType:
DISK
2016-01-25 21:13:09,279 INFO
org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsDatasetImpl:
Registered FSDatasetState MBean
2016-01-25 21:13:09,282 INFO
org.apache.hadoop.hdfs.server.datanode.DirectoryScanner: Periodic
Directory Tree Verification scan starting at 1453784080282 with
interval 21600000
2016-01-25 21:13:09,283 INFO
org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsDatasetImpl:
Adding block pool BP-216406264-127.0.0.1-1453767164845
2016-01-25 21:13:09,284 INFO
org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsDatasetImpl:
Scanning block pool BP-216406264-127.0.0.1-1453767164845 on volume
/usr/local/hadoop/dfs/name/data/current...
2016-01-25 21:13:09,299 INFO
org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsDatasetImpl:
Time taken to scan block pool BP-216406264-127.0.0.1-1453767164845 on
/usr/local/hadoop/dfs/name/data/current: 15ms
2016-01-25 21:13:09,300 INFO
org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsDatasetImpl:
Total time to scan all replicas for block pool
BP-216406264-127.0.0.1-1453767164845: 17ms
2016-01-25 21:13:09,301 INFO
org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsDatasetImpl:
Adding replicas to map for block pool
BP-216406264-127.0.0.1-1453767164845 on volume
/usr/local/hadoop/dfs/name/data/current...
2016-01-25 21:13:09,301 INFO
org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsDatasetImpl:
Time to add replicas to map for block pool
BP-216406264-127.0.0.1-1453767164845 on volume
/usr/local/hadoop/dfs/name/data/current: 0ms
2016-01-25 21:13:09,301 INFO
org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsDatasetImpl:
Total time to add all replicas to map: 1ms
2016-01-25 21:13:09,305 INFO
org.apache.hadoop.hdfs.server.datanode.DataNode: Block pool
BP-216406264-127.0.0.1-1453767164845 (Datanode Uuid null) service to
hdnode01/192.168.0.10:54310 beginning handshake with NN
2016-01-25 21:13:09,355 INFO
org.apache.hadoop.hdfs.server.datanode.DataNode: Block pool Block pool
BP-216406264-127.0.0.1-1453767164845 (Datanode Uuid null) service to
hdnode01/192.168.0.10:54310 successfully registered with NN
2016-01-25 21:13:09,356 INFO
org.apache.hadoop.hdfs.server.datanode.DataNode: For namenode
hdnode01/192.168.0.10:54310 using DELETEREPORT_INTERVAL of 300000 msec
 BLOCKREPORT_INTERVAL of 21600000msec CACHEREPORT_INTERVAL of
10000msec Initial delay: 0msec; heartBeatInterval=3000
2016-01-25 21:13:09,444 INFO
org.apache.hadoop.hdfs.server.datanode.DataNode: Namenode Block pool
BP-216406264-127.0.0.1-1453767164845 (Datanode Uuid
6b4236c8-2183-49ba-84d7-a273298ba37a) service to
hdnode01/192.168.0.10:54310 trying to claim ACTIVE state with txid=1
2016-01-25 21:13:09,444 INFO
org.apache.hadoop.hdfs.server.datanode.DataNode: Acknowledging ACTIVE
Namenode Block pool BP-216406264-127.0.0.1-1453767164845 (Datanode
Uuid 6b4236c8-2183-49ba-84d7-a273298ba37a) service to
hdnode01/192.168.0.10:54310
2016-01-25 21:13:09,487 INFO
org.apache.hadoop.hdfs.server.datanode.DataNode: Sent 1 blockreports 0
blocks total. Took 1 msec to generate and 42 msecs for RPC and NN
processing.  Got back commands none
2016-01-25 21:13:09,492 INFO org.apache.hadoop.util.GSet: Computing
capacity for map BlockMap
2016-01-25 21:13:09,493 INFO org.apache.hadoop.util.GSet: VM type       = 64-bit
2016-01-25 21:13:09,494 INFO org.apache.hadoop.util.GSet: 0.5% max
memory 1.8 GB = 9.1 MB
2016-01-25 21:13:09,494 INFO org.apache.hadoop.util.GSet: capacity
 = 2^20 = 1048576 entries
2016-01-25 21:13:09,495 INFO
org.apache.hadoop.hdfs.server.datanode.BlockPoolSliceScanner: Periodic
Block Verification Scanner initialized with interval 504 hours for
block pool BP-216406264-127.0.0.1-1453767164845
2016-01-25 21:13:09,499 INFO
org.apache.hadoop.hdfs.server.datanode.DataBlockScanner: Added
bpid=BP-216406264-127.0.0.1-1453767164845 to blockPoolScannerMap, new
size=1
2016-01-25 21:13:32,355 INFO
org.apache.hadoop.hdfs.server.datanode.DataNode: Receiving
BP-216406264-127.0.0.1-1453767164845:blk_1073741825_1001 src:
/192.168.0.10:58649 dest: /192.168.0.10:50010
2016-01-25 21:13:32,482 INFO
org.apache.hadoop.hdfs.server.datanode.DataNode.clienttrace: src:
/192.168.0.10:58649, dest: /192.168.0.10:50010, bytes: 343691, op:
HDFS_WRITE, cliID: DFSClient_NONMAPREDUCE_538002429_1, offset: 0,
srvID: 6b4236c8-2183-49ba-84d7-a273298ba37a, blockid:
BP-216406264-127.0.0.1-1453767164845:blk_1073741825_1001, duration:
98632367
2016-01-25 21:13:32,482 INFO
org.apache.hadoop.hdfs.server.datanode.DataNode: PacketResponder:
BP-216406264-127.0.0.1-1453767164845:blk_1073741825_1001,
type=LAST_IN_PIPELINE, downstreams=0:[] terminating
2016-01-25 21:13:34,291 INFO
org.apache.hadoop.hdfs.server.datanode.BlockPoolSliceScanner:
Verification succeeded for
BP-216406264-127.0.0.1-1453767164845:blk_1073741825_1001
2016-01-25 21:14:10,176 INFO
org.apache.hadoop.hdfs.server.datanode.DataNode: Receiving
BP-216406264-127.0.0.1-1453767164845:blk_1073741826_1002 src:
/192.168.0.10:58663 dest: /192.168.0.10:50010
2016-01-25 21:14:10,220 INFO
org.apache.hadoop.hdfs.server.datanode.DataNode.clienttrace: src:
/192.168.0.10:58663, dest: /192.168.0.10:50010, bytes: 270263, op:
HDFS_WRITE, cliID: DFSClient_NONMAPREDUCE_342504113_1, offset: 0,
srvID: 6b4236c8-2183-49ba-84d7-a273298ba37a, blockid:
BP-216406264-127.0.0.1-1453767164845:blk_1073741826_1002, duration:
42378742
2016-01-25 21:14:10,221 INFO
org.apache.hadoop.hdfs.server.datanode.DataNode: PacketResponder:
BP-216406264-127.0.0.1-1453767164845:blk_1073741826_1002,
type=LAST_IN_PIPELINE, downstreams=0:[] terminating
2016-01-25 21:14:10,714 INFO
org.apache.hadoop.hdfs.server.datanode.DataNode: Receiving
BP-216406264-127.0.0.1-1453767164845:blk_1073741827_1003 src:
/192.168.0.10:58664 dest: /192.168.0.10:50010
2016-01-25 21:14:10,721 INFO
org.apache.hadoop.hdfs.server.datanode.DataNode.clienttrace: src:
/192.168.0.10:58664, dest: /192.168.0.10:50010, bytes: 121, op:
HDFS_WRITE, cliID: DFSClient_NONMAPREDUCE_342504113_1, offset: 0,
srvID: 6b4236c8-2183-49ba-84d7-a273298ba37a, blockid:
BP-216406264-127.0.0.1-1453767164845:blk_1073741827_1003, duration:
2656758
2016-01-25 21:14:10,721 INFO
org.apache.hadoop.hdfs.server.datanode.DataNode: PacketResponder:
BP-216406264-127.0.0.1-1453767164845:blk_1073741827_1003,
type=LAST_IN_PIPELINE, downstreams=0:[] terminating
2016-01-25 21:14:10,853 INFO
org.apache.hadoop.hdfs.server.datanode.DataNode: Receiving
BP-216406264-127.0.0.1-1453767164845:blk_1073741828_1004 src:
/192.168.0.10:58665 dest: /192.168.0.10:50010
2016-01-25 21:14:10,860 INFO
org.apache.hadoop.hdfs.server.datanode.DataNode.clienttrace: src:
/192.168.0.10:58665, dest: /192.168.0.10:50010, bytes: 26, op:
HDFS_WRITE, cliID: DFSClient_NONMAPREDUCE_342504113_1, offset: 0,
srvID: 6b4236c8-2183-49ba-84d7-a273298ba37a, blockid:
BP-216406264-127.0.0.1-1453767164845:blk_1073741828_1004, duration:
3257396
2016-01-25 21:14:10,861 INFO
org.apache.hadoop.hdfs.server.datanode.DataNode: PacketResponder:
BP-216406264-127.0.0.1-1453767164845:blk_1073741828_1004,
type=LAST_IN_PIPELINE, downstreams=0:[] terminating
2016-01-25 21:14:11,717 INFO
org.apache.hadoop.hdfs.server.datanode.DataNode: Receiving
BP-216406264-127.0.0.1-1453767164845:blk_1073741829_1005 src:
/192.168.0.10:58666 dest: /192.168.0.10:50010
2016-01-25 21:14:11,726 INFO
org.apache.hadoop.hdfs.server.datanode.DataNode.clienttrace: src:
/192.168.0.10:58666, dest: /192.168.0.10:50010, bytes: 77957, op:
HDFS_WRITE, cliID: DFSClient_NONMAPREDUCE_342504113_1, offset: 0,
srvID: 6b4236c8-2183-49ba-84d7-a273298ba37a, blockid:
BP-216406264-127.0.0.1-1453767164845:blk_1073741829_1005, duration:
6180229
2016-01-25 21:14:11,727 INFO
org.apache.hadoop.hdfs.server.datanode.DataNode: PacketResponder:
BP-216406264-127.0.0.1-1453767164845:blk_1073741829_1005,
type=LAST_IN_PIPELINE, downstreams=0:[] terminating
2016-01-25 21:14:14,298 INFO
org.apache.hadoop.hdfs.server.datanode.BlockPoolSliceScanner:
Verification succeeded for
BP-216406264-127.0.0.1-1453767164845:blk_1073741829_1005
2016-01-25 21:14:14,299 INFO
org.apache.hadoop.hdfs.server.datanode.BlockPoolSliceScanner:
Verification succeeded for
BP-216406264-127.0.0.1-1453767164845:blk_1073741827_1003
2016-01-25 21:14:14,305 INFO
org.apache.hadoop.hdfs.server.datanode.BlockPoolSliceScanner:
Verification succeeded for
BP-216406264-127.0.0.1-1453767164845:blk_1073741826_1002
2016-01-25 21:14:14,305 INFO
org.apache.hadoop.hdfs.server.datanode.BlockPoolSliceScanner:
Verification succeeded for
BP-216406264-127.0.0.1-1453767164845:blk_1073741828_1004
2016-01-25 21:14:16,099 INFO
org.apache.hadoop.hdfs.server.datanode.DataNode.clienttrace: src:
/192.168.0.10:50010, dest: /192.168.0.10:58674, bytes: 272375, op:
HDFS_READ, cliID: DFSClient_NONMAPREDUCE_76231625_102, offset: 0,
srvID: 6b4236c8-2183-49ba-84d7-a273298ba37a, blockid:
BP-216406264-127.0.0.1-1453767164845:blk_1073741826_1002, duration:
2878920
2016-01-25 21:14:16,253 INFO
org.apache.hadoop.hdfs.server.datanode.DataNode.clienttrace: src:
/192.168.0.10:50010, dest: /192.168.0.10:58674, bytes: 30, op:
HDFS_READ, cliID: DFSClient_NONMAPREDUCE_76231625_102, offset: 0,
srvID: 6b4236c8-2183-49ba-84d7-a273298ba37a, blockid:
BP-216406264-127.0.0.1-1453767164845:blk_1073741828_1004, duration:
236423
2016-01-25 21:14:16,312 INFO
org.apache.hadoop.hdfs.server.datanode.DataNode.clienttrace: src:
/192.168.0.10:50010, dest: /192.168.0.10:58674, bytes: 125, op:
HDFS_READ, cliID: DFSClient_NONMAPREDUCE_76231625_102, offset: 0,
srvID: 6b4236c8-2183-49ba-84d7-a273298ba37a, blockid:
BP-216406264-127.0.0.1-1453767164845:blk_1073741827_1003, duration:
909236
2016-01-25 21:14:16,364 INFO
org.apache.hadoop.hdfs.server.datanode.DataNode.clienttrace: src:
/192.168.0.10:50010, dest: /192.168.0.10:58674, bytes: 78569, op:
HDFS_READ, cliID: DFSClient_NONMAPREDUCE_76231625_102, offset: 0,
srvID: 6b4236c8-2183-49ba-84d7-a273298ba37a, blockid:
BP-216406264-127.0.0.1-1453767164845:blk_1073741829_1005, duration:
1489437
2016-01-25 21:14:20,174 INFO
org.apache.hadoop.hdfs.server.datanode.DataNode.clienttrace: src:
/192.168.0.10:50010, dest: /192.168.0.10:58676, bytes: 30, op:
HDFS_READ, cliID: DFSClient_NONMAPREDUCE_694066886_1, offset: 0,
srvID: 6b4236c8-2183-49ba-84d7-a273298ba37a, blockid:
BP-216406264-127.0.0.1-1453767164845:blk_1073741828_1004, duration:
899980
2016-01-25 21:14:22,692 INFO
org.apache.hadoop.hdfs.server.datanode.DataNode: Receiving
BP-216406264-127.0.0.1-1453767164845:blk_1073741830_1006 src:
/192.168.0.10:58679 dest: /192.168.0.10:50010
2016-01-25 21:14:22,754 INFO
org.apache.hadoop.hdfs.server.datanode.DataNode.clienttrace: src:
/192.168.0.10:58679, dest: /192.168.0.10:50010, bytes: 92684, op:
HDFS_WRITE, cliID: DFSClient_NONMAPREDUCE_694066886_1, offset: 0,
srvID: 6b4236c8-2183-49ba-84d7-a273298ba37a, blockid:
BP-216406264-127.0.0.1-1453767164845:blk_1073741830_1006, duration:
60114851
2016-01-25 21:14:22,754 INFO
org.apache.hadoop.hdfs.server.datanode.DataNode: PacketResponder:
BP-216406264-127.0.0.1-1453767164845:blk_1073741830_1006,
type=LAST_IN_PIPELINE, downstreams=0:[] terminating
2016-01-25 21:14:24,319 INFO
org.apache.hadoop.hdfs.server.datanode.BlockPoolSliceScanner:
Verification succeeded for
BP-216406264-127.0.0.1-1453767164845:blk_1073741830_1006
2016-01-25 21:14:25,808 INFO
org.apache.hadoop.hdfs.server.datanode.DataNode: Receiving
BP-216406264-127.0.0.1-1453767164845:blk_1073741831_1007 src:
/192.168.0.10:58681 dest: /192.168.0.10:50010
2016-01-25 21:14:35,846 INFO
org.apache.hadoop.hdfs.server.datanode.DataNode.clienttrace: src:
/192.168.0.10:58681, dest: /192.168.0.10:50010, bytes: 21176, op:
HDFS_WRITE, cliID: DFSClient_NONMAPREDUCE_694066886_1, offset: 0,
srvID: 6b4236c8-2183-49ba-84d7-a273298ba37a, blockid:
BP-216406264-127.0.0.1-1453767164845:blk_1073741831_1007, duration:
9975409048
2016-01-25 21:14:35,846 INFO
org.apache.hadoop.hdfs.server.datanode.DataNode: PacketResponder:
BP-216406264-127.0.0.1-1453767164845:blk_1073741831_1007,
type=LAST_IN_PIPELINE, downstreams=0:[] terminating
2016-01-25 21:14:36,066 INFO
org.apache.hadoop.hdfs.server.datanode.DataNode: Receiving
BP-216406264-127.0.0.1-1453767164845:blk_1073741832_1008 src:
/192.168.0.10:58682 dest: /192.168.0.10:50010
2016-01-25 21:14:36,075 INFO
org.apache.hadoop.hdfs.server.datanode.DataNode.clienttrace: src:
/192.168.0.10:58682, dest: /192.168.0.10:50010, bytes: 332, op:
HDFS_WRITE, cliID: DFSClient_NONMAPREDUCE_694066886_1, offset: 0,
srvID: 6b4236c8-2183-49ba-84d7-a273298ba37a, blockid:
BP-216406264-127.0.0.1-1453767164845:blk_1073741832_1008, duration:
4992595
2016-01-25 21:14:36,075 INFO
org.apache.hadoop.hdfs.server.datanode.DataNode: PacketResponder:
BP-216406264-127.0.0.1-1453767164845:blk_1073741832_1008,
type=LAST_IN_PIPELINE, downstreams=0:[] terminating
2016-01-25 21:14:36,548 INFO
org.apache.hadoop.hdfs.server.datanode.DataNode.clienttrace: src:
/192.168.0.10:50010, dest: /192.168.0.10:58683, bytes: 21344, op:
HDFS_READ, cliID: DFSClient_NONMAPREDUCE_694066886_1, offset: 0,
srvID: 6b4236c8-2183-49ba-84d7-a273298ba37a, blockid:
BP-216406264-127.0.0.1-1453767164845:blk_1073741831_1007, duration:
497225
2016-01-25 21:14:36,564 INFO
org.apache.hadoop.hdfs.server.datanode.DataNode: Receiving
BP-216406264-127.0.0.1-1453767164845:blk_1073741833_1009 src:
/192.168.0.10:58684 dest: /192.168.0.10:50010
2016-01-25 21:14:36,572 INFO
org.apache.hadoop.hdfs.server.datanode.DataNode.clienttrace: src:
/192.168.0.10:58684, dest: /192.168.0.10:50010, bytes: 21176, op:
HDFS_WRITE, cliID: DFSClient_NONMAPREDUCE_694066886_1, offset: 0,
srvID: 6b4236c8-2183-49ba-84d7-a273298ba37a, blockid:
BP-216406264-127.0.0.1-1453767164845:blk_1073741833_1009, duration:
2649337
2016-01-25 21:14:36,573 INFO
org.apache.hadoop.hdfs.server.datanode.DataNode: PacketResponder:
BP-216406264-127.0.0.1-1453767164845:blk_1073741833_1009,
type=LAST_IN_PIPELINE, downstreams=0:[] terminating
2016-01-25 21:14:36,622 INFO
org.apache.hadoop.hdfs.server.datanode.DataNode.clienttrace: src:
/192.168.0.10:50010, dest: /192.168.0.10:58683, bytes: 93412, op:
HDFS_READ, cliID: DFSClient_NONMAPREDUCE_694066886_1, offset: 0,
srvID: 6b4236c8-2183-49ba-84d7-a273298ba37a, blockid:
BP-216406264-127.0.0.1-1453767164845:blk_1073741830_1006, duration:
379439
2016-01-25 21:14:36,638 INFO
org.apache.hadoop.hdfs.server.datanode.DataNode: Receiving
BP-216406264-127.0.0.1-1453767164845:blk_1073741834_1010 src:
/192.168.0.10:58685 dest: /192.168.0.10:50010
2016-01-25 21:14:36,646 INFO
org.apache.hadoop.hdfs.server.datanode.DataNode.clienttrace: src:
/192.168.0.10:58685, dest: /192.168.0.10:50010, bytes: 92684, op:
HDFS_WRITE, cliID: DFSClient_NONMAPREDUCE_694066886_1, offset: 0,
srvID: 6b4236c8-2183-49ba-84d7-a273298ba37a, blockid:
BP-216406264-127.0.0.1-1453767164845:blk_1073741834_1010, duration:
3135698
2016-01-25 21:14:36,646 INFO
org.apache.hadoop.hdfs.server.datanode.DataNode: PacketResponder:
BP-216406264-127.0.0.1-1453767164845:blk_1073741834_1010,
type=LAST_IN_PIPELINE, downstreams=0:[] terminating
2016-01-25 21:14:39,335 INFO
org.apache.hadoop.hdfs.server.datanode.BlockPoolSliceScanner:
Verification succeeded for
BP-216406264-127.0.0.1-1453767164845:blk_1073741833_1009
2016-01-25 21:14:39,336 INFO
org.apache.hadoop.hdfs.server.datanode.BlockPoolSliceScanner:
Verification succeeded for
BP-216406264-127.0.0.1-1453767164845:blk_1073741831_1007
2016-01-25 21:14:39,337 INFO
org.apache.hadoop.hdfs.server.datanode.BlockPoolSliceScanner:
Verification succeeded for
BP-216406264-127.0.0.1-1453767164845:blk_1073741832_1008
2016-01-25 21:14:39,338 INFO
org.apache.hadoop.hdfs.server.datanode.BlockPoolSliceScanner:
Verification succeeded for
BP-216406264-127.0.0.1-1453767164845:blk_1073741834_1010
2016-01-25 21:14:39,376 INFO
org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsDatasetAsyncDiskService:
Scheduling blk_1073741826_1002 file
/usr/local/hadoop/dfs/name/data/current/BP-216406264-127.0.0.1-1453767164845/current/finalized/blk_1073741826
for deletion
2016-01-25 21:14:39,379 INFO
org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsDatasetAsyncDiskService:
Scheduling blk_1073741827_1003 file
/usr/local/hadoop/dfs/name/data/current/BP-216406264-127.0.0.1-1453767164845/current/finalized/blk_1073741827
for deletion
2016-01-25 21:14:39,379 INFO
org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsDatasetAsyncDiskService:
Deleted BP-216406264-127.0.0.1-1453767164845 blk_1073741826_1002 file
/usr/local/hadoop/dfs/name/data/current/BP-216406264-127.0.0.1-1453767164845/current/finalized/blk_1073741826
2016-01-25 21:14:39,379 INFO
org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsDatasetAsyncDiskService:
Scheduling blk_1073741828_1004 file
/usr/local/hadoop/dfs/name/data/current/BP-216406264-127.0.0.1-1453767164845/current/finalized/blk_1073741828
for deletion
2016-01-25 21:14:39,380 INFO
org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsDatasetAsyncDiskService:
Deleted BP-216406264-127.0.0.1-1453767164845 blk_1073741827_1003 file
/usr/local/hadoop/dfs/name/data/current/BP-216406264-127.0.0.1-1453767164845/current/finalized/blk_1073741827
2016-01-25 21:14:39,380 INFO
org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsDatasetAsyncDiskService:
Scheduling blk_1073741829_1005 file
/usr/local/hadoop/dfs/name/data/current/BP-216406264-127.0.0.1-1453767164845/current/finalized/blk_1073741829
for deletion
2016-01-25 21:14:39,380 INFO
org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsDatasetAsyncDiskService:
Deleted BP-216406264-127.0.0.1-1453767164845 blk_1073741828_1004 file
/usr/local/hadoop/dfs/name/data/current/BP-216406264-127.0.0.1-1453767164845/current/finalized/blk_1073741828
2016-01-25 21:14:39,380 INFO
org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsDatasetAsyncDiskService:
Scheduling blk_1073741830_1006 file
/usr/local/hadoop/dfs/name/data/current/BP-216406264-127.0.0.1-1453767164845/current/finalized/blk_1073741830
for deletion
2016-01-25 21:14:39,381 INFO
org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsDatasetAsyncDiskService:
Deleted BP-216406264-127.0.0.1-1453767164845 blk_1073741829_1005 file
/usr/local/hadoop/dfs/name/data/current/BP-216406264-127.0.0.1-1453767164845/current/finalized/blk_1073741829
2016-01-25 21:14:39,381 INFO
org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsDatasetAsyncDiskService:
Scheduling blk_1073741831_1007 file
/usr/local/hadoop/dfs/name/data/current/BP-216406264-127.0.0.1-1453767164845/current/finalized/blk_1073741831
for deletion
2016-01-25 21:14:39,382 INFO
org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsDatasetAsyncDiskService:
Deleted BP-216406264-127.0.0.1-1453767164845 blk_1073741830_1006 file
/usr/local/hadoop/dfs/name/data/current/BP-216406264-127.0.0.1-1453767164845/current/finalized/blk_1073741830
2016-01-25 21:14:39,382 INFO
org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsDatasetAsyncDiskService:
Deleted BP-216406264-127.0.0.1-1453767164845 blk_1073741831_1007 file
/usr/local/hadoop/dfs/name/data/current/BP-216406264-127.0.0.1-1453767164845/current/finalized/blk_1073741831
2016-01-25 21:14:44,797 INFO
org.apache.hadoop.hdfs.server.datanode.DataNode: Receiving
BP-216406264-127.0.0.1-1453767164845:blk_1073741835_1011 src:
/192.168.0.10:58688 dest: /192.168.0.10:50010
2016-01-25 21:14:44,834 INFO
org.apache.hadoop.hdfs.server.datanode.DataNode.clienttrace: src:
/192.168.0.10:58688, dest: /192.168.0.10:50010, bytes: 57450, op:
HDFS_WRITE, cliID: DFSClient_NONMAPREDUCE_-433405715_88, offset: 0,
srvID: 6b4236c8-2183-49ba-84d7-a273298ba37a, blockid:
BP-216406264-127.0.0.1-1453767164845:blk_1073741835_1011, duration:
34522284
2016-01-25 21:14:44,834 INFO
org.apache.hadoop.hdfs.server.datanode.DataNode: PacketResponder:
BP-216406264-127.0.0.1-1453767164845:blk_1073741835_1011,
type=LAST_IN_PIPELINE, downstreams=0:[] terminating
2016-01-25 21:14:49,343 INFO
org.apache.hadoop.hdfs.server.datanode.BlockPoolSliceScanner:
Verification succeeded for
BP-216406264-127.0.0.1-1453767164845:blk_1073741835_1011
2016-01-25 21:16:33,785 INFO
org.apache.hadoop.hdfs.server.datanode.DataNode.clienttrace: src:
/192.168.0.10:50010, dest: /192.168.0.10:58694, bytes: 336, op:
HDFS_READ, cliID: DFSClient_NONMAPREDUCE_-1832227986_1, offset: 0,
srvID: 6b4236c8-2183-49ba-84d7-a273298ba37a, blockid:
BP-216406264-127.0.0.1-1453767164845:blk_1073741832_1008, duration:
284719
2016-01-25 21:16:36,371 INFO
org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsDatasetAsyncDiskService:
Scheduling blk_1073741832_1008 file
/usr/local/hadoop/dfs/name/data/current/BP-216406264-127.0.0.1-1453767164845/current/finalized/blk_1073741832
for deletion
2016-01-25 21:16:36,372 INFO
org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsDatasetAsyncDiskService:
Deleted BP-216406264-127.0.0.1-1453767164845 blk_1073741832_1008 file
/usr/local/hadoop/dfs/name/data/current/BP-216406264-127.0.0.1-1453767164845/current/finalized/blk_1073741832




2016-01-21 18:52 GMT-03:00 Namikaze Minato <ll...@gmail.com>:

> It could be a classpath issue (see
> http://stackoverflow.com/a/25090151/4486184); I strongly suspect that
> is the case.
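
To make that classpath check concrete, here is a small sketch. It is not
from the thread, only a generic set of read-only commands assuming the
/usr/local/hadoop layout shown in the logs above. One extra thing worth
looking at while in those files: a bare number in a JVM-options property
(for example mapreduce.map.java.opts or yarn.app.mapreduce.am.command-opts
holding "256" instead of something like "-Xmx256m") would make java treat
256 as the main class, which is exactly the message in the failed
containers; that is only a hypothesis, not something confirmed here.

    # Classpath the hadoop/yarn client scripts resolve on this node
    $HADOOP_HOME/bin/hadoop classpath

    # Any explicit container classpath overrides in the site configs
    grep -A 3 "yarn.application.classpath" $HADOOP_HOME/etc/hadoop/yarn-site.xml
    grep -A 3 "mapreduce.application.classpath" $HADOOP_HOME/etc/hadoop/mapred-site.xml

    # JVM-options properties; a bare "256" here would reproduce
    # "Could not find or load main class 256"
    grep -E -B 1 -A 2 "java.opts|command-opts" \
        $HADOOP_HOME/etc/hadoop/mapred-site.xml $HADOOP_HOME/etc/hadoop/yarn-site.xml
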
>
> You could drill down to the exact root cause by looking at the
> datanode logs (see
>
> http://mail-archives.apache.org/mod_mbox/hadoop-user/201410.mbox/%3CCAEMetGubzq12LXbLRk6N4ejOoKse9dLEWMW8_WE6aRj=+RQtVw@mail.gmail.com%3E
> )
> But I'm not sure they would show anything beyond the error we already have...
>
> Check whether your environment has the correct values for the following
> variables (a small check is sketched just after this reply):
> HADOOP_CONF_DIR
> HADOOP_COMMON_HOME
> HADOOP_HDFS_HOME
> HADOOP_MAPRED_HOME
> HADOOP_YARN_HOME
>
> I'm afraid I can't help you much more than this myself, sorry...
>
> LLoyd
>
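
A minimal way to check those variables, again not from the thread and
assuming the same /usr/local/hadoop install; it only prints values and
changes nothing:

    # What the current shell sees
    echo "HADOOP_CONF_DIR=$HADOOP_CONF_DIR"
    echo "HADOOP_COMMON_HOME=$HADOOP_COMMON_HOME"
    echo "HADOOP_HDFS_HOME=$HADOOP_HDFS_HOME"
    echo "HADOOP_MAPRED_HOME=$HADOOP_MAPRED_HOME"
    echo "HADOOP_YARN_HOME=$HADOOP_YARN_HOME"

    # What the daemons themselves are started with
    grep -E "HADOOP_(CONF_DIR|COMMON_HOME|HDFS_HOME|MAPRED_HOME|YARN_HOME)" \
        $HADOOP_HOME/etc/hadoop/hadoop-env.sh $HADOOP_HOME/etc/hadoop/yarn-env.sh
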
> On 20 January 2016 at 02:08, José Luis Larroque <la...@gmail.com>
> wrote:
> > Hi guys, thanks for your answers.
> >
> > Wordcount logs:
> >
> > 16/01/19 21:58:32 INFO client.RMProxy: Connecting to ResourceManager at
> > hdnode01/192.168.0.10:8050
> > SLF4J: Class path contains multiple SLF4J bindings.
> > SLF4J: Found binding in
> >
> [jar:file:/usr/local/hadoop/share/hadoop/common/lib/slf4j-log4j12-1.7.5.jar!/org/slf4j/impl/StaticLoggerBinder.class]
> > SLF4J: Found binding in
> >
> [jar:file:/usr/local/hadoop/share/hadoop/yarn/lib/giraph-examples-1.1.0-for-hadoop-2.4.0-jar-with-dependencies.jar!/org/slf4j/impl/StaticLoggerBinder.class]
> > SLF4J: Found binding in
> >
> [jar:file:/usr/local/hadoop/share/hadoop/mapreduce/giraph-examples-1.1.0-for-hadoop-2.4.0-jar-with-dependencies.jar!/org/slf4j/impl/StaticLoggerBinder.class]
> > SLF4J: See http://www.slf4j.org/codes.html#multiple_bindings for an
> > explanation.
> > SLF4J: Actual binding is of type [org.slf4j.impl.Log4jLoggerFactory]
> > 16/01/19 21:58:32 WARN util.NativeCodeLoader: Unable to load
> native-hadoop
> > library for your platform... using builtin-java classes where applicable
> > hduser@jose-ubuntu:/usr/local/hadoop$ nano
> >
> /home/hduser/Desktop/Tesina/casos_de_prueba/resultados/resultado_cluster_modo_yarn_wordcount
> > hduser@jose-ubuntu:/usr/local/hadoop$ nano
> >
> /home/hduser/Desktop/Tesina/casos_de_prueba/resultados/resultado_cluster_modo_yarn_wordcount
> > hduser@jose-ubuntu:/usr/local/hadoop$ cat
> >
> /home/hduser/Desktop/Tesina/casos_de_prueba/resultados/resultado_cluster_modo_yarn_wordcount
> >
> >
> > Container: container_1453244277886_0001_01_000002 on localhost_35711
> > ======================================================================
> > LogType: stderr
> > LogLength: 45
> > Log Contents:
> > Error: Could not find or load main class 256
> >
> > LogType: stdout
> > LogLength: 0
> > Log Contents:
> >
> >
> >
> > Container: container_1453244277886_0001_01_000003 on localhost_35711
> > ======================================================================
> > LogType: stderr
> > LogLength: 45
> > Log Contents:
> > Error: Could not find or load main class 256
> >
> > LogType: stdout
> > LogLength: 0
> > Log Contents:
> >
> >
> >
> > Container: container_1453244277886_0001_01_000004 on localhost_35711
> > ======================================================================
> > LogType: stderr
> > LogLength: 45
> > Log Contents:
> > Error: Could not find or load main class 256
> >
> > LogType: stdout
> > LogLength: 0
> > Log Contents:
> >
> >
> >
> > Container: container_1453244277886_0001_01_000005 on localhost_35711
> > ======================================================================
> > LogType: stderr
> > LogLength: 45
> > Log Contents:
> > Error: Could not find or load main class 256
> >
> > LogType: stdout
> > LogLength: 0
> > Log Contents:
> >
> >
> >
> > Container: container_1453244277886_0001_01_000001 on localhost_35711
> > ======================================================================
> > LogType: stderr
> > LogLength: 929
> > Log Contents:
> > SLF4J: Class path contains multiple SLF4J bindings.
> > SLF4J: Found binding in
> >
> [jar:file:/usr/local/hadoop/share/hadoop/common/lib/slf4j-log4j12-1.7.5.jar!/org/slf4j/impl/StaticLoggerBinder.class]
> > SLF4J: Found binding in
> >
> [jar:file:/usr/local/hadoop/share/hadoop/yarn/lib/giraph-examples-1.1.0-for-hadoop-2.4.0-jar-with-dependencies.jar!/org/slf4j/impl/StaticLoggerBinder.class]
> > SLF4J: Found binding in
> >
> [jar:file:/usr/local/hadoop/share/hadoop/mapreduce/giraph-examples-1.1.0-for-hadoop-2.4.0-jar-with-dependencies.jar!/org/slf4j/impl/StaticLoggerBinder.class]
> > SLF4J: See http://www.slf4j.org/codes.html#multiple_bindings for an
> > explanation.
> > SLF4J: Actual binding is of type [org.slf4j.impl.Log4jLoggerFactory]
> > log4j:WARN No appenders could be found for logger
> > (org.apache.hadoop.ipc.Server).
> > log4j:WARN Please initialize the log4j system properly.
> > log4j:WARN See http://logging.apache.org/log4j/1.2/faq.html#noconfig for
> > more info.
> >
> > LogType: stdout
> > LogLength: 0
> > Log Contents:
> >
> > LogType: syslog
> > LogLength: 56780
> > Log Contents:
> > 2016-01-19 20:04:11,329 INFO [main]
> > org.apache.hadoop.mapreduce.v2.app.MRAppMaster: Created MRAppMaster for
> > application appattempt_1453244277886_0001_000001
> > 2016-01-19 20:04:11,657 WARN [main] org.apache.hadoop.conf.Configuration:
> > job.xml:an attempt to override final parameter:
> > mapreduce.job.end-notification.max.retry.interval;  Ignoring.
> > 2016-01-19 20:04:11,674 WARN [main] org.apache.hadoop.conf.Configuration:
> > job.xml:an attempt to override final parameter:
> > mapreduce.job.end-notification.max.attempts;  Ignoring.
> > 2016-01-19 20:04:11,765 WARN [main]
> org.apache.hadoop.util.NativeCodeLoader:
> > Unable to load native-hadoop library for your platform... using
> builtin-java
> > classes where applicable
> > 2016-01-19 20:04:11,776 INFO [main]
> > org.apache.hadoop.mapreduce.v2.app.MRAppMaster: Executing with tokens:
> > 2016-01-19 20:04:11,776 INFO [main]
> > org.apache.hadoop.mapreduce.v2.app.MRAppMaster: Kind: YARN_AM_RM_TOKEN,
> > Service: , Ident:
> > (org.apache.hadoop.yarn.security.AMRMTokenIdentifier@73e8f4b9)
> > 2016-01-19 20:04:11,801 INFO [main]
> > org.apache.hadoop.mapreduce.v2.app.MRAppMaster: The specific max
> attempts: 2
> > for application: 1. Attempt num: 1 is last retry: false
> > 2016-01-19 20:04:11,806 INFO [main]
> > org.apache.hadoop.mapreduce.v2.app.MRAppMaster: Using mapred
> > newApiCommitter.
> > 2016-01-19 20:04:11,934 WARN [main] org.apache.hadoop.conf.Configuration:
> > job.xml:an attempt to override final parameter: dfs.datanode.data.dir;
> > Ignoring.
> > 2016-01-19 20:04:11,939 WARN [main] org.apache.hadoop.conf.Configuration:
> > job.xml:an attempt to override final parameter:
> > mapreduce.job.end-notification.max.retry.interval;  Ignoring.
> > 2016-01-19 20:04:11,948 WARN [main] org.apache.hadoop.conf.Configuration:
> > job.xml:an attempt to override final parameter: dfs.namenode.name.dir;
> > Ignoring.
> > 2016-01-19 20:04:11,953 WARN [main] org.apache.hadoop.conf.Configuration:
> > job.xml:an attempt to override final parameter:
> > mapreduce.job.end-notification.max.attempts;  Ignoring.
> > 2016-01-19 20:04:12,464 INFO [main]
> > org.apache.hadoop.mapreduce.v2.app.MRAppMaster: OutputCommitter set in
> > config null
> > 2016-01-19 20:04:12,526 INFO [main]
> > org.apache.hadoop.mapreduce.v2.app.MRAppMaster: OutputCommitter is
> > org.apache.hadoop.mapreduce.lib.output.FileOutputCommitter
> > 2016-01-19 20:04:12,548 INFO [main]
> > org.apache.hadoop.yarn.event.AsyncDispatcher: Registering class
> > org.apache.hadoop.mapreduce.jobhistory.EventType for class
> > org.apache.hadoop.mapreduce.jobhistory.JobHistoryEventHandler
> > 2016-01-19 20:04:12,549 INFO [main]
> > org.apache.hadoop.yarn.event.AsyncDispatcher: Registering class
> > org.apache.hadoop.mapreduce.v2.app.job.event.JobEventType for class
> > org.apache.hadoop.mapreduce.v2.app.MRAppMaster$JobEventDispatcher
> > 2016-01-19 20:04:12,550 INFO [main]
> > org.apache.hadoop.yarn.event.AsyncDispatcher: Registering class
> > org.apache.hadoop.mapreduce.v2.app.job.event.TaskEventType for class
> > org.apache.hadoop.mapreduce.v2.app.MRAppMaster$TaskEventDispatcher
> > 2016-01-19 20:04:12,551 INFO [main]
> > org.apache.hadoop.yarn.event.AsyncDispatcher: Registering class
> > org.apache.hadoop.mapreduce.v2.app.job.event.TaskAttemptEventType for
> class
> > org.apache.hadoop.mapreduce.v2.app.MRAppMaster$TaskAttemptEventDispatcher
> > 2016-01-19 20:04:12,552 INFO [main]
> > org.apache.hadoop.yarn.event.AsyncDispatcher: Registering class
> > org.apache.hadoop.mapreduce.v2.app.commit.CommitterEventType for class
> > org.apache.hadoop.mapreduce.v2.app.commit.CommitterEventHandler
> > 2016-01-19 20:04:12,557 INFO [main]
> > org.apache.hadoop.yarn.event.AsyncDispatcher: Registering class
> > org.apache.hadoop.mapreduce.v2.app.speculate.Speculator$EventType for
> class
> > org.apache.hadoop.mapreduce.v2.app.MRAppMaster$SpeculatorEventDispatcher
> > 2016-01-19 20:04:12,558 INFO [main]
> > org.apache.hadoop.yarn.event.AsyncDispatcher: Registering class
> > org.apache.hadoop.mapreduce.v2.app.rm.ContainerAllocator$EventType for
> class
> > org.apache.hadoop.mapreduce.v2.app.MRAppMaster$ContainerAllocatorRouter
> > 2016-01-19 20:04:12,559 INFO [main]
> > org.apache.hadoop.yarn.event.AsyncDispatcher: Registering class
> > org.apache.hadoop.mapreduce.v2.app.launcher.ContainerLauncher$EventType
> for
> > class
> org.apache.hadoop.mapreduce.v2.app.MRAppMaster$ContainerLauncherRouter
> > 2016-01-19 20:04:12,615 INFO [main]
> > org.apache.hadoop.mapreduce.jobhistory.JobHistoryEventHandler: Perms
> after
> > creating 488, Expected: 504
> > 2016-01-19 20:04:12,615 INFO [main]
> > org.apache.hadoop.mapreduce.jobhistory.JobHistoryEventHandler: Explicitly
> > setting permissions to : 504, rwxrwx---
> > 2016-01-19 20:04:12,731 INFO [main]
> > org.apache.hadoop.yarn.event.AsyncDispatcher: Registering class
> > org.apache.hadoop.mapreduce.v2.app.job.event.JobFinishEvent$Type for
> class
> > org.apache.hadoop.mapreduce.v2.app.MRAppMaster$JobFinishEventHandler
> > 2016-01-19 20:04:12,956 INFO [main]
> > org.apache.hadoop.metrics2.impl.MetricsConfig: loaded properties from
> > hadoop-metrics2.properties
> > 2016-01-19 20:04:13,018 INFO [main]
> > org.apache.hadoop.metrics2.impl.MetricsSystemImpl: Scheduled snapshot
> period
> > at 10 second(s).
> > 2016-01-19 20:04:13,018 INFO [main]
> > org.apache.hadoop.metrics2.impl.MetricsSystemImpl: MRAppMaster metrics
> > system started
> > 2016-01-19 20:04:13,026 INFO [main]
> > org.apache.hadoop.mapreduce.v2.app.job.impl.JobImpl: Adding job token for
> > job_1453244277886_0001 to jobTokenSecretManager
> > 2016-01-19 20:04:13,139 INFO [main]
> > org.apache.hadoop.mapreduce.v2.app.job.impl.JobImpl: Not uberizing
> > job_1453244277886_0001 because: not enabled;
> > 2016-01-19 20:04:13,154 INFO [main]
> > org.apache.hadoop.mapreduce.v2.app.job.impl.JobImpl: Input size for job
> > job_1453244277886_0001 = 343691. Number of splits = 1
> > 2016-01-19 20:04:13,156 INFO [main]
> > org.apache.hadoop.mapreduce.v2.app.job.impl.JobImpl: Number of reduces
> for
> > job job_1453244277886_0001 = 1
> > 2016-01-19 20:04:13,156 INFO [main]
> > org.apache.hadoop.mapreduce.v2.app.job.impl.JobImpl:
> > job_1453244277886_0001Job Transitioned from NEW to INITED
> > 2016-01-19 20:04:13,157 INFO [main]
> > org.apache.hadoop.mapreduce.v2.app.MRAppMaster: MRAppMaster launching
> > normal, non-uberized, multi-container job job_1453244277886_0001.
> > 2016-01-19 20:04:13,186 INFO [main]
> org.apache.hadoop.ipc.CallQueueManager:
> > Using callQueue class java.util.concurrent.LinkedBlockingQueue
> > 2016-01-19 20:04:13,195 INFO [Socket Reader #1 for port 56461]
> > org.apache.hadoop.ipc.Server: Starting Socket Reader #1 for port 56461
> > 2016-01-19 20:04:13,237 INFO [main]
> > org.apache.hadoop.yarn.factories.impl.pb.RpcServerFactoryPBImpl: Adding
> > protocol org.apache.hadoop.mapreduce.v2.api.MRClientProtocolPB to the
> server
> > 2016-01-19 20:04:13,238 INFO [IPC Server Responder]
> > org.apache.hadoop.ipc.Server: IPC Server Responder: starting
> > 2016-01-19 20:04:13,238 INFO [IPC Server listener on 56461]
> > org.apache.hadoop.ipc.Server: IPC Server listener on 56461: starting
> > 2016-01-19 20:04:13,239 INFO [main]
> > org.apache.hadoop.mapreduce.v2.app.client.MRClientService: Instantiated
> > MRClientService at jose-ubuntu/127.0.0.1:56461
> > 2016-01-19 20:04:13,300 INFO [main] org.mortbay.log: Logging to
> > org.slf4j.impl.Log4jLoggerAdapter(org.mortbay.log) via
> > org.mortbay.log.Slf4jLog
> > 2016-01-19 20:04:13,304 INFO [main]
> org.apache.hadoop.http.HttpRequestLog:
> > Http request log for http.requests.mapreduce is not defined
> > 2016-01-19 20:04:13,315 INFO [main] org.apache.hadoop.http.HttpServer2:
> > Added global filter 'safety'
> > (class=org.apache.hadoop.http.HttpServer2$QuotingInputFilter)
> > 2016-01-19 20:04:13,320 INFO [main] org.apache.hadoop.http.HttpServer2:
> > Added filter AM_PROXY_FILTER
> > (class=org.apache.hadoop.yarn.server.webproxy.amfilter.AmIpFilter) to
> > context mapreduce
> > 2016-01-19 20:04:13,321 INFO [main] org.apache.hadoop.http.HttpServer2:
> > Added filter AM_PROXY_FILTER
> > (class=org.apache.hadoop.yarn.server.webproxy.amfilter.AmIpFilter) to
> > context static
> > 2016-01-19 20:04:13,324 INFO [main] org.apache.hadoop.http.HttpServer2:
> > adding path spec: /mapreduce/*
> > 2016-01-19 20:04:13,324 INFO [main] org.apache.hadoop.http.HttpServer2:
> > adding path spec: /ws/*
> > 2016-01-19 20:04:13,335 INFO [main] org.apache.hadoop.http.HttpServer2:
> > Jetty bound to port 44070
> > 2016-01-19 20:04:13,335 INFO [main] org.mortbay.log: jetty-6.1.26
> > 2016-01-19 20:04:13,370 INFO [main] org.mortbay.log: Extract
> >
> jar:file:/usr/local/hadoop/share/hadoop/yarn/hadoop-yarn-common-2.4.0.jar!/webapps/mapreduce
> > to /tmp/Jetty_0_0_0_0_44070_mapreduce____rdpvio/webapp
> > 2016-01-19 20:04:13,647 INFO [main] org.mortbay.log: Started
> > SelectChannelConnector@0.0.0.0:44070
> > 2016-01-19 20:04:13,647 INFO [main]
> org.apache.hadoop.yarn.webapp.WebApps:
> > Web app /mapreduce started at 44070
> > 2016-01-19 20:04:13,956 INFO [main]
> org.apache.hadoop.yarn.webapp.WebApps:
> > Registered webapp guice modules
> > 2016-01-19 20:04:13,960 INFO [AsyncDispatcher event handler]
> > org.apache.hadoop.mapreduce.v2.app.speculate.DefaultSpeculator:
> JOB_CREATE
> > job_1453244277886_0001
> > 2016-01-19 20:04:13,961 INFO [main]
> org.apache.hadoop.ipc.CallQueueManager:
> > Using callQueue class java.util.concurrent.LinkedBlockingQueue
> > 2016-01-19 20:04:13,961 INFO [Socket Reader #1 for port 45584]
> > org.apache.hadoop.ipc.Server: Starting Socket Reader #1 for port 45584
> > 2016-01-19 20:04:13,966 INFO [IPC Server Responder]
> > org.apache.hadoop.ipc.Server: IPC Server Responder: starting
> > 2016-01-19 20:04:13,966 INFO [IPC Server listener on 45584]
> > org.apache.hadoop.ipc.Server: IPC Server listener on 45584: starting
> > 2016-01-19 20:04:13,987 INFO [main]
> > org.apache.hadoop.mapreduce.v2.app.rm.RMContainerRequestor:
> > nodeBlacklistingEnabled:true
> > 2016-01-19 20:04:13,987 INFO [main]
> > org.apache.hadoop.mapreduce.v2.app.rm.RMContainerRequestor:
> > maxTaskFailuresPerNode is 3
> > 2016-01-19 20:04:13,988 INFO [main]
> > org.apache.hadoop.mapreduce.v2.app.rm.RMContainerRequestor:
> > blacklistDisablePercent is 33
> > 2016-01-19 20:04:14,052 WARN [main] org.apache.hadoop.conf.Configuration:
> > job.xml:an attempt to override final parameter: dfs.datanode.data.dir;
> > Ignoring.
> > 2016-01-19 20:04:14,054 WARN [main] org.apache.hadoop.conf.Configuration:
> > job.xml:an attempt to override final parameter:
> > mapreduce.job.end-notification.max.retry.interval;  Ignoring.
> > 2016-01-19 20:04:14,057 WARN [main] org.apache.hadoop.conf.Configuration:
> > job.xml:an attempt to override final parameter: dfs.namenode.name.dir;
> > Ignoring.
> > 2016-01-19 20:04:14,059 WARN [main] org.apache.hadoop.conf.Configuration:
> > job.xml:an attempt to override final parameter:
> > mapreduce.job.end-notification.max.attempts;  Ignoring.
> > 2016-01-19 20:04:14,062 INFO [main]
> org.apache.hadoop.yarn.client.RMProxy:
> > Connecting to ResourceManager at hdnode01/192.168.0.10:8030
> > 2016-01-19 20:04:14,158 INFO [main]
> > org.apache.hadoop.mapreduce.v2.app.rm.RMContainerAllocator:
> > maxContainerCapability: 2000
> > 2016-01-19 20:04:14,158 INFO [main]
> > org.apache.hadoop.mapreduce.v2.app.rm.RMContainerAllocator: queue:
> default
> > 2016-01-19 20:04:14,162 INFO [main]
> > org.apache.hadoop.mapreduce.v2.app.launcher.ContainerLauncherImpl: Upper
> > limit on the thread pool size is 500
> > 2016-01-19 20:04:14,164 INFO [main]
> > org.apache.hadoop.yarn.client.api.impl.ContainerManagementProtocolProxy:
> > yarn.client.max-nodemanagers-proxies : 500
> > 2016-01-19 20:04:14,172 INFO [AsyncDispatcher event handler]
> > org.apache.hadoop.mapreduce.v2.app.job.impl.JobImpl:
> > job_1453244277886_0001Job Transitioned from INITED to SETUP
> > 2016-01-19 20:04:14,174 INFO [CommitterEvent Processor #0]
> > org.apache.hadoop.mapreduce.v2.app.commit.CommitterEventHandler:
> Processing
> > the event EventType: JOB_SETUP
> > 2016-01-19 20:04:14,210 INFO [AsyncDispatcher event handler]
> > org.apache.hadoop.mapreduce.v2.app.job.impl.JobImpl:
> > job_1453244277886_0001Job Transitioned from SETUP to RUNNING
> > 2016-01-19 20:04:14,227 INFO [AsyncDispatcher event handler]
> > org.apache.hadoop.yarn.util.RackResolver: Resolved localhost to
> > /default-rack
> > 2016-01-19 20:04:14,230 INFO [AsyncDispatcher event handler]
> > org.apache.hadoop.mapreduce.v2.app.job.impl.TaskImpl:
> > task_1453244277886_0001_m_000000 Task Transitioned from NEW to SCHEDULED
> > 2016-01-19 20:04:14,230 INFO [AsyncDispatcher event handler]
> > org.apache.hadoop.mapreduce.v2.app.job.impl.TaskImpl:
> > task_1453244277886_0001_r_000000 Task Transitioned from NEW to SCHEDULED
> > 2016-01-19 20:04:14,232 INFO [AsyncDispatcher event handler]
> > org.apache.hadoop.mapreduce.v2.app.job.impl.TaskAttemptImpl:
> > attempt_1453244277886_0001_m_000000_0 TaskAttempt Transitioned from NEW
> to
> > UNASSIGNED
> > 2016-01-19 20:04:14,232 INFO [AsyncDispatcher event handler]
> > org.apache.hadoop.mapreduce.v2.app.job.impl.TaskAttemptImpl:
> > attempt_1453244277886_0001_r_000000_0 TaskAttempt Transitioned from NEW
> to
> > UNASSIGNED
> > 2016-01-19 20:04:14,233 INFO [Thread-50]
> > org.apache.hadoop.mapreduce.v2.app.rm.RMContainerAllocator:
> > mapResourceReqt:512
> > 2016-01-19 20:04:14,245 INFO [Thread-50]
> > org.apache.hadoop.mapreduce.v2.app.rm.RMContainerAllocator:
> > reduceResourceReqt:512
> > 2016-01-19 20:04:14,324 INFO [eventHandlingThread]
> > org.apache.hadoop.mapreduce.jobhistory.JobHistoryEventHandler: Event
> Writer
> > setup for JobId: job_1453244277886_0001, File:
> >
> hdfs://hdnode01:54310/tmp/hadoop-yarn/staging/hduser/.staging/job_1453244277886_0001/job_1453244277886_0001_1.jhist
> > 2016-01-19 20:04:15,162 INFO [RMCommunicator Allocator]
> > org.apache.hadoop.mapreduce.v2.app.rm.RMContainerAllocator: Before
> > Scheduling: PendingReds:1 ScheduledMaps:1 ScheduledReds:0 AssignedMaps:0
> > AssignedReds:0 CompletedMaps:0 CompletedReds:0 ContAlloc:0 ContRel:0
> > HostLocal:0 RackLocal:0
> > 2016-01-19 20:04:15,217 INFO [RMCommunicator Allocator]
> > org.apache.hadoop.mapreduce.v2.app.rm.RMContainerRequestor:
> getResources()
> > for application_1453244277886_0001: ask=3 release= 0 newContainers=0
> > finishedContainers=0 resourcelimit=<memory:1280, vCores:0> knownNMs=1
> > 2016-01-19 20:04:15,217 INFO [RMCommunicator Allocator]
> > org.apache.hadoop.mapreduce.v2.app.rm.RMContainerAllocator: Recalculating
> > schedule, headroom=1280
> > 2016-01-19 20:04:15,217 INFO [RMCommunicator Allocator]
> > org.apache.hadoop.mapreduce.v2.app.rm.RMContainerAllocator: Reduce slow
> > start threshold not met. completedMapsForReduceSlowstart 1
> > 2016-01-19 20:04:16,240 INFO [RMCommunicator Allocator]
> > org.apache.hadoop.mapreduce.v2.app.rm.RMContainerAllocator: Got allocated
> > containers 1
> > 2016-01-19 20:04:16,241 INFO [RMCommunicator Allocator]
> > org.apache.hadoop.mapreduce.v2.app.rm.RMContainerAllocator: Assigned
> > container container_1453244277886_0001_01_000002 to
> > attempt_1453244277886_0001_m_000000_0
> > 2016-01-19 20:04:16,243 INFO [RMCommunicator Allocator]
> > org.apache.hadoop.mapreduce.v2.app.rm.RMContainerAllocator: Recalculating
> > schedule, headroom=768
> > 2016-01-19 20:04:16,243 INFO [RMCommunicator Allocator]
> > org.apache.hadoop.mapreduce.v2.app.rm.RMContainerAllocator: Reduce slow
> > start threshold not met. completedMapsForReduceSlowstart 1
> > 2016-01-19 20:04:16,243 INFO [RMCommunicator Allocator]
> > org.apache.hadoop.mapreduce.v2.app.rm.RMContainerAllocator: After
> > Scheduling: PendingReds:1 ScheduledMaps:0 ScheduledReds:0 AssignedMaps:1
> > AssignedReds:0 CompletedMaps:0 CompletedReds:0 ContAlloc:1 ContRel:0
> > HostLocal:1 RackLocal:0
> > 2016-01-19 20:04:16,291 INFO [AsyncDispatcher event handler]
> > org.apache.hadoop.yarn.util.RackResolver: Resolved localhost to
> > /default-rack
> > 2016-01-19 20:04:16,316 INFO [AsyncDispatcher event handler]
> > org.apache.hadoop.mapreduce.v2.app.job.impl.TaskAttemptImpl: The job-jar
> > file on the remote FS is
> >
> hdfs://hdnode01:54310/tmp/hadoop-yarn/staging/hduser/.staging/job_1453244277886_0001/job.jar
> > 2016-01-19 20:04:16,322 INFO [AsyncDispatcher event handler]
> > org.apache.hadoop.mapreduce.v2.app.job.impl.TaskAttemptImpl: The job-conf
> > file on the remote FS is
> > /tmp/hadoop-yarn/staging/hduser/.staging/job_1453244277886_0001/job.xml
> > 2016-01-19 20:04:16,325 INFO [AsyncDispatcher event handler]
> > org.apache.hadoop.mapreduce.v2.app.job.impl.TaskAttemptImpl: Adding #0
> > tokens and #1 secret keys for NM use for launching container
> > 2016-01-19 20:04:16,325 INFO [AsyncDispatcher event handler]
> > org.apache.hadoop.mapreduce.v2.app.job.impl.TaskAttemptImpl: Size of
> > containertokens_dob is 1
> > 2016-01-19 20:04:16,325 INFO [AsyncDispatcher event handler]
> > org.apache.hadoop.mapreduce.v2.app.job.impl.TaskAttemptImpl: Putting
> shuffle
> > token in serviceData
> > 2016-01-19 20:04:16,350 INFO [AsyncDispatcher event handler]
> > org.apache.hadoop.mapreduce.v2.app.job.impl.TaskAttemptImpl:
> > attempt_1453244277886_0001_m_000000_0 TaskAttempt Transitioned from
> > UNASSIGNED to ASSIGNED
> > 2016-01-19 20:04:16,354 INFO [ContainerLauncher #0]
> > org.apache.hadoop.mapreduce.v2.app.launcher.ContainerLauncherImpl:
> > Processing the event EventType: CONTAINER_REMOTE_LAUNCH for container
> > container_1453244277886_0001_01_000002 taskAttempt
> > attempt_1453244277886_0001_m_000000_0
> > 2016-01-19 20:04:16,356 INFO [ContainerLauncher #0]
> > org.apache.hadoop.mapreduce.v2.app.launcher.ContainerLauncherImpl:
> Launching
> > attempt_1453244277886_0001_m_000000_0
> > 2016-01-19 20:04:16,357 INFO [ContainerLauncher #0]
> > org.apache.hadoop.yarn.client.api.impl.ContainerManagementProtocolProxy:
> > Opening proxy : localhost:35711
> > 2016-01-19 20:04:16,411 INFO [ContainerLauncher #0]
> > org.apache.hadoop.mapreduce.v2.app.launcher.ContainerLauncherImpl:
> Shuffle
> > port returned by ContainerManager for
> attempt_1453244277886_0001_m_000000_0
> > : 13562
> > 2016-01-19 20:04:16,413 INFO [AsyncDispatcher event handler]
> > org.apache.hadoop.mapreduce.v2.app.job.impl.TaskAttemptImpl: TaskAttempt:
> > [attempt_1453244277886_0001_m_000000_0] using containerId:
> > [container_1453244277886_0001_01_000002 on NM: [localhost:35711]
> > 2016-01-19 20:04:16,418 INFO [AsyncDispatcher event handler]
> > org.apache.hadoop.mapreduce.v2.app.job.impl.TaskAttemptImpl:
> > attempt_1453244277886_0001_m_000000_0 TaskAttempt Transitioned from
> ASSIGNED
> > to RUNNING
> > 2016-01-19 20:04:16,419 INFO [AsyncDispatcher event handler]
> > org.apache.hadoop.mapreduce.v2.app.speculate.DefaultSpeculator:
> > ATTEMPT_START task_1453244277886_0001_m_000000
> > 2016-01-19 20:04:16,419 INFO [AsyncDispatcher event handler]
> > org.apache.hadoop.mapreduce.v2.app.job.impl.TaskImpl:
> > task_1453244277886_0001_m_000000 Task Transitioned from SCHEDULED to
> RUNNING
> > 2016-01-19 20:04:17,251 INFO [RMCommunicator Allocator]
> > org.apache.hadoop.mapreduce.v2.app.rm.RMContainerRequestor:
> getResources()
> > for application_1453244277886_0001: ask=3 release= 0 newContainers=0
> > finishedContainers=0 resourcelimit=<memory:768, vCores:-1> knownNMs=1
> > 2016-01-19 20:04:18,269 INFO [RMCommunicator Allocator]
> > org.apache.hadoop.mapreduce.v2.app.rm.RMContainerAllocator: Received
> > completed container container_1453244277886_0001_01_000002
> > 2016-01-19 20:04:18,269 INFO [RMCommunicator Allocator]
> > org.apache.hadoop.mapreduce.v2.app.rm.RMContainerAllocator: Recalculating
> > schedule, headroom=768
> > 2016-01-19 20:04:18,269 INFO [RMCommunicator Allocator]
> > org.apache.hadoop.mapreduce.v2.app.rm.RMContainerAllocator: Reduce slow
> > start threshold not met. completedMapsForReduceSlowstart 1
> > 2016-01-19 20:04:18,270 INFO [RMCommunicator Allocator]
> > org.apache.hadoop.mapreduce.v2.app.rm.RMContainerAllocator: After
> > Scheduling: PendingReds:1 ScheduledMaps:0 ScheduledReds:0 AssignedMaps:0
> > AssignedReds:0 CompletedMaps:0 CompletedReds:0 ContAlloc:1 ContRel:0
> > HostLocal:1 RackLocal:0
> > 2016-01-19 20:04:18,280 INFO [AsyncDispatcher event handler]
> > org.apache.hadoop.mapreduce.v2.app.job.impl.TaskAttemptImpl:
> > attempt_1453244277886_0001_m_000000_0 TaskAttempt Transitioned from
> RUNNING
> > to FAIL_CONTAINER_CLEANUP
> > 2016-01-19 20:04:18,280 INFO [AsyncDispatcher event handler]
> > org.apache.hadoop.mapreduce.v2.app.job.impl.TaskAttemptImpl: Diagnostics
> > report from attempt_1453244277886_0001_m_000000_0: Exception from
> > container-launch: org.apache.hadoop.util.Shell$ExitCodeException:
> > org.apache.hadoop.util.Shell$ExitCodeException:
> >     at org.apache.hadoop.util.Shell.runCommand(Shell.java:505)
> >     at org.apache.hadoop.util.Shell.run(Shell.java:418)
> >     at
> > org.apache.hadoop.util.Shell$ShellCommandExecutor.execute(Shell.java:650)
> >     at
> >
> org.apache.hadoop.yarn.server.nodemanager.DefaultContainerExecutor.launchContainer(DefaultContainerExecutor.java:195)
> >     at
> >
> org.apache.hadoop.yarn.server.nodemanager.containermanager.launcher.ContainerLaunch.call(ContainerLaunch.java:300)
> >     at
> >
> org.apache.hadoop.yarn.server.nodemanager.containermanager.launcher.ContainerLaunch.call(ContainerLaunch.java:81)
> >     at java.util.concurrent.FutureTask.run(FutureTask.java:262)
> >     at
> >
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
> >     at
> >
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
> >     at java.lang.Thread.run(Thread.java:745)
> >
> >
> > Container exited with a non-zero exit code 1
> >
> > 2016-01-19 20:04:18,281 INFO [ContainerLauncher #1]
> > org.apache.hadoop.mapreduce.v2.app.launcher.ContainerLauncherImpl:
> > Processing the event EventType: CONTAINER_REMOTE_CLEANUP for container
> > container_1453244277886_0001_01_000002 taskAttempt
> > attempt_1453244277886_0001_m_000000_0
> > 2016-01-19 20:04:18,281 INFO [ContainerLauncher #1]
> > org.apache.hadoop.mapreduce.v2.app.launcher.ContainerLauncherImpl:
> KILLING
> > attempt_1453244277886_0001_m_000000_0
> > 2016-01-19 20:04:18,299 INFO [AsyncDispatcher event handler]
> > org.apache.hadoop.mapreduce.v2.app.job.impl.TaskAttemptImpl:
> > attempt_1453244277886_0001_m_000000_0 TaskAttempt Transitioned from
> > FAIL_CONTAINER_CLEANUP to FAIL_TASK_CLEANUP
> > 2016-01-19 20:04:18,300 INFO [CommitterEvent Processor #1]
> > org.apache.hadoop.mapreduce.v2.app.commit.CommitterEventHandler:
> Processing
> > the event EventType: TASK_ABORT
> > 2016-01-19 20:04:18,313 WARN [CommitterEvent Processor #1]
> > org.apache.hadoop.mapreduce.lib.output.FileOutputCommitter: Could not
> delete
> >
> hdfs://hdnode01:54310/user/hduser/output/wordcount/_temporary/1/_temporary/attempt_1453244277886_0001_m_000000_0
> > 2016-01-19 20:04:18,317 INFO [AsyncDispatcher event handler]
> > org.apache.hadoop.mapreduce.v2.app.job.impl.TaskAttemptImpl:
> > attempt_1453244277886_0001_m_000000_0 TaskAttempt Transitioned from
> > FAIL_TASK_CLEANUP to FAILED
> > 2016-01-19 20:04:18,326 INFO [AsyncDispatcher event handler]
> > org.apache.hadoop.yarn.util.RackResolver: Resolved localhost to
> > /default-rack
> > 2016-01-19 20:04:18,327 INFO [Thread-50]
> > org.apache.hadoop.mapreduce.v2.app.rm.RMContainerRequestor: 1 failures on
> > node localhost
> > 2016-01-19 20:04:18,329 INFO [AsyncDispatcher event handler]
> > org.apache.hadoop.mapreduce.v2.app.job.impl.TaskAttemptImpl:
> > attempt_1453244277886_0001_m_000000_1 TaskAttempt Transitioned from NEW
> to
> > UNASSIGNED
> > 2016-01-19 20:04:18,329 INFO [Thread-50]
> > org.apache.hadoop.mapreduce.v2.app.rm.RMContainerAllocator: Added
> > attempt_1453244277886_0001_m_000000_1 to list of failed maps
> > 2016-01-19 20:04:19,270 INFO [RMCommunicator Allocator]
> > org.apache.hadoop.mapreduce.v2.app.rm.RMContainerAllocator: Before
> > Scheduling: PendingReds:1 ScheduledMaps:1 ScheduledReds:0 AssignedMaps:0
> > AssignedReds:0 CompletedMaps:0 CompletedReds:0 ContAlloc:1 ContRel:0
> > HostLocal:1 RackLocal:0
> > 2016-01-19 20:04:19,277 INFO [RMCommunicator Allocator]
> > org.apache.hadoop.mapreduce.v2.app.rm.RMContainerRequestor:
> getResources()
> > for application_1453244277886_0001: ask=1 release= 0 newContainers=0
> > finishedContainers=0 resourcelimit=<memory:768, vCores:-1> knownNMs=1
> > 2016-01-19 20:04:19,277 INFO [RMCommunicator Allocator]
> > org.apache.hadoop.mapreduce.v2.app.rm.RMContainerAllocator: Recalculating
> > schedule, headroom=768
> > 2016-01-19 20:04:19,278 INFO [RMCommunicator Allocator]
> > org.apache.hadoop.mapreduce.v2.app.rm.RMContainerAllocator: Reduce slow
> > start threshold not met. completedMapsForReduceSlowstart 1
> > 2016-01-19 20:04:20,285 INFO [RMCommunicator Allocator]
> > org.apache.hadoop.mapreduce.v2.app.rm.RMContainerAllocator: Got allocated
> > containers 1
> > 2016-01-19 20:04:20,285 INFO [RMCommunicator Allocator]
> > org.apache.hadoop.mapreduce.v2.app.rm.RMContainerAllocator: Assigning
> > container Container: [ContainerId:
> container_1453244277886_0001_01_000003,
> > NodeId: localhost:35711, NodeHttpAddress: localhost:8042, Resource:
> > <memory:512, vCores:1>, Priority: 5, Token: Token { kind: ContainerToken,
> > service: 127.0.0.1:35711 }, ] to fast fail map
> > 2016-01-19 20:04:20,286 INFO [RMCommunicator Allocator]
> > org.apache.hadoop.mapreduce.v2.app.rm.RMContainerAllocator: Assigned from
> > earlierFailedMaps
> > 2016-01-19 20:04:20,286 INFO [RMCommunicator Allocator]
> > org.apache.hadoop.mapreduce.v2.app.rm.RMContainerAllocator: Assigned
> > container container_1453244277886_0001_01_000003 to
> > attempt_1453244277886_0001_m_000000_1
> > 2016-01-19 20:04:20,287 INFO [RMCommunicator Allocator]
> > org.apache.hadoop.mapreduce.v2.app.rm.RMContainerAllocator: Recalculating
> > schedule, headroom=768
> > 2016-01-19 20:04:20,287 INFO [RMCommunicator Allocator]
> > org.apache.hadoop.mapreduce.v2.app.rm.RMContainerAllocator: Reduce slow
> > start threshold not met. completedMapsForReduceSlowstart 1
> > 2016-01-19 20:04:20,287 INFO [RMCommunicator Allocator]
> > org.apache.hadoop.mapreduce.v2.app.rm.RMContainerAllocator: After
> > Scheduling: PendingReds:1 ScheduledMaps:0 ScheduledReds:0 AssignedMaps:1
> > AssignedReds:0 CompletedMaps:0 CompletedReds:0 ContAlloc:2 ContRel:0
> > HostLocal:1 RackLocal:0
> > 2016-01-19 20:04:20,287 INFO [AsyncDispatcher event handler]
> > org.apache.hadoop.yarn.util.RackResolver: Resolved localhost to
> > /default-rack
> > 2016-01-19 20:04:20,289 INFO [AsyncDispatcher event handler]
> > org.apache.hadoop.mapreduce.v2.app.job.impl.TaskAttemptImpl:
> > attempt_1453244277886_0001_m_000000_1 TaskAttempt Transitioned from
> > UNASSIGNED to ASSIGNED
> > 2016-01-19 20:04:20,292 INFO [ContainerLauncher #2]
> > org.apache.hadoop.mapreduce.v2.app.launcher.ContainerLauncherImpl:
> > Processing the event EventType: CONTAINER_REMOTE_LAUNCH for container
> > container_1453244277886_0001_01_000003 taskAttempt
> > attempt_1453244277886_0001_m_000000_1
> > 2016-01-19 20:04:20,292 INFO [ContainerLauncher #2]
> > org.apache.hadoop.mapreduce.v2.app.launcher.ContainerLauncherImpl:
> Launching
> > attempt_1453244277886_0001_m_000000_1
> > 2016-01-19 20:04:20,301 INFO [ContainerLauncher #2]
> > org.apache.hadoop.mapreduce.v2.app.launcher.ContainerLauncherImpl:
> Shuffle
> > port returned by ContainerManager for
> attempt_1453244277886_0001_m_000000_1
> > : 13562
> > 2016-01-19 20:04:20,302 INFO [AsyncDispatcher event handler]
> > org.apache.hadoop.mapreduce.v2.app.job.impl.TaskAttemptImpl: TaskAttempt:
> > [attempt_1453244277886_0001_m_000000_1] using containerId:
> > [container_1453244277886_0001_01_000003 on NM: [localhost:35711]
> > 2016-01-19 20:04:20,303 INFO [AsyncDispatcher event handler]
> > org.apache.hadoop.mapreduce.v2.app.job.impl.TaskAttemptImpl:
> > attempt_1453244277886_0001_m_000000_1 TaskAttempt Transitioned from
> ASSIGNED
> > to RUNNING
> > 2016-01-19 20:04:20,304 INFO [AsyncDispatcher event handler]
> > org.apache.hadoop.mapreduce.v2.app.speculate.DefaultSpeculator:
> > ATTEMPT_START task_1453244277886_0001_m_000000
> > 2016-01-19 20:04:21,295 INFO [RMCommunicator Allocator]
> > org.apache.hadoop.mapreduce.v2.app.rm.RMContainerRequestor:
> getResources()
> > for application_1453244277886_0001: ask=1 release= 0 newContainers=0
> > finishedContainers=1 resourcelimit=<memory:768, vCores:-1> knownNMs=1
> > 2016-01-19 20:04:21,296 INFO [RMCommunicator Allocator]
> > org.apache.hadoop.mapreduce.v2.app.rm.RMContainerAllocator: Received
> > completed container container_1453244277886_0001_01_000003
> > 2016-01-19 20:04:21,296 INFO [RMCommunicator Allocator]
> > org.apache.hadoop.mapreduce.v2.app.rm.RMContainerAllocator: Recalculating
> > schedule, headroom=768
> > 2016-01-19 20:04:21,296 INFO [RMCommunicator Allocator]
> > org.apache.hadoop.mapreduce.v2.app.rm.RMContainerAllocator: Reduce slow
> > start threshold not met. completedMapsForReduceSlowstart 1
> > 2016-01-19 20:04:21,297 INFO [RMCommunicator Allocator]
> > org.apache.hadoop.mapreduce.v2.app.rm.RMContainerAllocator: After
> > Scheduling: PendingReds:1 ScheduledMaps:0 ScheduledReds:0 AssignedMaps:0
> > AssignedReds:0 CompletedMaps:0 CompletedReds:0 ContAlloc:2 ContRel:0
> > HostLocal:1 RackLocal:0
> > 2016-01-19 20:04:21,297 INFO [AsyncDispatcher event handler]
> > org.apache.hadoop.mapreduce.v2.app.job.impl.TaskAttemptImpl:
> > attempt_1453244277886_0001_m_000000_1 TaskAttempt Transitioned from
> RUNNING
> > to FAIL_CONTAINER_CLEANUP
> > 2016-01-19 20:04:21,298 INFO [AsyncDispatcher event handler]
> > org.apache.hadoop.mapreduce.v2.app.job.impl.TaskAttemptImpl: Diagnostics
> > report from attempt_1453244277886_0001_m_000000_1: Exception from
> > container-launch: org.apache.hadoop.util.Shell$ExitCodeException:
> > org.apache.hadoop.util.Shell$ExitCodeException:
> >     at org.apache.hadoop.util.Shell.runCommand(Shell.java:505)
> >     at org.apache.hadoop.util.Shell.run(Shell.java:418)
> >     at
> > org.apache.hadoop.util.Shell$ShellCommandExecutor.execute(Shell.java:650)
> >     at
> >
> org.apache.hadoop.yarn.server.nodemanager.DefaultContainerExecutor.launchContainer(DefaultContainerExecutor.java:195)
> >     at
> >
> org.apache.hadoop.yarn.server.nodemanager.containermanager.launcher.ContainerLaunch.call(ContainerLaunch.java:300)
> >     at
> >
> org.apache.hadoop.yarn.server.nodemanager.containermanager.launcher.ContainerLaunch.call(ContainerLaunch.java:81)
> >     at java.util.concurrent.FutureTask.run(FutureTask.java:262)
> >     at
> >
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
> >     at
> >
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
> >     at java.lang.Thread.run(Thread.java:745)
> >
> >
> > Container exited with a non-zero exit code 1
> >
> > 2016-01-19 20:04:21,300 INFO [ContainerLauncher #3]
> > org.apache.hadoop.mapreduce.v2.app.launcher.ContainerLauncherImpl:
> > Processing the event EventType: CONTAINER_REMOTE_CLEANUP for container
> > container_1453244277886_0001_01_000003 taskAttempt
> > attempt_1453244277886_0001_m_000000_1
> > 2016-01-19 20:04:21,301 INFO [ContainerLauncher #3]
> > org.apache.hadoop.mapreduce.v2.app.launcher.ContainerLauncherImpl:
> KILLING
> > attempt_1453244277886_0001_m_000000_1
> > 2016-01-19 20:04:21,307 INFO [AsyncDispatcher event handler]
> > org.apache.hadoop.mapreduce.v2.app.job.impl.TaskAttemptImpl:
> > attempt_1453244277886_0001_m_000000_1 TaskAttempt Transitioned from
> > FAIL_CONTAINER_CLEANUP to FAIL_TASK_CLEANUP
> > 2016-01-19 20:04:21,308 INFO [CommitterEvent Processor #2]
> > org.apache.hadoop.mapreduce.v2.app.commit.CommitterEventHandler:
> Processing
> > the event EventType: TASK_ABORT
> > 2016-01-19 20:04:21,312 WARN [CommitterEvent Processor #2]
> > org.apache.hadoop.mapreduce.lib.output.FileOutputCommitter: Could not
> delete
> >
> hdfs://hdnode01:54310/user/hduser/output/wordcount/_temporary/1/_temporary/attempt_1453244277886_0001_m_000000_1
> > 2016-01-19 20:04:21,312 INFO [AsyncDispatcher event handler]
> > org.apache.hadoop.mapreduce.v2.app.job.impl.TaskAttemptImpl:
> > attempt_1453244277886_0001_m_000000_1 TaskAttempt Transitioned from
> > FAIL_TASK_CLEANUP to FAILED
> > 2016-01-19 20:04:21,313 INFO [AsyncDispatcher event handler]
> > org.apache.hadoop.yarn.util.RackResolver: Resolved localhost to
> > /default-rack
> > 2016-01-19 20:04:21,313 INFO [Thread-50]
> > org.apache.hadoop.mapreduce.v2.app.rm.RMContainerRequestor: 2 failures on
> > node localhost
> > 2016-01-19 20:04:21,313 INFO [AsyncDispatcher event handler]
> > org.apache.hadoop.mapreduce.v2.app.job.impl.TaskAttemptImpl:
> > attempt_1453244277886_0001_m_000000_2 TaskAttempt Transitioned from NEW
> to
> > UNASSIGNED
> > 2016-01-19 20:04:21,314 INFO [Thread-50]
> > org.apache.hadoop.mapreduce.v2.app.rm.RMContainerAllocator: Added
> > attempt_1453244277886_0001_m_000000_2 to list of failed maps
> > 2016-01-19 20:04:22,297 INFO [RMCommunicator Allocator]
> > org.apache.hadoop.mapreduce.v2.app.rm.RMContainerAllocator: Before
> > Scheduling: PendingReds:1 ScheduledMaps:1 ScheduledReds:0 AssignedMaps:0
> > AssignedReds:0 CompletedMaps:0 CompletedReds:0 ContAlloc:2 ContRel:0
> > HostLocal:1 RackLocal:0
> > 2016-01-19 20:04:22,304 INFO [RMCommunicator Allocator]
> > org.apache.hadoop.mapreduce.v2.app.rm.RMContainerRequestor:
> getResources()
> > for application_1453244277886_0001: ask=1 release= 0 newContainers=0
> > finishedContainers=0 resourcelimit=<memory:768, vCores:-1> knownNMs=1
> > 2016-01-19 20:04:22,305 INFO [RMCommunicator Allocator]
> > org.apache.hadoop.mapreduce.v2.app.rm.RMContainerAllocator: Recalculating
> > schedule, headroom=768
> > 2016-01-19 20:04:22,305 INFO [RMCommunicator Allocator]
> > org.apache.hadoop.mapreduce.v2.app.rm.RMContainerAllocator: Reduce slow
> > start threshold not met. completedMapsForReduceSlowstart 1
> > 2016-01-19 20:04:23,316 INFO [RMCommunicator Allocator]
> > org.apache.hadoop.mapreduce.v2.app.rm.RMContainerAllocator: Got allocated
> > containers 1
> > 2016-01-19 20:04:23,316 INFO [RMCommunicator Allocator]
> > org.apache.hadoop.mapreduce.v2.app.rm.RMContainerAllocator: Assigning
> > container Container: [ContainerId:
> container_1453244277886_0001_01_000004,
> > NodeId: localhost:35711, NodeHttpAddress: localhost:8042, Resource:
> > <memory:512, vCores:1>, Priority: 5, Token: Token { kind: ContainerToken,
> > service: 127.0.0.1:35711 }, ] to fast fail map
> > 2016-01-19 20:04:23,317 INFO [RMCommunicator Allocator]
> > org.apache.hadoop.mapreduce.v2.app.rm.RMContainerAllocator: Assigned from
> > earlierFailedMaps
> > 2016-01-19 20:04:23,317 INFO [RMCommunicator Allocator]
> > org.apache.hadoop.mapreduce.v2.app.rm.RMContainerAllocator: Assigned
> > container container_1453244277886_0001_01_000004 to
> > attempt_1453244277886_0001_m_000000_2
> > 2016-01-19 20:04:23,318 INFO [RMCommunicator Allocator]
> > org.apache.hadoop.mapreduce.v2.app.rm.RMContainerAllocator: Recalculating
> > schedule, headroom=768
> > 2016-01-19 20:04:23,318 INFO [RMCommunicator Allocator]
> > org.apache.hadoop.mapreduce.v2.app.rm.RMContainerAllocator: Reduce slow
> > start threshold not met. completedMapsForReduceSlowstart 1
> > 2016-01-19 20:04:23,318 INFO [RMCommunicator Allocator]
> > org.apache.hadoop.mapreduce.v2.app.rm.RMContainerAllocator: After
> > Scheduling: PendingReds:1 ScheduledMaps:0 ScheduledReds:0 AssignedMaps:1
> > AssignedReds:0 CompletedMaps:0 CompletedReds:0 ContAlloc:3 ContRel:0
> > HostLocal:1 RackLocal:0
> > 2016-01-19 20:04:23,318 INFO [AsyncDispatcher event handler]
> > org.apache.hadoop.yarn.util.RackResolver: Resolved localhost to
> > /default-rack
> > 2016-01-19 20:04:23,320 INFO [AsyncDispatcher event handler]
> > org.apache.hadoop.mapreduce.v2.app.job.impl.TaskAttemptImpl:
> > attempt_1453244277886_0001_m_000000_2 TaskAttempt Transitioned from
> > UNASSIGNED to ASSIGNED
> > 2016-01-19 20:04:23,323 INFO [ContainerLauncher #4]
> > org.apache.hadoop.mapreduce.v2.app.launcher.ContainerLauncherImpl:
> > Processing the event EventType: CONTAINER_REMOTE_LAUNCH for container
> > container_1453244277886_0001_01_000004 taskAttempt
> > attempt_1453244277886_0001_m_000000_2
> > 2016-01-19 20:04:23,323 INFO [ContainerLauncher #4]
> > org.apache.hadoop.mapreduce.v2.app.launcher.ContainerLauncherImpl:
> Launching
> > attempt_1453244277886_0001_m_000000_2
> > 2016-01-19 20:04:23,334 INFO [ContainerLauncher #4]
> > org.apache.hadoop.mapreduce.v2.app.launcher.ContainerLauncherImpl:
> Shuffle
> > port returned by ContainerManager for
> attempt_1453244277886_0001_m_000000_2
> > : 13562
> > 2016-01-19 20:04:23,335 INFO [AsyncDispatcher event handler]
> > org.apache.hadoop.mapreduce.v2.app.job.impl.TaskAttemptImpl: TaskAttempt:
> > [attempt_1453244277886_0001_m_000000_2] using containerId:
> > [container_1453244277886_0001_01_000004 on NM: [localhost:35711]
> > 2016-01-19 20:04:23,335 INFO [AsyncDispatcher event handler]
> > org.apache.hadoop.mapreduce.v2.app.job.impl.TaskAttemptImpl:
> > attempt_1453244277886_0001_m_000000_2 TaskAttempt Transitioned from
> ASSIGNED
> > to RUNNING
> > 2016-01-19 20:04:23,335 INFO [AsyncDispatcher event handler]
> > org.apache.hadoop.mapreduce.v2.app.speculate.DefaultSpeculator:
> > ATTEMPT_START task_1453244277886_0001_m_000000
> > 2016-01-19 20:04:24,326 INFO [RMCommunicator Allocator]
> > org.apache.hadoop.mapreduce.v2.app.rm.RMContainerRequestor:
> getResources()
> > for application_1453244277886_0001: ask=1 release= 0 newContainers=0
> > finishedContainers=1 resourcelimit=<memory:768, vCores:-1> knownNMs=1
> > 2016-01-19 20:04:24,327 INFO [RMCommunicator Allocator]
> > org.apache.hadoop.mapreduce.v2.app.rm.RMContainerAllocator: Received
> > completed container container_1453244277886_0001_01_000004
> > 2016-01-19 20:04:24,327 INFO [RMCommunicator Allocator]
> > org.apache.hadoop.mapreduce.v2.app.rm.RMContainerAllocator: Recalculating
> > schedule, headroom=768
> > 2016-01-19 20:04:24,327 INFO [RMCommunicator Allocator]
> > org.apache.hadoop.mapreduce.v2.app.rm.RMContainerAllocator: Reduce slow
> > start threshold not met. completedMapsForReduceSlowstart 1
> > 2016-01-19 20:04:24,328 INFO [RMCommunicator Allocator]
> > org.apache.hadoop.mapreduce.v2.app.rm.RMContainerAllocator: After
> > Scheduling: PendingReds:1 ScheduledMaps:0 ScheduledReds:0 AssignedMaps:0
> > AssignedReds:0 CompletedMaps:0 CompletedReds:0 ContAlloc:3 ContRel:0
> > HostLocal:1 RackLocal:0
> > 2016-01-19 20:04:24,328 INFO [AsyncDispatcher event handler]
> > org.apache.hadoop.mapreduce.v2.app.job.impl.TaskAttemptImpl:
> > attempt_1453244277886_0001_m_000000_2 TaskAttempt Transitioned from
> RUNNING
> > to FAIL_CONTAINER_CLEANUP
> > 2016-01-19 20:04:24,328 INFO [AsyncDispatcher event handler]
> > org.apache.hadoop.mapreduce.v2.app.job.impl.TaskAttemptImpl: Diagnostics
> > report from attempt_1453244277886_0001_m_000000_2: Exception from
> > container-launch: org.apache.hadoop.util.Shell$ExitCodeException:
> > org.apache.hadoop.util.Shell$ExitCodeException:
> >     at org.apache.hadoop.util.Shell.runCommand(Shell.java:505)
> >     at org.apache.hadoop.util.Shell.run(Shell.java:418)
> >     at
> > org.apache.hadoop.util.Shell$ShellCommandExecutor.execute(Shell.java:650)
> >     at
> >
> org.apache.hadoop.yarn.server.nodemanager.DefaultContainerExecutor.launchContainer(DefaultContainerExecutor.java:195)
> >     at
> >
> org.apache.hadoop.yarn.server.nodemanager.containermanager.launcher.ContainerLaunch.call(ContainerLaunch.java:300)
> >     at
> >
> org.apache.hadoop.yarn.server.nodemanager.containermanager.launcher.ContainerLaunch.call(ContainerLaunch.java:81)
> >     at java.util.concurrent.FutureTask.run(FutureTask.java:262)
> >     at
> >
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
> >     at
> >
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
> >     at java.lang.Thread.run(Thread.java:745)
> >
> >
> > Container exited with a non-zero exit code 1
> >
> > 2016-01-19 20:04:24,331 INFO [ContainerLauncher #5]
> > org.apache.hadoop.mapreduce.v2.app.launcher.ContainerLauncherImpl:
> > Processing the event EventType: CONTAINER_REMOTE_CLEANUP for container
> > container_1453244277886_0001_01_000004 taskAttempt
> > attempt_1453244277886_0001_m_000000_2
> > 2016-01-19 20:04:24,331 INFO [ContainerLauncher #5]
> > org.apache.hadoop.mapreduce.v2.app.launcher.ContainerLauncherImpl:
> KILLING
> > attempt_1453244277886_0001_m_000000_2
> > 2016-01-19 20:04:24,335 INFO [AsyncDispatcher event handler]
> > org.apache.hadoop.mapreduce.v2.app.job.impl.TaskAttemptImpl:
> > attempt_1453244277886_0001_m_000000_2 TaskAttempt Transitioned from
> > FAIL_CONTAINER_CLEANUP to FAIL_TASK_CLEANUP
> > 2016-01-19 20:04:24,336 INFO [CommitterEvent Processor #3]
> > org.apache.hadoop.mapreduce.v2.app.commit.CommitterEventHandler:
> Processing
> > the event EventType: TASK_ABORT
> > 2016-01-19 20:04:24,340 WARN [CommitterEvent Processor #3]
> > org.apache.hadoop.mapreduce.lib.output.FileOutputCommitter: Could not
> delete
> >
> hdfs://hdnode01:54310/user/hduser/output/wordcount/_temporary/1/_temporary/attempt_1453244277886_0001_m_000000_2
> > 2016-01-19 20:04:24,341 INFO [AsyncDispatcher event handler]
> > org.apache.hadoop.mapreduce.v2.app.job.impl.TaskAttemptImpl:
> > attempt_1453244277886_0001_m_000000_2 TaskAttempt Transitioned from
> > FAIL_TASK_CLEANUP to FAILED
> > 2016-01-19 20:04:24,341 INFO [AsyncDispatcher event handler]
> > org.apache.hadoop.yarn.util.RackResolver: Resolved localhost to
> > /default-rack
> > 2016-01-19 20:04:24,342 INFO [Thread-50]
> > org.apache.hadoop.mapreduce.v2.app.rm.RMContainerRequestor: 3 failures on
> > node localhost
> > 2016-01-19 20:04:24,342 INFO [Thread-50]
> > org.apache.hadoop.mapreduce.v2.app.rm.RMContainerRequestor: Blacklisted
> host
> > localhost
> > 2016-01-19 20:04:24,342 INFO [AsyncDispatcher event handler]
> > org.apache.hadoop.mapreduce.v2.app.job.impl.TaskAttemptImpl:
> > attempt_1453244277886_0001_m_000000_3 TaskAttempt Transitioned from NEW
> to
> > UNASSIGNED
> > 2016-01-19 20:04:24,343 INFO [Thread-50]
> > org.apache.hadoop.mapreduce.v2.app.rm.RMContainerAllocator: Added
> > attempt_1453244277886_0001_m_000000_3 to list of failed maps
> > 2016-01-19 20:04:25,328 INFO [RMCommunicator Allocator]
> > org.apache.hadoop.mapreduce.v2.app.rm.RMContainerAllocator: Before
> > Scheduling: PendingReds:1 ScheduledMaps:1 ScheduledReds:0 AssignedMaps:0
> > AssignedReds:0 CompletedMaps:0 CompletedReds:0 ContAlloc:3 ContRel:0
> > HostLocal:1 RackLocal:0
> > 2016-01-19 20:04:25,336 INFO [RMCommunicator Allocator]
> > org.apache.hadoop.mapreduce.v2.app.rm.RMContainerRequestor:
> getResources()
> > for application_1453244277886_0001: ask=1 release= 0 newContainers=0
> > finishedContainers=0 resourcelimit=<memory:768, vCores:-1> knownNMs=1
> > 2016-01-19 20:04:25,336 INFO [RMCommunicator Allocator]
> > org.apache.hadoop.mapreduce.v2.app.rm.RMContainerRequestor: Update the
> > blacklist for application_1453244277886_0001: blacklistAdditions=1
> > blacklistRemovals=0
> > 2016-01-19 20:04:25,337 INFO [RMCommunicator Allocator]
> > org.apache.hadoop.mapreduce.v2.app.rm.RMContainerRequestor: Ignore
> > blacklisting set to true. Known: 1, Blacklisted: 1, 100%
> > 2016-01-19 20:04:25,337 INFO [RMCommunicator Allocator]
> > org.apache.hadoop.mapreduce.v2.app.rm.RMContainerAllocator: Recalculating
> > schedule, headroom=768
> > 2016-01-19 20:04:25,337 INFO [RMCommunicator Allocator]
> > org.apache.hadoop.mapreduce.v2.app.rm.RMContainerAllocator: Reduce slow
> > start threshold not met. completedMapsForReduceSlowstart 1
> > 2016-01-19 20:04:26,342 INFO [RMCommunicator Allocator]
> > org.apache.hadoop.mapreduce.v2.app.rm.RMContainerRequestor: Update the
> > blacklist for application_1453244277886_0001: blacklistAdditions=0
> > blacklistRemovals=1
> > 2016-01-19 20:04:26,342 INFO [RMCommunicator Allocator]
> > org.apache.hadoop.mapreduce.v2.app.rm.RMContainerAllocator: Recalculating
> > schedule, headroom=768
> > 2016-01-19 20:04:26,342 INFO [RMCommunicator Allocator]
> > org.apache.hadoop.mapreduce.v2.app.rm.RMContainerAllocator: Reduce slow
> > start threshold not met. completedMapsForReduceSlowstart 1
> > 2016-01-19 20:04:27,351 INFO [RMCommunicator Allocator]
> > org.apache.hadoop.mapreduce.v2.app.rm.RMContainerAllocator: Got allocated
> > containers 1
> > 2016-01-19 20:04:27,352 INFO [RMCommunicator Allocator]
> > org.apache.hadoop.mapreduce.v2.app.rm.RMContainerAllocator: Assigning
> > container Container: [ContainerId:
> container_1453244277886_0001_01_000005,
> > NodeId: localhost:35711, NodeHttpAddress: localhost:8042, Resource:
> > <memory:512, vCores:1>, Priority: 5, Token: Token { kind: ContainerToken,
> > service: 127.0.0.1:35711 }, ] to fast fail map
> > 2016-01-19 20:04:27,352 INFO [RMCommunicator Allocator]
> > org.apache.hadoop.mapreduce.v2.app.rm.RMContainerAllocator: Assigned from
> > earlierFailedMaps
> > 2016-01-19 20:04:27,352 INFO [RMCommunicator Allocator]
> > org.apache.hadoop.mapreduce.v2.app.rm.RMContainerAllocator: Assigned
> > container container_1453244277886_0001_01_000005 to
> > attempt_1453244277886_0001_m_000000_3
> > 2016-01-19 20:04:27,353 INFO [RMCommunicator Allocator]
> > org.apache.hadoop.mapreduce.v2.app.rm.RMContainerAllocator: Recalculating
> > schedule, headroom=768
> > 2016-01-19 20:04:27,353 INFO [RMCommunicator Allocator]
> > org.apache.hadoop.mapreduce.v2.app.rm.RMContainerAllocator: Reduce slow
> > start threshold not met. completedMapsForReduceSlowstart 1
> > 2016-01-19 20:04:27,353 INFO [AsyncDispatcher event handler]
> > org.apache.hadoop.yarn.util.RackResolver: Resolved localhost to
> > /default-rack
> > 2016-01-19 20:04:27,353 INFO [RMCommunicator Allocator]
> > org.apache.hadoop.mapreduce.v2.app.rm.RMContainerAllocator: After
> > Scheduling: PendingReds:1 ScheduledMaps:0 ScheduledReds:0 AssignedMaps:1
> > AssignedReds:0 CompletedMaps:0 CompletedReds:0 ContAlloc:4 ContRel:0
> > HostLocal:1 RackLocal:0
> > 2016-01-19 20:04:27,354 INFO [AsyncDispatcher event handler]
> > org.apache.hadoop.mapreduce.v2.app.job.impl.TaskAttemptImpl:
> > attempt_1453244277886_0001_m_000000_3 TaskAttempt Transitioned from
> > UNASSIGNED to ASSIGNED
> > 2016-01-19 20:04:27,355 INFO [ContainerLauncher #6]
> > org.apache.hadoop.mapreduce.v2.app.launcher.ContainerLauncherImpl:
> > Processing the event EventType: CONTAINER_REMOTE_LAUNCH for container
> > container_1453244277886_0001_01_000005 taskAttempt
> > attempt_1453244277886_0001_m_000000_3
> > 2016-01-19 20:04:27,355 INFO [ContainerLauncher #6]
> > org.apache.hadoop.mapreduce.v2.app.launcher.ContainerLauncherImpl:
> Launching
> > attempt_1453244277886_0001_m_000000_3
> > 2016-01-19 20:04:27,365 INFO [ContainerLauncher #6]
> > org.apache.hadoop.mapreduce.v2.app.launcher.ContainerLauncherImpl:
> Shuffle
> > port returned by ContainerManager for
> attempt_1453244277886_0001_m_000000_3
> > : 13562
> > 2016-01-19 20:04:27,365 INFO [AsyncDispatcher event handler]
> > org.apache.hadoop.mapreduce.v2.app.job.impl.TaskAttemptImpl: TaskAttempt:
> > [attempt_1453244277886_0001_m_000000_3] using containerId:
> > [container_1453244277886_0001_01_000005 on NM: [localhost:35711]
> > 2016-01-19 20:04:27,366 INFO [AsyncDispatcher event handler]
> > org.apache.hadoop.mapreduce.v2.app.job.impl.TaskAttemptImpl:
> > attempt_1453244277886_0001_m_000000_3 TaskAttempt Transitioned from
> ASSIGNED
> > to RUNNING
> > 2016-01-19 20:04:27,366 INFO [AsyncDispatcher event handler]
> > org.apache.hadoop.mapreduce.v2.app.speculate.DefaultSpeculator:
> > ATTEMPT_START task_1453244277886_0001_m_000000
> > 2016-01-19 20:04:28,361 INFO [RMCommunicator Allocator]
> > org.apache.hadoop.mapreduce.v2.app.rm.RMContainerRequestor:
> getResources()
> > for application_1453244277886_0001: ask=1 release= 0 newContainers=0
> > finishedContainers=1 resourcelimit=<memory:768, vCores:-1> knownNMs=1
> > 2016-01-19 20:04:28,361 INFO [RMCommunicator Allocator]
> > org.apache.hadoop.mapreduce.v2.app.rm.RMContainerAllocator: Received
> > completed container container_1453244277886_0001_01_000005
> > 2016-01-19 20:04:28,361 INFO [RMCommunicator Allocator]
> > org.apache.hadoop.mapreduce.v2.app.rm.RMContainerAllocator: Recalculating
> > schedule, headroom=768
> > 2016-01-19 20:04:28,361 INFO [RMCommunicator Allocator]
> > org.apache.hadoop.mapreduce.v2.app.rm.RMContainerAllocator: Reduce slow
> > start threshold not met. completedMapsForReduceSlowstart 1
> > 2016-01-19 20:04:28,362 INFO [RMCommunicator Allocator]
> > org.apache.hadoop.mapreduce.v2.app.rm.RMContainerAllocator: After
> > Scheduling: PendingReds:1 ScheduledMaps:0 ScheduledReds:0 AssignedMaps:0
> > AssignedReds:0 CompletedMaps:0 CompletedReds:0 ContAlloc:4 ContRel:0
> > HostLocal:1 RackLocal:0
> > 2016-01-19 20:04:28,362 INFO [AsyncDispatcher event handler]
> > org.apache.hadoop.mapreduce.v2.app.job.impl.TaskAttemptImpl:
> > attempt_1453244277886_0001_m_000000_3 TaskAttempt Transitioned from
> RUNNING
> > to FAIL_CONTAINER_CLEANUP
> > 2016-01-19 20:04:28,362 INFO [AsyncDispatcher event handler]
> > org.apache.hadoop.mapreduce.v2.app.job.impl.TaskAttemptImpl: Diagnostics
> > report from attempt_1453244277886_0001_m_000000_3: Exception from
> > container-launch: org.apache.hadoop.util.Shell$ExitCodeException:
> > org.apache.hadoop.util.Shell$ExitCodeException:
> >     at org.apache.hadoop.util.Shell.runCommand(Shell.java:505)
> >     at org.apache.hadoop.util.Shell.run(Shell.java:418)
> >     at
> > org.apache.hadoop.util.Shell$ShellCommandExecutor.execute(Shell.java:650)
> >     at
> >
> org.apache.hadoop.yarn.server.nodemanager.DefaultContainerExecutor.launchContainer(DefaultContainerExecutor.java:195)
> >     at
> >
> org.apache.hadoop.yarn.server.nodemanager.containermanager.launcher.ContainerLaunch.call(ContainerLaunch.java:300)
> >     at
> >
> org.apache.hadoop.yarn.server.nodemanager.containermanager.launcher.ContainerLaunch.call(ContainerLaunch.java:81)
> >     at java.util.concurrent.FutureTask.run(FutureTask.java:262)
> >     at
> >
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
> >     at
> >
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
> >     at java.lang.Thread.run(Thread.java:745)
> >
> >
> > Container exited with a non-zero exit code 1
> >
> > 2016-01-19 20:04:28,364 INFO [ContainerLauncher #7]
> > org.apache.hadoop.mapreduce.v2.app.launcher.ContainerLauncherImpl:
> > Processing the event EventType: CONTAINER_REMOTE_CLEANUP for container
> > container_1453244277886_0001_01_000005 taskAttempt
> > attempt_1453244277886_0001_m_000000_3
> > 2016-01-19 20:04:28,365 INFO [ContainerLauncher #7]
> > org.apache.hadoop.mapreduce.v2.app.launcher.ContainerLauncherImpl:
> KILLING
> > attempt_1453244277886_0001_m_000000_3
> > 2016-01-19 20:04:28,373 INFO [AsyncDispatcher event handler]
> > org.apache.hadoop.mapreduce.v2.app.job.impl.TaskAttemptImpl:
> > attempt_1453244277886_0001_m_000000_3 TaskAttempt Transitioned from
> > FAIL_CONTAINER_CLEANUP to FAIL_TASK_CLEANUP
> > 2016-01-19 20:04:28,374 INFO [CommitterEvent Processor #4]
> > org.apache.hadoop.mapreduce.v2.app.commit.CommitterEventHandler:
> Processing
> > the event EventType: TASK_ABORT
> > 2016-01-19 20:04:28,377 WARN [CommitterEvent Processor #4]
> > org.apache.hadoop.mapreduce.lib.output.FileOutputCommitter: Could not
> delete
> >
> hdfs://hdnode01:54310/user/hduser/output/wordcount/_temporary/1/_temporary/attempt_1453244277886_0001_m_000000_3
> > 2016-01-19 20:04:28,378 INFO [AsyncDispatcher event handler]
> > org.apache.hadoop.mapreduce.v2.app.job.impl.TaskAttemptImpl:
> > attempt_1453244277886_0001_m_000000_3 TaskAttempt Transitioned from
> > FAIL_TASK_CLEANUP to FAILED
> > 2016-01-19 20:04:28,380 INFO [AsyncDispatcher event handler]
> > org.apache.hadoop.mapreduce.v2.app.job.impl.TaskImpl:
> > task_1453244277886_0001_m_000000 Task Transitioned from RUNNING to FAILED
> > 2016-01-19 20:04:28,381 INFO [AsyncDispatcher event handler]
> > org.apache.hadoop.mapreduce.v2.app.job.impl.JobImpl: Num completed
> Tasks: 1
> > 2016-01-19 20:04:28,381 INFO [AsyncDispatcher event handler]
> > org.apache.hadoop.mapreduce.v2.app.job.impl.JobImpl: Job failed as tasks
> > failed. failedMaps:1 failedReduces:0
> > 2016-01-19 20:04:28,382 INFO [AsyncDispatcher event handler]
> > org.apache.hadoop.mapreduce.v2.app.job.impl.JobImpl:
> > job_1453244277886_0001Job Transitioned from RUNNING to FAIL_WAIT
> > 2016-01-19 20:04:28,383 INFO [AsyncDispatcher event handler]
> > org.apache.hadoop.mapreduce.v2.app.job.impl.TaskImpl:
> > task_1453244277886_0001_r_000000 Task Transitioned from SCHEDULED to
> > KILL_WAIT
> > 2016-01-19 20:04:28,383 INFO [AsyncDispatcher event handler]
> > org.apache.hadoop.mapreduce.v2.app.job.impl.TaskAttemptImpl:
> > attempt_1453244277886_0001_r_000000_0 TaskAttempt Transitioned from
> > UNASSIGNED to KILLED
> > 2016-01-19 20:04:28,383 INFO [Thread-50]
> > org.apache.hadoop.mapreduce.v2.app.rm.RMContainerAllocator: Processing
> the
> > event EventType: CONTAINER_DEALLOCATE
> > 2016-01-19 20:04:28,383 ERROR [Thread-50]
> > org.apache.hadoop.mapreduce.v2.app.rm.RMContainerAllocator: Could not
> > deallocate container for task attemptId
> > attempt_1453244277886_0001_r_000000_0
> > 2016-01-19 20:04:28,383 INFO [AsyncDispatcher event handler]
> > org.apache.hadoop.mapreduce.v2.app.job.impl.TaskImpl:
> > task_1453244277886_0001_r_000000 Task Transitioned from KILL_WAIT to
> KILLED
> > 2016-01-19 20:04:28,384 INFO [AsyncDispatcher event handler]
> > org.apache.hadoop.mapreduce.v2.app.job.impl.JobImpl:
> > job_1453244277886_0001Job Transitioned from FAIL_WAIT to FAIL_ABORT
> > 2016-01-19 20:04:28,390 INFO [CommitterEvent Processor #0]
> > org.apache.hadoop.mapreduce.v2.app.commit.CommitterEventHandler:
> Processing
> > the event EventType: JOB_ABORT
> > 2016-01-19 20:04:28,432 INFO [AsyncDispatcher event handler]
> > org.apache.hadoop.mapreduce.v2.app.job.impl.JobImpl:
> > job_1453244277886_0001Job Transitioned from FAIL_ABORT to FAILED
> > 2016-01-19 20:04:28,433 INFO [Thread-61]
> > org.apache.hadoop.mapreduce.v2.app.MRAppMaster: We are finishing cleanly
> so
> > this is the last retry
> > 2016-01-19 20:04:28,433 INFO [Thread-61]
> > org.apache.hadoop.mapreduce.v2.app.MRAppMaster: Notify RMCommunicator
> > isAMLastRetry: true
> > 2016-01-19 20:04:28,433 INFO [Thread-61]
> > org.apache.hadoop.mapreduce.v2.app.rm.RMContainerAllocator:
> RMCommunicator
> > notified that shouldUnregistered is: true
> > 2016-01-19 20:04:28,433 INFO [Thread-61]
> > org.apache.hadoop.mapreduce.v2.app.MRAppMaster: Notify JHEH
> isAMLastRetry:
> > true
> > 2016-01-19 20:04:28,433 INFO [Thread-61]
> > org.apache.hadoop.mapreduce.jobhistory.JobHistoryEventHandler:
> > JobHistoryEventHandler notified that forceJobCompletion is true
> > 2016-01-19 20:04:28,434 INFO [Thread-61]
> > org.apache.hadoop.mapreduce.v2.app.MRAppMaster: Calling stop for all the
> > services
> > 2016-01-19 20:04:28,435 INFO [Thread-61]
> > org.apache.hadoop.mapreduce.jobhistory.JobHistoryEventHandler: Stopping
> > JobHistoryEventHandler. Size of the outstanding queue size is 0
> > 2016-01-19 20:04:29,362 INFO [RMCommunicator Allocator]
> > org.apache.hadoop.mapreduce.v2.app.rm.RMContainerAllocator: Before
> > Scheduling: PendingReds:1 ScheduledMaps:0 ScheduledReds:0 AssignedMaps:0
> > AssignedReds:0 CompletedMaps:1 CompletedReds:0 ContAlloc:4 ContRel:0
> > HostLocal:1 RackLocal:0
> > 2016-01-19 20:04:29,362 INFO [eventHandlingThread]
> > org.apache.hadoop.mapreduce.jobhistory.JobHistoryEventHandler: Copying
> >
> hdfs://hdnode01:54310/tmp/hadoop-yarn/staging/hduser/.staging/job_1453244277886_0001/job_1453244277886_0001_1.jhist
> > to
> >
> hdfs://hdnode01:54310/tmp/hadoop-yarn/staging/history/done_intermediate/hduser/job_1453244277886_0001-1453244648033-hduser-word+count-1453244668381-0-0-FAILED-default-1453244654166.jhist_tmp
> > 2016-01-19 20:04:29,366 INFO [RMCommunicator Allocator]
> > org.apache.hadoop.mapreduce.v2.app.rm.RMContainerAllocator: Recalculating
> > schedule, headroom=768
> > 2016-01-19 20:04:29,366 INFO [RMCommunicator Allocator]
> > org.apache.hadoop.mapreduce.v2.app.rm.RMContainerAllocator: Reduce slow
> > start threshold reached. Scheduling reduces.
> > 2016-01-19 20:04:29,366 INFO [RMCommunicator Allocator]
> > org.apache.hadoop.mapreduce.v2.app.rm.RMContainerAllocator: All maps
> > assigned. Ramping up all remaining reduces:1
> > 2016-01-19 20:04:29,366 INFO [RMCommunicator Allocator]
> > org.apache.hadoop.mapreduce.v2.app.rm.RMContainerAllocator: After
> > Scheduling: PendingReds:0 ScheduledMaps:0 ScheduledReds:1 AssignedMaps:0
> > AssignedReds:0 CompletedMaps:1 CompletedReds:0 ContAlloc:4 ContRel:0
> > HostLocal:1 RackLocal:0
> > 2016-01-19 20:04:29,544 INFO [eventHandlingThread]
> > org.apache.hadoop.mapreduce.jobhistory.JobHistoryEventHandler: Copied to
> > done location:
> >
> hdfs://hdnode01:54310/tmp/hadoop-yarn/staging/history/done_intermediate/hduser/job_1453244277886_0001-1453244648033-hduser-word+count-1453244668381-0-0-FAILED-default-1453244654166.jhist_tmp
> > 2016-01-19 20:04:29,598 INFO [eventHandlingThread]
> > org.apache.hadoop.mapreduce.jobhistory.JobHistoryEventHandler: Copying
> >
> hdfs://hdnode01:54310/tmp/hadoop-yarn/staging/hduser/.staging/job_1453244277886_0001/job_1453244277886_0001_1_conf.xml
> > to
> >
> hdfs://hdnode01:54310/tmp/hadoop-yarn/staging/history/done_intermediate/hduser/job_1453244277886_0001_conf.xml_tmp
> > 2016-01-19 20:04:29,801 INFO [eventHandlingThread]
> > org.apache.hadoop.mapreduce.jobhistory.JobHistoryEventHandler: Copied to
> > done location:
> >
> hdfs://hdnode01:54310/tmp/hadoop-yarn/staging/history/done_intermediate/hduser/job_1453244277886_0001_conf.xml_tmp
> > 2016-01-19 20:04:29,907 INFO [eventHandlingThread]
> > org.apache.hadoop.mapreduce.jobhistory.JobHistoryEventHandler: Moved tmp
> to
> > done:
> >
> hdfs://hdnode01:54310/tmp/hadoop-yarn/staging/history/done_intermediate/hduser/job_1453244277886_0001.summary_tmp
> > to
> >
> hdfs://hdnode01:54310/tmp/hadoop-yarn/staging/history/done_intermediate/hduser/job_1453244277886_0001.summary
> > 2016-01-19 20:04:30,008 INFO [eventHandlingThread]
> > org.apache.hadoop.mapreduce.jobhistory.JobHistoryEventHandler: Moved tmp
> to
> > done:
> >
> hdfs://hdnode01:54310/tmp/hadoop-yarn/staging/history/done_intermediate/hduser/job_1453244277886_0001_conf.xml_tmp
> > to
> >
> hdfs://hdnode01:54310/tmp/hadoop-yarn/staging/history/done_intermediate/hduser/job_1453244277886_0001_conf.xml
> > 2016-01-19 20:04:30,070 INFO [eventHandlingThread]
> > org.apache.hadoop.mapreduce.jobhistory.JobHistoryEventHandler: Moved tmp
> to
> > done:
> >
> hdfs://hdnode01:54310/tmp/hadoop-yarn/staging/history/done_intermediate/hduser/job_1453244277886_0001-1453244648033-hduser-word+count-1453244668381-0-0-FAILED-default-1453244654166.jhist_tmp
> > to
> >
> hdfs://hdnode01:54310/tmp/hadoop-yarn/staging/history/done_intermediate/hduser/job_1453244277886_0001-1453244648033-hduser-word+count-1453244668381-0-0-FAILED-default-1453244654166.jhist
> > 2016-01-19 20:04:30,071 INFO [Thread-61]
> > org.apache.hadoop.mapreduce.jobhistory.JobHistoryEventHandler: Stopped
> > JobHistoryEventHandler. super.stop()
> > 2016-01-19 20:04:30,078 INFO [Thread-61]
> > org.apache.hadoop.mapreduce.v2.app.rm.RMContainerAllocator: Setting job
> > diagnostics to Task failed task_1453244277886_0001_m_000000
> > Job failed as tasks failed. failedMaps:1 failedReduces:0
> >
> > 2016-01-19 20:04:30,080 INFO [Thread-61]
> > org.apache.hadoop.mapreduce.v2.app.rm.RMContainerAllocator: History url
> is
> > http://localhost:19888/jobhistory/job/job_1453244277886_0001
> > 2016-01-19 20:04:30,094 INFO [Thread-61]
> > org.apache.hadoop.mapreduce.v2.app.rm.RMContainerAllocator: Waiting for
> > application to be successfully unregistered.
> > 2016-01-19 20:04:31,099 INFO [Thread-61]
> > org.apache.hadoop.mapreduce.v2.app.rm.RMContainerAllocator: Final Stats:
> > PendingReds:0 ScheduledMaps:0 ScheduledReds:1 AssignedMaps:0
> AssignedReds:0
> > CompletedMaps:1 CompletedReds:0 ContAlloc:4 ContRel:0 HostLocal:1
> > RackLocal:0
> > 2016-01-19 20:04:31,104 INFO [Thread-61]
> > org.apache.hadoop.mapreduce.v2.app.MRAppMaster: Deleting staging
> directory
> > hdfs://hdnode01:54310
> > /tmp/hadoop-yarn/staging/hduser/.staging/job_1453244277886_0001
> > 2016-01-19 20:04:31,133 INFO [Thread-61] org.apache.hadoop.ipc.Server:
> > Stopping server on 45584
> > 2016-01-19 20:04:31,135 INFO [IPC Server listener on 45584]
> > org.apache.hadoop.ipc.Server: Stopping IPC Server listener on 45584
> > 2016-01-19 20:04:31,135 INFO [IPC Server Responder]
> > org.apache.hadoop.ipc.Server: Stopping IPC Server Responder
> > 2016-01-19 20:04:31,136 INFO [TaskHeartbeatHandler PingChecker]
> > org.apache.hadoop.mapreduce.v2.app.TaskHeartbeatHandler:
> > TaskHeartbeatHandler thread interrupted
> >
> >
> > Jps results, I believe that everything is ok, right?:
> > 21267 DataNode
> > 21609 ResourceManager
> > 21974 JobHistoryServer
> > 21735 NodeManager
> > 24546 Jps
> > 16532 org.eclipse.equinox.launcher_1.3.0.v20140415-2008.jar
> > 21121 NameNode
> > 22098 QuorumPeerMain
> > 21456 SecondaryNameNode
> >
> >
>

Re: Can't run hadoop examples with YARN Single node cluster

Posted by José Luis Larroque <la...@gmail.com>.
Thanks Namikaze for keeping at it, don't give up!! :D

- I have these lines in *$HOME/.bashrc*:


export HADOOP_PREFIX=/usr/local/hadoop

# Others variables

export HADOOP_COMMON_HOME=${HADOOP_PREFIX}

export HADOOP_MAPRED_HOME=${HADOOP_PREFIX}

export HADOOP_HDFS_HOME=${HADOOP_PREFIX}

export HADOOP_YARN_HOME=${HADOOP_PREFIX}


  - In *hadoop-env.sh* I have:

export HADOOP_CONF_DIR=${HADOOP_CONF_DIR:-"/usr/local/hadoop/etc/hadoop"}


  - I read that SO question and all the answers to it. The only useful answer,
in my opinion, was to check the yarn classpath. The following line appears
three times in it:

/usr/local/hadoop/etc/hadoop:
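
(For reference, this is roughly how I check it; just the stock yarn launcher
script, nothing custom:)

    # print the classpath YARN builds for containers
    $HADOOP_HOME/bin/yarn classpath
    # count how often the conf dir shows up in it
    $HADOOP_HOME/bin/yarn classpath | tr ':' '\n' | grep -c '/usr/local/hadoop/etc/hadoop'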


I put yarn.application.classpath in yarn-site.xml, because I don't know any
other way to fix it, using the default value recommended in this
<https://hadoop.apache.org/docs/r2.4.1/hadoop-yarn/hadoop-yarn-common/yarn-default.xml>
(see yarn.application.classpath):


$HADOOP_CONF_DIR, $HADOOP_COMMON_HOME/share/hadoop/common/*,
$HADOOP_COMMON_HOME/share/hadoop/common/lib/*,
$HADOOP_HDFS_HOME/share/hadoop/hdfs/*,
$HADOOP_HDFS_HOME/share/hadoop/hdfs/lib/*,
$HADOOP_YARN_HOME/share/hadoop/yarn/*,
$HADOOP_YARN_HOME/share/hadoop/yarn/lib/*


But the classpath remains the same, and I can't find any other way to change
it. Maybe this is the problem?


 - yarn.log-aggregation-enable was always set to true, so the container logs
should be getting aggregated.
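
In case the exact invocation matters, this is roughly how I pull the
aggregated container logs (the standard yarn logs command, with the
application id taken from the AM log above):

    $HADOOP_HOME/bin/yarn logs -applicationId application_1453244277886_0001

As for the *datanode logs*, I couldn't find anything in them; here they are: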

2016-01-25 21:13:07,006 INFO
org.apache.hadoop.hdfs.server.datanode.DataNode: STARTUP_MSG:
/************************************************************
STARTUP_MSG: Starting DataNode
STARTUP_MSG:   host = jose-ubuntu/127.0.0.1
STARTUP_MSG:   args = []
STARTUP_MSG:   version = 2.4.0
STARTUP_MSG:   classpath =
/usr/local/hadoop/etc/hadoop:/usr/local/hadoop/share/hadoop/common/lib/jackson-jaxrs-1.8.8.jar:/usr/local/hadoop/share/hadoop/common/lib/jersey-json-1.9.jar:/usr/local/hadoop/share/hadoop/common/lib/commons-compress-1.4.1.jar:/usr/local/hadoop/share/hadoop/common/lib/httpcore-4.2.5.jar:/usr/local/hadoop/share/hadoop/common/lib/httpclient-4.2.5.jar:/usr/local/hadoop/share/hadoop/common/lib/commons-beanutils-1.7.0.jar:/usr/local/hadoop/share/hadoop/common/lib/jettison-1.1.jar:/usr/local/hadoop/share/hadoop/common/lib/jaxb-impl-2.2.3-1.jar:/usr/local/hadoop/share/hadoop/common/lib/activation-1.1.jar:/usr/local/hadoop/share/hadoop/common/lib/jackson-core-asl-1.8.8.jar:/usr/local/hadoop/share/hadoop/common/lib/commons-net-3.1.jar:/usr/local/hadoop/share/hadoop/common/lib/commons-cli-1.2.jar:/usr/local/hadoop/share/hadoop/common/lib/commons-codec-1.4.jar:/usr/local/hadoop/share/hadoop/common/lib/junit-4.8.2.jar:/usr/local/hadoop/share/hadoop/common/lib/paranamer-2.3.jar:/usr/local/hadoop/share/hadoop/common/lib/jasper-compiler-5.5.23.jar:/usr/local/hadoop/share/hadoop/common/lib/protobuf-java-2.5.0.jar:/usr/local/hadoop/share/hadoop/common/lib/commons-digester-1.8.jar:/usr/local/hadoop/share/hadoop/common/lib/jets3t-0.9.0.jar:/usr/local/hadoop/share/hadoop/common/lib/jackson-xc-1.8.8.jar:/usr/local/hadoop/share/hadoop/common/lib/jersey-server-1.9.jar:/usr/local/hadoop/share/hadoop/common/lib/jetty-util-6.1.26.jar:/usr/local/hadoop/share/hadoop/common/lib/mockito-all-1.8.5.jar:/usr/local/hadoop/share/hadoop/common/lib/xmlenc-0.52.jar:/usr/local/hadoop/share/hadoop/common/lib/slf4j-api-1.7.5.jar:/usr/local/hadoop/share/hadoop/common/lib/stax-api-1.0-2.jar:/usr/local/hadoop/share/hadoop/common/lib/avro-1.7.4.jar:/usr/local/hadoop/share/hadoop/common/lib/hadoop-annotations-2.4.0.jar:/usr/local/hadoop/share/hadoop/common/lib/hadoop-auth-2.4.0.jar:/usr/local/hadoop/share/hadoop/common/lib/netty-3.6.2.Final.jar:/usr/local/hadoop/share/hadoop/common/lib/jasper-runtime-5.5.23.jar:/usr/local/hadoop/share/hadoop/common/lib/slf4j-log4j12-1.7.5.jar:/usr/local/hadoop/share/hadoop/common/lib/commons-logging-1.1.3.jar:/usr/local/hadoop/share/hadoop/common/lib/guava-11.0.2.jar:/usr/local/hadoop/share/hadoop/common/lib/jackson-mapper-asl-1.8.8.jar:/usr/local/hadoop/share/hadoop/common/lib/jsp-api-2.1.jar:/usr/local/hadoop/share/hadoop/common/lib/commons-configuration-1.6.jar:/usr/local/hadoop/share/hadoop/common/lib/snappy-java-1.0.4.1.jar:/usr/local/hadoop/share/hadoop/common/lib/log4j-1.2.17.jar:/usr/local/hadoop/share/hadoop/common/lib/jsch-0.1.42.jar:/usr/local/hadoop/share/hadoop/common/lib/commons-io-2.4.jar:/usr/local/hadoop/share/hadoop/common/lib/commons-beanutils-core-1.8.0.jar:/usr/local/hadoop/share/hadoop/common/lib/jsr305-1.3.9.jar:/usr/local/hadoop/share/hadoop/common/lib/jersey-core-1.9.jar:/usr/local/hadoop/share/hadoop/common/lib/asm-3.2.jar:/usr/local/hadoop/share/hadoop/common/lib/servlet-api-2.5.jar:/usr/local/hadoop/share/hadoop/common/lib/jaxb-api-2.2.2.jar:/usr/local/hadoop/share/hadoop/common/lib/jetty-6.1.26.jar:/usr/local/hadoop/share/hadoop/common/lib/commons-lang-2.6.jar:/usr/local/hadoop/share/hadoop/common/lib/java-xmlbuilder-0.4.jar:/usr/local/hadoop/share/hadoop/common/lib/commons-collections-3.2.1.jar:/usr/local/hadoop/share/hadoop/common/lib/commons-el-1.0.jar:/usr/local/hadoop/share/hadoop/common/lib/zookeeper-3.4.5.jar:/usr/local/hadoop/share/hadoop/common/lib/commons-httpclient-3.1.jar:/usr/local/hadoop/share/hadoop/common/lib/xz-1.0.jar:/usr/local/hadoop/share/hadoop/common/l
ib/commons-math3-3.1.1.jar:/usr/local/hadoop/share/hadoop/common/hadoop-common-2.4.0-tests.jar:/usr/local/hadoop/share/hadoop/common/hadoop-nfs-2.4.0.jar:/usr/local/hadoop/share/hadoop/common/hadoop-common-2.4.0.jar:/usr/local/hadoop/share/hadoop/hdfs:/usr/local/hadoop/share/hadoop/hdfs/lib/jackson-core-asl-1.8.8.jar:/usr/local/hadoop/share/hadoop/hdfs/lib/commons-cli-1.2.jar:/usr/local/hadoop/share/hadoop/hdfs/lib/commons-codec-1.4.jar:/usr/local/hadoop/share/hadoop/hdfs/lib/protobuf-java-2.5.0.jar:/usr/local/hadoop/share/hadoop/hdfs/lib/jersey-server-1.9.jar:/usr/local/hadoop/share/hadoop/hdfs/lib/jetty-util-6.1.26.jar:/usr/local/hadoop/share/hadoop/hdfs/lib/xmlenc-0.52.jar:/usr/local/hadoop/share/hadoop/hdfs/lib/commons-daemon-1.0.13.jar:/usr/local/hadoop/share/hadoop/hdfs/lib/netty-3.6.2.Final.jar:/usr/local/hadoop/share/hadoop/hdfs/lib/jasper-runtime-5.5.23.jar:/usr/local/hadoop/share/hadoop/hdfs/lib/commons-logging-1.1.3.jar:/usr/local/hadoop/share/hadoop/hdfs/lib/guava-11.0.2.jar:/usr/local/hadoop/share/hadoop/hdfs/lib/jackson-mapper-asl-1.8.8.jar:/usr/local/hadoop/share/hadoop/hdfs/lib/jsp-api-2.1.jar:/usr/local/hadoop/share/hadoop/hdfs/lib/log4j-1.2.17.jar:/usr/local/hadoop/share/hadoop/hdfs/lib/commons-io-2.4.jar:/usr/local/hadoop/share/hadoop/hdfs/lib/jsr305-1.3.9.jar:/usr/local/hadoop/share/hadoop/hdfs/lib/jersey-core-1.9.jar:/usr/local/hadoop/share/hadoop/hdfs/lib/asm-3.2.jar:/usr/local/hadoop/share/hadoop/hdfs/lib/servlet-api-2.5.jar:/usr/local/hadoop/share/hadoop/hdfs/lib/jetty-6.1.26.jar:/usr/local/hadoop/share/hadoop/hdfs/lib/commons-lang-2.6.jar:/usr/local/hadoop/share/hadoop/hdfs/lib/commons-el-1.0.jar:/usr/local/hadoop/share/hadoop/hdfs/hadoop-hdfs-2.4.0-tests.jar:/usr/local/hadoop/share/hadoop/hdfs/hadoop-hdfs-2.4.0.jar:/usr/local/hadoop/share/hadoop/hdfs/hadoop-hdfs-nfs-2.4.0.jar:/usr/local/hadoop/share/hadoop/yarn/lib/jackson-jaxrs-1.8.8.jar:/usr/local/hadoop/share/hadoop/yarn/lib/jersey-json-1.9.jar:/usr/local/hadoop/share/hadoop/yarn/lib/commons-compress-1.4.1.jar:/usr/local/hadoop/share/hadoop/yarn/lib/jersey-client-1.9.jar:/usr/local/hadoop/share/hadoop/yarn/lib/guice-servlet-3.0.jar:/usr/local/hadoop/share/hadoop/yarn/lib/giraph-examples-1.1.0-for-hadoop-2.4.0-jar-with-dependencies.jar:/usr/local/hadoop/share/hadoop/yarn/lib/hadoop-mapreduce-examples-2.4.0.jar:/usr/local/hadoop/share/hadoop/yarn/lib/jettison-1.1.jar:/usr/local/hadoop/share/hadoop/yarn/lib/jaxb-impl-2.2.3-1.jar:/usr/local/hadoop/share/hadoop/yarn/lib/activation-1.1.jar:/usr/local/hadoop/share/hadoop/yarn/lib/jackson-core-asl-1.8.8.jar:/usr/local/hadoop/share/hadoop/yarn/lib/commons-cli-1.2.jar:/usr/local/hadoop/share/hadoop/yarn/lib/commons-codec-1.4.jar:/usr/local/hadoop/share/hadoop/yarn/lib/aopalliance-1.0.jar:/usr/local/hadoop/share/hadoop/yarn/lib/leveldbjni-all-1.8.jar:/usr/local/hadoop/share/hadoop/yarn/lib/protobuf-java-2.5.0.jar:/usr/local/hadoop/share/hadoop/yarn/lib/jackson-xc-1.8.8.jar:/usr/local/hadoop/share/hadoop/yarn/lib/jersey-server-1.9.jar:/usr/local/hadoop/share/hadoop/yarn/lib/jetty-util-6.1.26.jar:/usr/local/hadoop/share/hadoop/yarn/lib/jline-0.9.94.jar:/usr/local/hadoop/share/hadoop/yarn/lib/stax-api-1.0-2.jar:/usr/local/hadoop/share/hadoop/yarn/lib/guice-3.0.jar:/usr/local/hadoop/share/hadoop/yarn/lib/commons-logging-1.1.3.jar:/usr/local/hadoop/share/hadoop/yarn/lib/guava-11.0.2.jar:/usr/local/hadoop/share/hadoop/yarn/lib/jackson-mapper-asl-1.8.8.jar:/usr/local/hadoop/share/hadoop/yarn/lib/log4j-1.2.17.jar:/usr/local/hadoop/share/hadoop/yarn/lib/commons-io-2.4.jar:/usr/loca
l/hadoop/share/hadoop/yarn/lib/jsr305-1.3.9.jar:/usr/local/hadoop/share/hadoop/yarn/lib/jersey-core-1.9.jar:/usr/local/hadoop/share/hadoop/yarn/lib/asm-3.2.jar:/usr/local/hadoop/share/hadoop/yarn/lib/servlet-api-2.5.jar:/usr/local/hadoop/share/hadoop/yarn/lib/jaxb-api-2.2.2.jar:/usr/local/hadoop/share/hadoop/yarn/lib/jetty-6.1.26.jar:/usr/local/hadoop/share/hadoop/yarn/lib/commons-lang-2.6.jar:/usr/local/hadoop/share/hadoop/yarn/lib/commons-collections-3.2.1.jar:/usr/local/hadoop/share/hadoop/yarn/lib/jersey-guice-1.9.jar:/usr/local/hadoop/share/hadoop/yarn/lib/zookeeper-3.4.5.jar:/usr/local/hadoop/share/hadoop/yarn/lib/commons-httpclient-3.1.jar:/usr/local/hadoop/share/hadoop/yarn/lib/javax.inject-1.jar:/usr/local/hadoop/share/hadoop/yarn/lib/xz-1.0.jar:/usr/local/hadoop/share/hadoop/yarn/hadoop-yarn-server-common-2.4.0.jar:/usr/local/hadoop/share/hadoop/yarn/hadoop-yarn-client-2.4.0.jar:/usr/local/hadoop/share/hadoop/yarn/hadoop-yarn-api-2.4.0.jar:/usr/local/hadoop/share/hadoop/yarn/hadoop-yarn-applications-unmanaged-am-launcher-2.4.0.jar:/usr/local/hadoop/share/hadoop/yarn/hadoop-yarn-server-nodemanager-2.4.0.jar:/usr/local/hadoop/share/hadoop/yarn/hadoop-yarn-server-tests-2.4.0.jar:/usr/local/hadoop/share/hadoop/yarn/hadoop-yarn-server-resourcemanager-2.4.0.jar:/usr/local/hadoop/share/hadoop/yarn/hadoop-yarn-server-web-proxy-2.4.0.jar:/usr/local/hadoop/share/hadoop/yarn/hadoop-yarn-applications-distributedshell-2.4.0.jar:/usr/local/hadoop/share/hadoop/yarn/hadoop-yarn-server-applicationhistoryservice-2.4.0.jar:/usr/local/hadoop/share/hadoop/yarn/hadoop-yarn-common-2.4.0.jar:/usr/local/hadoop/share/hadoop/mapreduce/lib/commons-compress-1.4.1.jar:/usr/local/hadoop/share/hadoop/mapreduce/lib/guice-servlet-3.0.jar:/usr/local/hadoop/share/hadoop/mapreduce/lib/jackson-core-asl-1.8.8.jar:/usr/local/hadoop/share/hadoop/mapreduce/lib/hamcrest-core-1.1.jar:/usr/local/hadoop/share/hadoop/mapreduce/lib/paranamer-2.3.jar:/usr/local/hadoop/share/hadoop/mapreduce/lib/aopalliance-1.0.jar:/usr/local/hadoop/share/hadoop/mapreduce/lib/protobuf-java-2.5.0.jar:/usr/local/hadoop/share/hadoop/mapreduce/lib/jersey-server-1.9.jar:/usr/local/hadoop/share/hadoop/mapreduce/lib/avro-1.7.4.jar:/usr/local/hadoop/share/hadoop/mapreduce/lib/hadoop-annotations-2.4.0.jar:/usr/local/hadoop/share/hadoop/mapreduce/lib/guice-3.0.jar:/usr/local/hadoop/share/hadoop/mapreduce/lib/junit-4.10.jar:/usr/local/hadoop/share/hadoop/mapreduce/lib/netty-3.6.2.Final.jar:/usr/local/hadoop/share/hadoop/mapreduce/lib/jackson-mapper-asl-1.8.8.jar:/usr/local/hadoop/share/hadoop/mapreduce/lib/snappy-java-1.0.4.1.jar:/usr/local/hadoop/share/hadoop/mapreduce/lib/log4j-1.2.17.jar:/usr/local/hadoop/share/hadoop/mapreduce/lib/commons-io-2.4.jar:/usr/local/hadoop/share/hadoop/mapreduce/lib/jersey-core-1.9.jar:/usr/local/hadoop/share/hadoop/mapreduce/lib/asm-3.2.jar:/usr/local/hadoop/share/hadoop/mapreduce/lib/jersey-guice-1.9.jar:/usr/local/hadoop/share/hadoop/mapreduce/lib/javax.inject-1.jar:/usr/local/hadoop/share/hadoop/mapreduce/lib/xz-1.0.jar:/usr/local/hadoop/share/hadoop/mapreduce/giraph-examples-1.1.0-for-hadoop-2.4.0-jar-with-dependencies.jar:/usr/local/hadoop/share/hadoop/mapreduce/hadoop-mapreduce-examples-2.4.0.jar:/usr/local/hadoop/share/hadoop/mapreduce/procesamiento_salida_grafo_caminos_navegacionales-0.0.1.jar:/usr/local/hadoop/share/hadoop/mapreduce/zookeeper-3.4.6.jar:/usr/local/hadoop/share/hadoop/mapreduce/hadoop-mapreduce-client-shuffle-2.4.0.jar:/usr/local/hadoop/share/hadoop/mapreduce/procesamiento_grafo_wikiquote-0.0.1.jar:/u
sr/local/hadoop/share/hadoop/mapreduce/hadoop-mapreduce-client-app-2.4.0.jar:/usr/local/hadoop/share/hadoop/mapreduce/hadoop-mapreduce-client-common-2.4.0.jar:/usr/local/hadoop/share/hadoop/mapreduce/hadoop-mapreduce-client-core-2.4.0.jar:/usr/local/hadoop/share/hadoop/mapreduce/generacion_grafo_wikiquote-0.0.1-SNAPSHOT.jar:/usr/local/hadoop/share/hadoop/mapreduce/hadoop-mapreduce-client-hs-plugins-2.4.0.jar:/usr/local/hadoop/share/hadoop/mapreduce/hadoop-mapreduce-client-jobclient-2.4.0.jar:/usr/local/hadoop/share/hadoop/mapreduce/hadoop-mapreduce-client-hs-2.4.0.jar:/usr/local/hadoop/share/hadoop/mapreduce/hadoop-mapreduce-client-jobclient-2.4.0-tests.jar:/usr/local/hadoop/contrib/capacity-scheduler/*.jar:/usr/local/hadoop/contrib/capacity-scheduler/*.jar:/usr/local/hadoop/contrib/capacity-scheduler/*.jar
STARTUP_MSG:   build = http://svn.apache.org/repos/asf/hadoop/common
-r 1583262; compiled by 'jenkins' on 2014-03-31T08:29Z
STARTUP_MSG:   java = 1.7.0_79
************************************************************/
2016-01-25 21:13:07,015 INFO
org.apache.hadoop.hdfs.server.datanode.DataNode: registered UNIX
signal handlers for [TERM, HUP, INT]
2016-01-25 21:13:07,188 WARN
org.apache.hadoop.hdfs.server.common.Util: Path
/usr/local/hadoop/dfs/name/data should be specified as a URI in
configuration files. Please update hdfs configuration.
2016-01-25 21:13:07,356 WARN org.apache.hadoop.util.NativeCodeLoader:
Unable to load native-hadoop library for your platform... using
builtin-java classes where applicable
2016-01-25 21:13:07,648 INFO
org.apache.hadoop.metrics2.impl.MetricsConfig: loaded properties from
hadoop-metrics2.properties
2016-01-25 21:13:07,723 INFO
org.apache.hadoop.metrics2.impl.MetricsSystemImpl: Scheduled snapshot
period at 10 second(s).
2016-01-25 21:13:07,723 INFO
org.apache.hadoop.metrics2.impl.MetricsSystemImpl: DataNode metrics
system started
2016-01-25 21:13:07,727 INFO
org.apache.hadoop.hdfs.server.datanode.DataNode: Configured hostname
is localhost
2016-01-25 21:13:07,728 INFO
org.apache.hadoop.hdfs.server.datanode.DataNode: Starting DataNode
with maxLockedMemory = 0
2016-01-25 21:13:07,757 INFO
org.apache.hadoop.hdfs.server.datanode.DataNode: Opened streaming
server at /0.0.0.0:50010
2016-01-25 21:13:07,760 INFO
org.apache.hadoop.hdfs.server.datanode.DataNode: Balancing bandwith is
1048576 bytes/s
2016-01-25 21:13:07,839 INFO org.mortbay.log: Logging to
org.slf4j.impl.Log4jLoggerAdapter(org.mortbay.log) via
org.mortbay.log.Slf4jLog
2016-01-25 21:13:07,843 INFO org.apache.hadoop.http.HttpRequestLog:
Http request log for http.requests.datanode is not defined
2016-01-25 21:13:07,853 INFO org.apache.hadoop.http.HttpServer2: Added
global filter 'safety'
(class=org.apache.hadoop.http.HttpServer2$QuotingInputFilter)
2016-01-25 21:13:07,856 INFO org.apache.hadoop.http.HttpServer2: Added
filter static_user_filter
(class=org.apache.hadoop.http.lib.StaticUserWebFilter$StaticUserFilter)
to context datanode
2016-01-25 21:13:07,856 INFO org.apache.hadoop.http.HttpServer2: Added
filter static_user_filter
(class=org.apache.hadoop.http.lib.StaticUserWebFilter$StaticUserFilter)
to context logs
2016-01-25 21:13:07,856 INFO org.apache.hadoop.http.HttpServer2: Added
filter static_user_filter
(class=org.apache.hadoop.http.lib.StaticUserWebFilter$StaticUserFilter)
to context static
2016-01-25 21:13:07,872 INFO org.apache.hadoop.http.HttpServer2:
addJerseyResourcePackage:
packageName=org.apache.hadoop.hdfs.server.datanode.web.resources;org.apache.hadoop.hdfs.web.resources,
pathSpec=/webhdfs/v1/*
2016-01-25 21:13:07,875 INFO org.apache.hadoop.http.HttpServer2: Jetty
bound to port 50075
2016-01-25 21:13:07,875 INFO org.mortbay.log: jetty-6.1.26
2016-01-25 21:13:08,137 INFO org.mortbay.log: Started
SelectChannelConnector@0.0.0.0:50075
2016-01-25 21:13:08,225 INFO org.apache.hadoop.ipc.CallQueueManager:
Using callQueue class java.util.concurrent.LinkedBlockingQueue
2016-01-25 21:13:08,239 INFO org.apache.hadoop.ipc.Server: Starting
Socket Reader #1 for port 50020
2016-01-25 21:13:08,288 INFO
org.apache.hadoop.hdfs.server.datanode.DataNode: Opened IPC server at
/0.0.0.0:50020
2016-01-25 21:13:08,300 INFO
org.apache.hadoop.hdfs.server.datanode.DataNode: Refresh request
received for nameservices: null
2016-01-25 21:13:08,316 INFO
org.apache.hadoop.hdfs.server.datanode.DataNode: Starting
BPOfferServices for nameservices: <default>
2016-01-25 21:13:08,321 WARN
org.apache.hadoop.hdfs.server.common.Util: Path
/usr/local/hadoop/dfs/name/data should be specified as a URI in
configuration files. Please update hdfs configuration.
2016-01-25 21:13:08,325 INFO
org.apache.hadoop.hdfs.server.datanode.DataNode: Block pool
<registering> (Datanode Uuid unassigned) service to
hdnode01/192.168.0.10:54310 starting to offer service
2016-01-25 21:13:08,326 INFO org.apache.hadoop.ipc.Server: IPC Server
Responder: starting
2016-01-25 21:13:08,326 INFO org.apache.hadoop.ipc.Server: IPC Server
listener on 50020: starting
2016-01-25 21:13:08,719 INFO
org.apache.hadoop.hdfs.server.common.Storage: Data-node version: -55
and name-node layout version: -56
2016-01-25 21:13:08,828 INFO
org.apache.hadoop.hdfs.server.common.Storage: Lock on
/usr/local/hadoop/dfs/name/data/in_use.lock acquired by nodename
10365@jose-ubuntu
2016-01-25 21:13:08,833 INFO
org.apache.hadoop.hdfs.server.common.Storage: Storage directory
/usr/local/hadoop/dfs/name/data is not formatted
2016-01-25 21:13:08,833 INFO
org.apache.hadoop.hdfs.server.common.Storage: Formatting ...
2016-01-25 21:13:09,017 INFO
org.apache.hadoop.hdfs.server.common.Storage: Analyzing storage
directories for bpid BP-216406264-127.0.0.1-1453767164845
2016-01-25 21:13:09,017 INFO
org.apache.hadoop.hdfs.server.common.Storage: Locking is disabled
2016-01-25 21:13:09,018 INFO
org.apache.hadoop.hdfs.server.common.Storage: Storage directory
/usr/local/hadoop/dfs/name/data/current/BP-216406264-127.0.0.1-1453767164845
is not formatted.
2016-01-25 21:13:09,018 INFO
org.apache.hadoop.hdfs.server.common.Storage: Formatting ...
2016-01-25 21:13:09,018 INFO
org.apache.hadoop.hdfs.server.common.Storage: Formatting block pool
BP-216406264-127.0.0.1-1453767164845 directory
/usr/local/hadoop/dfs/name/data/current/BP-216406264-127.0.0.1-1453767164845/current
2016-01-25 21:13:09,072 INFO
org.apache.hadoop.hdfs.server.common.Storage: Restored 0 block files
from trash.
2016-01-25 21:13:09,198 INFO
org.apache.hadoop.hdfs.server.datanode.DataNode: Setting up storage:
nsid=1479061672;bpid=BP-216406264-127.0.0.1-1453767164845;lv=-55;nsInfo=lv=-56;cid=CID-8fa0e75b-6942-452a-a4e6-8cd0c24de652;nsid=1479061672;c=0;bpid=BP-216406264-127.0.0.1-1453767164845;dnuuid=null
2016-01-25 21:13:09,248 INFO
org.apache.hadoop.hdfs.server.datanode.DataNode: Generated and
persisted new Datanode UUID 6b4236c8-2183-49ba-84d7-a273298ba37a
2016-01-25 21:13:09,268 WARN
org.apache.hadoop.hdfs.server.common.Util: Path
/usr/local/hadoop/dfs/name/data should be specified as a URI in
configuration files. Please update hdfs configuration.
2016-01-25 21:13:09,270 INFO
org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsDatasetImpl:
Added volume - /usr/local/hadoop/dfs/name/data/current, StorageType:
DISK
2016-01-25 21:13:09,279 INFO
org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsDatasetImpl:
Registered FSDatasetState MBean
2016-01-25 21:13:09,282 INFO
org.apache.hadoop.hdfs.server.datanode.DirectoryScanner: Periodic
Directory Tree Verification scan starting at 1453784080282 with
interval 21600000
2016-01-25 21:13:09,283 INFO
org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsDatasetImpl:
Adding block pool BP-216406264-127.0.0.1-1453767164845
2016-01-25 21:13:09,284 INFO
org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsDatasetImpl:
Scanning block pool BP-216406264-127.0.0.1-1453767164845 on volume
/usr/local/hadoop/dfs/name/data/current...
2016-01-25 21:13:09,299 INFO
org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsDatasetImpl:
Time taken to scan block pool BP-216406264-127.0.0.1-1453767164845 on
/usr/local/hadoop/dfs/name/data/current: 15ms
2016-01-25 21:13:09,300 INFO
org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsDatasetImpl:
Total time to scan all replicas for block pool
BP-216406264-127.0.0.1-1453767164845: 17ms
2016-01-25 21:13:09,301 INFO
org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsDatasetImpl:
Adding replicas to map for block pool
BP-216406264-127.0.0.1-1453767164845 on volume
/usr/local/hadoop/dfs/name/data/current...
2016-01-25 21:13:09,301 INFO
org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsDatasetImpl:
Time to add replicas to map for block pool
BP-216406264-127.0.0.1-1453767164845 on volume
/usr/local/hadoop/dfs/name/data/current: 0ms
2016-01-25 21:13:09,301 INFO
org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsDatasetImpl:
Total time to add all replicas to map: 1ms
2016-01-25 21:13:09,305 INFO
org.apache.hadoop.hdfs.server.datanode.DataNode: Block pool
BP-216406264-127.0.0.1-1453767164845 (Datanode Uuid null) service to
hdnode01/192.168.0.10:54310 beginning handshake with NN
2016-01-25 21:13:09,355 INFO
org.apache.hadoop.hdfs.server.datanode.DataNode: Block pool Block pool
BP-216406264-127.0.0.1-1453767164845 (Datanode Uuid null) service to
hdnode01/192.168.0.10:54310 successfully registered with NN
2016-01-25 21:13:09,356 INFO
org.apache.hadoop.hdfs.server.datanode.DataNode: For namenode
hdnode01/192.168.0.10:54310 using DELETEREPORT_INTERVAL of 300000 msec
 BLOCKREPORT_INTERVAL of 21600000msec CACHEREPORT_INTERVAL of
10000msec Initial delay: 0msec; heartBeatInterval=3000
2016-01-25 21:13:09,444 INFO
org.apache.hadoop.hdfs.server.datanode.DataNode: Namenode Block pool
BP-216406264-127.0.0.1-1453767164845 (Datanode Uuid
6b4236c8-2183-49ba-84d7-a273298ba37a) service to
hdnode01/192.168.0.10:54310 trying to claim ACTIVE state with txid=1
2016-01-25 21:13:09,444 INFO
org.apache.hadoop.hdfs.server.datanode.DataNode: Acknowledging ACTIVE
Namenode Block pool BP-216406264-127.0.0.1-1453767164845 (Datanode
Uuid 6b4236c8-2183-49ba-84d7-a273298ba37a) service to
hdnode01/192.168.0.10:54310
2016-01-25 21:13:09,487 INFO
org.apache.hadoop.hdfs.server.datanode.DataNode: Sent 1 blockreports 0
blocks total. Took 1 msec to generate and 42 msecs for RPC and NN
processing.  Got back commands none
2016-01-25 21:13:09,492 INFO org.apache.hadoop.util.GSet: Computing
capacity for map BlockMap
2016-01-25 21:13:09,493 INFO org.apache.hadoop.util.GSet: VM type       = 64-bit
2016-01-25 21:13:09,494 INFO org.apache.hadoop.util.GSet: 0.5% max
memory 1.8 GB = 9.1 MB
2016-01-25 21:13:09,494 INFO org.apache.hadoop.util.GSet: capacity
 = 2^20 = 1048576 entries
2016-01-25 21:13:09,495 INFO
org.apache.hadoop.hdfs.server.datanode.BlockPoolSliceScanner: Periodic
Block Verification Scanner initialized with interval 504 hours for
block pool BP-216406264-127.0.0.1-1453767164845
2016-01-25 21:13:09,499 INFO
org.apache.hadoop.hdfs.server.datanode.DataBlockScanner: Added
bpid=BP-216406264-127.0.0.1-1453767164845 to blockPoolScannerMap, new
size=1
2016-01-25 21:13:32,355 INFO
org.apache.hadoop.hdfs.server.datanode.DataNode: Receiving
BP-216406264-127.0.0.1-1453767164845:blk_1073741825_1001 src:
/192.168.0.10:58649 dest: /192.168.0.10:50010
2016-01-25 21:13:32,482 INFO
org.apache.hadoop.hdfs.server.datanode.DataNode.clienttrace: src:
/192.168.0.10:58649, dest: /192.168.0.10:50010, bytes: 343691, op:
HDFS_WRITE, cliID: DFSClient_NONMAPREDUCE_538002429_1, offset: 0,
srvID: 6b4236c8-2183-49ba-84d7-a273298ba37a, blockid:
BP-216406264-127.0.0.1-1453767164845:blk_1073741825_1001, duration:
98632367
2016-01-25 21:13:32,482 INFO
org.apache.hadoop.hdfs.server.datanode.DataNode: PacketResponder:
BP-216406264-127.0.0.1-1453767164845:blk_1073741825_1001,
type=LAST_IN_PIPELINE, downstreams=0:[] terminating
2016-01-25 21:13:34,291 INFO
org.apache.hadoop.hdfs.server.datanode.BlockPoolSliceScanner:
Verification succeeded for
BP-216406264-127.0.0.1-1453767164845:blk_1073741825_1001
2016-01-25 21:14:10,176 INFO
org.apache.hadoop.hdfs.server.datanode.DataNode: Receiving
BP-216406264-127.0.0.1-1453767164845:blk_1073741826_1002 src:
/192.168.0.10:58663 dest: /192.168.0.10:50010
2016-01-25 21:14:10,220 INFO
org.apache.hadoop.hdfs.server.datanode.DataNode.clienttrace: src:
/192.168.0.10:58663, dest: /192.168.0.10:50010, bytes: 270263, op:
HDFS_WRITE, cliID: DFSClient_NONMAPREDUCE_342504113_1, offset: 0,
srvID: 6b4236c8-2183-49ba-84d7-a273298ba37a, blockid:
BP-216406264-127.0.0.1-1453767164845:blk_1073741826_1002, duration:
42378742
2016-01-25 21:14:10,221 INFO
org.apache.hadoop.hdfs.server.datanode.DataNode: PacketResponder:
BP-216406264-127.0.0.1-1453767164845:blk_1073741826_1002,
type=LAST_IN_PIPELINE, downstreams=0:[] terminating
2016-01-25 21:14:10,714 INFO
org.apache.hadoop.hdfs.server.datanode.DataNode: Receiving
BP-216406264-127.0.0.1-1453767164845:blk_1073741827_1003 src:
/192.168.0.10:58664 dest: /192.168.0.10:50010
2016-01-25 21:14:10,721 INFO
org.apache.hadoop.hdfs.server.datanode.DataNode.clienttrace: src:
/192.168.0.10:58664, dest: /192.168.0.10:50010, bytes: 121, op:
HDFS_WRITE, cliID: DFSClient_NONMAPREDUCE_342504113_1, offset: 0,
srvID: 6b4236c8-2183-49ba-84d7-a273298ba37a, blockid:
BP-216406264-127.0.0.1-1453767164845:blk_1073741827_1003, duration:
2656758
2016-01-25 21:14:10,721 INFO
org.apache.hadoop.hdfs.server.datanode.DataNode: PacketResponder:
BP-216406264-127.0.0.1-1453767164845:blk_1073741827_1003,
type=LAST_IN_PIPELINE, downstreams=0:[] terminating
2016-01-25 21:14:10,853 INFO
org.apache.hadoop.hdfs.server.datanode.DataNode: Receiving
BP-216406264-127.0.0.1-1453767164845:blk_1073741828_1004 src:
/192.168.0.10:58665 dest: /192.168.0.10:50010
2016-01-25 21:14:10,860 INFO
org.apache.hadoop.hdfs.server.datanode.DataNode.clienttrace: src:
/192.168.0.10:58665, dest: /192.168.0.10:50010, bytes: 26, op:
HDFS_WRITE, cliID: DFSClient_NONMAPREDUCE_342504113_1, offset: 0,
srvID: 6b4236c8-2183-49ba-84d7-a273298ba37a, blockid:
BP-216406264-127.0.0.1-1453767164845:blk_1073741828_1004, duration:
3257396
2016-01-25 21:14:10,861 INFO
org.apache.hadoop.hdfs.server.datanode.DataNode: PacketResponder:
BP-216406264-127.0.0.1-1453767164845:blk_1073741828_1004,
type=LAST_IN_PIPELINE, downstreams=0:[] terminating
2016-01-25 21:14:11,717 INFO
org.apache.hadoop.hdfs.server.datanode.DataNode: Receiving
BP-216406264-127.0.0.1-1453767164845:blk_1073741829_1005 src:
/192.168.0.10:58666 dest: /192.168.0.10:50010
2016-01-25 21:14:11,726 INFO
org.apache.hadoop.hdfs.server.datanode.DataNode.clienttrace: src:
/192.168.0.10:58666, dest: /192.168.0.10:50010, bytes: 77957, op:
HDFS_WRITE, cliID: DFSClient_NONMAPREDUCE_342504113_1, offset: 0,
srvID: 6b4236c8-2183-49ba-84d7-a273298ba37a, blockid:
BP-216406264-127.0.0.1-1453767164845:blk_1073741829_1005, duration:
6180229
2016-01-25 21:14:11,727 INFO
org.apache.hadoop.hdfs.server.datanode.DataNode: PacketResponder:
BP-216406264-127.0.0.1-1453767164845:blk_1073741829_1005,
type=LAST_IN_PIPELINE, downstreams=0:[] terminating
2016-01-25 21:14:14,298 INFO
org.apache.hadoop.hdfs.server.datanode.BlockPoolSliceScanner:
Verification succeeded for
BP-216406264-127.0.0.1-1453767164845:blk_1073741829_1005
2016-01-25 21:14:14,299 INFO
org.apache.hadoop.hdfs.server.datanode.BlockPoolSliceScanner:
Verification succeeded for
BP-216406264-127.0.0.1-1453767164845:blk_1073741827_1003
2016-01-25 21:14:14,305 INFO
org.apache.hadoop.hdfs.server.datanode.BlockPoolSliceScanner:
Verification succeeded for
BP-216406264-127.0.0.1-1453767164845:blk_1073741826_1002
2016-01-25 21:14:14,305 INFO
org.apache.hadoop.hdfs.server.datanode.BlockPoolSliceScanner:
Verification succeeded for
BP-216406264-127.0.0.1-1453767164845:blk_1073741828_1004
2016-01-25 21:14:16,099 INFO
org.apache.hadoop.hdfs.server.datanode.DataNode.clienttrace: src:
/192.168.0.10:50010, dest: /192.168.0.10:58674, bytes: 272375, op:
HDFS_READ, cliID: DFSClient_NONMAPREDUCE_76231625_102, offset: 0,
srvID: 6b4236c8-2183-49ba-84d7-a273298ba37a, blockid:
BP-216406264-127.0.0.1-1453767164845:blk_1073741826_1002, duration:
2878920
2016-01-25 21:14:16,253 INFO
org.apache.hadoop.hdfs.server.datanode.DataNode.clienttrace: src:
/192.168.0.10:50010, dest: /192.168.0.10:58674, bytes: 30, op:
HDFS_READ, cliID: DFSClient_NONMAPREDUCE_76231625_102, offset: 0,
srvID: 6b4236c8-2183-49ba-84d7-a273298ba37a, blockid:
BP-216406264-127.0.0.1-1453767164845:blk_1073741828_1004, duration:
236423
2016-01-25 21:14:16,312 INFO
org.apache.hadoop.hdfs.server.datanode.DataNode.clienttrace: src:
/192.168.0.10:50010, dest: /192.168.0.10:58674, bytes: 125, op:
HDFS_READ, cliID: DFSClient_NONMAPREDUCE_76231625_102, offset: 0,
srvID: 6b4236c8-2183-49ba-84d7-a273298ba37a, blockid:
BP-216406264-127.0.0.1-1453767164845:blk_1073741827_1003, duration:
909236
2016-01-25 21:14:16,364 INFO
org.apache.hadoop.hdfs.server.datanode.DataNode.clienttrace: src:
/192.168.0.10:50010, dest: /192.168.0.10:58674, bytes: 78569, op:
HDFS_READ, cliID: DFSClient_NONMAPREDUCE_76231625_102, offset: 0,
srvID: 6b4236c8-2183-49ba-84d7-a273298ba37a, blockid:
BP-216406264-127.0.0.1-1453767164845:blk_1073741829_1005, duration:
1489437
2016-01-25 21:14:20,174 INFO
org.apache.hadoop.hdfs.server.datanode.DataNode.clienttrace: src:
/192.168.0.10:50010, dest: /192.168.0.10:58676, bytes: 30, op:
HDFS_READ, cliID: DFSClient_NONMAPREDUCE_694066886_1, offset: 0,
srvID: 6b4236c8-2183-49ba-84d7-a273298ba37a, blockid:
BP-216406264-127.0.0.1-1453767164845:blk_1073741828_1004, duration:
899980
2016-01-25 21:14:22,692 INFO
org.apache.hadoop.hdfs.server.datanode.DataNode: Receiving
BP-216406264-127.0.0.1-1453767164845:blk_1073741830_1006 src:
/192.168.0.10:58679 dest: /192.168.0.10:50010
2016-01-25 21:14:22,754 INFO
org.apache.hadoop.hdfs.server.datanode.DataNode.clienttrace: src:
/192.168.0.10:58679, dest: /192.168.0.10:50010, bytes: 92684, op:
HDFS_WRITE, cliID: DFSClient_NONMAPREDUCE_694066886_1, offset: 0,
srvID: 6b4236c8-2183-49ba-84d7-a273298ba37a, blockid:
BP-216406264-127.0.0.1-1453767164845:blk_1073741830_1006, duration:
60114851
2016-01-25 21:14:22,754 INFO
org.apache.hadoop.hdfs.server.datanode.DataNode: PacketResponder:
BP-216406264-127.0.0.1-1453767164845:blk_1073741830_1006,
type=LAST_IN_PIPELINE, downstreams=0:[] terminating
2016-01-25 21:14:24,319 INFO
org.apache.hadoop.hdfs.server.datanode.BlockPoolSliceScanner:
Verification succeeded for
BP-216406264-127.0.0.1-1453767164845:blk_1073741830_1006
2016-01-25 21:14:25,808 INFO
org.apache.hadoop.hdfs.server.datanode.DataNode: Receiving
BP-216406264-127.0.0.1-1453767164845:blk_1073741831_1007 src:
/192.168.0.10:58681 dest: /192.168.0.10:50010
2016-01-25 21:14:35,846 INFO
org.apache.hadoop.hdfs.server.datanode.DataNode.clienttrace: src:
/192.168.0.10:58681, dest: /192.168.0.10:50010, bytes: 21176, op:
HDFS_WRITE, cliID: DFSClient_NONMAPREDUCE_694066886_1, offset: 0,
srvID: 6b4236c8-2183-49ba-84d7-a273298ba37a, blockid:
BP-216406264-127.0.0.1-1453767164845:blk_1073741831_1007, duration:
9975409048
2016-01-25 21:14:35,846 INFO
org.apache.hadoop.hdfs.server.datanode.DataNode: PacketResponder:
BP-216406264-127.0.0.1-1453767164845:blk_1073741831_1007,
type=LAST_IN_PIPELINE, downstreams=0:[] terminating
2016-01-25 21:14:36,066 INFO
org.apache.hadoop.hdfs.server.datanode.DataNode: Receiving
BP-216406264-127.0.0.1-1453767164845:blk_1073741832_1008 src:
/192.168.0.10:58682 dest: /192.168.0.10:50010
2016-01-25 21:14:36,075 INFO
org.apache.hadoop.hdfs.server.datanode.DataNode.clienttrace: src:
/192.168.0.10:58682, dest: /192.168.0.10:50010, bytes: 332, op:
HDFS_WRITE, cliID: DFSClient_NONMAPREDUCE_694066886_1, offset: 0,
srvID: 6b4236c8-2183-49ba-84d7-a273298ba37a, blockid:
BP-216406264-127.0.0.1-1453767164845:blk_1073741832_1008, duration:
4992595
2016-01-25 21:14:36,075 INFO
org.apache.hadoop.hdfs.server.datanode.DataNode: PacketResponder:
BP-216406264-127.0.0.1-1453767164845:blk_1073741832_1008,
type=LAST_IN_PIPELINE, downstreams=0:[] terminating
2016-01-25 21:14:36,548 INFO
org.apache.hadoop.hdfs.server.datanode.DataNode.clienttrace: src:
/192.168.0.10:50010, dest: /192.168.0.10:58683, bytes: 21344, op:
HDFS_READ, cliID: DFSClient_NONMAPREDUCE_694066886_1, offset: 0,
srvID: 6b4236c8-2183-49ba-84d7-a273298ba37a, blockid:
BP-216406264-127.0.0.1-1453767164845:blk_1073741831_1007, duration:
497225
2016-01-25 21:14:36,564 INFO
org.apache.hadoop.hdfs.server.datanode.DataNode: Receiving
BP-216406264-127.0.0.1-1453767164845:blk_1073741833_1009 src:
/192.168.0.10:58684 dest: /192.168.0.10:50010
2016-01-25 21:14:36,572 INFO
org.apache.hadoop.hdfs.server.datanode.DataNode.clienttrace: src:
/192.168.0.10:58684, dest: /192.168.0.10:50010, bytes: 21176, op:
HDFS_WRITE, cliID: DFSClient_NONMAPREDUCE_694066886_1, offset: 0,
srvID: 6b4236c8-2183-49ba-84d7-a273298ba37a, blockid:
BP-216406264-127.0.0.1-1453767164845:blk_1073741833_1009, duration:
2649337
2016-01-25 21:14:36,573 INFO
org.apache.hadoop.hdfs.server.datanode.DataNode: PacketResponder:
BP-216406264-127.0.0.1-1453767164845:blk_1073741833_1009,
type=LAST_IN_PIPELINE, downstreams=0:[] terminating
2016-01-25 21:14:36,622 INFO
org.apache.hadoop.hdfs.server.datanode.DataNode.clienttrace: src:
/192.168.0.10:50010, dest: /192.168.0.10:58683, bytes: 93412, op:
HDFS_READ, cliID: DFSClient_NONMAPREDUCE_694066886_1, offset: 0,
srvID: 6b4236c8-2183-49ba-84d7-a273298ba37a, blockid:
BP-216406264-127.0.0.1-1453767164845:blk_1073741830_1006, duration:
379439
2016-01-25 21:14:36,638 INFO
org.apache.hadoop.hdfs.server.datanode.DataNode: Receiving
BP-216406264-127.0.0.1-1453767164845:blk_1073741834_1010 src:
/192.168.0.10:58685 dest: /192.168.0.10:50010
2016-01-25 21:14:36,646 INFO
org.apache.hadoop.hdfs.server.datanode.DataNode.clienttrace: src:
/192.168.0.10:58685, dest: /192.168.0.10:50010, bytes: 92684, op:
HDFS_WRITE, cliID: DFSClient_NONMAPREDUCE_694066886_1, offset: 0,
srvID: 6b4236c8-2183-49ba-84d7-a273298ba37a, blockid:
BP-216406264-127.0.0.1-1453767164845:blk_1073741834_1010, duration:
3135698
2016-01-25 21:14:36,646 INFO
org.apache.hadoop.hdfs.server.datanode.DataNode: PacketResponder:
BP-216406264-127.0.0.1-1453767164845:blk_1073741834_1010,
type=LAST_IN_PIPELINE, downstreams=0:[] terminating
2016-01-25 21:14:39,335 INFO
org.apache.hadoop.hdfs.server.datanode.BlockPoolSliceScanner:
Verification succeeded for
BP-216406264-127.0.0.1-1453767164845:blk_1073741833_1009
2016-01-25 21:14:39,336 INFO
org.apache.hadoop.hdfs.server.datanode.BlockPoolSliceScanner:
Verification succeeded for
BP-216406264-127.0.0.1-1453767164845:blk_1073741831_1007
2016-01-25 21:14:39,337 INFO
org.apache.hadoop.hdfs.server.datanode.BlockPoolSliceScanner:
Verification succeeded for
BP-216406264-127.0.0.1-1453767164845:blk_1073741832_1008
2016-01-25 21:14:39,338 INFO
org.apache.hadoop.hdfs.server.datanode.BlockPoolSliceScanner:
Verification succeeded for
BP-216406264-127.0.0.1-1453767164845:blk_1073741834_1010
2016-01-25 21:14:39,376 INFO
org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsDatasetAsyncDiskService:
Scheduling blk_1073741826_1002 file
/usr/local/hadoop/dfs/name/data/current/BP-216406264-127.0.0.1-1453767164845/current/finalized/blk_1073741826
for deletion
2016-01-25 21:14:39,379 INFO
org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsDatasetAsyncDiskService:
Scheduling blk_1073741827_1003 file
/usr/local/hadoop/dfs/name/data/current/BP-216406264-127.0.0.1-1453767164845/current/finalized/blk_1073741827
for deletion
2016-01-25 21:14:39,379 INFO
org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsDatasetAsyncDiskService:
Deleted BP-216406264-127.0.0.1-1453767164845 blk_1073741826_1002 file
/usr/local/hadoop/dfs/name/data/current/BP-216406264-127.0.0.1-1453767164845/current/finalized/blk_1073741826
2016-01-25 21:14:39,379 INFO
org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsDatasetAsyncDiskService:
Scheduling blk_1073741828_1004 file
/usr/local/hadoop/dfs/name/data/current/BP-216406264-127.0.0.1-1453767164845/current/finalized/blk_1073741828
for deletion
2016-01-25 21:14:39,380 INFO
org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsDatasetAsyncDiskService:
Deleted BP-216406264-127.0.0.1-1453767164845 blk_1073741827_1003 file
/usr/local/hadoop/dfs/name/data/current/BP-216406264-127.0.0.1-1453767164845/current/finalized/blk_1073741827
2016-01-25 21:14:39,380 INFO
org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsDatasetAsyncDiskService:
Scheduling blk_1073741829_1005 file
/usr/local/hadoop/dfs/name/data/current/BP-216406264-127.0.0.1-1453767164845/current/finalized/blk_1073741829
for deletion
2016-01-25 21:14:39,380 INFO
org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsDatasetAsyncDiskService:
Deleted BP-216406264-127.0.0.1-1453767164845 blk_1073741828_1004 file
/usr/local/hadoop/dfs/name/data/current/BP-216406264-127.0.0.1-1453767164845/current/finalized/blk_1073741828
2016-01-25 21:14:39,380 INFO
org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsDatasetAsyncDiskService:
Scheduling blk_1073741830_1006 file
/usr/local/hadoop/dfs/name/data/current/BP-216406264-127.0.0.1-1453767164845/current/finalized/blk_1073741830
for deletion
2016-01-25 21:14:39,381 INFO
org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsDatasetAsyncDiskService:
Deleted BP-216406264-127.0.0.1-1453767164845 blk_1073741829_1005 file
/usr/local/hadoop/dfs/name/data/current/BP-216406264-127.0.0.1-1453767164845/current/finalized/blk_1073741829
2016-01-25 21:14:39,381 INFO
org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsDatasetAsyncDiskService:
Scheduling blk_1073741831_1007 file
/usr/local/hadoop/dfs/name/data/current/BP-216406264-127.0.0.1-1453767164845/current/finalized/blk_1073741831
for deletion
2016-01-25 21:14:39,382 INFO
org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsDatasetAsyncDiskService:
Deleted BP-216406264-127.0.0.1-1453767164845 blk_1073741830_1006 file
/usr/local/hadoop/dfs/name/data/current/BP-216406264-127.0.0.1-1453767164845/current/finalized/blk_1073741830
2016-01-25 21:14:39,382 INFO
org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsDatasetAsyncDiskService:
Deleted BP-216406264-127.0.0.1-1453767164845 blk_1073741831_1007 file
/usr/local/hadoop/dfs/name/data/current/BP-216406264-127.0.0.1-1453767164845/current/finalized/blk_1073741831
2016-01-25 21:14:44,797 INFO
org.apache.hadoop.hdfs.server.datanode.DataNode: Receiving
BP-216406264-127.0.0.1-1453767164845:blk_1073741835_1011 src:
/192.168.0.10:58688 dest: /192.168.0.10:50010
2016-01-25 21:14:44,834 INFO
org.apache.hadoop.hdfs.server.datanode.DataNode.clienttrace: src:
/192.168.0.10:58688, dest: /192.168.0.10:50010, bytes: 57450, op:
HDFS_WRITE, cliID: DFSClient_NONMAPREDUCE_-433405715_88, offset: 0,
srvID: 6b4236c8-2183-49ba-84d7-a273298ba37a, blockid:
BP-216406264-127.0.0.1-1453767164845:blk_1073741835_1011, duration:
34522284
2016-01-25 21:14:44,834 INFO
org.apache.hadoop.hdfs.server.datanode.DataNode: PacketResponder:
BP-216406264-127.0.0.1-1453767164845:blk_1073741835_1011,
type=LAST_IN_PIPELINE, downstreams=0:[] terminating
2016-01-25 21:14:49,343 INFO
org.apache.hadoop.hdfs.server.datanode.BlockPoolSliceScanner:
Verification succeeded for
BP-216406264-127.0.0.1-1453767164845:blk_1073741835_1011
2016-01-25 21:16:33,785 INFO
org.apache.hadoop.hdfs.server.datanode.DataNode.clienttrace: src:
/192.168.0.10:50010, dest: /192.168.0.10:58694, bytes: 336, op:
HDFS_READ, cliID: DFSClient_NONMAPREDUCE_-1832227986_1, offset: 0,
srvID: 6b4236c8-2183-49ba-84d7-a273298ba37a, blockid:
BP-216406264-127.0.0.1-1453767164845:blk_1073741832_1008, duration:
284719
2016-01-25 21:16:36,371 INFO
org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsDatasetAsyncDiskService:
Scheduling blk_1073741832_1008 file
/usr/local/hadoop/dfs/name/data/current/BP-216406264-127.0.0.1-1453767164845/current/finalized/blk_1073741832
for deletion
2016-01-25 21:16:36,372 INFO
org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsDatasetAsyncDiskService:
Deleted BP-216406264-127.0.0.1-1453767164845 blk_1073741832_1008 file
/usr/local/hadoop/dfs/name/data/current/BP-216406264-127.0.0.1-1453767164845/current/finalized/blk_1073741832




2016-01-21 18:52 GMT-03:00 Namikaze Minato <ll...@gmail.com>:

> It could be a classpath issue (see
> http://stackoverflow.com/a/25090151/4486184); I strongly suspect
> this is the case.
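>
> A minimal sketch of one way to check the classpath that actually gets
> resolved on that node (assuming the /usr/local/hadoop layout shown in
> your logs):
>
>     # Classpath as resolved by the launcher scripts
>     $HADOOP_HOME/bin/hadoop classpath
>     $HADOOP_HOME/bin/yarn classpath
>
>     # Classpath handed to containers; if this property is unset,
>     # YARN falls back to its built-in default list
>     grep -A 2 yarn.application.classpath $HADOOP_HOME/etc/hadoop/yarn-site.xml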
>
> You could drill down to the exact root cause by looking at the
> datanode logs (see
>
> http://mail-archives.apache.org/mod_mbox/hadoop-user/201410.mbox/%3CCAEMetGubzq12LXbLRk6N4ejOoKse9dLEWMW8_WE6aRj=+RQtVw@mail.gmail.com%3E
> )
> But I'm not sure we would see a different error from the one we
> already have...
>
> Check if your application has the correct values for the following
> variables (a quick way to list them is sketched after this list):
> HADOOP_CONF_DIR
> HADOOP_COMMON_HOME
> HADOOP_HDFS_HOME
> HADOOP_MAPRED_HOME
> HADOOP_YARN_HOME
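>
> (Just a sketch of one way to list them; adjust for wherever you
> actually set them, e.g. hadoop-env.sh or your shell profile:)
>
>     # Print whichever of these variables are set in the current shell
>     env | grep -E 'HADOOP_(CONF_DIR|COMMON_HOME|HDFS_HOME|MAPRED_HOME|YARN_HOME)'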
>
> I'm afraid I can't help you much more than this myself, sorry...
>
> LLoyd
>
> On 20 January 2016 at 02:08, José Luis Larroque <la...@gmail.com>
> wrote:
> > Hi guys, thanks for your answers.
> >
> > Wordcount logs:
> >
> > 16/01/19 21:58:32 INFO client.RMProxy: Connecting to ResourceManager at
> > hdnode01/192.168.0.10:8050
> > SLF4J: Class path contains multiple SLF4J bindings.
> > SLF4J: Found binding in
> >
> [jar:file:/usr/local/hadoop/share/hadoop/common/lib/slf4j-log4j12-1.7.5.jar!/org/slf4j/impl/StaticLoggerBinder.class]
> > SLF4J: Found binding in
> >
> [jar:file:/usr/local/hadoop/share/hadoop/yarn/lib/giraph-examples-1.1.0-for-hadoop-2.4.0-jar-with-dependencies.jar!/org/slf4j/impl/StaticLoggerBinder.class]
> > SLF4J: Found binding in
> >
> [jar:file:/usr/local/hadoop/share/hadoop/mapreduce/giraph-examples-1.1.0-for-hadoop-2.4.0-jar-with-dependencies.jar!/org/slf4j/impl/StaticLoggerBinder.class]
> > SLF4J: See http://www.slf4j.org/codes.html#multiple_bindings for an
> > explanation.
> > SLF4J: Actual binding is of type [org.slf4j.impl.Log4jLoggerFactory]
> > 16/01/19 21:58:32 WARN util.NativeCodeLoader: Unable to load
> native-hadoop
> > library for your platform... using builtin-java classes where applicable
> > hduser@jose-ubuntu:/usr/local/hadoop$ nano
> >
> /home/hduser/Desktop/Tesina/casos_de_prueba/resultados/resultado_cluster_modo_yarn_wordcount
> > hduser@jose-ubuntu:/usr/local/hadoop$ nano
> >
> /home/hduser/Desktop/Tesina/casos_de_prueba/resultados/resultado_cluster_modo_yarn_wordcount
> > hduser@jose-ubuntu:/usr/local/hadoop$ cat
> >
> /home/hduser/Desktop/Tesina/casos_de_prueba/resultados/resultado_cluster_modo_yarn_wordcount
> >
> >
> > Container: container_1453244277886_0001_01_000002 on localhost_35711
> > ======================================================================
> > LogType: stderr
> > LogLength: 45
> > Log Contents:
> > Error: Could not find or load main class 256
> >
> > LogType: stdout
> > LogLength: 0
> > Log Contents:
> >
> >
> >
> > Container: container_1453244277886_0001_01_000003 on localhost_35711
> > ======================================================================
> > LogType: stderr
> > LogLength: 45
> > Log Contents:
> > Error: Could not find or load main class 256
> >
> > LogType: stdout
> > LogLength: 0
> > Log Contents:
> >
> >
> >
> > Container: container_1453244277886_0001_01_000004 on localhost_35711
> > ======================================================================
> > LogType: stderr
> > LogLength: 45
> > Log Contents:
> > Error: Could not find or load main class 256
> >
> > LogType: stdout
> > LogLength: 0
> > Log Contents:
> >
> >
> >
> > Container: container_1453244277886_0001_01_000005 on localhost_35711
> > ======================================================================
> > LogType: stderr
> > LogLength: 45
> > Log Contents:
> > Error: Could not find or load main class 256
> >
> > LogType: stdout
> > LogLength: 0
> > Log Contents:
> >
> >
> >
> > Container: container_1453244277886_0001_01_000001 on localhost_35711
> > ======================================================================
> > LogType: stderr
> > LogLength: 929
> > Log Contents:
> > SLF4J: Class path contains multiple SLF4J bindings.
> > SLF4J: Found binding in
> >
> [jar:file:/usr/local/hadoop/share/hadoop/common/lib/slf4j-log4j12-1.7.5.jar!/org/slf4j/impl/StaticLoggerBinder.class]
> > SLF4J: Found binding in
> >
> [jar:file:/usr/local/hadoop/share/hadoop/yarn/lib/giraph-examples-1.1.0-for-hadoop-2.4.0-jar-with-dependencies.jar!/org/slf4j/impl/StaticLoggerBinder.class]
> > SLF4J: Found binding in
> >
> [jar:file:/usr/local/hadoop/share/hadoop/mapreduce/giraph-examples-1.1.0-for-hadoop-2.4.0-jar-with-dependencies.jar!/org/slf4j/impl/StaticLoggerBinder.class]
> > SLF4J: See http://www.slf4j.org/codes.html#multiple_bindings for an
> > explanation.
> > SLF4J: Actual binding is of type [org.slf4j.impl.Log4jLoggerFactory]
> > log4j:WARN No appenders could be found for logger
> > (org.apache.hadoop.ipc.Server).
> > log4j:WARN Please initialize the log4j system properly.
> > log4j:WARN See http://logging.apache.org/log4j/1.2/faq.html#noconfig for
> > more info.
> >
> > LogType: stdout
> > LogLength: 0
> > Log Contents:
> >
> > LogType: syslog
> > LogLength: 56780
> > Log Contents:
> > 2016-01-19 20:04:11,329 INFO [main]
> > org.apache.hadoop.mapreduce.v2.app.MRAppMaster: Created MRAppMaster for
> > application appattempt_1453244277886_0001_000001
> > 2016-01-19 20:04:11,657 WARN [main] org.apache.hadoop.conf.Configuration:
> > job.xml:an attempt to override final parameter:
> > mapreduce.job.end-notification.max.retry.interval;  Ignoring.
> > 2016-01-19 20:04:11,674 WARN [main] org.apache.hadoop.conf.Configuration:
> > job.xml:an attempt to override final parameter:
> > mapreduce.job.end-notification.max.attempts;  Ignoring.
> > 2016-01-19 20:04:11,765 WARN [main]
> org.apache.hadoop.util.NativeCodeLoader:
> > Unable to load native-hadoop library for your platform... using
> builtin-java
> > classes where applicable
> > 2016-01-19 20:04:11,776 INFO [main]
> > org.apache.hadoop.mapreduce.v2.app.MRAppMaster: Executing with tokens:
> > 2016-01-19 20:04:11,776 INFO [main]
> > org.apache.hadoop.mapreduce.v2.app.MRAppMaster: Kind: YARN_AM_RM_TOKEN,
> > Service: , Ident:
> > (org.apache.hadoop.yarn.security.AMRMTokenIdentifier@73e8f4b9)
> > 2016-01-19 20:04:11,801 INFO [main]
> > org.apache.hadoop.mapreduce.v2.app.MRAppMaster: The specific max
> attempts: 2
> > for application: 1. Attempt num: 1 is last retry: false
> > 2016-01-19 20:04:11,806 INFO [main]
> > org.apache.hadoop.mapreduce.v2.app.MRAppMaster: Using mapred
> > newApiCommitter.
> > 2016-01-19 20:04:11,934 WARN [main] org.apache.hadoop.conf.Configuration:
> > job.xml:an attempt to override final parameter: dfs.datanode.data.dir;
> > Ignoring.
> > 2016-01-19 20:04:11,939 WARN [main] org.apache.hadoop.conf.Configuration:
> > job.xml:an attempt to override final parameter:
> > mapreduce.job.end-notification.max.retry.interval;  Ignoring.
> > 2016-01-19 20:04:11,948 WARN [main] org.apache.hadoop.conf.Configuration:
> > job.xml:an attempt to override final parameter: dfs.namenode.name.dir;
> > Ignoring.
> > 2016-01-19 20:04:11,953 WARN [main] org.apache.hadoop.conf.Configuration:
> > job.xml:an attempt to override final parameter:
> > mapreduce.job.end-notification.max.attempts;  Ignoring.
> > 2016-01-19 20:04:12,464 INFO [main]
> > org.apache.hadoop.mapreduce.v2.app.MRAppMaster: OutputCommitter set in
> > config null
> > 2016-01-19 20:04:12,526 INFO [main]
> > org.apache.hadoop.mapreduce.v2.app.MRAppMaster: OutputCommitter is
> > org.apache.hadoop.mapreduce.lib.output.FileOutputCommitter
> > 2016-01-19 20:04:12,548 INFO [main]
> > org.apache.hadoop.yarn.event.AsyncDispatcher: Registering class
> > org.apache.hadoop.mapreduce.jobhistory.EventType for class
> > org.apache.hadoop.mapreduce.jobhistory.JobHistoryEventHandler
> > 2016-01-19 20:04:12,549 INFO [main]
> > org.apache.hadoop.yarn.event.AsyncDispatcher: Registering class
> > org.apache.hadoop.mapreduce.v2.app.job.event.JobEventType for class
> > org.apache.hadoop.mapreduce.v2.app.MRAppMaster$JobEventDispatcher
> > 2016-01-19 20:04:12,550 INFO [main]
> > org.apache.hadoop.yarn.event.AsyncDispatcher: Registering class
> > org.apache.hadoop.mapreduce.v2.app.job.event.TaskEventType for class
> > org.apache.hadoop.mapreduce.v2.app.MRAppMaster$TaskEventDispatcher
> > 2016-01-19 20:04:12,551 INFO [main]
> > org.apache.hadoop.yarn.event.AsyncDispatcher: Registering class
> > org.apache.hadoop.mapreduce.v2.app.job.event.TaskAttemptEventType for
> class
> > org.apache.hadoop.mapreduce.v2.app.MRAppMaster$TaskAttemptEventDispatcher
> > 2016-01-19 20:04:12,552 INFO [main]
> > org.apache.hadoop.yarn.event.AsyncDispatcher: Registering class
> > org.apache.hadoop.mapreduce.v2.app.commit.CommitterEventType for class
> > org.apache.hadoop.mapreduce.v2.app.commit.CommitterEventHandler
> > 2016-01-19 20:04:12,557 INFO [main]
> > org.apache.hadoop.yarn.event.AsyncDispatcher: Registering class
> > org.apache.hadoop.mapreduce.v2.app.speculate.Speculator$EventType for
> class
> > org.apache.hadoop.mapreduce.v2.app.MRAppMaster$SpeculatorEventDispatcher
> > 2016-01-19 20:04:12,558 INFO [main]
> > org.apache.hadoop.yarn.event.AsyncDispatcher: Registering class
> > org.apache.hadoop.mapreduce.v2.app.rm.ContainerAllocator$EventType for
> class
> > org.apache.hadoop.mapreduce.v2.app.MRAppMaster$ContainerAllocatorRouter
> > 2016-01-19 20:04:12,559 INFO [main]
> > org.apache.hadoop.yarn.event.AsyncDispatcher: Registering class
> > org.apache.hadoop.mapreduce.v2.app.launcher.ContainerLauncher$EventType
> for
> > class
> org.apache.hadoop.mapreduce.v2.app.MRAppMaster$ContainerLauncherRouter
> > 2016-01-19 20:04:12,615 INFO [main]
> > org.apache.hadoop.mapreduce.jobhistory.JobHistoryEventHandler: Perms
> after
> > creating 488, Expected: 504
> > 2016-01-19 20:04:12,615 INFO [main]
> > org.apache.hadoop.mapreduce.jobhistory.JobHistoryEventHandler: Explicitly
> > setting permissions to : 504, rwxrwx---
> > 2016-01-19 20:04:12,731 INFO [main]
> > org.apache.hadoop.yarn.event.AsyncDispatcher: Registering class
> > org.apache.hadoop.mapreduce.v2.app.job.event.JobFinishEvent$Type for
> class
> > org.apache.hadoop.mapreduce.v2.app.MRAppMaster$JobFinishEventHandler
> > 2016-01-19 20:04:12,956 INFO [main]
> > org.apache.hadoop.metrics2.impl.MetricsConfig: loaded properties from
> > hadoop-metrics2.properties
> > 2016-01-19 20:04:13,018 INFO [main]
> > org.apache.hadoop.metrics2.impl.MetricsSystemImpl: Scheduled snapshot
> period
> > at 10 second(s).
> > 2016-01-19 20:04:13,018 INFO [main]
> > org.apache.hadoop.metrics2.impl.MetricsSystemImpl: MRAppMaster metrics
> > system started
> > 2016-01-19 20:04:13,026 INFO [main]
> > org.apache.hadoop.mapreduce.v2.app.job.impl.JobImpl: Adding job token for
> > job_1453244277886_0001 to jobTokenSecretManager
> > 2016-01-19 20:04:13,139 INFO [main]
> > org.apache.hadoop.mapreduce.v2.app.job.impl.JobImpl: Not uberizing
> > job_1453244277886_0001 because: not enabled;
> > 2016-01-19 20:04:13,154 INFO [main]
> > org.apache.hadoop.mapreduce.v2.app.job.impl.JobImpl: Input size for job
> > job_1453244277886_0001 = 343691. Number of splits = 1
> > 2016-01-19 20:04:13,156 INFO [main]
> > org.apache.hadoop.mapreduce.v2.app.job.impl.JobImpl: Number of reduces
> for
> > job job_1453244277886_0001 = 1
> > 2016-01-19 20:04:13,156 INFO [main]
> > org.apache.hadoop.mapreduce.v2.app.job.impl.JobImpl:
> > job_1453244277886_0001Job Transitioned from NEW to INITED
> > 2016-01-19 20:04:13,157 INFO [main]
> > org.apache.hadoop.mapreduce.v2.app.MRAppMaster: MRAppMaster launching
> > normal, non-uberized, multi-container job job_1453244277886_0001.
> > 2016-01-19 20:04:13,186 INFO [main]
> org.apache.hadoop.ipc.CallQueueManager:
> > Using callQueue class java.util.concurrent.LinkedBlockingQueue
> > 2016-01-19 20:04:13,195 INFO [Socket Reader #1 for port 56461]
> > org.apache.hadoop.ipc.Server: Starting Socket Reader #1 for port 56461
> > 2016-01-19 20:04:13,237 INFO [main]
> > org.apache.hadoop.yarn.factories.impl.pb.RpcServerFactoryPBImpl: Adding
> > protocol org.apache.hadoop.mapreduce.v2.api.MRClientProtocolPB to the
> server
> > 2016-01-19 20:04:13,238 INFO [IPC Server Responder]
> > org.apache.hadoop.ipc.Server: IPC Server Responder: starting
> > 2016-01-19 20:04:13,238 INFO [IPC Server listener on 56461]
> > org.apache.hadoop.ipc.Server: IPC Server listener on 56461: starting
> > 2016-01-19 20:04:13,239 INFO [main]
> > org.apache.hadoop.mapreduce.v2.app.client.MRClientService: Instantiated
> > MRClientService at jose-ubuntu/127.0.0.1:56461
> > 2016-01-19 20:04:13,300 INFO [main] org.mortbay.log: Logging to
> > org.slf4j.impl.Log4jLoggerAdapter(org.mortbay.log) via
> > org.mortbay.log.Slf4jLog
> > 2016-01-19 20:04:13,304 INFO [main]
> org.apache.hadoop.http.HttpRequestLog:
> > Http request log for http.requests.mapreduce is not defined
> > 2016-01-19 20:04:13,315 INFO [main] org.apache.hadoop.http.HttpServer2:
> > Added global filter 'safety'
> > (class=org.apache.hadoop.http.HttpServer2$QuotingInputFilter)
> > 2016-01-19 20:04:13,320 INFO [main] org.apache.hadoop.http.HttpServer2:
> > Added filter AM_PROXY_FILTER
> > (class=org.apache.hadoop.yarn.server.webproxy.amfilter.AmIpFilter) to
> > context mapreduce
> > 2016-01-19 20:04:13,321 INFO [main] org.apache.hadoop.http.HttpServer2:
> > Added filter AM_PROXY_FILTER
> > (class=org.apache.hadoop.yarn.server.webproxy.amfilter.AmIpFilter) to
> > context static
> > 2016-01-19 20:04:13,324 INFO [main] org.apache.hadoop.http.HttpServer2:
> > adding path spec: /mapreduce/*
> > 2016-01-19 20:04:13,324 INFO [main] org.apache.hadoop.http.HttpServer2:
> > adding path spec: /ws/*
> > 2016-01-19 20:04:13,335 INFO [main] org.apache.hadoop.http.HttpServer2:
> > Jetty bound to port 44070
> > 2016-01-19 20:04:13,335 INFO [main] org.mortbay.log: jetty-6.1.26
> > 2016-01-19 20:04:13,370 INFO [main] org.mortbay.log: Extract
> >
> jar:file:/usr/local/hadoop/share/hadoop/yarn/hadoop-yarn-common-2.4.0.jar!/webapps/mapreduce
> > to /tmp/Jetty_0_0_0_0_44070_mapreduce____rdpvio/webapp
> > 2016-01-19 20:04:13,647 INFO [main] org.mortbay.log: Started
> > SelectChannelConnector@0.0.0.0:44070
> > 2016-01-19 20:04:13,647 INFO [main]
> org.apache.hadoop.yarn.webapp.WebApps:
> > Web app /mapreduce started at 44070
> > 2016-01-19 20:04:13,956 INFO [main]
> org.apache.hadoop.yarn.webapp.WebApps:
> > Registered webapp guice modules
> > 2016-01-19 20:04:13,960 INFO [AsyncDispatcher event handler]
> > org.apache.hadoop.mapreduce.v2.app.speculate.DefaultSpeculator:
> JOB_CREATE
> > job_1453244277886_0001
> > 2016-01-19 20:04:13,961 INFO [main]
> org.apache.hadoop.ipc.CallQueueManager:
> > Using callQueue class java.util.concurrent.LinkedBlockingQueue
> > 2016-01-19 20:04:13,961 INFO [Socket Reader #1 for port 45584]
> > org.apache.hadoop.ipc.Server: Starting Socket Reader #1 for port 45584
> > 2016-01-19 20:04:13,966 INFO [IPC Server Responder]
> > org.apache.hadoop.ipc.Server: IPC Server Responder: starting
> > 2016-01-19 20:04:13,966 INFO [IPC Server listener on 45584]
> > org.apache.hadoop.ipc.Server: IPC Server listener on 45584: starting
> > 2016-01-19 20:04:13,987 INFO [main]
> > org.apache.hadoop.mapreduce.v2.app.rm.RMContainerRequestor:
> > nodeBlacklistingEnabled:true
> > 2016-01-19 20:04:13,987 INFO [main]
> > org.apache.hadoop.mapreduce.v2.app.rm.RMContainerRequestor:
> > maxTaskFailuresPerNode is 3
> > 2016-01-19 20:04:13,988 INFO [main]
> > org.apache.hadoop.mapreduce.v2.app.rm.RMContainerRequestor:
> > blacklistDisablePercent is 33
> > 2016-01-19 20:04:14,052 WARN [main] org.apache.hadoop.conf.Configuration:
> > job.xml:an attempt to override final parameter: dfs.datanode.data.dir;
> > Ignoring.
> > 2016-01-19 20:04:14,054 WARN [main] org.apache.hadoop.conf.Configuration:
> > job.xml:an attempt to override final parameter:
> > mapreduce.job.end-notification.max.retry.interval;  Ignoring.
> > 2016-01-19 20:04:14,057 WARN [main] org.apache.hadoop.conf.Configuration:
> > job.xml:an attempt to override final parameter: dfs.namenode.name.dir;
> > Ignoring.
> > 2016-01-19 20:04:14,059 WARN [main] org.apache.hadoop.conf.Configuration:
> > job.xml:an attempt to override final parameter:
> > mapreduce.job.end-notification.max.attempts;  Ignoring.
> > 2016-01-19 20:04:14,062 INFO [main]
> org.apache.hadoop.yarn.client.RMProxy:
> > Connecting to ResourceManager at hdnode01/192.168.0.10:8030
> > 2016-01-19 20:04:14,158 INFO [main]
> > org.apache.hadoop.mapreduce.v2.app.rm.RMContainerAllocator:
> > maxContainerCapability: 2000
> > 2016-01-19 20:04:14,158 INFO [main]
> > org.apache.hadoop.mapreduce.v2.app.rm.RMContainerAllocator: queue:
> default
> > 2016-01-19 20:04:14,162 INFO [main]
> > org.apache.hadoop.mapreduce.v2.app.launcher.ContainerLauncherImpl: Upper
> > limit on the thread pool size is 500
> > 2016-01-19 20:04:14,164 INFO [main]
> > org.apache.hadoop.yarn.client.api.impl.ContainerManagementProtocolProxy:
> > yarn.client.max-nodemanagers-proxies : 500
> > 2016-01-19 20:04:14,172 INFO [AsyncDispatcher event handler]
> > org.apache.hadoop.mapreduce.v2.app.job.impl.JobImpl:
> > job_1453244277886_0001Job Transitioned from INITED to SETUP
> > 2016-01-19 20:04:14,174 INFO [CommitterEvent Processor #0]
> > org.apache.hadoop.mapreduce.v2.app.commit.CommitterEventHandler:
> Processing
> > the event EventType: JOB_SETUP
> > 2016-01-19 20:04:14,210 INFO [AsyncDispatcher event handler]
> > org.apache.hadoop.mapreduce.v2.app.job.impl.JobImpl:
> > job_1453244277886_0001Job Transitioned from SETUP to RUNNING
> > 2016-01-19 20:04:14,227 INFO [AsyncDispatcher event handler]
> > org.apache.hadoop.yarn.util.RackResolver: Resolved localhost to
> > /default-rack
> > 2016-01-19 20:04:14,230 INFO [AsyncDispatcher event handler]
> > org.apache.hadoop.mapreduce.v2.app.job.impl.TaskImpl:
> > task_1453244277886_0001_m_000000 Task Transitioned from NEW to SCHEDULED
> > 2016-01-19 20:04:14,230 INFO [AsyncDispatcher event handler]
> > org.apache.hadoop.mapreduce.v2.app.job.impl.TaskImpl:
> > task_1453244277886_0001_r_000000 Task Transitioned from NEW to SCHEDULED
> > 2016-01-19 20:04:14,232 INFO [AsyncDispatcher event handler]
> > org.apache.hadoop.mapreduce.v2.app.job.impl.TaskAttemptImpl:
> > attempt_1453244277886_0001_m_000000_0 TaskAttempt Transitioned from NEW
> to
> > UNASSIGNED
> > 2016-01-19 20:04:14,232 INFO [AsyncDispatcher event handler]
> > org.apache.hadoop.mapreduce.v2.app.job.impl.TaskAttemptImpl:
> > attempt_1453244277886_0001_r_000000_0 TaskAttempt Transitioned from NEW
> to
> > UNASSIGNED
> > 2016-01-19 20:04:14,233 INFO [Thread-50]
> > org.apache.hadoop.mapreduce.v2.app.rm.RMContainerAllocator:
> > mapResourceReqt:512
> > 2016-01-19 20:04:14,245 INFO [Thread-50]
> > org.apache.hadoop.mapreduce.v2.app.rm.RMContainerAllocator:
> > reduceResourceReqt:512
> > 2016-01-19 20:04:14,324 INFO [eventHandlingThread]
> > org.apache.hadoop.mapreduce.jobhistory.JobHistoryEventHandler: Event
> Writer
> > setup for JobId: job_1453244277886_0001, File:
> >
> hdfs://hdnode01:54310/tmp/hadoop-yarn/staging/hduser/.staging/job_1453244277886_0001/job_1453244277886_0001_1.jhist
> > 2016-01-19 20:04:15,162 INFO [RMCommunicator Allocator]
> > org.apache.hadoop.mapreduce.v2.app.rm.RMContainerAllocator: Before
> > Scheduling: PendingReds:1 ScheduledMaps:1 ScheduledReds:0 AssignedMaps:0
> > AssignedReds:0 CompletedMaps:0 CompletedReds:0 ContAlloc:0 ContRel:0
> > HostLocal:0 RackLocal:0
> > 2016-01-19 20:04:15,217 INFO [RMCommunicator Allocator]
> > org.apache.hadoop.mapreduce.v2.app.rm.RMContainerRequestor:
> getResources()
> > for application_1453244277886_0001: ask=3 release= 0 newContainers=0
> > finishedContainers=0 resourcelimit=<memory:1280, vCores:0> knownNMs=1
> > 2016-01-19 20:04:15,217 INFO [RMCommunicator Allocator]
> > org.apache.hadoop.mapreduce.v2.app.rm.RMContainerAllocator: Recalculating
> > schedule, headroom=1280
> > 2016-01-19 20:04:15,217 INFO [RMCommunicator Allocator]
> > org.apache.hadoop.mapreduce.v2.app.rm.RMContainerAllocator: Reduce slow
> > start threshold not met. completedMapsForReduceSlowstart 1
> > 2016-01-19 20:04:16,240 INFO [RMCommunicator Allocator]
> > org.apache.hadoop.mapreduce.v2.app.rm.RMContainerAllocator: Got allocated
> > containers 1
> > 2016-01-19 20:04:16,241 INFO [RMCommunicator Allocator]
> > org.apache.hadoop.mapreduce.v2.app.rm.RMContainerAllocator: Assigned
> > container container_1453244277886_0001_01_000002 to
> > attempt_1453244277886_0001_m_000000_0
> > 2016-01-19 20:04:16,243 INFO [RMCommunicator Allocator]
> > org.apache.hadoop.mapreduce.v2.app.rm.RMContainerAllocator: Recalculating
> > schedule, headroom=768
> > 2016-01-19 20:04:16,243 INFO [RMCommunicator Allocator]
> > org.apache.hadoop.mapreduce.v2.app.rm.RMContainerAllocator: Reduce slow
> > start threshold not met. completedMapsForReduceSlowstart 1
> > 2016-01-19 20:04:16,243 INFO [RMCommunicator Allocator]
> > org.apache.hadoop.mapreduce.v2.app.rm.RMContainerAllocator: After
> > Scheduling: PendingReds:1 ScheduledMaps:0 ScheduledReds:0 AssignedMaps:1
> > AssignedReds:0 CompletedMaps:0 CompletedReds:0 ContAlloc:1 ContRel:0
> > HostLocal:1 RackLocal:0
> > 2016-01-19 20:04:16,291 INFO [AsyncDispatcher event handler]
> > org.apache.hadoop.yarn.util.RackResolver: Resolved localhost to
> > /default-rack
> > 2016-01-19 20:04:16,316 INFO [AsyncDispatcher event handler]
> > org.apache.hadoop.mapreduce.v2.app.job.impl.TaskAttemptImpl: The job-jar
> > file on the remote FS is
> >
> hdfs://hdnode01:54310/tmp/hadoop-yarn/staging/hduser/.staging/job_1453244277886_0001/job.jar
> > 2016-01-19 20:04:16,322 INFO [AsyncDispatcher event handler]
> > org.apache.hadoop.mapreduce.v2.app.job.impl.TaskAttemptImpl: The job-conf
> > file on the remote FS is
> > /tmp/hadoop-yarn/staging/hduser/.staging/job_1453244277886_0001/job.xml
> > 2016-01-19 20:04:16,325 INFO [AsyncDispatcher event handler]
> > org.apache.hadoop.mapreduce.v2.app.job.impl.TaskAttemptImpl: Adding #0
> > tokens and #1 secret keys for NM use for launching container
> > 2016-01-19 20:04:16,325 INFO [AsyncDispatcher event handler]
> > org.apache.hadoop.mapreduce.v2.app.job.impl.TaskAttemptImpl: Size of
> > containertokens_dob is 1
> > 2016-01-19 20:04:16,325 INFO [AsyncDispatcher event handler]
> > org.apache.hadoop.mapreduce.v2.app.job.impl.TaskAttemptImpl: Putting
> shuffle
> > token in serviceData
> > 2016-01-19 20:04:16,350 INFO [AsyncDispatcher event handler]
> > org.apache.hadoop.mapreduce.v2.app.job.impl.TaskAttemptImpl:
> > attempt_1453244277886_0001_m_000000_0 TaskAttempt Transitioned from
> > UNASSIGNED to ASSIGNED
> > 2016-01-19 20:04:16,354 INFO [ContainerLauncher #0]
> > org.apache.hadoop.mapreduce.v2.app.launcher.ContainerLauncherImpl:
> > Processing the event EventType: CONTAINER_REMOTE_LAUNCH for container
> > container_1453244277886_0001_01_000002 taskAttempt
> > attempt_1453244277886_0001_m_000000_0
> > 2016-01-19 20:04:16,356 INFO [ContainerLauncher #0]
> > org.apache.hadoop.mapreduce.v2.app.launcher.ContainerLauncherImpl:
> Launching
> > attempt_1453244277886_0001_m_000000_0
> > 2016-01-19 20:04:16,357 INFO [ContainerLauncher #0]
> > org.apache.hadoop.yarn.client.api.impl.ContainerManagementProtocolProxy:
> > Opening proxy : localhost:35711
> > 2016-01-19 20:04:16,411 INFO [ContainerLauncher #0]
> > org.apache.hadoop.mapreduce.v2.app.launcher.ContainerLauncherImpl:
> Shuffle
> > port returned by ContainerManager for
> attempt_1453244277886_0001_m_000000_0
> > : 13562
> > 2016-01-19 20:04:16,413 INFO [AsyncDispatcher event handler]
> > org.apache.hadoop.mapreduce.v2.app.job.impl.TaskAttemptImpl: TaskAttempt:
> > [attempt_1453244277886_0001_m_000000_0] using containerId:
> > [container_1453244277886_0001_01_000002 on NM: [localhost:35711]
> > 2016-01-19 20:04:16,418 INFO [AsyncDispatcher event handler]
> > org.apache.hadoop.mapreduce.v2.app.job.impl.TaskAttemptImpl:
> > attempt_1453244277886_0001_m_000000_0 TaskAttempt Transitioned from
> ASSIGNED
> > to RUNNING
> > 2016-01-19 20:04:16,419 INFO [AsyncDispatcher event handler]
> > org.apache.hadoop.mapreduce.v2.app.speculate.DefaultSpeculator:
> > ATTEMPT_START task_1453244277886_0001_m_000000
> > 2016-01-19 20:04:16,419 INFO [AsyncDispatcher event handler]
> > org.apache.hadoop.mapreduce.v2.app.job.impl.TaskImpl:
> > task_1453244277886_0001_m_000000 Task Transitioned from SCHEDULED to
> RUNNING
> > 2016-01-19 20:04:17,251 INFO [RMCommunicator Allocator]
> > org.apache.hadoop.mapreduce.v2.app.rm.RMContainerRequestor:
> getResources()
> > for application_1453244277886_0001: ask=3 release= 0 newContainers=0
> > finishedContainers=0 resourcelimit=<memory:768, vCores:-1> knownNMs=1
> > 2016-01-19 20:04:18,269 INFO [RMCommunicator Allocator]
> > org.apache.hadoop.mapreduce.v2.app.rm.RMContainerAllocator: Received
> > completed container container_1453244277886_0001_01_000002
> > 2016-01-19 20:04:18,269 INFO [RMCommunicator Allocator]
> > org.apache.hadoop.mapreduce.v2.app.rm.RMContainerAllocator: Recalculating
> > schedule, headroom=768
> > 2016-01-19 20:04:18,269 INFO [RMCommunicator Allocator]
> > org.apache.hadoop.mapreduce.v2.app.rm.RMContainerAllocator: Reduce slow
> > start threshold not met. completedMapsForReduceSlowstart 1
> > 2016-01-19 20:04:18,270 INFO [RMCommunicator Allocator]
> > org.apache.hadoop.mapreduce.v2.app.rm.RMContainerAllocator: After
> > Scheduling: PendingReds:1 ScheduledMaps:0 ScheduledReds:0 AssignedMaps:0
> > AssignedReds:0 CompletedMaps:0 CompletedReds:0 ContAlloc:1 ContRel:0
> > HostLocal:1 RackLocal:0
> > 2016-01-19 20:04:18,280 INFO [AsyncDispatcher event handler]
> > org.apache.hadoop.mapreduce.v2.app.job.impl.TaskAttemptImpl:
> > attempt_1453244277886_0001_m_000000_0 TaskAttempt Transitioned from
> RUNNING
> > to FAIL_CONTAINER_CLEANUP
> > 2016-01-19 20:04:18,280 INFO [AsyncDispatcher event handler]
> > org.apache.hadoop.mapreduce.v2.app.job.impl.TaskAttemptImpl: Diagnostics
> > report from attempt_1453244277886_0001_m_000000_0: Exception from
> > container-launch: org.apache.hadoop.util.Shell$ExitCodeException:
> > org.apache.hadoop.util.Shell$ExitCodeException:
> >     at org.apache.hadoop.util.Shell.runCommand(Shell.java:505)
> >     at org.apache.hadoop.util.Shell.run(Shell.java:418)
> >     at
> > org.apache.hadoop.util.Shell$ShellCommandExecutor.execute(Shell.java:650)
> >     at
> >
> org.apache.hadoop.yarn.server.nodemanager.DefaultContainerExecutor.launchContainer(DefaultContainerExecutor.java:195)
> >     at
> >
> org.apache.hadoop.yarn.server.nodemanager.containermanager.launcher.ContainerLaunch.call(ContainerLaunch.java:300)
> >     at
> >
> org.apache.hadoop.yarn.server.nodemanager.containermanager.launcher.ContainerLaunch.call(ContainerLaunch.java:81)
> >     at java.util.concurrent.FutureTask.run(FutureTask.java:262)
> >     at
> >
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
> >     at
> >
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
> >     at java.lang.Thread.run(Thread.java:745)
> >
> >
> > Container exited with a non-zero exit code 1
> >
> > 2016-01-19 20:04:18,281 INFO [ContainerLauncher #1]
> > org.apache.hadoop.mapreduce.v2.app.launcher.ContainerLauncherImpl:
> > Processing the event EventType: CONTAINER_REMOTE_CLEANUP for container
> > container_1453244277886_0001_01_000002 taskAttempt
> > attempt_1453244277886_0001_m_000000_0
> > 2016-01-19 20:04:18,281 INFO [ContainerLauncher #1]
> > org.apache.hadoop.mapreduce.v2.app.launcher.ContainerLauncherImpl:
> KILLING
> > attempt_1453244277886_0001_m_000000_0
> > 2016-01-19 20:04:18,299 INFO [AsyncDispatcher event handler]
> > org.apache.hadoop.mapreduce.v2.app.job.impl.TaskAttemptImpl:
> > attempt_1453244277886_0001_m_000000_0 TaskAttempt Transitioned from
> > FAIL_CONTAINER_CLEANUP to FAIL_TASK_CLEANUP
> > 2016-01-19 20:04:18,300 INFO [CommitterEvent Processor #1]
> > org.apache.hadoop.mapreduce.v2.app.commit.CommitterEventHandler:
> Processing
> > the event EventType: TASK_ABORT
> > 2016-01-19 20:04:18,313 WARN [CommitterEvent Processor #1]
> > org.apache.hadoop.mapreduce.lib.output.FileOutputCommitter: Could not
> delete
> >
> hdfs://hdnode01:54310/user/hduser/output/wordcount/_temporary/1/_temporary/attempt_1453244277886_0001_m_000000_0
> > 2016-01-19 20:04:18,317 INFO [AsyncDispatcher event handler]
> > org.apache.hadoop.mapreduce.v2.app.job.impl.TaskAttemptImpl:
> > attempt_1453244277886_0001_m_000000_0 TaskAttempt Transitioned from
> > FAIL_TASK_CLEANUP to FAILED
> > 2016-01-19 20:04:18,326 INFO [AsyncDispatcher event handler]
> > org.apache.hadoop.yarn.util.RackResolver: Resolved localhost to
> > /default-rack
> > 2016-01-19 20:04:18,327 INFO [Thread-50]
> > org.apache.hadoop.mapreduce.v2.app.rm.RMContainerRequestor: 1 failures on
> > node localhost
> > 2016-01-19 20:04:18,329 INFO [AsyncDispatcher event handler]
> > org.apache.hadoop.mapreduce.v2.app.job.impl.TaskAttemptImpl:
> > attempt_1453244277886_0001_m_000000_1 TaskAttempt Transitioned from NEW
> to
> > UNASSIGNED
> > 2016-01-19 20:04:18,329 INFO [Thread-50]
> > org.apache.hadoop.mapreduce.v2.app.rm.RMContainerAllocator: Added
> > attempt_1453244277886_0001_m_000000_1 to list of failed maps
> > 2016-01-19 20:04:19,270 INFO [RMCommunicator Allocator]
> > org.apache.hadoop.mapreduce.v2.app.rm.RMContainerAllocator: Before
> > Scheduling: PendingReds:1 ScheduledMaps:1 ScheduledReds:0 AssignedMaps:0
> > AssignedReds:0 CompletedMaps:0 CompletedReds:0 ContAlloc:1 ContRel:0
> > HostLocal:1 RackLocal:0
> > 2016-01-19 20:04:19,277 INFO [RMCommunicator Allocator]
> > org.apache.hadoop.mapreduce.v2.app.rm.RMContainerRequestor:
> getResources()
> > for application_1453244277886_0001: ask=1 release= 0 newContainers=0
> > finishedContainers=0 resourcelimit=<memory:768, vCores:-1> knownNMs=1
> > 2016-01-19 20:04:19,277 INFO [RMCommunicator Allocator]
> > org.apache.hadoop.mapreduce.v2.app.rm.RMContainerAllocator: Recalculating
> > schedule, headroom=768
> > 2016-01-19 20:04:19,278 INFO [RMCommunicator Allocator]
> > org.apache.hadoop.mapreduce.v2.app.rm.RMContainerAllocator: Reduce slow
> > start threshold not met. completedMapsForReduceSlowstart 1
> > 2016-01-19 20:04:20,285 INFO [RMCommunicator Allocator]
> > org.apache.hadoop.mapreduce.v2.app.rm.RMContainerAllocator: Got allocated
> > containers 1
> > 2016-01-19 20:04:20,285 INFO [RMCommunicator Allocator]
> > org.apache.hadoop.mapreduce.v2.app.rm.RMContainerAllocator: Assigning
> > container Container: [ContainerId:
> container_1453244277886_0001_01_000003,
> > NodeId: localhost:35711, NodeHttpAddress: localhost:8042, Resource:
> > <memory:512, vCores:1>, Priority: 5, Token: Token { kind: ContainerToken,
> > service: 127.0.0.1:35711 }, ] to fast fail map
> > 2016-01-19 20:04:20,286 INFO [RMCommunicator Allocator]
> > org.apache.hadoop.mapreduce.v2.app.rm.RMContainerAllocator: Assigned from
> > earlierFailedMaps
> > 2016-01-19 20:04:20,286 INFO [RMCommunicator Allocator]
> > org.apache.hadoop.mapreduce.v2.app.rm.RMContainerAllocator: Assigned
> > container container_1453244277886_0001_01_000003 to
> > attempt_1453244277886_0001_m_000000_1
> > 2016-01-19 20:04:20,287 INFO [RMCommunicator Allocator]
> > org.apache.hadoop.mapreduce.v2.app.rm.RMContainerAllocator: Recalculating
> > schedule, headroom=768
> > 2016-01-19 20:04:20,287 INFO [RMCommunicator Allocator]
> > org.apache.hadoop.mapreduce.v2.app.rm.RMContainerAllocator: Reduce slow
> > start threshold not met. completedMapsForReduceSlowstart 1
> > 2016-01-19 20:04:20,287 INFO [RMCommunicator Allocator]
> > org.apache.hadoop.mapreduce.v2.app.rm.RMContainerAllocator: After
> > Scheduling: PendingReds:1 ScheduledMaps:0 ScheduledReds:0 AssignedMaps:1
> > AssignedReds:0 CompletedMaps:0 CompletedReds:0 ContAlloc:2 ContRel:0
> > HostLocal:1 RackLocal:0
> > 2016-01-19 20:04:20,287 INFO [AsyncDispatcher event handler]
> > org.apache.hadoop.yarn.util.RackResolver: Resolved localhost to
> > /default-rack
> > 2016-01-19 20:04:20,289 INFO [AsyncDispatcher event handler]
> > org.apache.hadoop.mapreduce.v2.app.job.impl.TaskAttemptImpl:
> > attempt_1453244277886_0001_m_000000_1 TaskAttempt Transitioned from
> > UNASSIGNED to ASSIGNED
> > 2016-01-19 20:04:20,292 INFO [ContainerLauncher #2]
> > org.apache.hadoop.mapreduce.v2.app.launcher.ContainerLauncherImpl:
> > Processing the event EventType: CONTAINER_REMOTE_LAUNCH for container
> > container_1453244277886_0001_01_000003 taskAttempt
> > attempt_1453244277886_0001_m_000000_1
> > 2016-01-19 20:04:20,292 INFO [ContainerLauncher #2]
> > org.apache.hadoop.mapreduce.v2.app.launcher.ContainerLauncherImpl:
> Launching
> > attempt_1453244277886_0001_m_000000_1
> > 2016-01-19 20:04:20,301 INFO [ContainerLauncher #2]
> > org.apache.hadoop.mapreduce.v2.app.launcher.ContainerLauncherImpl:
> Shuffle
> > port returned by ContainerManager for
> attempt_1453244277886_0001_m_000000_1
> > : 13562
> > 2016-01-19 20:04:20,302 INFO [AsyncDispatcher event handler]
> > org.apache.hadoop.mapreduce.v2.app.job.impl.TaskAttemptImpl: TaskAttempt:
> > [attempt_1453244277886_0001_m_000000_1] using containerId:
> > [container_1453244277886_0001_01_000003 on NM: [localhost:35711]
> > 2016-01-19 20:04:20,303 INFO [AsyncDispatcher event handler]
> > org.apache.hadoop.mapreduce.v2.app.job.impl.TaskAttemptImpl:
> > attempt_1453244277886_0001_m_000000_1 TaskAttempt Transitioned from
> ASSIGNED
> > to RUNNING
> > 2016-01-19 20:04:20,304 INFO [AsyncDispatcher event handler]
> > org.apache.hadoop.mapreduce.v2.app.speculate.DefaultSpeculator:
> > ATTEMPT_START task_1453244277886_0001_m_000000
> > 2016-01-19 20:04:21,295 INFO [RMCommunicator Allocator]
> > org.apache.hadoop.mapreduce.v2.app.rm.RMContainerRequestor:
> getResources()
> > for application_1453244277886_0001: ask=1 release= 0 newContainers=0
> > finishedContainers=1 resourcelimit=<memory:768, vCores:-1> knownNMs=1
> > 2016-01-19 20:04:21,296 INFO [RMCommunicator Allocator]
> > org.apache.hadoop.mapreduce.v2.app.rm.RMContainerAllocator: Received
> > completed container container_1453244277886_0001_01_000003
> > 2016-01-19 20:04:21,296 INFO [RMCommunicator Allocator]
> > org.apache.hadoop.mapreduce.v2.app.rm.RMContainerAllocator: Recalculating
> > schedule, headroom=768
> > 2016-01-19 20:04:21,296 INFO [RMCommunicator Allocator]
> > org.apache.hadoop.mapreduce.v2.app.rm.RMContainerAllocator: Reduce slow
> > start threshold not met. completedMapsForReduceSlowstart 1
> > 2016-01-19 20:04:21,297 INFO [RMCommunicator Allocator]
> > org.apache.hadoop.mapreduce.v2.app.rm.RMContainerAllocator: After
> > Scheduling: PendingReds:1 ScheduledMaps:0 ScheduledReds:0 AssignedMaps:0
> > AssignedReds:0 CompletedMaps:0 CompletedReds:0 ContAlloc:2 ContRel:0
> > HostLocal:1 RackLocal:0
> > 2016-01-19 20:04:21,297 INFO [AsyncDispatcher event handler]
> > org.apache.hadoop.mapreduce.v2.app.job.impl.TaskAttemptImpl:
> > attempt_1453244277886_0001_m_000000_1 TaskAttempt Transitioned from
> RUNNING
> > to FAIL_CONTAINER_CLEANUP
> > 2016-01-19 20:04:21,298 INFO [AsyncDispatcher event handler]
> > org.apache.hadoop.mapreduce.v2.app.job.impl.TaskAttemptImpl: Diagnostics
> > report from attempt_1453244277886_0001_m_000000_1: Exception from
> > container-launch: org.apache.hadoop.util.Shell$ExitCodeException:
> > org.apache.hadoop.util.Shell$ExitCodeException:
> >     at org.apache.hadoop.util.Shell.runCommand(Shell.java:505)
> >     at org.apache.hadoop.util.Shell.run(Shell.java:418)
> >     at
> > org.apache.hadoop.util.Shell$ShellCommandExecutor.execute(Shell.java:650)
> >     at
> >
> org.apache.hadoop.yarn.server.nodemanager.DefaultContainerExecutor.launchContainer(DefaultContainerExecutor.java:195)
> >     at
> >
> org.apache.hadoop.yarn.server.nodemanager.containermanager.launcher.ContainerLaunch.call(ContainerLaunch.java:300)
> >     at
> >
> org.apache.hadoop.yarn.server.nodemanager.containermanager.launcher.ContainerLaunch.call(ContainerLaunch.java:81)
> >     at java.util.concurrent.FutureTask.run(FutureTask.java:262)
> >     at
> >
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
> >     at
> >
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
> >     at java.lang.Thread.run(Thread.java:745)
> >
> >
> > Container exited with a non-zero exit code 1
> >
> > 2016-01-19 20:04:21,300 INFO [ContainerLauncher #3]
> > org.apache.hadoop.mapreduce.v2.app.launcher.ContainerLauncherImpl:
> > Processing the event EventType: CONTAINER_REMOTE_CLEANUP for container
> > container_1453244277886_0001_01_000003 taskAttempt
> > attempt_1453244277886_0001_m_000000_1
> > 2016-01-19 20:04:21,301 INFO [ContainerLauncher #3]
> > org.apache.hadoop.mapreduce.v2.app.launcher.ContainerLauncherImpl:
> KILLING
> > attempt_1453244277886_0001_m_000000_1
> > 2016-01-19 20:04:21,307 INFO [AsyncDispatcher event handler]
> > org.apache.hadoop.mapreduce.v2.app.job.impl.TaskAttemptImpl:
> > attempt_1453244277886_0001_m_000000_1 TaskAttempt Transitioned from
> > FAIL_CONTAINER_CLEANUP to FAIL_TASK_CLEANUP
> > 2016-01-19 20:04:21,308 INFO [CommitterEvent Processor #2]
> > org.apache.hadoop.mapreduce.v2.app.commit.CommitterEventHandler:
> Processing
> > the event EventType: TASK_ABORT
> > 2016-01-19 20:04:21,312 WARN [CommitterEvent Processor #2]
> > org.apache.hadoop.mapreduce.lib.output.FileOutputCommitter: Could not
> delete
> >
> hdfs://hdnode01:54310/user/hduser/output/wordcount/_temporary/1/_temporary/attempt_1453244277886_0001_m_000000_1
> > 2016-01-19 20:04:21,312 INFO [AsyncDispatcher event handler]
> > org.apache.hadoop.mapreduce.v2.app.job.impl.TaskAttemptImpl:
> > attempt_1453244277886_0001_m_000000_1 TaskAttempt Transitioned from
> > FAIL_TASK_CLEANUP to FAILED
> > 2016-01-19 20:04:21,313 INFO [AsyncDispatcher event handler]
> > org.apache.hadoop.yarn.util.RackResolver: Resolved localhost to
> > /default-rack
> > 2016-01-19 20:04:21,313 INFO [Thread-50]
> > org.apache.hadoop.mapreduce.v2.app.rm.RMContainerRequestor: 2 failures on
> > node localhost
> > 2016-01-19 20:04:21,313 INFO [AsyncDispatcher event handler]
> > org.apache.hadoop.mapreduce.v2.app.job.impl.TaskAttemptImpl:
> > attempt_1453244277886_0001_m_000000_2 TaskAttempt Transitioned from NEW
> to
> > UNASSIGNED
> > 2016-01-19 20:04:21,314 INFO [Thread-50]
> > org.apache.hadoop.mapreduce.v2.app.rm.RMContainerAllocator: Added
> > attempt_1453244277886_0001_m_000000_2 to list of failed maps
> > 2016-01-19 20:04:22,297 INFO [RMCommunicator Allocator]
> > org.apache.hadoop.mapreduce.v2.app.rm.RMContainerAllocator: Before
> > Scheduling: PendingReds:1 ScheduledMaps:1 ScheduledReds:0 AssignedMaps:0
> > AssignedReds:0 CompletedMaps:0 CompletedReds:0 ContAlloc:2 ContRel:0
> > HostLocal:1 RackLocal:0
> > 2016-01-19 20:04:22,304 INFO [RMCommunicator Allocator]
> > org.apache.hadoop.mapreduce.v2.app.rm.RMContainerRequestor:
> getResources()
> > for application_1453244277886_0001: ask=1 release= 0 newContainers=0
> > finishedContainers=0 resourcelimit=<memory:768, vCores:-1> knownNMs=1
> > 2016-01-19 20:04:22,305 INFO [RMCommunicator Allocator]
> > org.apache.hadoop.mapreduce.v2.app.rm.RMContainerAllocator: Recalculating
> > schedule, headroom=768
> > 2016-01-19 20:04:22,305 INFO [RMCommunicator Allocator]
> > org.apache.hadoop.mapreduce.v2.app.rm.RMContainerAllocator: Reduce slow
> > start threshold not met. completedMapsForReduceSlowstart 1
> > 2016-01-19 20:04:23,316 INFO [RMCommunicator Allocator]
> > org.apache.hadoop.mapreduce.v2.app.rm.RMContainerAllocator: Got allocated
> > containers 1
> > 2016-01-19 20:04:23,316 INFO [RMCommunicator Allocator]
> > org.apache.hadoop.mapreduce.v2.app.rm.RMContainerAllocator: Assigning
> > container Container: [ContainerId:
> container_1453244277886_0001_01_000004,
> > NodeId: localhost:35711, NodeHttpAddress: localhost:8042, Resource:
> > <memory:512, vCores:1>, Priority: 5, Token: Token { kind: ContainerToken,
> > service: 127.0.0.1:35711 }, ] to fast fail map
> > 2016-01-19 20:04:23,317 INFO [RMCommunicator Allocator]
> > org.apache.hadoop.mapreduce.v2.app.rm.RMContainerAllocator: Assigned from
> > earlierFailedMaps
> > 2016-01-19 20:04:23,317 INFO [RMCommunicator Allocator]
> > org.apache.hadoop.mapreduce.v2.app.rm.RMContainerAllocator: Assigned
> > container container_1453244277886_0001_01_000004 to
> > attempt_1453244277886_0001_m_000000_2
> > 2016-01-19 20:04:23,318 INFO [RMCommunicator Allocator]
> > org.apache.hadoop.mapreduce.v2.app.rm.RMContainerAllocator: Recalculating
> > schedule, headroom=768
> > 2016-01-19 20:04:23,318 INFO [RMCommunicator Allocator]
> > org.apache.hadoop.mapreduce.v2.app.rm.RMContainerAllocator: Reduce slow
> > start threshold not met. completedMapsForReduceSlowstart 1
> > 2016-01-19 20:04:23,318 INFO [RMCommunicator Allocator]
> > org.apache.hadoop.mapreduce.v2.app.rm.RMContainerAllocator: After
> > Scheduling: PendingReds:1 ScheduledMaps:0 ScheduledReds:0 AssignedMaps:1
> > AssignedReds:0 CompletedMaps:0 CompletedReds:0 ContAlloc:3 ContRel:0
> > HostLocal:1 RackLocal:0
> > 2016-01-19 20:04:23,318 INFO [AsyncDispatcher event handler]
> > org.apache.hadoop.yarn.util.RackResolver: Resolved localhost to
> > /default-rack
> > 2016-01-19 20:04:23,320 INFO [AsyncDispatcher event handler]
> > org.apache.hadoop.mapreduce.v2.app.job.impl.TaskAttemptImpl:
> > attempt_1453244277886_0001_m_000000_2 TaskAttempt Transitioned from
> > UNASSIGNED to ASSIGNED
> > 2016-01-19 20:04:23,323 INFO [ContainerLauncher #4]
> > org.apache.hadoop.mapreduce.v2.app.launcher.ContainerLauncherImpl:
> > Processing the event EventType: CONTAINER_REMOTE_LAUNCH for container
> > container_1453244277886_0001_01_000004 taskAttempt
> > attempt_1453244277886_0001_m_000000_2
> > 2016-01-19 20:04:23,323 INFO [ContainerLauncher #4]
> > org.apache.hadoop.mapreduce.v2.app.launcher.ContainerLauncherImpl:
> Launching
> > attempt_1453244277886_0001_m_000000_2
> > 2016-01-19 20:04:23,334 INFO [ContainerLauncher #4]
> > org.apache.hadoop.mapreduce.v2.app.launcher.ContainerLauncherImpl:
> Shuffle
> > port returned by ContainerManager for
> attempt_1453244277886_0001_m_000000_2
> > : 13562
> > 2016-01-19 20:04:23,335 INFO [AsyncDispatcher event handler]
> > org.apache.hadoop.mapreduce.v2.app.job.impl.TaskAttemptImpl: TaskAttempt:
> > [attempt_1453244277886_0001_m_000000_2] using containerId:
> > [container_1453244277886_0001_01_000004 on NM: [localhost:35711]
> > 2016-01-19 20:04:23,335 INFO [AsyncDispatcher event handler]
> > org.apache.hadoop.mapreduce.v2.app.job.impl.TaskAttemptImpl:
> > attempt_1453244277886_0001_m_000000_2 TaskAttempt Transitioned from
> ASSIGNED
> > to RUNNING
> > 2016-01-19 20:04:23,335 INFO [AsyncDispatcher event handler]
> > org.apache.hadoop.mapreduce.v2.app.speculate.DefaultSpeculator:
> > ATTEMPT_START task_1453244277886_0001_m_000000
> > 2016-01-19 20:04:24,326 INFO [RMCommunicator Allocator]
> > org.apache.hadoop.mapreduce.v2.app.rm.RMContainerRequestor:
> getResources()
> > for application_1453244277886_0001: ask=1 release= 0 newContainers=0
> > finishedContainers=1 resourcelimit=<memory:768, vCores:-1> knownNMs=1
> > 2016-01-19 20:04:24,327 INFO [RMCommunicator Allocator]
> > org.apache.hadoop.mapreduce.v2.app.rm.RMContainerAllocator: Received
> > completed container container_1453244277886_0001_01_000004
> > 2016-01-19 20:04:24,327 INFO [RMCommunicator Allocator]
> > org.apache.hadoop.mapreduce.v2.app.rm.RMContainerAllocator: Recalculating
> > schedule, headroom=768
> > 2016-01-19 20:04:24,327 INFO [RMCommunicator Allocator]
> > org.apache.hadoop.mapreduce.v2.app.rm.RMContainerAllocator: Reduce slow
> > start threshold not met. completedMapsForReduceSlowstart 1
> > 2016-01-19 20:04:24,328 INFO [RMCommunicator Allocator]
> > org.apache.hadoop.mapreduce.v2.app.rm.RMContainerAllocator: After
> > Scheduling: PendingReds:1 ScheduledMaps:0 ScheduledReds:0 AssignedMaps:0
> > AssignedReds:0 CompletedMaps:0 CompletedReds:0 ContAlloc:3 ContRel:0
> > HostLocal:1 RackLocal:0
> > 2016-01-19 20:04:24,328 INFO [AsyncDispatcher event handler]
> > org.apache.hadoop.mapreduce.v2.app.job.impl.TaskAttemptImpl:
> > attempt_1453244277886_0001_m_000000_2 TaskAttempt Transitioned from
> RUNNING
> > to FAIL_CONTAINER_CLEANUP
> > 2016-01-19 20:04:24,328 INFO [AsyncDispatcher event handler]
> > org.apache.hadoop.mapreduce.v2.app.job.impl.TaskAttemptImpl: Diagnostics
> > report from attempt_1453244277886_0001_m_000000_2: Exception from
> > container-launch: org.apache.hadoop.util.Shell$ExitCodeException:
> > org.apache.hadoop.util.Shell$ExitCodeException:
> >     at org.apache.hadoop.util.Shell.runCommand(Shell.java:505)
> >     at org.apache.hadoop.util.Shell.run(Shell.java:418)
> >     at
> > org.apache.hadoop.util.Shell$ShellCommandExecutor.execute(Shell.java:650)
> >     at
> >
> org.apache.hadoop.yarn.server.nodemanager.DefaultContainerExecutor.launchContainer(DefaultContainerExecutor.java:195)
> >     at
> >
> org.apache.hadoop.yarn.server.nodemanager.containermanager.launcher.ContainerLaunch.call(ContainerLaunch.java:300)
> >     at
> >
> org.apache.hadoop.yarn.server.nodemanager.containermanager.launcher.ContainerLaunch.call(ContainerLaunch.java:81)
> >     at java.util.concurrent.FutureTask.run(FutureTask.java:262)
> >     at
> >
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
> >     at
> >
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
> >     at java.lang.Thread.run(Thread.java:745)
> >
> >
> > Container exited with a non-zero exit code 1
> >
> > 2016-01-19 20:04:24,331 INFO [ContainerLauncher #5]
> > org.apache.hadoop.mapreduce.v2.app.launcher.ContainerLauncherImpl:
> > Processing the event EventType: CONTAINER_REMOTE_CLEANUP for container
> > container_1453244277886_0001_01_000004 taskAttempt
> > attempt_1453244277886_0001_m_000000_2
> > 2016-01-19 20:04:24,331 INFO [ContainerLauncher #5]
> > org.apache.hadoop.mapreduce.v2.app.launcher.ContainerLauncherImpl:
> KILLING
> > attempt_1453244277886_0001_m_000000_2
> > 2016-01-19 20:04:24,335 INFO [AsyncDispatcher event handler]
> > org.apache.hadoop.mapreduce.v2.app.job.impl.TaskAttemptImpl:
> > attempt_1453244277886_0001_m_000000_2 TaskAttempt Transitioned from
> > FAIL_CONTAINER_CLEANUP to FAIL_TASK_CLEANUP
> > 2016-01-19 20:04:24,336 INFO [CommitterEvent Processor #3]
> > org.apache.hadoop.mapreduce.v2.app.commit.CommitterEventHandler:
> Processing
> > the event EventType: TASK_ABORT
> > 2016-01-19 20:04:24,340 WARN [CommitterEvent Processor #3]
> > org.apache.hadoop.mapreduce.lib.output.FileOutputCommitter: Could not
> delete
> >
> hdfs://hdnode01:54310/user/hduser/output/wordcount/_temporary/1/_temporary/attempt_1453244277886_0001_m_000000_2
> > 2016-01-19 20:04:24,341 INFO [AsyncDispatcher event handler]
> > org.apache.hadoop.mapreduce.v2.app.job.impl.TaskAttemptImpl:
> > attempt_1453244277886_0001_m_000000_2 TaskAttempt Transitioned from
> > FAIL_TASK_CLEANUP to FAILED
> > 2016-01-19 20:04:24,341 INFO [AsyncDispatcher event handler]
> > org.apache.hadoop.yarn.util.RackResolver: Resolved localhost to
> > /default-rack
> > 2016-01-19 20:04:24,342 INFO [Thread-50]
> > org.apache.hadoop.mapreduce.v2.app.rm.RMContainerRequestor: 3 failures on
> > node localhost
> > 2016-01-19 20:04:24,342 INFO [Thread-50]
> > org.apache.hadoop.mapreduce.v2.app.rm.RMContainerRequestor: Blacklisted
> host
> > localhost
> > 2016-01-19 20:04:24,342 INFO [AsyncDispatcher event handler]
> > org.apache.hadoop.mapreduce.v2.app.job.impl.TaskAttemptImpl:
> > attempt_1453244277886_0001_m_000000_3 TaskAttempt Transitioned from NEW
> to
> > UNASSIGNED
> > 2016-01-19 20:04:24,343 INFO [Thread-50]
> > org.apache.hadoop.mapreduce.v2.app.rm.RMContainerAllocator: Added
> > attempt_1453244277886_0001_m_000000_3 to list of failed maps
> > 2016-01-19 20:04:25,328 INFO [RMCommunicator Allocator]
> > org.apache.hadoop.mapreduce.v2.app.rm.RMContainerAllocator: Before
> > Scheduling: PendingReds:1 ScheduledMaps:1 ScheduledReds:0 AssignedMaps:0
> > AssignedReds:0 CompletedMaps:0 CompletedReds:0 ContAlloc:3 ContRel:0
> > HostLocal:1 RackLocal:0
> > 2016-01-19 20:04:25,336 INFO [RMCommunicator Allocator]
> > org.apache.hadoop.mapreduce.v2.app.rm.RMContainerRequestor:
> getResources()
> > for application_1453244277886_0001: ask=1 release= 0 newContainers=0
> > finishedContainers=0 resourcelimit=<memory:768, vCores:-1> knownNMs=1
> > 2016-01-19 20:04:25,336 INFO [RMCommunicator Allocator]
> > org.apache.hadoop.mapreduce.v2.app.rm.RMContainerRequestor: Update the
> > blacklist for application_1453244277886_0001: blacklistAdditions=1
> > blacklistRemovals=0
> > 2016-01-19 20:04:25,337 INFO [RMCommunicator Allocator]
> > org.apache.hadoop.mapreduce.v2.app.rm.RMContainerRequestor: Ignore
> > blacklisting set to true. Known: 1, Blacklisted: 1, 100%
> > 2016-01-19 20:04:25,337 INFO [RMCommunicator Allocator]
> > org.apache.hadoop.mapreduce.v2.app.rm.RMContainerAllocator: Recalculating
> > schedule, headroom=768
> > 2016-01-19 20:04:25,337 INFO [RMCommunicator Allocator]
> > org.apache.hadoop.mapreduce.v2.app.rm.RMContainerAllocator: Reduce slow
> > start threshold not met. completedMapsForReduceSlowstart 1
> > 2016-01-19 20:04:26,342 INFO [RMCommunicator Allocator]
> > org.apache.hadoop.mapreduce.v2.app.rm.RMContainerRequestor: Update the
> > blacklist for application_1453244277886_0001: blacklistAdditions=0
> > blacklistRemovals=1
> > 2016-01-19 20:04:26,342 INFO [RMCommunicator Allocator]
> > org.apache.hadoop.mapreduce.v2.app.rm.RMContainerAllocator: Recalculating
> > schedule, headroom=768
> > 2016-01-19 20:04:26,342 INFO [RMCommunicator Allocator]
> > org.apache.hadoop.mapreduce.v2.app.rm.RMContainerAllocator: Reduce slow
> > start threshold not met. completedMapsForReduceSlowstart 1
> > 2016-01-19 20:04:27,351 INFO [RMCommunicator Allocator]
> > org.apache.hadoop.mapreduce.v2.app.rm.RMContainerAllocator: Got allocated
> > containers 1
> > 2016-01-19 20:04:27,352 INFO [RMCommunicator Allocator]
> > org.apache.hadoop.mapreduce.v2.app.rm.RMContainerAllocator: Assigning
> > container Container: [ContainerId:
> container_1453244277886_0001_01_000005,
> > NodeId: localhost:35711, NodeHttpAddress: localhost:8042, Resource:
> > <memory:512, vCores:1>, Priority: 5, Token: Token { kind: ContainerToken,
> > service: 127.0.0.1:35711 }, ] to fast fail map
> > 2016-01-19 20:04:27,352 INFO [RMCommunicator Allocator]
> > org.apache.hadoop.mapreduce.v2.app.rm.RMContainerAllocator: Assigned from
> > earlierFailedMaps
> > 2016-01-19 20:04:27,352 INFO [RMCommunicator Allocator]
> > org.apache.hadoop.mapreduce.v2.app.rm.RMContainerAllocator: Assigned
> > container container_1453244277886_0001_01_000005 to
> > attempt_1453244277886_0001_m_000000_3
> > 2016-01-19 20:04:27,353 INFO [RMCommunicator Allocator]
> > org.apache.hadoop.mapreduce.v2.app.rm.RMContainerAllocator: Recalculating
> > schedule, headroom=768
> > 2016-01-19 20:04:27,353 INFO [RMCommunicator Allocator]
> > org.apache.hadoop.mapreduce.v2.app.rm.RMContainerAllocator: Reduce slow
> > start threshold not met. completedMapsForReduceSlowstart 1
> > 2016-01-19 20:04:27,353 INFO [AsyncDispatcher event handler]
> > org.apache.hadoop.yarn.util.RackResolver: Resolved localhost to
> > /default-rack
> > 2016-01-19 20:04:27,353 INFO [RMCommunicator Allocator]
> > org.apache.hadoop.mapreduce.v2.app.rm.RMContainerAllocator: After
> > Scheduling: PendingReds:1 ScheduledMaps:0 ScheduledReds:0 AssignedMaps:1
> > AssignedReds:0 CompletedMaps:0 CompletedReds:0 ContAlloc:4 ContRel:0
> > HostLocal:1 RackLocal:0
> > 2016-01-19 20:04:27,354 INFO [AsyncDispatcher event handler]
> > org.apache.hadoop.mapreduce.v2.app.job.impl.TaskAttemptImpl:
> > attempt_1453244277886_0001_m_000000_3 TaskAttempt Transitioned from
> > UNASSIGNED to ASSIGNED
> > 2016-01-19 20:04:27,355 INFO [ContainerLauncher #6]
> > org.apache.hadoop.mapreduce.v2.app.launcher.ContainerLauncherImpl:
> > Processing the event EventType: CONTAINER_REMOTE_LAUNCH for container
> > container_1453244277886_0001_01_000005 taskAttempt
> > attempt_1453244277886_0001_m_000000_3
> > 2016-01-19 20:04:27,355 INFO [ContainerLauncher #6]
> > org.apache.hadoop.mapreduce.v2.app.launcher.ContainerLauncherImpl:
> Launching
> > attempt_1453244277886_0001_m_000000_3
> > 2016-01-19 20:04:27,365 INFO [ContainerLauncher #6]
> > org.apache.hadoop.mapreduce.v2.app.launcher.ContainerLauncherImpl:
> Shuffle
> > port returned by ContainerManager for
> attempt_1453244277886_0001_m_000000_3
> > : 13562
> > 2016-01-19 20:04:27,365 INFO [AsyncDispatcher event handler]
> > org.apache.hadoop.mapreduce.v2.app.job.impl.TaskAttemptImpl: TaskAttempt:
> > [attempt_1453244277886_0001_m_000000_3] using containerId:
> > [container_1453244277886_0001_01_000005 on NM: [localhost:35711]
> > 2016-01-19 20:04:27,366 INFO [AsyncDispatcher event handler]
> > org.apache.hadoop.mapreduce.v2.app.job.impl.TaskAttemptImpl:
> > attempt_1453244277886_0001_m_000000_3 TaskAttempt Transitioned from
> ASSIGNED
> > to RUNNING
> > 2016-01-19 20:04:27,366 INFO [AsyncDispatcher event handler]
> > org.apache.hadoop.mapreduce.v2.app.speculate.DefaultSpeculator:
> > ATTEMPT_START task_1453244277886_0001_m_000000
> > 2016-01-19 20:04:28,361 INFO [RMCommunicator Allocator]
> > org.apache.hadoop.mapreduce.v2.app.rm.RMContainerRequestor:
> getResources()
> > for application_1453244277886_0001: ask=1 release= 0 newContainers=0
> > finishedContainers=1 resourcelimit=<memory:768, vCores:-1> knownNMs=1
> > 2016-01-19 20:04:28,361 INFO [RMCommunicator Allocator]
> > org.apache.hadoop.mapreduce.v2.app.rm.RMContainerAllocator: Received
> > completed container container_1453244277886_0001_01_000005
> > 2016-01-19 20:04:28,361 INFO [RMCommunicator Allocator]
> > org.apache.hadoop.mapreduce.v2.app.rm.RMContainerAllocator: Recalculating
> > schedule, headroom=768
> > 2016-01-19 20:04:28,361 INFO [RMCommunicator Allocator]
> > org.apache.hadoop.mapreduce.v2.app.rm.RMContainerAllocator: Reduce slow
> > start threshold not met. completedMapsForReduceSlowstart 1
> > 2016-01-19 20:04:28,362 INFO [RMCommunicator Allocator]
> > org.apache.hadoop.mapreduce.v2.app.rm.RMContainerAllocator: After
> > Scheduling: PendingReds:1 ScheduledMaps:0 ScheduledReds:0 AssignedMaps:0
> > AssignedReds:0 CompletedMaps:0 CompletedReds:0 ContAlloc:4 ContRel:0
> > HostLocal:1 RackLocal:0
> > 2016-01-19 20:04:28,362 INFO [AsyncDispatcher event handler]
> > org.apache.hadoop.mapreduce.v2.app.job.impl.TaskAttemptImpl:
> > attempt_1453244277886_0001_m_000000_3 TaskAttempt Transitioned from
> RUNNING
> > to FAIL_CONTAINER_CLEANUP
> > 2016-01-19 20:04:28,362 INFO [AsyncDispatcher event handler]
> > org.apache.hadoop.mapreduce.v2.app.job.impl.TaskAttemptImpl: Diagnostics
> > report from attempt_1453244277886_0001_m_000000_3: Exception from
> > container-launch: org.apache.hadoop.util.Shell$ExitCodeException:
> > org.apache.hadoop.util.Shell$ExitCodeException:
> >     at org.apache.hadoop.util.Shell.runCommand(Shell.java:505)
> >     at org.apache.hadoop.util.Shell.run(Shell.java:418)
> >     at
> > org.apache.hadoop.util.Shell$ShellCommandExecutor.execute(Shell.java:650)
> >     at
> >
> org.apache.hadoop.yarn.server.nodemanager.DefaultContainerExecutor.launchContainer(DefaultContainerExecutor.java:195)
> >     at
> >
> org.apache.hadoop.yarn.server.nodemanager.containermanager.launcher.ContainerLaunch.call(ContainerLaunch.java:300)
> >     at
> >
> org.apache.hadoop.yarn.server.nodemanager.containermanager.launcher.ContainerLaunch.call(ContainerLaunch.java:81)
> >     at java.util.concurrent.FutureTask.run(FutureTask.java:262)
> >     at
> >
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
> >     at
> >
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
> >     at java.lang.Thread.run(Thread.java:745)
> >
> >
> > Container exited with a non-zero exit code 1
> >
> > 2016-01-19 20:04:28,364 INFO [ContainerLauncher #7]
> > org.apache.hadoop.mapreduce.v2.app.launcher.ContainerLauncherImpl:
> > Processing the event EventType: CONTAINER_REMOTE_CLEANUP for container
> > container_1453244277886_0001_01_000005 taskAttempt
> > attempt_1453244277886_0001_m_000000_3
> > 2016-01-19 20:04:28,365 INFO [ContainerLauncher #7]
> > org.apache.hadoop.mapreduce.v2.app.launcher.ContainerLauncherImpl:
> KILLING
> > attempt_1453244277886_0001_m_000000_3
> > 2016-01-19 20:04:28,373 INFO [AsyncDispatcher event handler]
> > org.apache.hadoop.mapreduce.v2.app.job.impl.TaskAttemptImpl:
> > attempt_1453244277886_0001_m_000000_3 TaskAttempt Transitioned from
> > FAIL_CONTAINER_CLEANUP to FAIL_TASK_CLEANUP
> > 2016-01-19 20:04:28,374 INFO [CommitterEvent Processor #4]
> > org.apache.hadoop.mapreduce.v2.app.commit.CommitterEventHandler:
> Processing
> > the event EventType: TASK_ABORT
> > 2016-01-19 20:04:28,377 WARN [CommitterEvent Processor #4]
> > org.apache.hadoop.mapreduce.lib.output.FileOutputCommitter: Could not
> delete
> >
> hdfs://hdnode01:54310/user/hduser/output/wordcount/_temporary/1/_temporary/attempt_1453244277886_0001_m_000000_3
> > 2016-01-19 20:04:28,378 INFO [AsyncDispatcher event handler]
> > org.apache.hadoop.mapreduce.v2.app.job.impl.TaskAttemptImpl:
> > attempt_1453244277886_0001_m_000000_3 TaskAttempt Transitioned from
> > FAIL_TASK_CLEANUP to FAILED
> > 2016-01-19 20:04:28,380 INFO [AsyncDispatcher event handler]
> > org.apache.hadoop.mapreduce.v2.app.job.impl.TaskImpl:
> > task_1453244277886_0001_m_000000 Task Transitioned from RUNNING to FAILED
> > 2016-01-19 20:04:28,381 INFO [AsyncDispatcher event handler]
> > org.apache.hadoop.mapreduce.v2.app.job.impl.JobImpl: Num completed
> Tasks: 1
> > 2016-01-19 20:04:28,381 INFO [AsyncDispatcher event handler]
> > org.apache.hadoop.mapreduce.v2.app.job.impl.JobImpl: Job failed as tasks
> > failed. failedMaps:1 failedReduces:0
> > 2016-01-19 20:04:28,382 INFO [AsyncDispatcher event handler]
> > org.apache.hadoop.mapreduce.v2.app.job.impl.JobImpl:
> > job_1453244277886_0001Job Transitioned from RUNNING to FAIL_WAIT
> > 2016-01-19 20:04:28,383 INFO [AsyncDispatcher event handler]
> > org.apache.hadoop.mapreduce.v2.app.job.impl.TaskImpl:
> > task_1453244277886_0001_r_000000 Task Transitioned from SCHEDULED to
> > KILL_WAIT
> > 2016-01-19 20:04:28,383 INFO [AsyncDispatcher event handler]
> > org.apache.hadoop.mapreduce.v2.app.job.impl.TaskAttemptImpl:
> > attempt_1453244277886_0001_r_000000_0 TaskAttempt Transitioned from
> > UNASSIGNED to KILLED
> > 2016-01-19 20:04:28,383 INFO [Thread-50]
> > org.apache.hadoop.mapreduce.v2.app.rm.RMContainerAllocator: Processing
> the
> > event EventType: CONTAINER_DEALLOCATE
> > 2016-01-19 20:04:28,383 ERROR [Thread-50]
> > org.apache.hadoop.mapreduce.v2.app.rm.RMContainerAllocator: Could not
> > deallocate container for task attemptId
> > attempt_1453244277886_0001_r_000000_0
> > 2016-01-19 20:04:28,383 INFO [AsyncDispatcher event handler]
> > org.apache.hadoop.mapreduce.v2.app.job.impl.TaskImpl:
> > task_1453244277886_0001_r_000000 Task Transitioned from KILL_WAIT to
> KILLED
> > 2016-01-19 20:04:28,384 INFO [AsyncDispatcher event handler]
> > org.apache.hadoop.mapreduce.v2.app.job.impl.JobImpl:
> > job_1453244277886_0001Job Transitioned from FAIL_WAIT to FAIL_ABORT
> > 2016-01-19 20:04:28,390 INFO [CommitterEvent Processor #0]
> > org.apache.hadoop.mapreduce.v2.app.commit.CommitterEventHandler:
> Processing
> > the event EventType: JOB_ABORT
> > 2016-01-19 20:04:28,432 INFO [AsyncDispatcher event handler]
> > org.apache.hadoop.mapreduce.v2.app.job.impl.JobImpl:
> > job_1453244277886_0001Job Transitioned from FAIL_ABORT to FAILED
> > 2016-01-19 20:04:28,433 INFO [Thread-61]
> > org.apache.hadoop.mapreduce.v2.app.MRAppMaster: We are finishing cleanly
> so
> > this is the last retry
> > 2016-01-19 20:04:28,433 INFO [Thread-61]
> > org.apache.hadoop.mapreduce.v2.app.MRAppMaster: Notify RMCommunicator
> > isAMLastRetry: true
> > 2016-01-19 20:04:28,433 INFO [Thread-61]
> > org.apache.hadoop.mapreduce.v2.app.rm.RMContainerAllocator:
> RMCommunicator
> > notified that shouldUnregistered is: true
> > 2016-01-19 20:04:28,433 INFO [Thread-61]
> > org.apache.hadoop.mapreduce.v2.app.MRAppMaster: Notify JHEH
> isAMLastRetry:
> > true
> > 2016-01-19 20:04:28,433 INFO [Thread-61]
> > org.apache.hadoop.mapreduce.jobhistory.JobHistoryEventHandler:
> > JobHistoryEventHandler notified that forceJobCompletion is true
> > 2016-01-19 20:04:28,434 INFO [Thread-61]
> > org.apache.hadoop.mapreduce.v2.app.MRAppMaster: Calling stop for all the
> > services
> > 2016-01-19 20:04:28,435 INFO [Thread-61]
> > org.apache.hadoop.mapreduce.jobhistory.JobHistoryEventHandler: Stopping
> > JobHistoryEventHandler. Size of the outstanding queue size is 0
> > 2016-01-19 20:04:29,362 INFO [RMCommunicator Allocator]
> > org.apache.hadoop.mapreduce.v2.app.rm.RMContainerAllocator: Before
> > Scheduling: PendingReds:1 ScheduledMaps:0 ScheduledReds:0 AssignedMaps:0
> > AssignedReds:0 CompletedMaps:1 CompletedReds:0 ContAlloc:4 ContRel:0
> > HostLocal:1 RackLocal:0
> > 2016-01-19 20:04:29,362 INFO [eventHandlingThread]
> > org.apache.hadoop.mapreduce.jobhistory.JobHistoryEventHandler: Copying
> >
> hdfs://hdnode01:54310/tmp/hadoop-yarn/staging/hduser/.staging/job_1453244277886_0001/job_1453244277886_0001_1.jhist
> > to
> >
> hdfs://hdnode01:54310/tmp/hadoop-yarn/staging/history/done_intermediate/hduser/job_1453244277886_0001-1453244648033-hduser-word+count-1453244668381-0-0-FAILED-default-1453244654166.jhist_tmp
> > 2016-01-19 20:04:29,366 INFO [RMCommunicator Allocator]
> > org.apache.hadoop.mapreduce.v2.app.rm.RMContainerAllocator: Recalculating
> > schedule, headroom=768
> > 2016-01-19 20:04:29,366 INFO [RMCommunicator Allocator]
> > org.apache.hadoop.mapreduce.v2.app.rm.RMContainerAllocator: Reduce slow
> > start threshold reached. Scheduling reduces.
> > 2016-01-19 20:04:29,366 INFO [RMCommunicator Allocator]
> > org.apache.hadoop.mapreduce.v2.app.rm.RMContainerAllocator: All maps
> > assigned. Ramping up all remaining reduces:1
> > 2016-01-19 20:04:29,366 INFO [RMCommunicator Allocator]
> > org.apache.hadoop.mapreduce.v2.app.rm.RMContainerAllocator: After
> > Scheduling: PendingReds:0 ScheduledMaps:0 ScheduledReds:1 AssignedMaps:0
> > AssignedReds:0 CompletedMaps:1 CompletedReds:0 ContAlloc:4 ContRel:0
> > HostLocal:1 RackLocal:0
> > 2016-01-19 20:04:29,544 INFO [eventHandlingThread]
> > org.apache.hadoop.mapreduce.jobhistory.JobHistoryEventHandler: Copied to
> > done location:
> >
> hdfs://hdnode01:54310/tmp/hadoop-yarn/staging/history/done_intermediate/hduser/job_1453244277886_0001-1453244648033-hduser-word+count-1453244668381-0-0-FAILED-default-1453244654166.jhist_tmp
> > 2016-01-19 20:04:29,598 INFO [eventHandlingThread]
> > org.apache.hadoop.mapreduce.jobhistory.JobHistoryEventHandler: Copying
> >
> hdfs://hdnode01:54310/tmp/hadoop-yarn/staging/hduser/.staging/job_1453244277886_0001/job_1453244277886_0001_1_conf.xml
> > to
> >
> hdfs://hdnode01:54310/tmp/hadoop-yarn/staging/history/done_intermediate/hduser/job_1453244277886_0001_conf.xml_tmp
> > 2016-01-19 20:04:29,801 INFO [eventHandlingThread]
> > org.apache.hadoop.mapreduce.jobhistory.JobHistoryEventHandler: Copied to
> > done location:
> >
> hdfs://hdnode01:54310/tmp/hadoop-yarn/staging/history/done_intermediate/hduser/job_1453244277886_0001_conf.xml_tmp
> > 2016-01-19 20:04:29,907 INFO [eventHandlingThread]
> > org.apache.hadoop.mapreduce.jobhistory.JobHistoryEventHandler: Moved tmp
> to
> > done:
> >
> hdfs://hdnode01:54310/tmp/hadoop-yarn/staging/history/done_intermediate/hduser/job_1453244277886_0001.summary_tmp
> > to
> >
> hdfs://hdnode01:54310/tmp/hadoop-yarn/staging/history/done_intermediate/hduser/job_1453244277886_0001.summary
> > 2016-01-19 20:04:30,008 INFO [eventHandlingThread]
> > org.apache.hadoop.mapreduce.jobhistory.JobHistoryEventHandler: Moved tmp
> to
> > done:
> >
> hdfs://hdnode01:54310/tmp/hadoop-yarn/staging/history/done_intermediate/hduser/job_1453244277886_0001_conf.xml_tmp
> > to
> >
> hdfs://hdnode01:54310/tmp/hadoop-yarn/staging/history/done_intermediate/hduser/job_1453244277886_0001_conf.xml
> > 2016-01-19 20:04:30,070 INFO [eventHandlingThread]
> > org.apache.hadoop.mapreduce.jobhistory.JobHistoryEventHandler: Moved tmp
> to
> > done:
> >
> hdfs://hdnode01:54310/tmp/hadoop-yarn/staging/history/done_intermediate/hduser/job_1453244277886_0001-1453244648033-hduser-word+count-1453244668381-0-0-FAILED-default-1453244654166.jhist_tmp
> > to
> >
> hdfs://hdnode01:54310/tmp/hadoop-yarn/staging/history/done_intermediate/hduser/job_1453244277886_0001-1453244648033-hduser-word+count-1453244668381-0-0-FAILED-default-1453244654166.jhist
> > 2016-01-19 20:04:30,071 INFO [Thread-61]
> > org.apache.hadoop.mapreduce.jobhistory.JobHistoryEventHandler: Stopped
> > JobHistoryEventHandler. super.stop()
> > 2016-01-19 20:04:30,078 INFO [Thread-61]
> > org.apache.hadoop.mapreduce.v2.app.rm.RMContainerAllocator: Setting job
> > diagnostics to Task failed task_1453244277886_0001_m_000000
> > Job failed as tasks failed. failedMaps:1 failedReduces:0
> >
> > 2016-01-19 20:04:30,080 INFO [Thread-61]
> > org.apache.hadoop.mapreduce.v2.app.rm.RMContainerAllocator: History url
> is
> > http://localhost:19888/jobhistory/job/job_1453244277886_0001
> > 2016-01-19 20:04:30,094 INFO [Thread-61]
> > org.apache.hadoop.mapreduce.v2.app.rm.RMContainerAllocator: Waiting for
> > application to be successfully unregistered.
> > 2016-01-19 20:04:31,099 INFO [Thread-61]
> > org.apache.hadoop.mapreduce.v2.app.rm.RMContainerAllocator: Final Stats:
> > PendingReds:0 ScheduledMaps:0 ScheduledReds:1 AssignedMaps:0
> AssignedReds:0
> > CompletedMaps:1 CompletedReds:0 ContAlloc:4 ContRel:0 HostLocal:1
> > RackLocal:0
> > 2016-01-19 20:04:31,104 INFO [Thread-61]
> > org.apache.hadoop.mapreduce.v2.app.MRAppMaster: Deleting staging
> directory
> > hdfs://hdnode01:54310
> > /tmp/hadoop-yarn/staging/hduser/.staging/job_1453244277886_0001
> > 2016-01-19 20:04:31,133 INFO [Thread-61] org.apache.hadoop.ipc.Server:
> > Stopping server on 45584
> > 2016-01-19 20:04:31,135 INFO [IPC Server listener on 45584]
> > org.apache.hadoop.ipc.Server: Stopping IPC Server listener on 45584
> > 2016-01-19 20:04:31,135 INFO [IPC Server Responder]
> > org.apache.hadoop.ipc.Server: Stopping IPC Server Responder
> > 2016-01-19 20:04:31,136 INFO [TaskHeartbeatHandler PingChecker]
> > org.apache.hadoop.mapreduce.v2.app.TaskHeartbeatHandler:
> > TaskHeartbeatHandler thread interrupted
> >
> >
> > Jps results, i believe that everything is ok, right?:
> > 21267 DataNode
> > 21609 ResourceManager
> > 21974 JobHistoryServer
> > 21735 NodeManager
> > 24546 Jps
> > 16532 org.eclipse.equinox.launcher_1.3.0.v20140415-2008.jar
> > 21121 NameNode
> > 22098 QuorumPeerMain
> > 21456 SecondaryNameNode
> >
> >
>

Re: Can't run hadoop examples with YARN Single node cluster

Posted by José Luis Larroque <la...@gmail.com>.
Thanks Namikaze for keeping at it, don't give up!! :D

- I have these lines in *$HOME/.bashrc*:

export HADOOP_PREFIX=/usr/local/hadoop

# Other variables
export HADOOP_COMMON_HOME=${HADOOP_PREFIX}
export HADOOP_MAPRED_HOME=${HADOOP_PREFIX}
export HADOOP_HDFS_HOME=${HADOOP_PREFIX}
export HADOOP_YARN_HOME=${HADOOP_PREFIX}
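
In case it helps, a quick way to double-check that these resolve in a fresh
shell is just to echo them and list the jars they should point at (only a
sanity check; the jar names are the ones that also show up in the classpath
pasted further below):

source $HOME/.bashrc
# both should print /usr/local/hadoop
echo $HADOOP_PREFIX $HADOOP_MAPRED_HOME
# the mapreduce client jars that map tasks need on the classpath
ls $HADOOP_MAPRED_HOME/share/hadoop/mapreduce/hadoop-mapreduce-client-*.jar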


- In *hadoop-env.sh* I have:

export HADOOP_CONF_DIR=${HADOOP_CONF_DIR:-"/usr/local/hadoop/etc/hadoop"}
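
(One way to see which conf dir the scripts actually pick up is to look at the
first entry of the printed classpath; if I'm reading the scripts right, it
should be the conf dir:)

$HADOOP_HOME/bin/hadoop classpath | tr ':' '\n' | head -1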


- I read that SO question and all the answers to it. The only useful answer,
in my opinion, was checking the yarn classpath. I have the following line three
times:

/usr/local/hadoop/etc/hadoop:
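
(For reference, when I say I checked the yarn classpath I mean something like
the command below; the sort/uniq part is only there to make duplicated entries
easy to spot:)

$HADOOP_HOME/bin/yarn classpath | tr ':' '\n' | sort | uniq -c | sort -rn | head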


I put yarn.application.classpath in yarn-site.xml because I don't know any
other way to fix it, using the value recommended as the default in
<https://hadoop.apache.org/docs/r2.4.1/hadoop-yarn/hadoop-yarn-common/yarn-default.xml>
(see yarn.application.classpath):


$HADOOP_CONF_DIR, $HADOOP_COMMON_HOME/share/hadoop/common/*,
$HADOOP_COMMON_HOME/share/hadoop/common/lib/*,
$HADOOP_HDFS_HOME/share/hadoop/hdfs/*,
$HADOOP_HDFS_HOME/share/hadoop/hdfs/lib/*,
$HADOOP_YARN_HOME/share/hadoop/yarn/*,
$HADOOP_YARN_HOME/share/hadoop/yarn/lib/*


But the classpath remains the same, and I can't find any other way to fix it.
Maybe this is the problem?
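
(In case the edit is simply not being picked up: after changing yarn-site.xml I
restart the daemons and check the classpath again, roughly like this; maybe I'm
missing a step here:)

# confirm the property is really in the file the daemons read
grep -A 8 'yarn.application.classpath' /usr/local/hadoop/etc/hadoop/yarn-site.xml
# restart YARN so the new yarn-site.xml takes effect
$HADOOP_HOME/sbin/stop-yarn.sh
$HADOOP_HOME/sbin/start-yarn.sh
# then look at the classpath again
$HADOOP_HOME/bin/yarn classpath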


- yarn.log-aggregation-enable was always set to true. I couldn't find
anything useful in the *datanode logs* either; they are pasted below, together
with the command I'm using to fetch the aggregated container logs.
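This is the command, with the application id from the failed run above (in
case I'm grabbing the logs the wrong way):

$HADOOP_HOME/bin/yarn logs -applicationId application_1453244277886_0001

And the datanode logs: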

2016-01-25 21:13:07,006 INFO
org.apache.hadoop.hdfs.server.datanode.DataNode: STARTUP_MSG:
/************************************************************
STARTUP_MSG: Starting DataNode
STARTUP_MSG:   host = jose-ubuntu/127.0.0.1
STARTUP_MSG:   args = []
STARTUP_MSG:   version = 2.4.0
STARTUP_MSG:   classpath =
/usr/local/hadoop/etc/hadoop:/usr/local/hadoop/share/hadoop/common/lib/jackson-jaxrs-1.8.8.jar:/usr/local/hadoop/share/hadoop/common/lib/jersey-json-1.9.jar:/usr/local/hadoop/share/hadoop/common/lib/commons-compress-1.4.1.jar:/usr/local/hadoop/share/hadoop/common/lib/httpcore-4.2.5.jar:/usr/local/hadoop/share/hadoop/common/lib/httpclient-4.2.5.jar:/usr/local/hadoop/share/hadoop/common/lib/commons-beanutils-1.7.0.jar:/usr/local/hadoop/share/hadoop/common/lib/jettison-1.1.jar:/usr/local/hadoop/share/hadoop/common/lib/jaxb-impl-2.2.3-1.jar:/usr/local/hadoop/share/hadoop/common/lib/activation-1.1.jar:/usr/local/hadoop/share/hadoop/common/lib/jackson-core-asl-1.8.8.jar:/usr/local/hadoop/share/hadoop/common/lib/commons-net-3.1.jar:/usr/local/hadoop/share/hadoop/common/lib/commons-cli-1.2.jar:/usr/local/hadoop/share/hadoop/common/lib/commons-codec-1.4.jar:/usr/local/hadoop/share/hadoop/common/lib/junit-4.8.2.jar:/usr/local/hadoop/share/hadoop/common/lib/paranamer-2.3.jar:/usr/local/hadoop/share/hadoop/common/lib/jasper-compiler-5.5.23.jar:/usr/local/hadoop/share/hadoop/common/lib/protobuf-java-2.5.0.jar:/usr/local/hadoop/share/hadoop/common/lib/commons-digester-1.8.jar:/usr/local/hadoop/share/hadoop/common/lib/jets3t-0.9.0.jar:/usr/local/hadoop/share/hadoop/common/lib/jackson-xc-1.8.8.jar:/usr/local/hadoop/share/hadoop/common/lib/jersey-server-1.9.jar:/usr/local/hadoop/share/hadoop/common/lib/jetty-util-6.1.26.jar:/usr/local/hadoop/share/hadoop/common/lib/mockito-all-1.8.5.jar:/usr/local/hadoop/share/hadoop/common/lib/xmlenc-0.52.jar:/usr/local/hadoop/share/hadoop/common/lib/slf4j-api-1.7.5.jar:/usr/local/hadoop/share/hadoop/common/lib/stax-api-1.0-2.jar:/usr/local/hadoop/share/hadoop/common/lib/avro-1.7.4.jar:/usr/local/hadoop/share/hadoop/common/lib/hadoop-annotations-2.4.0.jar:/usr/local/hadoop/share/hadoop/common/lib/hadoop-auth-2.4.0.jar:/usr/local/hadoop/share/hadoop/common/lib/netty-3.6.2.Final.jar:/usr/local/hadoop/share/hadoop/common/lib/jasper-runtime-5.5.23.jar:/usr/local/hadoop/share/hadoop/common/lib/slf4j-log4j12-1.7.5.jar:/usr/local/hadoop/share/hadoop/common/lib/commons-logging-1.1.3.jar:/usr/local/hadoop/share/hadoop/common/lib/guava-11.0.2.jar:/usr/local/hadoop/share/hadoop/common/lib/jackson-mapper-asl-1.8.8.jar:/usr/local/hadoop/share/hadoop/common/lib/jsp-api-2.1.jar:/usr/local/hadoop/share/hadoop/common/lib/commons-configuration-1.6.jar:/usr/local/hadoop/share/hadoop/common/lib/snappy-java-1.0.4.1.jar:/usr/local/hadoop/share/hadoop/common/lib/log4j-1.2.17.jar:/usr/local/hadoop/share/hadoop/common/lib/jsch-0.1.42.jar:/usr/local/hadoop/share/hadoop/common/lib/commons-io-2.4.jar:/usr/local/hadoop/share/hadoop/common/lib/commons-beanutils-core-1.8.0.jar:/usr/local/hadoop/share/hadoop/common/lib/jsr305-1.3.9.jar:/usr/local/hadoop/share/hadoop/common/lib/jersey-core-1.9.jar:/usr/local/hadoop/share/hadoop/common/lib/asm-3.2.jar:/usr/local/hadoop/share/hadoop/common/lib/servlet-api-2.5.jar:/usr/local/hadoop/share/hadoop/common/lib/jaxb-api-2.2.2.jar:/usr/local/hadoop/share/hadoop/common/lib/jetty-6.1.26.jar:/usr/local/hadoop/share/hadoop/common/lib/commons-lang-2.6.jar:/usr/local/hadoop/share/hadoop/common/lib/java-xmlbuilder-0.4.jar:/usr/local/hadoop/share/hadoop/common/lib/commons-collections-3.2.1.jar:/usr/local/hadoop/share/hadoop/common/lib/commons-el-1.0.jar:/usr/local/hadoop/share/hadoop/common/lib/zookeeper-3.4.5.jar:/usr/local/hadoop/share/hadoop/common/lib/commons-httpclient-3.1.jar:/usr/local/hadoop/share/hadoop/common/lib/xz-1.0.jar:/usr/local/hadoop/share/hadoop/common/l
ib/commons-math3-3.1.1.jar:/usr/local/hadoop/share/hadoop/common/hadoop-common-2.4.0-tests.jar:/usr/local/hadoop/share/hadoop/common/hadoop-nfs-2.4.0.jar:/usr/local/hadoop/share/hadoop/common/hadoop-common-2.4.0.jar:/usr/local/hadoop/share/hadoop/hdfs:/usr/local/hadoop/share/hadoop/hdfs/lib/jackson-core-asl-1.8.8.jar:/usr/local/hadoop/share/hadoop/hdfs/lib/commons-cli-1.2.jar:/usr/local/hadoop/share/hadoop/hdfs/lib/commons-codec-1.4.jar:/usr/local/hadoop/share/hadoop/hdfs/lib/protobuf-java-2.5.0.jar:/usr/local/hadoop/share/hadoop/hdfs/lib/jersey-server-1.9.jar:/usr/local/hadoop/share/hadoop/hdfs/lib/jetty-util-6.1.26.jar:/usr/local/hadoop/share/hadoop/hdfs/lib/xmlenc-0.52.jar:/usr/local/hadoop/share/hadoop/hdfs/lib/commons-daemon-1.0.13.jar:/usr/local/hadoop/share/hadoop/hdfs/lib/netty-3.6.2.Final.jar:/usr/local/hadoop/share/hadoop/hdfs/lib/jasper-runtime-5.5.23.jar:/usr/local/hadoop/share/hadoop/hdfs/lib/commons-logging-1.1.3.jar:/usr/local/hadoop/share/hadoop/hdfs/lib/guava-11.0.2.jar:/usr/local/hadoop/share/hadoop/hdfs/lib/jackson-mapper-asl-1.8.8.jar:/usr/local/hadoop/share/hadoop/hdfs/lib/jsp-api-2.1.jar:/usr/local/hadoop/share/hadoop/hdfs/lib/log4j-1.2.17.jar:/usr/local/hadoop/share/hadoop/hdfs/lib/commons-io-2.4.jar:/usr/local/hadoop/share/hadoop/hdfs/lib/jsr305-1.3.9.jar:/usr/local/hadoop/share/hadoop/hdfs/lib/jersey-core-1.9.jar:/usr/local/hadoop/share/hadoop/hdfs/lib/asm-3.2.jar:/usr/local/hadoop/share/hadoop/hdfs/lib/servlet-api-2.5.jar:/usr/local/hadoop/share/hadoop/hdfs/lib/jetty-6.1.26.jar:/usr/local/hadoop/share/hadoop/hdfs/lib/commons-lang-2.6.jar:/usr/local/hadoop/share/hadoop/hdfs/lib/commons-el-1.0.jar:/usr/local/hadoop/share/hadoop/hdfs/hadoop-hdfs-2.4.0-tests.jar:/usr/local/hadoop/share/hadoop/hdfs/hadoop-hdfs-2.4.0.jar:/usr/local/hadoop/share/hadoop/hdfs/hadoop-hdfs-nfs-2.4.0.jar:/usr/local/hadoop/share/hadoop/yarn/lib/jackson-jaxrs-1.8.8.jar:/usr/local/hadoop/share/hadoop/yarn/lib/jersey-json-1.9.jar:/usr/local/hadoop/share/hadoop/yarn/lib/commons-compress-1.4.1.jar:/usr/local/hadoop/share/hadoop/yarn/lib/jersey-client-1.9.jar:/usr/local/hadoop/share/hadoop/yarn/lib/guice-servlet-3.0.jar:/usr/local/hadoop/share/hadoop/yarn/lib/giraph-examples-1.1.0-for-hadoop-2.4.0-jar-with-dependencies.jar:/usr/local/hadoop/share/hadoop/yarn/lib/hadoop-mapreduce-examples-2.4.0.jar:/usr/local/hadoop/share/hadoop/yarn/lib/jettison-1.1.jar:/usr/local/hadoop/share/hadoop/yarn/lib/jaxb-impl-2.2.3-1.jar:/usr/local/hadoop/share/hadoop/yarn/lib/activation-1.1.jar:/usr/local/hadoop/share/hadoop/yarn/lib/jackson-core-asl-1.8.8.jar:/usr/local/hadoop/share/hadoop/yarn/lib/commons-cli-1.2.jar:/usr/local/hadoop/share/hadoop/yarn/lib/commons-codec-1.4.jar:/usr/local/hadoop/share/hadoop/yarn/lib/aopalliance-1.0.jar:/usr/local/hadoop/share/hadoop/yarn/lib/leveldbjni-all-1.8.jar:/usr/local/hadoop/share/hadoop/yarn/lib/protobuf-java-2.5.0.jar:/usr/local/hadoop/share/hadoop/yarn/lib/jackson-xc-1.8.8.jar:/usr/local/hadoop/share/hadoop/yarn/lib/jersey-server-1.9.jar:/usr/local/hadoop/share/hadoop/yarn/lib/jetty-util-6.1.26.jar:/usr/local/hadoop/share/hadoop/yarn/lib/jline-0.9.94.jar:/usr/local/hadoop/share/hadoop/yarn/lib/stax-api-1.0-2.jar:/usr/local/hadoop/share/hadoop/yarn/lib/guice-3.0.jar:/usr/local/hadoop/share/hadoop/yarn/lib/commons-logging-1.1.3.jar:/usr/local/hadoop/share/hadoop/yarn/lib/guava-11.0.2.jar:/usr/local/hadoop/share/hadoop/yarn/lib/jackson-mapper-asl-1.8.8.jar:/usr/local/hadoop/share/hadoop/yarn/lib/log4j-1.2.17.jar:/usr/local/hadoop/share/hadoop/yarn/lib/commons-io-2.4.jar:/usr/loca
l/hadoop/share/hadoop/yarn/lib/jsr305-1.3.9.jar:/usr/local/hadoop/share/hadoop/yarn/lib/jersey-core-1.9.jar:/usr/local/hadoop/share/hadoop/yarn/lib/asm-3.2.jar:/usr/local/hadoop/share/hadoop/yarn/lib/servlet-api-2.5.jar:/usr/local/hadoop/share/hadoop/yarn/lib/jaxb-api-2.2.2.jar:/usr/local/hadoop/share/hadoop/yarn/lib/jetty-6.1.26.jar:/usr/local/hadoop/share/hadoop/yarn/lib/commons-lang-2.6.jar:/usr/local/hadoop/share/hadoop/yarn/lib/commons-collections-3.2.1.jar:/usr/local/hadoop/share/hadoop/yarn/lib/jersey-guice-1.9.jar:/usr/local/hadoop/share/hadoop/yarn/lib/zookeeper-3.4.5.jar:/usr/local/hadoop/share/hadoop/yarn/lib/commons-httpclient-3.1.jar:/usr/local/hadoop/share/hadoop/yarn/lib/javax.inject-1.jar:/usr/local/hadoop/share/hadoop/yarn/lib/xz-1.0.jar:/usr/local/hadoop/share/hadoop/yarn/hadoop-yarn-server-common-2.4.0.jar:/usr/local/hadoop/share/hadoop/yarn/hadoop-yarn-client-2.4.0.jar:/usr/local/hadoop/share/hadoop/yarn/hadoop-yarn-api-2.4.0.jar:/usr/local/hadoop/share/hadoop/yarn/hadoop-yarn-applications-unmanaged-am-launcher-2.4.0.jar:/usr/local/hadoop/share/hadoop/yarn/hadoop-yarn-server-nodemanager-2.4.0.jar:/usr/local/hadoop/share/hadoop/yarn/hadoop-yarn-server-tests-2.4.0.jar:/usr/local/hadoop/share/hadoop/yarn/hadoop-yarn-server-resourcemanager-2.4.0.jar:/usr/local/hadoop/share/hadoop/yarn/hadoop-yarn-server-web-proxy-2.4.0.jar:/usr/local/hadoop/share/hadoop/yarn/hadoop-yarn-applications-distributedshell-2.4.0.jar:/usr/local/hadoop/share/hadoop/yarn/hadoop-yarn-server-applicationhistoryservice-2.4.0.jar:/usr/local/hadoop/share/hadoop/yarn/hadoop-yarn-common-2.4.0.jar:/usr/local/hadoop/share/hadoop/mapreduce/lib/commons-compress-1.4.1.jar:/usr/local/hadoop/share/hadoop/mapreduce/lib/guice-servlet-3.0.jar:/usr/local/hadoop/share/hadoop/mapreduce/lib/jackson-core-asl-1.8.8.jar:/usr/local/hadoop/share/hadoop/mapreduce/lib/hamcrest-core-1.1.jar:/usr/local/hadoop/share/hadoop/mapreduce/lib/paranamer-2.3.jar:/usr/local/hadoop/share/hadoop/mapreduce/lib/aopalliance-1.0.jar:/usr/local/hadoop/share/hadoop/mapreduce/lib/protobuf-java-2.5.0.jar:/usr/local/hadoop/share/hadoop/mapreduce/lib/jersey-server-1.9.jar:/usr/local/hadoop/share/hadoop/mapreduce/lib/avro-1.7.4.jar:/usr/local/hadoop/share/hadoop/mapreduce/lib/hadoop-annotations-2.4.0.jar:/usr/local/hadoop/share/hadoop/mapreduce/lib/guice-3.0.jar:/usr/local/hadoop/share/hadoop/mapreduce/lib/junit-4.10.jar:/usr/local/hadoop/share/hadoop/mapreduce/lib/netty-3.6.2.Final.jar:/usr/local/hadoop/share/hadoop/mapreduce/lib/jackson-mapper-asl-1.8.8.jar:/usr/local/hadoop/share/hadoop/mapreduce/lib/snappy-java-1.0.4.1.jar:/usr/local/hadoop/share/hadoop/mapreduce/lib/log4j-1.2.17.jar:/usr/local/hadoop/share/hadoop/mapreduce/lib/commons-io-2.4.jar:/usr/local/hadoop/share/hadoop/mapreduce/lib/jersey-core-1.9.jar:/usr/local/hadoop/share/hadoop/mapreduce/lib/asm-3.2.jar:/usr/local/hadoop/share/hadoop/mapreduce/lib/jersey-guice-1.9.jar:/usr/local/hadoop/share/hadoop/mapreduce/lib/javax.inject-1.jar:/usr/local/hadoop/share/hadoop/mapreduce/lib/xz-1.0.jar:/usr/local/hadoop/share/hadoop/mapreduce/giraph-examples-1.1.0-for-hadoop-2.4.0-jar-with-dependencies.jar:/usr/local/hadoop/share/hadoop/mapreduce/hadoop-mapreduce-examples-2.4.0.jar:/usr/local/hadoop/share/hadoop/mapreduce/procesamiento_salida_grafo_caminos_navegacionales-0.0.1.jar:/usr/local/hadoop/share/hadoop/mapreduce/zookeeper-3.4.6.jar:/usr/local/hadoop/share/hadoop/mapreduce/hadoop-mapreduce-client-shuffle-2.4.0.jar:/usr/local/hadoop/share/hadoop/mapreduce/procesamiento_grafo_wikiquote-0.0.1.jar:/u
sr/local/hadoop/share/hadoop/mapreduce/hadoop-mapreduce-client-app-2.4.0.jar:/usr/local/hadoop/share/hadoop/mapreduce/hadoop-mapreduce-client-common-2.4.0.jar:/usr/local/hadoop/share/hadoop/mapreduce/hadoop-mapreduce-client-core-2.4.0.jar:/usr/local/hadoop/share/hadoop/mapreduce/generacion_grafo_wikiquote-0.0.1-SNAPSHOT.jar:/usr/local/hadoop/share/hadoop/mapreduce/hadoop-mapreduce-client-hs-plugins-2.4.0.jar:/usr/local/hadoop/share/hadoop/mapreduce/hadoop-mapreduce-client-jobclient-2.4.0.jar:/usr/local/hadoop/share/hadoop/mapreduce/hadoop-mapreduce-client-hs-2.4.0.jar:/usr/local/hadoop/share/hadoop/mapreduce/hadoop-mapreduce-client-jobclient-2.4.0-tests.jar:/usr/local/hadoop/contrib/capacity-scheduler/*.jar:/usr/local/hadoop/contrib/capacity-scheduler/*.jar:/usr/local/hadoop/contrib/capacity-scheduler/*.jar
STARTUP_MSG:   build = http://svn.apache.org/repos/asf/hadoop/common
-r 1583262; compiled by 'jenkins' on 2014-03-31T08:29Z
STARTUP_MSG:   java = 1.7.0_79
************************************************************/
2016-01-25 21:13:07,015 INFO
org.apache.hadoop.hdfs.server.datanode.DataNode: registered UNIX
signal handlers for [TERM, HUP, INT]
2016-01-25 21:13:07,188 WARN
org.apache.hadoop.hdfs.server.common.Util: Path
/usr/local/hadoop/dfs/name/data should be specified as a URI in
configuration files. Please update hdfs configuration.
2016-01-25 21:13:07,356 WARN org.apache.hadoop.util.NativeCodeLoader:
Unable to load native-hadoop library for your platform... using
builtin-java classes where applicable
2016-01-25 21:13:07,648 INFO
org.apache.hadoop.metrics2.impl.MetricsConfig: loaded properties from
hadoop-metrics2.properties
2016-01-25 21:13:07,723 INFO
org.apache.hadoop.metrics2.impl.MetricsSystemImpl: Scheduled snapshot
period at 10 second(s).
2016-01-25 21:13:07,723 INFO
org.apache.hadoop.metrics2.impl.MetricsSystemImpl: DataNode metrics
system started
2016-01-25 21:13:07,727 INFO
org.apache.hadoop.hdfs.server.datanode.DataNode: Configured hostname
is localhost
2016-01-25 21:13:07,728 INFO
org.apache.hadoop.hdfs.server.datanode.DataNode: Starting DataNode
with maxLockedMemory = 0
2016-01-25 21:13:07,757 INFO
org.apache.hadoop.hdfs.server.datanode.DataNode: Opened streaming
server at /0.0.0.0:50010
2016-01-25 21:13:07,760 INFO
org.apache.hadoop.hdfs.server.datanode.DataNode: Balancing bandwith is
1048576 bytes/s
2016-01-25 21:13:07,839 INFO org.mortbay.log: Logging to
org.slf4j.impl.Log4jLoggerAdapter(org.mortbay.log) via
org.mortbay.log.Slf4jLog
2016-01-25 21:13:07,843 INFO org.apache.hadoop.http.HttpRequestLog:
Http request log for http.requests.datanode is not defined
2016-01-25 21:13:07,853 INFO org.apache.hadoop.http.HttpServer2: Added
global filter 'safety'
(class=org.apache.hadoop.http.HttpServer2$QuotingInputFilter)
2016-01-25 21:13:07,856 INFO org.apache.hadoop.http.HttpServer2: Added
filter static_user_filter
(class=org.apache.hadoop.http.lib.StaticUserWebFilter$StaticUserFilter)
to context datanode
2016-01-25 21:13:07,856 INFO org.apache.hadoop.http.HttpServer2: Added
filter static_user_filter
(class=org.apache.hadoop.http.lib.StaticUserWebFilter$StaticUserFilter)
to context logs
2016-01-25 21:13:07,856 INFO org.apache.hadoop.http.HttpServer2: Added
filter static_user_filter
(class=org.apache.hadoop.http.lib.StaticUserWebFilter$StaticUserFilter)
to context static
2016-01-25 21:13:07,872 INFO org.apache.hadoop.http.HttpServer2:
addJerseyResourcePackage:
packageName=org.apache.hadoop.hdfs.server.datanode.web.resources;org.apache.hadoop.hdfs.web.resources,
pathSpec=/webhdfs/v1/*
2016-01-25 21:13:07,875 INFO org.apache.hadoop.http.HttpServer2: Jetty
bound to port 50075
2016-01-25 21:13:07,875 INFO org.mortbay.log: jetty-6.1.26
2016-01-25 21:13:08,137 INFO org.mortbay.log: Started
SelectChannelConnector@0.0.0.0:50075
2016-01-25 21:13:08,225 INFO org.apache.hadoop.ipc.CallQueueManager:
Using callQueue class java.util.concurrent.LinkedBlockingQueue
2016-01-25 21:13:08,239 INFO org.apache.hadoop.ipc.Server: Starting
Socket Reader #1 for port 50020
2016-01-25 21:13:08,288 INFO
org.apache.hadoop.hdfs.server.datanode.DataNode: Opened IPC server at
/0.0.0.0:50020
2016-01-25 21:13:08,300 INFO
org.apache.hadoop.hdfs.server.datanode.DataNode: Refresh request
received for nameservices: null
2016-01-25 21:13:08,316 INFO
org.apache.hadoop.hdfs.server.datanode.DataNode: Starting
BPOfferServices for nameservices: <default>
2016-01-25 21:13:08,321 WARN
org.apache.hadoop.hdfs.server.common.Util: Path
/usr/local/hadoop/dfs/name/data should be specified as a URI in
configuration files. Please update hdfs configuration.
2016-01-25 21:13:08,325 INFO
org.apache.hadoop.hdfs.server.datanode.DataNode: Block pool
<registering> (Datanode Uuid unassigned) service to
hdnode01/192.168.0.10:54310 starting to offer service
2016-01-25 21:13:08,326 INFO org.apache.hadoop.ipc.Server: IPC Server
Responder: starting
2016-01-25 21:13:08,326 INFO org.apache.hadoop.ipc.Server: IPC Server
listener on 50020: starting
2016-01-25 21:13:08,719 INFO
org.apache.hadoop.hdfs.server.common.Storage: Data-node version: -55
and name-node layout version: -56
2016-01-25 21:13:08,828 INFO
org.apache.hadoop.hdfs.server.common.Storage: Lock on
/usr/local/hadoop/dfs/name/data/in_use.lock acquired by nodename
10365@jose-ubuntu
2016-01-25 21:13:08,833 INFO
org.apache.hadoop.hdfs.server.common.Storage: Storage directory
/usr/local/hadoop/dfs/name/data is not formatted
2016-01-25 21:13:08,833 INFO
org.apache.hadoop.hdfs.server.common.Storage: Formatting ...
2016-01-25 21:13:09,017 INFO
org.apache.hadoop.hdfs.server.common.Storage: Analyzing storage
directories for bpid BP-216406264-127.0.0.1-1453767164845
2016-01-25 21:13:09,017 INFO
org.apache.hadoop.hdfs.server.common.Storage: Locking is disabled
2016-01-25 21:13:09,018 INFO
org.apache.hadoop.hdfs.server.common.Storage: Storage directory
/usr/local/hadoop/dfs/name/data/current/BP-216406264-127.0.0.1-1453767164845
is not formatted.
2016-01-25 21:13:09,018 INFO
org.apache.hadoop.hdfs.server.common.Storage: Formatting ...
2016-01-25 21:13:09,018 INFO
org.apache.hadoop.hdfs.server.common.Storage: Formatting block pool
BP-216406264-127.0.0.1-1453767164845 directory
/usr/local/hadoop/dfs/name/data/current/BP-216406264-127.0.0.1-1453767164845/current
2016-01-25 21:13:09,072 INFO
org.apache.hadoop.hdfs.server.common.Storage: Restored 0 block files
from trash.
2016-01-25 21:13:09,198 INFO
org.apache.hadoop.hdfs.server.datanode.DataNode: Setting up storage:
nsid=1479061672;bpid=BP-216406264-127.0.0.1-1453767164845;lv=-55;nsInfo=lv=-56;cid=CID-8fa0e75b-6942-452a-a4e6-8cd0c24de652;nsid=1479061672;c=0;bpid=BP-216406264-127.0.0.1-1453767164845;dnuuid=null
2016-01-25 21:13:09,248 INFO
org.apache.hadoop.hdfs.server.datanode.DataNode: Generated and
persisted new Datanode UUID 6b4236c8-2183-49ba-84d7-a273298ba37a
2016-01-25 21:13:09,268 WARN
org.apache.hadoop.hdfs.server.common.Util: Path
/usr/local/hadoop/dfs/name/data should be specified as a URI in
configuration files. Please update hdfs configuration.
2016-01-25 21:13:09,270 INFO
org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsDatasetImpl:
Added volume - /usr/local/hadoop/dfs/name/data/current, StorageType:
DISK
2016-01-25 21:13:09,279 INFO
org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsDatasetImpl:
Registered FSDatasetState MBean
2016-01-25 21:13:09,282 INFO
org.apache.hadoop.hdfs.server.datanode.DirectoryScanner: Periodic
Directory Tree Verification scan starting at 1453784080282 with
interval 21600000
2016-01-25 21:13:09,283 INFO
org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsDatasetImpl:
Adding block pool BP-216406264-127.0.0.1-1453767164845
2016-01-25 21:13:09,284 INFO
org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsDatasetImpl:
Scanning block pool BP-216406264-127.0.0.1-1453767164845 on volume
/usr/local/hadoop/dfs/name/data/current...
2016-01-25 21:13:09,299 INFO
org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsDatasetImpl:
Time taken to scan block pool BP-216406264-127.0.0.1-1453767164845 on
/usr/local/hadoop/dfs/name/data/current: 15ms
2016-01-25 21:13:09,300 INFO
org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsDatasetImpl:
Total time to scan all replicas for block pool
BP-216406264-127.0.0.1-1453767164845: 17ms
2016-01-25 21:13:09,301 INFO
org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsDatasetImpl:
Adding replicas to map for block pool
BP-216406264-127.0.0.1-1453767164845 on volume
/usr/local/hadoop/dfs/name/data/current...
2016-01-25 21:13:09,301 INFO
org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsDatasetImpl:
Time to add replicas to map for block pool
BP-216406264-127.0.0.1-1453767164845 on volume
/usr/local/hadoop/dfs/name/data/current: 0ms
2016-01-25 21:13:09,301 INFO
org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsDatasetImpl:
Total time to add all replicas to map: 1ms
2016-01-25 21:13:09,305 INFO
org.apache.hadoop.hdfs.server.datanode.DataNode: Block pool
BP-216406264-127.0.0.1-1453767164845 (Datanode Uuid null) service to
hdnode01/192.168.0.10:54310 beginning handshake with NN
2016-01-25 21:13:09,355 INFO
org.apache.hadoop.hdfs.server.datanode.DataNode: Block pool Block pool
BP-216406264-127.0.0.1-1453767164845 (Datanode Uuid null) service to
hdnode01/192.168.0.10:54310 successfully registered with NN
2016-01-25 21:13:09,356 INFO
org.apache.hadoop.hdfs.server.datanode.DataNode: For namenode
hdnode01/192.168.0.10:54310 using DELETEREPORT_INTERVAL of 300000 msec
 BLOCKREPORT_INTERVAL of 21600000msec CACHEREPORT_INTERVAL of
10000msec Initial delay: 0msec; heartBeatInterval=3000
2016-01-25 21:13:09,444 INFO
org.apache.hadoop.hdfs.server.datanode.DataNode: Namenode Block pool
BP-216406264-127.0.0.1-1453767164845 (Datanode Uuid
6b4236c8-2183-49ba-84d7-a273298ba37a) service to
hdnode01/192.168.0.10:54310 trying to claim ACTIVE state with txid=1
2016-01-25 21:13:09,444 INFO
org.apache.hadoop.hdfs.server.datanode.DataNode: Acknowledging ACTIVE
Namenode Block pool BP-216406264-127.0.0.1-1453767164845 (Datanode
Uuid 6b4236c8-2183-49ba-84d7-a273298ba37a) service to
hdnode01/192.168.0.10:54310
2016-01-25 21:13:09,487 INFO
org.apache.hadoop.hdfs.server.datanode.DataNode: Sent 1 blockreports 0
blocks total. Took 1 msec to generate and 42 msecs for RPC and NN
processing.  Got back commands none
2016-01-25 21:13:09,492 INFO org.apache.hadoop.util.GSet: Computing
capacity for map BlockMap
2016-01-25 21:13:09,493 INFO org.apache.hadoop.util.GSet: VM type       = 64-bit
2016-01-25 21:13:09,494 INFO org.apache.hadoop.util.GSet: 0.5% max
memory 1.8 GB = 9.1 MB
2016-01-25 21:13:09,494 INFO org.apache.hadoop.util.GSet: capacity
 = 2^20 = 1048576 entries
2016-01-25 21:13:09,495 INFO
org.apache.hadoop.hdfs.server.datanode.BlockPoolSliceScanner: Periodic
Block Verification Scanner initialized with interval 504 hours for
block pool BP-216406264-127.0.0.1-1453767164845
2016-01-25 21:13:09,499 INFO
org.apache.hadoop.hdfs.server.datanode.DataBlockScanner: Added
bpid=BP-216406264-127.0.0.1-1453767164845 to blockPoolScannerMap, new
size=1
2016-01-25 21:13:32,355 INFO
org.apache.hadoop.hdfs.server.datanode.DataNode: Receiving
BP-216406264-127.0.0.1-1453767164845:blk_1073741825_1001 src:
/192.168.0.10:58649 dest: /192.168.0.10:50010
2016-01-25 21:13:32,482 INFO
org.apache.hadoop.hdfs.server.datanode.DataNode.clienttrace: src:
/192.168.0.10:58649, dest: /192.168.0.10:50010, bytes: 343691, op:
HDFS_WRITE, cliID: DFSClient_NONMAPREDUCE_538002429_1, offset: 0,
srvID: 6b4236c8-2183-49ba-84d7-a273298ba37a, blockid:
BP-216406264-127.0.0.1-1453767164845:blk_1073741825_1001, duration:
98632367
2016-01-25 21:13:32,482 INFO
org.apache.hadoop.hdfs.server.datanode.DataNode: PacketResponder:
BP-216406264-127.0.0.1-1453767164845:blk_1073741825_1001,
type=LAST_IN_PIPELINE, downstreams=0:[] terminating
2016-01-25 21:13:34,291 INFO
org.apache.hadoop.hdfs.server.datanode.BlockPoolSliceScanner:
Verification succeeded for
BP-216406264-127.0.0.1-1453767164845:blk_1073741825_1001
2016-01-25 21:14:10,176 INFO
org.apache.hadoop.hdfs.server.datanode.DataNode: Receiving
BP-216406264-127.0.0.1-1453767164845:blk_1073741826_1002 src:
/192.168.0.10:58663 dest: /192.168.0.10:50010
2016-01-25 21:14:10,220 INFO
org.apache.hadoop.hdfs.server.datanode.DataNode.clienttrace: src:
/192.168.0.10:58663, dest: /192.168.0.10:50010, bytes: 270263, op:
HDFS_WRITE, cliID: DFSClient_NONMAPREDUCE_342504113_1, offset: 0,
srvID: 6b4236c8-2183-49ba-84d7-a273298ba37a, blockid:
BP-216406264-127.0.0.1-1453767164845:blk_1073741826_1002, duration:
42378742
2016-01-25 21:14:10,221 INFO
org.apache.hadoop.hdfs.server.datanode.DataNode: PacketResponder:
BP-216406264-127.0.0.1-1453767164845:blk_1073741826_1002,
type=LAST_IN_PIPELINE, downstreams=0:[] terminating
2016-01-25 21:14:10,714 INFO
org.apache.hadoop.hdfs.server.datanode.DataNode: Receiving
BP-216406264-127.0.0.1-1453767164845:blk_1073741827_1003 src:
/192.168.0.10:58664 dest: /192.168.0.10:50010
2016-01-25 21:14:10,721 INFO
org.apache.hadoop.hdfs.server.datanode.DataNode.clienttrace: src:
/192.168.0.10:58664, dest: /192.168.0.10:50010, bytes: 121, op:
HDFS_WRITE, cliID: DFSClient_NONMAPREDUCE_342504113_1, offset: 0,
srvID: 6b4236c8-2183-49ba-84d7-a273298ba37a, blockid:
BP-216406264-127.0.0.1-1453767164845:blk_1073741827_1003, duration:
2656758
2016-01-25 21:14:10,721 INFO
org.apache.hadoop.hdfs.server.datanode.DataNode: PacketResponder:
BP-216406264-127.0.0.1-1453767164845:blk_1073741827_1003,
type=LAST_IN_PIPELINE, downstreams=0:[] terminating
2016-01-25 21:14:10,853 INFO
org.apache.hadoop.hdfs.server.datanode.DataNode: Receiving
BP-216406264-127.0.0.1-1453767164845:blk_1073741828_1004 src:
/192.168.0.10:58665 dest: /192.168.0.10:50010
2016-01-25 21:14:10,860 INFO
org.apache.hadoop.hdfs.server.datanode.DataNode.clienttrace: src:
/192.168.0.10:58665, dest: /192.168.0.10:50010, bytes: 26, op:
HDFS_WRITE, cliID: DFSClient_NONMAPREDUCE_342504113_1, offset: 0,
srvID: 6b4236c8-2183-49ba-84d7-a273298ba37a, blockid:
BP-216406264-127.0.0.1-1453767164845:blk_1073741828_1004, duration:
3257396
2016-01-25 21:14:10,861 INFO
org.apache.hadoop.hdfs.server.datanode.DataNode: PacketResponder:
BP-216406264-127.0.0.1-1453767164845:blk_1073741828_1004,
type=LAST_IN_PIPELINE, downstreams=0:[] terminating
2016-01-25 21:14:11,717 INFO
org.apache.hadoop.hdfs.server.datanode.DataNode: Receiving
BP-216406264-127.0.0.1-1453767164845:blk_1073741829_1005 src:
/192.168.0.10:58666 dest: /192.168.0.10:50010
2016-01-25 21:14:11,726 INFO
org.apache.hadoop.hdfs.server.datanode.DataNode.clienttrace: src:
/192.168.0.10:58666, dest: /192.168.0.10:50010, bytes: 77957, op:
HDFS_WRITE, cliID: DFSClient_NONMAPREDUCE_342504113_1, offset: 0,
srvID: 6b4236c8-2183-49ba-84d7-a273298ba37a, blockid:
BP-216406264-127.0.0.1-1453767164845:blk_1073741829_1005, duration:
6180229
2016-01-25 21:14:11,727 INFO
org.apache.hadoop.hdfs.server.datanode.DataNode: PacketResponder:
BP-216406264-127.0.0.1-1453767164845:blk_1073741829_1005,
type=LAST_IN_PIPELINE, downstreams=0:[] terminating
2016-01-25 21:14:14,298 INFO
org.apache.hadoop.hdfs.server.datanode.BlockPoolSliceScanner:
Verification succeeded for
BP-216406264-127.0.0.1-1453767164845:blk_1073741829_1005
2016-01-25 21:14:14,299 INFO
org.apache.hadoop.hdfs.server.datanode.BlockPoolSliceScanner:
Verification succeeded for
BP-216406264-127.0.0.1-1453767164845:blk_1073741827_1003
2016-01-25 21:14:14,305 INFO
org.apache.hadoop.hdfs.server.datanode.BlockPoolSliceScanner:
Verification succeeded for
BP-216406264-127.0.0.1-1453767164845:blk_1073741826_1002
2016-01-25 21:14:14,305 INFO
org.apache.hadoop.hdfs.server.datanode.BlockPoolSliceScanner:
Verification succeeded for
BP-216406264-127.0.0.1-1453767164845:blk_1073741828_1004
2016-01-25 21:14:16,099 INFO
org.apache.hadoop.hdfs.server.datanode.DataNode.clienttrace: src:
/192.168.0.10:50010, dest: /192.168.0.10:58674, bytes: 272375, op:
HDFS_READ, cliID: DFSClient_NONMAPREDUCE_76231625_102, offset: 0,
srvID: 6b4236c8-2183-49ba-84d7-a273298ba37a, blockid:
BP-216406264-127.0.0.1-1453767164845:blk_1073741826_1002, duration:
2878920
2016-01-25 21:14:16,253 INFO
org.apache.hadoop.hdfs.server.datanode.DataNode.clienttrace: src:
/192.168.0.10:50010, dest: /192.168.0.10:58674, bytes: 30, op:
HDFS_READ, cliID: DFSClient_NONMAPREDUCE_76231625_102, offset: 0,
srvID: 6b4236c8-2183-49ba-84d7-a273298ba37a, blockid:
BP-216406264-127.0.0.1-1453767164845:blk_1073741828_1004, duration:
236423
2016-01-25 21:14:16,312 INFO
org.apache.hadoop.hdfs.server.datanode.DataNode.clienttrace: src:
/192.168.0.10:50010, dest: /192.168.0.10:58674, bytes: 125, op:
HDFS_READ, cliID: DFSClient_NONMAPREDUCE_76231625_102, offset: 0,
srvID: 6b4236c8-2183-49ba-84d7-a273298ba37a, blockid:
BP-216406264-127.0.0.1-1453767164845:blk_1073741827_1003, duration:
909236
2016-01-25 21:14:16,364 INFO
org.apache.hadoop.hdfs.server.datanode.DataNode.clienttrace: src:
/192.168.0.10:50010, dest: /192.168.0.10:58674, bytes: 78569, op:
HDFS_READ, cliID: DFSClient_NONMAPREDUCE_76231625_102, offset: 0,
srvID: 6b4236c8-2183-49ba-84d7-a273298ba37a, blockid:
BP-216406264-127.0.0.1-1453767164845:blk_1073741829_1005, duration:
1489437
2016-01-25 21:14:20,174 INFO
org.apache.hadoop.hdfs.server.datanode.DataNode.clienttrace: src:
/192.168.0.10:50010, dest: /192.168.0.10:58676, bytes: 30, op:
HDFS_READ, cliID: DFSClient_NONMAPREDUCE_694066886_1, offset: 0,
srvID: 6b4236c8-2183-49ba-84d7-a273298ba37a, blockid:
BP-216406264-127.0.0.1-1453767164845:blk_1073741828_1004, duration:
899980
2016-01-25 21:14:22,692 INFO
org.apache.hadoop.hdfs.server.datanode.DataNode: Receiving
BP-216406264-127.0.0.1-1453767164845:blk_1073741830_1006 src:
/192.168.0.10:58679 dest: /192.168.0.10:50010
2016-01-25 21:14:22,754 INFO
org.apache.hadoop.hdfs.server.datanode.DataNode.clienttrace: src:
/192.168.0.10:58679, dest: /192.168.0.10:50010, bytes: 92684, op:
HDFS_WRITE, cliID: DFSClient_NONMAPREDUCE_694066886_1, offset: 0,
srvID: 6b4236c8-2183-49ba-84d7-a273298ba37a, blockid:
BP-216406264-127.0.0.1-1453767164845:blk_1073741830_1006, duration:
60114851
2016-01-25 21:14:22,754 INFO
org.apache.hadoop.hdfs.server.datanode.DataNode: PacketResponder:
BP-216406264-127.0.0.1-1453767164845:blk_1073741830_1006,
type=LAST_IN_PIPELINE, downstreams=0:[] terminating
2016-01-25 21:14:24,319 INFO
org.apache.hadoop.hdfs.server.datanode.BlockPoolSliceScanner:
Verification succeeded for
BP-216406264-127.0.0.1-1453767164845:blk_1073741830_1006
2016-01-25 21:14:25,808 INFO
org.apache.hadoop.hdfs.server.datanode.DataNode: Receiving
BP-216406264-127.0.0.1-1453767164845:blk_1073741831_1007 src:
/192.168.0.10:58681 dest: /192.168.0.10:50010
2016-01-25 21:14:35,846 INFO
org.apache.hadoop.hdfs.server.datanode.DataNode.clienttrace: src:
/192.168.0.10:58681, dest: /192.168.0.10:50010, bytes: 21176, op:
HDFS_WRITE, cliID: DFSClient_NONMAPREDUCE_694066886_1, offset: 0,
srvID: 6b4236c8-2183-49ba-84d7-a273298ba37a, blockid:
BP-216406264-127.0.0.1-1453767164845:blk_1073741831_1007, duration:
9975409048
2016-01-25 21:14:35,846 INFO
org.apache.hadoop.hdfs.server.datanode.DataNode: PacketResponder:
BP-216406264-127.0.0.1-1453767164845:blk_1073741831_1007,
type=LAST_IN_PIPELINE, downstreams=0:[] terminating
2016-01-25 21:14:36,066 INFO
org.apache.hadoop.hdfs.server.datanode.DataNode: Receiving
BP-216406264-127.0.0.1-1453767164845:blk_1073741832_1008 src:
/192.168.0.10:58682 dest: /192.168.0.10:50010
2016-01-25 21:14:36,075 INFO
org.apache.hadoop.hdfs.server.datanode.DataNode.clienttrace: src:
/192.168.0.10:58682, dest: /192.168.0.10:50010, bytes: 332, op:
HDFS_WRITE, cliID: DFSClient_NONMAPREDUCE_694066886_1, offset: 0,
srvID: 6b4236c8-2183-49ba-84d7-a273298ba37a, blockid:
BP-216406264-127.0.0.1-1453767164845:blk_1073741832_1008, duration:
4992595
2016-01-25 21:14:36,075 INFO
org.apache.hadoop.hdfs.server.datanode.DataNode: PacketResponder:
BP-216406264-127.0.0.1-1453767164845:blk_1073741832_1008,
type=LAST_IN_PIPELINE, downstreams=0:[] terminating
2016-01-25 21:14:36,548 INFO
org.apache.hadoop.hdfs.server.datanode.DataNode.clienttrace: src:
/192.168.0.10:50010, dest: /192.168.0.10:58683, bytes: 21344, op:
HDFS_READ, cliID: DFSClient_NONMAPREDUCE_694066886_1, offset: 0,
srvID: 6b4236c8-2183-49ba-84d7-a273298ba37a, blockid:
BP-216406264-127.0.0.1-1453767164845:blk_1073741831_1007, duration:
497225
2016-01-25 21:14:36,564 INFO
org.apache.hadoop.hdfs.server.datanode.DataNode: Receiving
BP-216406264-127.0.0.1-1453767164845:blk_1073741833_1009 src:
/192.168.0.10:58684 dest: /192.168.0.10:50010
2016-01-25 21:14:36,572 INFO
org.apache.hadoop.hdfs.server.datanode.DataNode.clienttrace: src:
/192.168.0.10:58684, dest: /192.168.0.10:50010, bytes: 21176, op:
HDFS_WRITE, cliID: DFSClient_NONMAPREDUCE_694066886_1, offset: 0,
srvID: 6b4236c8-2183-49ba-84d7-a273298ba37a, blockid:
BP-216406264-127.0.0.1-1453767164845:blk_1073741833_1009, duration:
2649337
2016-01-25 21:14:36,573 INFO
org.apache.hadoop.hdfs.server.datanode.DataNode: PacketResponder:
BP-216406264-127.0.0.1-1453767164845:blk_1073741833_1009,
type=LAST_IN_PIPELINE, downstreams=0:[] terminating
2016-01-25 21:14:36,622 INFO
org.apache.hadoop.hdfs.server.datanode.DataNode.clienttrace: src:
/192.168.0.10:50010, dest: /192.168.0.10:58683, bytes: 93412, op:
HDFS_READ, cliID: DFSClient_NONMAPREDUCE_694066886_1, offset: 0,
srvID: 6b4236c8-2183-49ba-84d7-a273298ba37a, blockid:
BP-216406264-127.0.0.1-1453767164845:blk_1073741830_1006, duration:
379439
2016-01-25 21:14:36,638 INFO
org.apache.hadoop.hdfs.server.datanode.DataNode: Receiving
BP-216406264-127.0.0.1-1453767164845:blk_1073741834_1010 src:
/192.168.0.10:58685 dest: /192.168.0.10:50010
2016-01-25 21:14:36,646 INFO
org.apache.hadoop.hdfs.server.datanode.DataNode.clienttrace: src:
/192.168.0.10:58685, dest: /192.168.0.10:50010, bytes: 92684, op:
HDFS_WRITE, cliID: DFSClient_NONMAPREDUCE_694066886_1, offset: 0,
srvID: 6b4236c8-2183-49ba-84d7-a273298ba37a, blockid:
BP-216406264-127.0.0.1-1453767164845:blk_1073741834_1010, duration:
3135698
2016-01-25 21:14:36,646 INFO
org.apache.hadoop.hdfs.server.datanode.DataNode: PacketResponder:
BP-216406264-127.0.0.1-1453767164845:blk_1073741834_1010,
type=LAST_IN_PIPELINE, downstreams=0:[] terminating
2016-01-25 21:14:39,335 INFO
org.apache.hadoop.hdfs.server.datanode.BlockPoolSliceScanner:
Verification succeeded for
BP-216406264-127.0.0.1-1453767164845:blk_1073741833_1009
2016-01-25 21:14:39,336 INFO
org.apache.hadoop.hdfs.server.datanode.BlockPoolSliceScanner:
Verification succeeded for
BP-216406264-127.0.0.1-1453767164845:blk_1073741831_1007
2016-01-25 21:14:39,337 INFO
org.apache.hadoop.hdfs.server.datanode.BlockPoolSliceScanner:
Verification succeeded for
BP-216406264-127.0.0.1-1453767164845:blk_1073741832_1008
2016-01-25 21:14:39,338 INFO
org.apache.hadoop.hdfs.server.datanode.BlockPoolSliceScanner:
Verification succeeded for
BP-216406264-127.0.0.1-1453767164845:blk_1073741834_1010
2016-01-25 21:14:39,376 INFO
org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsDatasetAsyncDiskService:
Scheduling blk_1073741826_1002 file
/usr/local/hadoop/dfs/name/data/current/BP-216406264-127.0.0.1-1453767164845/current/finalized/blk_1073741826
for deletion
2016-01-25 21:14:39,379 INFO
org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsDatasetAsyncDiskService:
Scheduling blk_1073741827_1003 file
/usr/local/hadoop/dfs/name/data/current/BP-216406264-127.0.0.1-1453767164845/current/finalized/blk_1073741827
for deletion
2016-01-25 21:14:39,379 INFO
org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsDatasetAsyncDiskService:
Deleted BP-216406264-127.0.0.1-1453767164845 blk_1073741826_1002 file
/usr/local/hadoop/dfs/name/data/current/BP-216406264-127.0.0.1-1453767164845/current/finalized/blk_1073741826
2016-01-25 21:14:39,379 INFO
org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsDatasetAsyncDiskService:
Scheduling blk_1073741828_1004 file
/usr/local/hadoop/dfs/name/data/current/BP-216406264-127.0.0.1-1453767164845/current/finalized/blk_1073741828
for deletion
2016-01-25 21:14:39,380 INFO
org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsDatasetAsyncDiskService:
Deleted BP-216406264-127.0.0.1-1453767164845 blk_1073741827_1003 file
/usr/local/hadoop/dfs/name/data/current/BP-216406264-127.0.0.1-1453767164845/current/finalized/blk_1073741827
2016-01-25 21:14:39,380 INFO
org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsDatasetAsyncDiskService:
Scheduling blk_1073741829_1005 file
/usr/local/hadoop/dfs/name/data/current/BP-216406264-127.0.0.1-1453767164845/current/finalized/blk_1073741829
for deletion
2016-01-25 21:14:39,380 INFO
org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsDatasetAsyncDiskService:
Deleted BP-216406264-127.0.0.1-1453767164845 blk_1073741828_1004 file
/usr/local/hadoop/dfs/name/data/current/BP-216406264-127.0.0.1-1453767164845/current/finalized/blk_1073741828
2016-01-25 21:14:39,380 INFO
org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsDatasetAsyncDiskService:
Scheduling blk_1073741830_1006 file
/usr/local/hadoop/dfs/name/data/current/BP-216406264-127.0.0.1-1453767164845/current/finalized/blk_1073741830
for deletion
2016-01-25 21:14:39,381 INFO
org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsDatasetAsyncDiskService:
Deleted BP-216406264-127.0.0.1-1453767164845 blk_1073741829_1005 file
/usr/local/hadoop/dfs/name/data/current/BP-216406264-127.0.0.1-1453767164845/current/finalized/blk_1073741829
2016-01-25 21:14:39,381 INFO
org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsDatasetAsyncDiskService:
Scheduling blk_1073741831_1007 file
/usr/local/hadoop/dfs/name/data/current/BP-216406264-127.0.0.1-1453767164845/current/finalized/blk_1073741831
for deletion
2016-01-25 21:14:39,382 INFO
org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsDatasetAsyncDiskService:
Deleted BP-216406264-127.0.0.1-1453767164845 blk_1073741830_1006 file
/usr/local/hadoop/dfs/name/data/current/BP-216406264-127.0.0.1-1453767164845/current/finalized/blk_1073741830
2016-01-25 21:14:39,382 INFO
org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsDatasetAsyncDiskService:
Deleted BP-216406264-127.0.0.1-1453767164845 blk_1073741831_1007 file
/usr/local/hadoop/dfs/name/data/current/BP-216406264-127.0.0.1-1453767164845/current/finalized/blk_1073741831
2016-01-25 21:14:44,797 INFO
org.apache.hadoop.hdfs.server.datanode.DataNode: Receiving
BP-216406264-127.0.0.1-1453767164845:blk_1073741835_1011 src:
/192.168.0.10:58688 dest: /192.168.0.10:50010
2016-01-25 21:14:44,834 INFO
org.apache.hadoop.hdfs.server.datanode.DataNode.clienttrace: src:
/192.168.0.10:58688, dest: /192.168.0.10:50010, bytes: 57450, op:
HDFS_WRITE, cliID: DFSClient_NONMAPREDUCE_-433405715_88, offset: 0,
srvID: 6b4236c8-2183-49ba-84d7-a273298ba37a, blockid:
BP-216406264-127.0.0.1-1453767164845:blk_1073741835_1011, duration:
34522284
2016-01-25 21:14:44,834 INFO
org.apache.hadoop.hdfs.server.datanode.DataNode: PacketResponder:
BP-216406264-127.0.0.1-1453767164845:blk_1073741835_1011,
type=LAST_IN_PIPELINE, downstreams=0:[] terminating
2016-01-25 21:14:49,343 INFO
org.apache.hadoop.hdfs.server.datanode.BlockPoolSliceScanner:
Verification succeeded for
BP-216406264-127.0.0.1-1453767164845:blk_1073741835_1011
2016-01-25 21:16:33,785 INFO
org.apache.hadoop.hdfs.server.datanode.DataNode.clienttrace: src:
/192.168.0.10:50010, dest: /192.168.0.10:58694, bytes: 336, op:
HDFS_READ, cliID: DFSClient_NONMAPREDUCE_-1832227986_1, offset: 0,
srvID: 6b4236c8-2183-49ba-84d7-a273298ba37a, blockid:
BP-216406264-127.0.0.1-1453767164845:blk_1073741832_1008, duration:
284719
2016-01-25 21:16:36,371 INFO
org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsDatasetAsyncDiskService:
Scheduling blk_1073741832_1008 file
/usr/local/hadoop/dfs/name/data/current/BP-216406264-127.0.0.1-1453767164845/current/finalized/blk_1073741832
for deletion
2016-01-25 21:16:36,372 INFO
org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsDatasetAsyncDiskService:
Deleted BP-216406264-127.0.0.1-1453767164845 blk_1073741832_1008 file
/usr/local/hadoop/dfs/name/data/current/BP-216406264-127.0.0.1-1453767164845/current/finalized/blk_1073741832




2016-01-21 18:52 GMT-03:00 Namikaze Minato <ll...@gmail.com>:

> It could be a classpath issue (see
> http://stackoverflow.com/a/25090151/4486184); I strongly suspect
> this is the case.
>
> You could drill down to the exact root cause by looking at the
> datanode logs (see
>
> http://mail-archives.apache.org/mod_mbox/hadoop-user/201410.mbox/%3CCAEMetGubzq12LXbLRk6N4ejOoKse9dLEWMW8_WE6aRj=+RQtVw@mail.gmail.com%3E
> ); a one-liner for pulling them up is sketched below.
> But I'm not sure we would see a different error from the one we already had...
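
On a single-node setup the DataNode log normally lives under
$HADOOP_HOME/logs; the file name includes the user and hostname, so a
wildcard is the easiest way to find it (default tarball log location
assumed here):

    # tail the DataNode log on the local node
    tail -n 200 $HADOOP_HOME/logs/hadoop-*-datanode-*.log
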
>
> Check if your application has the correct values for the following
> variables (a quick way to print them is sketched after this list):
> HADOOP_CONF_DIR
> HADOOP_COMMON_HOME
> HADOOP_HDFS_HOME
> HADOOP_MAPRED_HOME
> HADOOP_YARN_HOME
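
A quick way to dump those from the shell that submits the job, along with
the classpath the launch scripts actually resolve (assuming the usual
$HADOOP_HOME layout used in this thread), is roughly:

    # print the Hadoop-related variables as the submitting shell sees them
    env | grep -E 'HADOOP_(CONF_DIR|COMMON_HOME|HDFS_HOME|MAPRED_HOME|YARN_HOME)'

    # print the classpath the hadoop/yarn launchers will actually use
    $HADOOP_HOME/bin/hadoop classpath

If any of those is unset, or points somewhere other than /usr/local/hadoop,
that would fit the classpath theory above.
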
>
> I'm afraid I can't help you much more than this myself, sorry...
>
> LLoyd
>
> On 20 January 2016 at 02:08, José Luis Larroque <la...@gmail.com>
> wrote:
> > Hi guys, thanks for your answers.
> >
> > Wordcount logs:
> >
> > 16/01/19 21:58:32 INFO client.RMProxy: Connecting to ResourceManager at
> > hdnode01/192.168.0.10:8050
> > SLF4J: Class path contains multiple SLF4J bindings.
> > SLF4J: Found binding in
> >
> [jar:file:/usr/local/hadoop/share/hadoop/common/lib/slf4j-log4j12-1.7.5.jar!/org/slf4j/impl/StaticLoggerBinder.class]
> > SLF4J: Found binding in
> >
> [jar:file:/usr/local/hadoop/share/hadoop/yarn/lib/giraph-examples-1.1.0-for-hadoop-2.4.0-jar-with-dependencies.jar!/org/slf4j/impl/StaticLoggerBinder.class]
> > SLF4J: Found binding in
> >
> [jar:file:/usr/local/hadoop/share/hadoop/mapreduce/giraph-examples-1.1.0-for-hadoop-2.4.0-jar-with-dependencies.jar!/org/slf4j/impl/StaticLoggerBinder.class]
> > SLF4J: See http://www.slf4j.org/codes.html#multiple_bindings for an
> > explanation.
> > SLF4J: Actual binding is of type [org.slf4j.impl.Log4jLoggerFactory]
> > 16/01/19 21:58:32 WARN util.NativeCodeLoader: Unable to load
> native-hadoop
> > library for your platform... using builtin-java classes where applicable
> > hduser@jose-ubuntu:/usr/local/hadoop$ cat
> >
> /home/hduser/Desktop/Tesina/casos_de_prueba/resultados/resultado_cluster_modo_yarn_wordcount
> >
> >
> > Container: container_1453244277886_0001_01_000002 on localhost_35711
> > ======================================================================
> > LogType: stderr
> > LogLength: 45
> > Log Contents:
> > Error: Could not find or load main class 256
> >
> > LogType: stdout
> > LogLength: 0
> > Log Contents:
> >
> >
> >
> > Container: container_1453244277886_0001_01_000003 on localhost_35711
> > ======================================================================
> > LogType: stderr
> > LogLength: 45
> > Log Contents:
> > Error: Could not find or load main class 256
> >
> > LogType: stdout
> > LogLength: 0
> > Log Contents:
> >
> >
> >
> > Container: container_1453244277886_0001_01_000004 on localhost_35711
> > ======================================================================
> > LogType: stderr
> > LogLength: 45
> > Log Contents:
> > Error: Could not find or load main class 256
> >
> > LogType: stdout
> > LogLength: 0
> > Log Contents:
> >
> >
> >
> > Container: container_1453244277886_0001_01_000005 on localhost_35711
> > ======================================================================
> > LogType: stderr
> > LogLength: 45
> > Log Contents:
> > Error: Could not find or load main class 256
> >
> > LogType: stdout
> > LogLength: 0
> > Log Contents:
> >
> >
> >
> > Container: container_1453244277886_0001_01_000001 on localhost_35711
> > ======================================================================
> > LogType: stderr
> > LogLength: 929
> > Log Contents:
> > SLF4J: Class path contains multiple SLF4J bindings.
> > SLF4J: Found binding in
> >
> [jar:file:/usr/local/hadoop/share/hadoop/common/lib/slf4j-log4j12-1.7.5.jar!/org/slf4j/impl/StaticLoggerBinder.class]
> > SLF4J: Found binding in
> >
> [jar:file:/usr/local/hadoop/share/hadoop/yarn/lib/giraph-examples-1.1.0-for-hadoop-2.4.0-jar-with-dependencies.jar!/org/slf4j/impl/StaticLoggerBinder.class]
> > SLF4J: Found binding in
> >
> [jar:file:/usr/local/hadoop/share/hadoop/mapreduce/giraph-examples-1.1.0-for-hadoop-2.4.0-jar-with-dependencies.jar!/org/slf4j/impl/StaticLoggerBinder.class]
> > SLF4J: See http://www.slf4j.org/codes.html#multiple_bindings for an
> > explanation.
> > SLF4J: Actual binding is of type [org.slf4j.impl.Log4jLoggerFactory]
> > log4j:WARN No appenders could be found for logger
> > (org.apache.hadoop.ipc.Server).
> > log4j:WARN Please initialize the log4j system properly.
> > log4j:WARN See http://logging.apache.org/log4j/1.2/faq.html#noconfig for
> > more info.
> >
> > LogType: stdout
> > LogLength: 0
> > Log Contents:
> >
> > LogType: syslog
> > LogLength: 56780
> > Log Contents:
> > 2016-01-19 20:04:11,329 INFO [main]
> > org.apache.hadoop.mapreduce.v2.app.MRAppMaster: Created MRAppMaster for
> > application appattempt_1453244277886_0001_000001
> > 2016-01-19 20:04:11,657 WARN [main] org.apache.hadoop.conf.Configuration:
> > job.xml:an attempt to override final parameter:
> > mapreduce.job.end-notification.max.retry.interval;  Ignoring.
> > 2016-01-19 20:04:11,674 WARN [main] org.apache.hadoop.conf.Configuration:
> > job.xml:an attempt to override final parameter:
> > mapreduce.job.end-notification.max.attempts;  Ignoring.
> > 2016-01-19 20:04:11,765 WARN [main]
> org.apache.hadoop.util.NativeCodeLoader:
> > Unable to load native-hadoop library for your platform... using
> builtin-java
> > classes where applicable
> > 2016-01-19 20:04:11,776 INFO [main]
> > org.apache.hadoop.mapreduce.v2.app.MRAppMaster: Executing with tokens:
> > 2016-01-19 20:04:11,776 INFO [main]
> > org.apache.hadoop.mapreduce.v2.app.MRAppMaster: Kind: YARN_AM_RM_TOKEN,
> > Service: , Ident:
> > (org.apache.hadoop.yarn.security.AMRMTokenIdentifier@73e8f4b9)
> > 2016-01-19 20:04:11,801 INFO [main]
> > org.apache.hadoop.mapreduce.v2.app.MRAppMaster: The specific max
> attempts: 2
> > for application: 1. Attempt num: 1 is last retry: false
> > 2016-01-19 20:04:11,806 INFO [main]
> > org.apache.hadoop.mapreduce.v2.app.MRAppMaster: Using mapred
> > newApiCommitter.
> > 2016-01-19 20:04:11,934 WARN [main] org.apache.hadoop.conf.Configuration:
> > job.xml:an attempt to override final parameter: dfs.datanode.data.dir;
> > Ignoring.
> > 2016-01-19 20:04:11,939 WARN [main] org.apache.hadoop.conf.Configuration:
> > job.xml:an attempt to override final parameter:
> > mapreduce.job.end-notification.max.retry.interval;  Ignoring.
> > 2016-01-19 20:04:11,948 WARN [main] org.apache.hadoop.conf.Configuration:
> > job.xml:an attempt to override final parameter: dfs.namenode.name.dir;
> > Ignoring.
> > 2016-01-19 20:04:11,953 WARN [main] org.apache.hadoop.conf.Configuration:
> > job.xml:an attempt to override final parameter:
> > mapreduce.job.end-notification.max.attempts;  Ignoring.
> > 2016-01-19 20:04:12,464 INFO [main]
> > org.apache.hadoop.mapreduce.v2.app.MRAppMaster: OutputCommitter set in
> > config null
> > 2016-01-19 20:04:12,526 INFO [main]
> > org.apache.hadoop.mapreduce.v2.app.MRAppMaster: OutputCommitter is
> > org.apache.hadoop.mapreduce.lib.output.FileOutputCommitter
> > 2016-01-19 20:04:12,548 INFO [main]
> > org.apache.hadoop.yarn.event.AsyncDispatcher: Registering class
> > org.apache.hadoop.mapreduce.jobhistory.EventType for class
> > org.apache.hadoop.mapreduce.jobhistory.JobHistoryEventHandler
> > 2016-01-19 20:04:12,549 INFO [main]
> > org.apache.hadoop.yarn.event.AsyncDispatcher: Registering class
> > org.apache.hadoop.mapreduce.v2.app.job.event.JobEventType for class
> > org.apache.hadoop.mapreduce.v2.app.MRAppMaster$JobEventDispatcher
> > 2016-01-19 20:04:12,550 INFO [main]
> > org.apache.hadoop.yarn.event.AsyncDispatcher: Registering class
> > org.apache.hadoop.mapreduce.v2.app.job.event.TaskEventType for class
> > org.apache.hadoop.mapreduce.v2.app.MRAppMaster$TaskEventDispatcher
> > 2016-01-19 20:04:12,551 INFO [main]
> > org.apache.hadoop.yarn.event.AsyncDispatcher: Registering class
> > org.apache.hadoop.mapreduce.v2.app.job.event.TaskAttemptEventType for
> class
> > org.apache.hadoop.mapreduce.v2.app.MRAppMaster$TaskAttemptEventDispatcher
> > 2016-01-19 20:04:12,552 INFO [main]
> > org.apache.hadoop.yarn.event.AsyncDispatcher: Registering class
> > org.apache.hadoop.mapreduce.v2.app.commit.CommitterEventType for class
> > org.apache.hadoop.mapreduce.v2.app.commit.CommitterEventHandler
> > 2016-01-19 20:04:12,557 INFO [main]
> > org.apache.hadoop.yarn.event.AsyncDispatcher: Registering class
> > org.apache.hadoop.mapreduce.v2.app.speculate.Speculator$EventType for
> class
> > org.apache.hadoop.mapreduce.v2.app.MRAppMaster$SpeculatorEventDispatcher
> > 2016-01-19 20:04:12,558 INFO [main]
> > org.apache.hadoop.yarn.event.AsyncDispatcher: Registering class
> > org.apache.hadoop.mapreduce.v2.app.rm.ContainerAllocator$EventType for
> class
> > org.apache.hadoop.mapreduce.v2.app.MRAppMaster$ContainerAllocatorRouter
> > 2016-01-19 20:04:12,559 INFO [main]
> > org.apache.hadoop.yarn.event.AsyncDispatcher: Registering class
> > org.apache.hadoop.mapreduce.v2.app.launcher.ContainerLauncher$EventType
> for
> > class
> org.apache.hadoop.mapreduce.v2.app.MRAppMaster$ContainerLauncherRouter
> > 2016-01-19 20:04:12,615 INFO [main]
> > org.apache.hadoop.mapreduce.jobhistory.JobHistoryEventHandler: Perms
> after
> > creating 488, Expected: 504
> > 2016-01-19 20:04:12,615 INFO [main]
> > org.apache.hadoop.mapreduce.jobhistory.JobHistoryEventHandler: Explicitly
> > setting permissions to : 504, rwxrwx---
> > 2016-01-19 20:04:12,731 INFO [main]
> > org.apache.hadoop.yarn.event.AsyncDispatcher: Registering class
> > org.apache.hadoop.mapreduce.v2.app.job.event.JobFinishEvent$Type for
> class
> > org.apache.hadoop.mapreduce.v2.app.MRAppMaster$JobFinishEventHandler
> > 2016-01-19 20:04:12,956 INFO [main]
> > org.apache.hadoop.metrics2.impl.MetricsConfig: loaded properties from
> > hadoop-metrics2.properties
> > 2016-01-19 20:04:13,018 INFO [main]
> > org.apache.hadoop.metrics2.impl.MetricsSystemImpl: Scheduled snapshot
> period
> > at 10 second(s).
> > 2016-01-19 20:04:13,018 INFO [main]
> > org.apache.hadoop.metrics2.impl.MetricsSystemImpl: MRAppMaster metrics
> > system started
> > 2016-01-19 20:04:13,026 INFO [main]
> > org.apache.hadoop.mapreduce.v2.app.job.impl.JobImpl: Adding job token for
> > job_1453244277886_0001 to jobTokenSecretManager
> > 2016-01-19 20:04:13,139 INFO [main]
> > org.apache.hadoop.mapreduce.v2.app.job.impl.JobImpl: Not uberizing
> > job_1453244277886_0001 because: not enabled;
> > 2016-01-19 20:04:13,154 INFO [main]
> > org.apache.hadoop.mapreduce.v2.app.job.impl.JobImpl: Input size for job
> > job_1453244277886_0001 = 343691. Number of splits = 1
> > 2016-01-19 20:04:13,156 INFO [main]
> > org.apache.hadoop.mapreduce.v2.app.job.impl.JobImpl: Number of reduces
> for
> > job job_1453244277886_0001 = 1
> > 2016-01-19 20:04:13,156 INFO [main]
> > org.apache.hadoop.mapreduce.v2.app.job.impl.JobImpl:
> > job_1453244277886_0001Job Transitioned from NEW to INITED
> > 2016-01-19 20:04:13,157 INFO [main]
> > org.apache.hadoop.mapreduce.v2.app.MRAppMaster: MRAppMaster launching
> > normal, non-uberized, multi-container job job_1453244277886_0001.
> > 2016-01-19 20:04:13,186 INFO [main]
> org.apache.hadoop.ipc.CallQueueManager:
> > Using callQueue class java.util.concurrent.LinkedBlockingQueue
> > 2016-01-19 20:04:13,195 INFO [Socket Reader #1 for port 56461]
> > org.apache.hadoop.ipc.Server: Starting Socket Reader #1 for port 56461
> > 2016-01-19 20:04:13,237 INFO [main]
> > org.apache.hadoop.yarn.factories.impl.pb.RpcServerFactoryPBImpl: Adding
> > protocol org.apache.hadoop.mapreduce.v2.api.MRClientProtocolPB to the
> server
> > 2016-01-19 20:04:13,238 INFO [IPC Server Responder]
> > org.apache.hadoop.ipc.Server: IPC Server Responder: starting
> > 2016-01-19 20:04:13,238 INFO [IPC Server listener on 56461]
> > org.apache.hadoop.ipc.Server: IPC Server listener on 56461: starting
> > 2016-01-19 20:04:13,239 INFO [main]
> > org.apache.hadoop.mapreduce.v2.app.client.MRClientService: Instantiated
> > MRClientService at jose-ubuntu/127.0.0.1:56461
> > 2016-01-19 20:04:13,300 INFO [main] org.mortbay.log: Logging to
> > org.slf4j.impl.Log4jLoggerAdapter(org.mortbay.log) via
> > org.mortbay.log.Slf4jLog
> > 2016-01-19 20:04:13,304 INFO [main]
> org.apache.hadoop.http.HttpRequestLog:
> > Http request log for http.requests.mapreduce is not defined
> > 2016-01-19 20:04:13,315 INFO [main] org.apache.hadoop.http.HttpServer2:
> > Added global filter 'safety'
> > (class=org.apache.hadoop.http.HttpServer2$QuotingInputFilter)
> > 2016-01-19 20:04:13,320 INFO [main] org.apache.hadoop.http.HttpServer2:
> > Added filter AM_PROXY_FILTER
> > (class=org.apache.hadoop.yarn.server.webproxy.amfilter.AmIpFilter) to
> > context mapreduce
> > 2016-01-19 20:04:13,321 INFO [main] org.apache.hadoop.http.HttpServer2:
> > Added filter AM_PROXY_FILTER
> > (class=org.apache.hadoop.yarn.server.webproxy.amfilter.AmIpFilter) to
> > context static
> > 2016-01-19 20:04:13,324 INFO [main] org.apache.hadoop.http.HttpServer2:
> > adding path spec: /mapreduce/*
> > 2016-01-19 20:04:13,324 INFO [main] org.apache.hadoop.http.HttpServer2:
> > adding path spec: /ws/*
> > 2016-01-19 20:04:13,335 INFO [main] org.apache.hadoop.http.HttpServer2:
> > Jetty bound to port 44070
> > 2016-01-19 20:04:13,335 INFO [main] org.mortbay.log: jetty-6.1.26
> > 2016-01-19 20:04:13,370 INFO [main] org.mortbay.log: Extract
> >
> jar:file:/usr/local/hadoop/share/hadoop/yarn/hadoop-yarn-common-2.4.0.jar!/webapps/mapreduce
> > to /tmp/Jetty_0_0_0_0_44070_mapreduce____rdpvio/webapp
> > 2016-01-19 20:04:13,647 INFO [main] org.mortbay.log: Started
> > SelectChannelConnector@0.0.0.0:44070
> > 2016-01-19 20:04:13,647 INFO [main]
> org.apache.hadoop.yarn.webapp.WebApps:
> > Web app /mapreduce started at 44070
> > 2016-01-19 20:04:13,956 INFO [main]
> org.apache.hadoop.yarn.webapp.WebApps:
> > Registered webapp guice modules
> > 2016-01-19 20:04:13,960 INFO [AsyncDispatcher event handler]
> > org.apache.hadoop.mapreduce.v2.app.speculate.DefaultSpeculator:
> JOB_CREATE
> > job_1453244277886_0001
> > 2016-01-19 20:04:13,961 INFO [main]
> org.apache.hadoop.ipc.CallQueueManager:
> > Using callQueue class java.util.concurrent.LinkedBlockingQueue
> > 2016-01-19 20:04:13,961 INFO [Socket Reader #1 for port 45584]
> > org.apache.hadoop.ipc.Server: Starting Socket Reader #1 for port 45584
> > 2016-01-19 20:04:13,966 INFO [IPC Server Responder]
> > org.apache.hadoop.ipc.Server: IPC Server Responder: starting
> > 2016-01-19 20:04:13,966 INFO [IPC Server listener on 45584]
> > org.apache.hadoop.ipc.Server: IPC Server listener on 45584: starting
> > 2016-01-19 20:04:13,987 INFO [main]
> > org.apache.hadoop.mapreduce.v2.app.rm.RMContainerRequestor:
> > nodeBlacklistingEnabled:true
> > 2016-01-19 20:04:13,987 INFO [main]
> > org.apache.hadoop.mapreduce.v2.app.rm.RMContainerRequestor:
> > maxTaskFailuresPerNode is 3
> > 2016-01-19 20:04:13,988 INFO [main]
> > org.apache.hadoop.mapreduce.v2.app.rm.RMContainerRequestor:
> > blacklistDisablePercent is 33
> > 2016-01-19 20:04:14,052 WARN [main] org.apache.hadoop.conf.Configuration:
> > job.xml:an attempt to override final parameter: dfs.datanode.data.dir;
> > Ignoring.
> > 2016-01-19 20:04:14,054 WARN [main] org.apache.hadoop.conf.Configuration:
> > job.xml:an attempt to override final parameter:
> > mapreduce.job.end-notification.max.retry.interval;  Ignoring.
> > 2016-01-19 20:04:14,057 WARN [main] org.apache.hadoop.conf.Configuration:
> > job.xml:an attempt to override final parameter: dfs.namenode.name.dir;
> > Ignoring.
> > 2016-01-19 20:04:14,059 WARN [main] org.apache.hadoop.conf.Configuration:
> > job.xml:an attempt to override final parameter:
> > mapreduce.job.end-notification.max.attempts;  Ignoring.
> > 2016-01-19 20:04:14,062 INFO [main]
> org.apache.hadoop.yarn.client.RMProxy:
> > Connecting to ResourceManager at hdnode01/192.168.0.10:8030
> > 2016-01-19 20:04:14,158 INFO [main]
> > org.apache.hadoop.mapreduce.v2.app.rm.RMContainerAllocator:
> > maxContainerCapability: 2000
> > 2016-01-19 20:04:14,158 INFO [main]
> > org.apache.hadoop.mapreduce.v2.app.rm.RMContainerAllocator: queue:
> default
> > 2016-01-19 20:04:14,162 INFO [main]
> > org.apache.hadoop.mapreduce.v2.app.launcher.ContainerLauncherImpl: Upper
> > limit on the thread pool size is 500
> > 2016-01-19 20:04:14,164 INFO [main]
> > org.apache.hadoop.yarn.client.api.impl.ContainerManagementProtocolProxy:
> > yarn.client.max-nodemanagers-proxies : 500
> > 2016-01-19 20:04:14,172 INFO [AsyncDispatcher event handler]
> > org.apache.hadoop.mapreduce.v2.app.job.impl.JobImpl:
> > job_1453244277886_0001Job Transitioned from INITED to SETUP
> > 2016-01-19 20:04:14,174 INFO [CommitterEvent Processor #0]
> > org.apache.hadoop.mapreduce.v2.app.commit.CommitterEventHandler:
> Processing
> > the event EventType: JOB_SETUP
> > 2016-01-19 20:04:14,210 INFO [AsyncDispatcher event handler]
> > org.apache.hadoop.mapreduce.v2.app.job.impl.JobImpl:
> > job_1453244277886_0001Job Transitioned from SETUP to RUNNING
> > 2016-01-19 20:04:14,227 INFO [AsyncDispatcher event handler]
> > org.apache.hadoop.yarn.util.RackResolver: Resolved localhost to
> > /default-rack
> > 2016-01-19 20:04:14,230 INFO [AsyncDispatcher event handler]
> > org.apache.hadoop.mapreduce.v2.app.job.impl.TaskImpl:
> > task_1453244277886_0001_m_000000 Task Transitioned from NEW to SCHEDULED
> > 2016-01-19 20:04:14,230 INFO [AsyncDispatcher event handler]
> > org.apache.hadoop.mapreduce.v2.app.job.impl.TaskImpl:
> > task_1453244277886_0001_r_000000 Task Transitioned from NEW to SCHEDULED
> > 2016-01-19 20:04:14,232 INFO [AsyncDispatcher event handler]
> > org.apache.hadoop.mapreduce.v2.app.job.impl.TaskAttemptImpl:
> > attempt_1453244277886_0001_m_000000_0 TaskAttempt Transitioned from NEW
> to
> > UNASSIGNED
> > 2016-01-19 20:04:14,232 INFO [AsyncDispatcher event handler]
> > org.apache.hadoop.mapreduce.v2.app.job.impl.TaskAttemptImpl:
> > attempt_1453244277886_0001_r_000000_0 TaskAttempt Transitioned from NEW
> to
> > UNASSIGNED
> > 2016-01-19 20:04:14,233 INFO [Thread-50]
> > org.apache.hadoop.mapreduce.v2.app.rm.RMContainerAllocator:
> > mapResourceReqt:512
> > 2016-01-19 20:04:14,245 INFO [Thread-50]
> > org.apache.hadoop.mapreduce.v2.app.rm.RMContainerAllocator:
> > reduceResourceReqt:512
> > 2016-01-19 20:04:14,324 INFO [eventHandlingThread]
> > org.apache.hadoop.mapreduce.jobhistory.JobHistoryEventHandler: Event
> Writer
> > setup for JobId: job_1453244277886_0001, File:
> >
> hdfs://hdnode01:54310/tmp/hadoop-yarn/staging/hduser/.staging/job_1453244277886_0001/job_1453244277886_0001_1.jhist
> > 2016-01-19 20:04:15,162 INFO [RMCommunicator Allocator]
> > org.apache.hadoop.mapreduce.v2.app.rm.RMContainerAllocator: Before
> > Scheduling: PendingReds:1 ScheduledMaps:1 ScheduledReds:0 AssignedMaps:0
> > AssignedReds:0 CompletedMaps:0 CompletedReds:0 ContAlloc:0 ContRel:0
> > HostLocal:0 RackLocal:0
> > 2016-01-19 20:04:15,217 INFO [RMCommunicator Allocator]
> > org.apache.hadoop.mapreduce.v2.app.rm.RMContainerRequestor:
> getResources()
> > for application_1453244277886_0001: ask=3 release= 0 newContainers=0
> > finishedContainers=0 resourcelimit=<memory:1280, vCores:0> knownNMs=1
> > 2016-01-19 20:04:15,217 INFO [RMCommunicator Allocator]
> > org.apache.hadoop.mapreduce.v2.app.rm.RMContainerAllocator: Recalculating
> > schedule, headroom=1280
> > 2016-01-19 20:04:15,217 INFO [RMCommunicator Allocator]
> > org.apache.hadoop.mapreduce.v2.app.rm.RMContainerAllocator: Reduce slow
> > start threshold not met. completedMapsForReduceSlowstart 1
> > 2016-01-19 20:04:16,240 INFO [RMCommunicator Allocator]
> > org.apache.hadoop.mapreduce.v2.app.rm.RMContainerAllocator: Got allocated
> > containers 1
> > 2016-01-19 20:04:16,241 INFO [RMCommunicator Allocator]
> > org.apache.hadoop.mapreduce.v2.app.rm.RMContainerAllocator: Assigned
> > container container_1453244277886_0001_01_000002 to
> > attempt_1453244277886_0001_m_000000_0
> > 2016-01-19 20:04:16,243 INFO [RMCommunicator Allocator]
> > org.apache.hadoop.mapreduce.v2.app.rm.RMContainerAllocator: Recalculating
> > schedule, headroom=768
> > 2016-01-19 20:04:16,243 INFO [RMCommunicator Allocator]
> > org.apache.hadoop.mapreduce.v2.app.rm.RMContainerAllocator: Reduce slow
> > start threshold not met. completedMapsForReduceSlowstart 1
> > 2016-01-19 20:04:16,243 INFO [RMCommunicator Allocator]
> > org.apache.hadoop.mapreduce.v2.app.rm.RMContainerAllocator: After
> > Scheduling: PendingReds:1 ScheduledMaps:0 ScheduledReds:0 AssignedMaps:1
> > AssignedReds:0 CompletedMaps:0 CompletedReds:0 ContAlloc:1 ContRel:0
> > HostLocal:1 RackLocal:0
> > 2016-01-19 20:04:16,291 INFO [AsyncDispatcher event handler]
> > org.apache.hadoop.yarn.util.RackResolver: Resolved localhost to
> > /default-rack
> > 2016-01-19 20:04:16,316 INFO [AsyncDispatcher event handler]
> > org.apache.hadoop.mapreduce.v2.app.job.impl.TaskAttemptImpl: The job-jar
> > file on the remote FS is
> >
> hdfs://hdnode01:54310/tmp/hadoop-yarn/staging/hduser/.staging/job_1453244277886_0001/job.jar
> > 2016-01-19 20:04:16,322 INFO [AsyncDispatcher event handler]
> > org.apache.hadoop.mapreduce.v2.app.job.impl.TaskAttemptImpl: The job-conf
> > file on the remote FS is
> > /tmp/hadoop-yarn/staging/hduser/.staging/job_1453244277886_0001/job.xml
> > 2016-01-19 20:04:16,325 INFO [AsyncDispatcher event handler]
> > org.apache.hadoop.mapreduce.v2.app.job.impl.TaskAttemptImpl: Adding #0
> > tokens and #1 secret keys for NM use for launching container
> > 2016-01-19 20:04:16,325 INFO [AsyncDispatcher event handler]
> > org.apache.hadoop.mapreduce.v2.app.job.impl.TaskAttemptImpl: Size of
> > containertokens_dob is 1
> > 2016-01-19 20:04:16,325 INFO [AsyncDispatcher event handler]
> > org.apache.hadoop.mapreduce.v2.app.job.impl.TaskAttemptImpl: Putting
> shuffle
> > token in serviceData
> > 2016-01-19 20:04:16,350 INFO [AsyncDispatcher event handler]
> > org.apache.hadoop.mapreduce.v2.app.job.impl.TaskAttemptImpl:
> > attempt_1453244277886_0001_m_000000_0 TaskAttempt Transitioned from
> > UNASSIGNED to ASSIGNED
> > 2016-01-19 20:04:16,354 INFO [ContainerLauncher #0]
> > org.apache.hadoop.mapreduce.v2.app.launcher.ContainerLauncherImpl:
> > Processing the event EventType: CONTAINER_REMOTE_LAUNCH for container
> > container_1453244277886_0001_01_000002 taskAttempt
> > attempt_1453244277886_0001_m_000000_0
> > 2016-01-19 20:04:16,356 INFO [ContainerLauncher #0]
> > org.apache.hadoop.mapreduce.v2.app.launcher.ContainerLauncherImpl:
> Launching
> > attempt_1453244277886_0001_m_000000_0
> > 2016-01-19 20:04:16,357 INFO [ContainerLauncher #0]
> > org.apache.hadoop.yarn.client.api.impl.ContainerManagementProtocolProxy:
> > Opening proxy : localhost:35711
> > 2016-01-19 20:04:16,411 INFO [ContainerLauncher #0]
> > org.apache.hadoop.mapreduce.v2.app.launcher.ContainerLauncherImpl:
> Shuffle
> > port returned by ContainerManager for
> attempt_1453244277886_0001_m_000000_0
> > : 13562
> > 2016-01-19 20:04:16,413 INFO [AsyncDispatcher event handler]
> > org.apache.hadoop.mapreduce.v2.app.job.impl.TaskAttemptImpl: TaskAttempt:
> > [attempt_1453244277886_0001_m_000000_0] using containerId:
> > [container_1453244277886_0001_01_000002 on NM: [localhost:35711]
> > 2016-01-19 20:04:16,418 INFO [AsyncDispatcher event handler]
> > org.apache.hadoop.mapreduce.v2.app.job.impl.TaskAttemptImpl:
> > attempt_1453244277886_0001_m_000000_0 TaskAttempt Transitioned from
> ASSIGNED
> > to RUNNING
> > 2016-01-19 20:04:16,419 INFO [AsyncDispatcher event handler]
> > org.apache.hadoop.mapreduce.v2.app.speculate.DefaultSpeculator:
> > ATTEMPT_START task_1453244277886_0001_m_000000
> > 2016-01-19 20:04:16,419 INFO [AsyncDispatcher event handler]
> > org.apache.hadoop.mapreduce.v2.app.job.impl.TaskImpl:
> > task_1453244277886_0001_m_000000 Task Transitioned from SCHEDULED to
> RUNNING
> > 2016-01-19 20:04:17,251 INFO [RMCommunicator Allocator]
> > org.apache.hadoop.mapreduce.v2.app.rm.RMContainerRequestor:
> getResources()
> > for application_1453244277886_0001: ask=3 release= 0 newContainers=0
> > finishedContainers=0 resourcelimit=<memory:768, vCores:-1> knownNMs=1
> > 2016-01-19 20:04:18,269 INFO [RMCommunicator Allocator]
> > org.apache.hadoop.mapreduce.v2.app.rm.RMContainerAllocator: Received
> > completed container container_1453244277886_0001_01_000002
> > 2016-01-19 20:04:18,269 INFO [RMCommunicator Allocator]
> > org.apache.hadoop.mapreduce.v2.app.rm.RMContainerAllocator: Recalculating
> > schedule, headroom=768
> > 2016-01-19 20:04:18,269 INFO [RMCommunicator Allocator]
> > org.apache.hadoop.mapreduce.v2.app.rm.RMContainerAllocator: Reduce slow
> > start threshold not met. completedMapsForReduceSlowstart 1
> > 2016-01-19 20:04:18,270 INFO [RMCommunicator Allocator]
> > org.apache.hadoop.mapreduce.v2.app.rm.RMContainerAllocator: After
> > Scheduling: PendingReds:1 ScheduledMaps:0 ScheduledReds:0 AssignedMaps:0
> > AssignedReds:0 CompletedMaps:0 CompletedReds:0 ContAlloc:1 ContRel:0
> > HostLocal:1 RackLocal:0
> > 2016-01-19 20:04:18,280 INFO [AsyncDispatcher event handler]
> > org.apache.hadoop.mapreduce.v2.app.job.impl.TaskAttemptImpl:
> > attempt_1453244277886_0001_m_000000_0 TaskAttempt Transitioned from
> RUNNING
> > to FAIL_CONTAINER_CLEANUP
> > 2016-01-19 20:04:18,280 INFO [AsyncDispatcher event handler]
> > org.apache.hadoop.mapreduce.v2.app.job.impl.TaskAttemptImpl: Diagnostics
> > report from attempt_1453244277886_0001_m_000000_0: Exception from
> > container-launch: org.apache.hadoop.util.Shell$ExitCodeException:
> > org.apache.hadoop.util.Shell$ExitCodeException:
> >     at org.apache.hadoop.util.Shell.runCommand(Shell.java:505)
> >     at org.apache.hadoop.util.Shell.run(Shell.java:418)
> >     at
> > org.apache.hadoop.util.Shell$ShellCommandExecutor.execute(Shell.java:650)
> >     at
> >
> org.apache.hadoop.yarn.server.nodemanager.DefaultContainerExecutor.launchContainer(DefaultContainerExecutor.java:195)
> >     at
> >
> org.apache.hadoop.yarn.server.nodemanager.containermanager.launcher.ContainerLaunch.call(ContainerLaunch.java:300)
> >     at
> >
> org.apache.hadoop.yarn.server.nodemanager.containermanager.launcher.ContainerLaunch.call(ContainerLaunch.java:81)
> >     at java.util.concurrent.FutureTask.run(FutureTask.java:262)
> >     at
> >
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
> >     at
> >
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
> >     at java.lang.Thread.run(Thread.java:745)
> >
> >
> > Container exited with a non-zero exit code 1
> >
> > 2016-01-19 20:04:18,281 INFO [ContainerLauncher #1]
> > org.apache.hadoop.mapreduce.v2.app.launcher.ContainerLauncherImpl:
> > Processing the event EventType: CONTAINER_REMOTE_CLEANUP for container
> > container_1453244277886_0001_01_000002 taskAttempt
> > attempt_1453244277886_0001_m_000000_0
> > 2016-01-19 20:04:18,281 INFO [ContainerLauncher #1]
> > org.apache.hadoop.mapreduce.v2.app.launcher.ContainerLauncherImpl:
> KILLING
> > attempt_1453244277886_0001_m_000000_0
> > 2016-01-19 20:04:18,299 INFO [AsyncDispatcher event handler]
> > org.apache.hadoop.mapreduce.v2.app.job.impl.TaskAttemptImpl:
> > attempt_1453244277886_0001_m_000000_0 TaskAttempt Transitioned from
> > FAIL_CONTAINER_CLEANUP to FAIL_TASK_CLEANUP
> > 2016-01-19 20:04:18,300 INFO [CommitterEvent Processor #1]
> > org.apache.hadoop.mapreduce.v2.app.commit.CommitterEventHandler:
> Processing
> > the event EventType: TASK_ABORT
> > 2016-01-19 20:04:18,313 WARN [CommitterEvent Processor #1]
> > org.apache.hadoop.mapreduce.lib.output.FileOutputCommitter: Could not
> delete
> >
> hdfs://hdnode01:54310/user/hduser/output/wordcount/_temporary/1/_temporary/attempt_1453244277886_0001_m_000000_0
> > 2016-01-19 20:04:18,317 INFO [AsyncDispatcher event handler]
> > org.apache.hadoop.mapreduce.v2.app.job.impl.TaskAttemptImpl:
> > attempt_1453244277886_0001_m_000000_0 TaskAttempt Transitioned from
> > FAIL_TASK_CLEANUP to FAILED
> > 2016-01-19 20:04:18,326 INFO [AsyncDispatcher event handler]
> > org.apache.hadoop.yarn.util.RackResolver: Resolved localhost to
> > /default-rack
> > 2016-01-19 20:04:18,327 INFO [Thread-50]
> > org.apache.hadoop.mapreduce.v2.app.rm.RMContainerRequestor: 1 failures on
> > node localhost
> > 2016-01-19 20:04:18,329 INFO [AsyncDispatcher event handler]
> > org.apache.hadoop.mapreduce.v2.app.job.impl.TaskAttemptImpl:
> > attempt_1453244277886_0001_m_000000_1 TaskAttempt Transitioned from NEW
> to
> > UNASSIGNED
> > 2016-01-19 20:04:18,329 INFO [Thread-50]
> > org.apache.hadoop.mapreduce.v2.app.rm.RMContainerAllocator: Added
> > attempt_1453244277886_0001_m_000000_1 to list of failed maps
> > 2016-01-19 20:04:19,270 INFO [RMCommunicator Allocator]
> > org.apache.hadoop.mapreduce.v2.app.rm.RMContainerAllocator: Before
> > Scheduling: PendingReds:1 ScheduledMaps:1 ScheduledReds:0 AssignedMaps:0
> > AssignedReds:0 CompletedMaps:0 CompletedReds:0 ContAlloc:1 ContRel:0
> > HostLocal:1 RackLocal:0
> > 2016-01-19 20:04:19,277 INFO [RMCommunicator Allocator]
> > org.apache.hadoop.mapreduce.v2.app.rm.RMContainerRequestor:
> getResources()
> > for application_1453244277886_0001: ask=1 release= 0 newContainers=0
> > finishedContainers=0 resourcelimit=<memory:768, vCores:-1> knownNMs=1
> > 2016-01-19 20:04:19,277 INFO [RMCommunicator Allocator]
> > org.apache.hadoop.mapreduce.v2.app.rm.RMContainerAllocator: Recalculating
> > schedule, headroom=768
> > 2016-01-19 20:04:19,278 INFO [RMCommunicator Allocator]
> > org.apache.hadoop.mapreduce.v2.app.rm.RMContainerAllocator: Reduce slow
> > start threshold not met. completedMapsForReduceSlowstart 1
> > 2016-01-19 20:04:20,285 INFO [RMCommunicator Allocator]
> > org.apache.hadoop.mapreduce.v2.app.rm.RMContainerAllocator: Got allocated
> > containers 1
> > 2016-01-19 20:04:20,285 INFO [RMCommunicator Allocator]
> > org.apache.hadoop.mapreduce.v2.app.rm.RMContainerAllocator: Assigning
> > container Container: [ContainerId:
> container_1453244277886_0001_01_000003,
> > NodeId: localhost:35711, NodeHttpAddress: localhost:8042, Resource:
> > <memory:512, vCores:1>, Priority: 5, Token: Token { kind: ContainerToken,
> > service: 127.0.0.1:35711 }, ] to fast fail map
> > 2016-01-19 20:04:20,286 INFO [RMCommunicator Allocator]
> > org.apache.hadoop.mapreduce.v2.app.rm.RMContainerAllocator: Assigned from
> > earlierFailedMaps
> > 2016-01-19 20:04:20,286 INFO [RMCommunicator Allocator]
> > org.apache.hadoop.mapreduce.v2.app.rm.RMContainerAllocator: Assigned
> > container container_1453244277886_0001_01_000003 to
> > attempt_1453244277886_0001_m_000000_1
> > 2016-01-19 20:04:20,287 INFO [RMCommunicator Allocator]
> > org.apache.hadoop.mapreduce.v2.app.rm.RMContainerAllocator: Recalculating
> > schedule, headroom=768
> > 2016-01-19 20:04:20,287 INFO [RMCommunicator Allocator]
> > org.apache.hadoop.mapreduce.v2.app.rm.RMContainerAllocator: Reduce slow
> > start threshold not met. completedMapsForReduceSlowstart 1
> > 2016-01-19 20:04:20,287 INFO [RMCommunicator Allocator]
> > org.apache.hadoop.mapreduce.v2.app.rm.RMContainerAllocator: After
> > Scheduling: PendingReds:1 ScheduledMaps:0 ScheduledReds:0 AssignedMaps:1
> > AssignedReds:0 CompletedMaps:0 CompletedReds:0 ContAlloc:2 ContRel:0
> > HostLocal:1 RackLocal:0
> > 2016-01-19 20:04:20,287 INFO [AsyncDispatcher event handler]
> > org.apache.hadoop.yarn.util.RackResolver: Resolved localhost to
> > /default-rack
> > 2016-01-19 20:04:20,289 INFO [AsyncDispatcher event handler]
> > org.apache.hadoop.mapreduce.v2.app.job.impl.TaskAttemptImpl:
> > attempt_1453244277886_0001_m_000000_1 TaskAttempt Transitioned from
> > UNASSIGNED to ASSIGNED
> > 2016-01-19 20:04:20,292 INFO [ContainerLauncher #2]
> > org.apache.hadoop.mapreduce.v2.app.launcher.ContainerLauncherImpl:
> > Processing the event EventType: CONTAINER_REMOTE_LAUNCH for container
> > container_1453244277886_0001_01_000003 taskAttempt
> > attempt_1453244277886_0001_m_000000_1
> > 2016-01-19 20:04:20,292 INFO [ContainerLauncher #2]
> > org.apache.hadoop.mapreduce.v2.app.launcher.ContainerLauncherImpl:
> Launching
> > attempt_1453244277886_0001_m_000000_1
> > 2016-01-19 20:04:20,301 INFO [ContainerLauncher #2]
> > org.apache.hadoop.mapreduce.v2.app.launcher.ContainerLauncherImpl:
> Shuffle
> > port returned by ContainerManager for
> attempt_1453244277886_0001_m_000000_1
> > : 13562
> > 2016-01-19 20:04:20,302 INFO [AsyncDispatcher event handler]
> > org.apache.hadoop.mapreduce.v2.app.job.impl.TaskAttemptImpl: TaskAttempt:
> > [attempt_1453244277886_0001_m_000000_1] using containerId:
> > [container_1453244277886_0001_01_000003 on NM: [localhost:35711]
> > 2016-01-19 20:04:20,303 INFO [AsyncDispatcher event handler]
> > org.apache.hadoop.mapreduce.v2.app.job.impl.TaskAttemptImpl:
> > attempt_1453244277886_0001_m_000000_1 TaskAttempt Transitioned from
> ASSIGNED
> > to RUNNING
> > 2016-01-19 20:04:20,304 INFO [AsyncDispatcher event handler]
> > org.apache.hadoop.mapreduce.v2.app.speculate.DefaultSpeculator:
> > ATTEMPT_START task_1453244277886_0001_m_000000
> > 2016-01-19 20:04:21,295 INFO [RMCommunicator Allocator]
> > org.apache.hadoop.mapreduce.v2.app.rm.RMContainerRequestor:
> getResources()
> > for application_1453244277886_0001: ask=1 release= 0 newContainers=0
> > finishedContainers=1 resourcelimit=<memory:768, vCores:-1> knownNMs=1
> > 2016-01-19 20:04:21,296 INFO [RMCommunicator Allocator]
> > org.apache.hadoop.mapreduce.v2.app.rm.RMContainerAllocator: Received
> > completed container container_1453244277886_0001_01_000003
> > 2016-01-19 20:04:21,296 INFO [RMCommunicator Allocator]
> > org.apache.hadoop.mapreduce.v2.app.rm.RMContainerAllocator: Recalculating
> > schedule, headroom=768
> > 2016-01-19 20:04:21,296 INFO [RMCommunicator Allocator]
> > org.apache.hadoop.mapreduce.v2.app.rm.RMContainerAllocator: Reduce slow
> > start threshold not met. completedMapsForReduceSlowstart 1
> > 2016-01-19 20:04:21,297 INFO [RMCommunicator Allocator]
> > org.apache.hadoop.mapreduce.v2.app.rm.RMContainerAllocator: After
> > Scheduling: PendingReds:1 ScheduledMaps:0 ScheduledReds:0 AssignedMaps:0
> > AssignedReds:0 CompletedMaps:0 CompletedReds:0 ContAlloc:2 ContRel:0
> > HostLocal:1 RackLocal:0
> > 2016-01-19 20:04:21,297 INFO [AsyncDispatcher event handler]
> > org.apache.hadoop.mapreduce.v2.app.job.impl.TaskAttemptImpl:
> > attempt_1453244277886_0001_m_000000_1 TaskAttempt Transitioned from
> RUNNING
> > to FAIL_CONTAINER_CLEANUP
> > 2016-01-19 20:04:21,298 INFO [AsyncDispatcher event handler]
> > org.apache.hadoop.mapreduce.v2.app.job.impl.TaskAttemptImpl: Diagnostics
> > report from attempt_1453244277886_0001_m_000000_1: Exception from
> > container-launch: org.apache.hadoop.util.Shell$ExitCodeException:
> > org.apache.hadoop.util.Shell$ExitCodeException:
> >     at org.apache.hadoop.util.Shell.runCommand(Shell.java:505)
> >     at org.apache.hadoop.util.Shell.run(Shell.java:418)
> >     at
> > org.apache.hadoop.util.Shell$ShellCommandExecutor.execute(Shell.java:650)
> >     at
> >
> org.apache.hadoop.yarn.server.nodemanager.DefaultContainerExecutor.launchContainer(DefaultContainerExecutor.java:195)
> >     at
> >
> org.apache.hadoop.yarn.server.nodemanager.containermanager.launcher.ContainerLaunch.call(ContainerLaunch.java:300)
> >     at
> >
> org.apache.hadoop.yarn.server.nodemanager.containermanager.launcher.ContainerLaunch.call(ContainerLaunch.java:81)
> >     at java.util.concurrent.FutureTask.run(FutureTask.java:262)
> >     at
> >
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
> >     at
> >
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
> >     at java.lang.Thread.run(Thread.java:745)
> >
> >
> > Container exited with a non-zero exit code 1
> >
> > 2016-01-19 20:04:21,300 INFO [ContainerLauncher #3]
> > org.apache.hadoop.mapreduce.v2.app.launcher.ContainerLauncherImpl:
> > Processing the event EventType: CONTAINER_REMOTE_CLEANUP for container
> > container_1453244277886_0001_01_000003 taskAttempt
> > attempt_1453244277886_0001_m_000000_1
> > 2016-01-19 20:04:21,301 INFO [ContainerLauncher #3]
> > org.apache.hadoop.mapreduce.v2.app.launcher.ContainerLauncherImpl:
> KILLING
> > attempt_1453244277886_0001_m_000000_1
> > 2016-01-19 20:04:21,307 INFO [AsyncDispatcher event handler]
> > org.apache.hadoop.mapreduce.v2.app.job.impl.TaskAttemptImpl:
> > attempt_1453244277886_0001_m_000000_1 TaskAttempt Transitioned from
> > FAIL_CONTAINER_CLEANUP to FAIL_TASK_CLEANUP
> > 2016-01-19 20:04:21,308 INFO [CommitterEvent Processor #2]
> > org.apache.hadoop.mapreduce.v2.app.commit.CommitterEventHandler:
> Processing
> > the event EventType: TASK_ABORT
> > 2016-01-19 20:04:21,312 WARN [CommitterEvent Processor #2]
> > org.apache.hadoop.mapreduce.lib.output.FileOutputCommitter: Could not
> delete
> >
> hdfs://hdnode01:54310/user/hduser/output/wordcount/_temporary/1/_temporary/attempt_1453244277886_0001_m_000000_1
> > 2016-01-19 20:04:21,312 INFO [AsyncDispatcher event handler]
> > org.apache.hadoop.mapreduce.v2.app.job.impl.TaskAttemptImpl:
> > attempt_1453244277886_0001_m_000000_1 TaskAttempt Transitioned from
> > FAIL_TASK_CLEANUP to FAILED
> > 2016-01-19 20:04:21,313 INFO [AsyncDispatcher event handler]
> > org.apache.hadoop.yarn.util.RackResolver: Resolved localhost to
> > /default-rack
> > 2016-01-19 20:04:21,313 INFO [Thread-50]
> > org.apache.hadoop.mapreduce.v2.app.rm.RMContainerRequestor: 2 failures on
> > node localhost
> > 2016-01-19 20:04:21,313 INFO [AsyncDispatcher event handler]
> > org.apache.hadoop.mapreduce.v2.app.job.impl.TaskAttemptImpl:
> > attempt_1453244277886_0001_m_000000_2 TaskAttempt Transitioned from NEW
> to
> > UNASSIGNED
> > 2016-01-19 20:04:21,314 INFO [Thread-50]
> > org.apache.hadoop.mapreduce.v2.app.rm.RMContainerAllocator: Added
> > attempt_1453244277886_0001_m_000000_2 to list of failed maps
> > 2016-01-19 20:04:22,297 INFO [RMCommunicator Allocator]
> > org.apache.hadoop.mapreduce.v2.app.rm.RMContainerAllocator: Before
> > Scheduling: PendingReds:1 ScheduledMaps:1 ScheduledReds:0 AssignedMaps:0
> > AssignedReds:0 CompletedMaps:0 CompletedReds:0 ContAlloc:2 ContRel:0
> > HostLocal:1 RackLocal:0
> > 2016-01-19 20:04:22,304 INFO [RMCommunicator Allocator]
> > org.apache.hadoop.mapreduce.v2.app.rm.RMContainerRequestor:
> getResources()
> > for application_1453244277886_0001: ask=1 release= 0 newContainers=0
> > finishedContainers=0 resourcelimit=<memory:768, vCores:-1> knownNMs=1
> > 2016-01-19 20:04:22,305 INFO [RMCommunicator Allocator]
> > org.apache.hadoop.mapreduce.v2.app.rm.RMContainerAllocator: Recalculating
> > schedule, headroom=768
> > 2016-01-19 20:04:22,305 INFO [RMCommunicator Allocator]
> > org.apache.hadoop.mapreduce.v2.app.rm.RMContainerAllocator: Reduce slow
> > start threshold not met. completedMapsForReduceSlowstart 1
> > 2016-01-19 20:04:23,316 INFO [RMCommunicator Allocator]
> > org.apache.hadoop.mapreduce.v2.app.rm.RMContainerAllocator: Got allocated
> > containers 1
> > 2016-01-19 20:04:23,316 INFO [RMCommunicator Allocator]
> > org.apache.hadoop.mapreduce.v2.app.rm.RMContainerAllocator: Assigning
> > container Container: [ContainerId:
> container_1453244277886_0001_01_000004,
> > NodeId: localhost:35711, NodeHttpAddress: localhost:8042, Resource:
> > <memory:512, vCores:1>, Priority: 5, Token: Token { kind: ContainerToken,
> > service: 127.0.0.1:35711 }, ] to fast fail map
> > 2016-01-19 20:04:23,317 INFO [RMCommunicator Allocator]
> > org.apache.hadoop.mapreduce.v2.app.rm.RMContainerAllocator: Assigned from
> > earlierFailedMaps
> > 2016-01-19 20:04:23,317 INFO [RMCommunicator Allocator]
> > org.apache.hadoop.mapreduce.v2.app.rm.RMContainerAllocator: Assigned
> > container container_1453244277886_0001_01_000004 to
> > attempt_1453244277886_0001_m_000000_2
> > 2016-01-19 20:04:23,318 INFO [RMCommunicator Allocator]
> > org.apache.hadoop.mapreduce.v2.app.rm.RMContainerAllocator: Recalculating
> > schedule, headroom=768
> > 2016-01-19 20:04:23,318 INFO [RMCommunicator Allocator]
> > org.apache.hadoop.mapreduce.v2.app.rm.RMContainerAllocator: Reduce slow
> > start threshold not met. completedMapsForReduceSlowstart 1
> > 2016-01-19 20:04:23,318 INFO [RMCommunicator Allocator]
> > org.apache.hadoop.mapreduce.v2.app.rm.RMContainerAllocator: After
> > Scheduling: PendingReds:1 ScheduledMaps:0 ScheduledReds:0 AssignedMaps:1
> > AssignedReds:0 CompletedMaps:0 CompletedReds:0 ContAlloc:3 ContRel:0
> > HostLocal:1 RackLocal:0
> > 2016-01-19 20:04:23,318 INFO [AsyncDispatcher event handler]
> > org.apache.hadoop.yarn.util.RackResolver: Resolved localhost to
> > /default-rack
> > 2016-01-19 20:04:23,320 INFO [AsyncDispatcher event handler]
> > org.apache.hadoop.mapreduce.v2.app.job.impl.TaskAttemptImpl:
> > attempt_1453244277886_0001_m_000000_2 TaskAttempt Transitioned from
> > UNASSIGNED to ASSIGNED
> > 2016-01-19 20:04:23,323 INFO [ContainerLauncher #4]
> > org.apache.hadoop.mapreduce.v2.app.launcher.ContainerLauncherImpl:
> > Processing the event EventType: CONTAINER_REMOTE_LAUNCH for container
> > container_1453244277886_0001_01_000004 taskAttempt
> > attempt_1453244277886_0001_m_000000_2
> > 2016-01-19 20:04:23,323 INFO [ContainerLauncher #4]
> > org.apache.hadoop.mapreduce.v2.app.launcher.ContainerLauncherImpl:
> Launching
> > attempt_1453244277886_0001_m_000000_2
> > 2016-01-19 20:04:23,334 INFO [ContainerLauncher #4]
> > org.apache.hadoop.mapreduce.v2.app.launcher.ContainerLauncherImpl:
> Shuffle
> > port returned by ContainerManager for
> attempt_1453244277886_0001_m_000000_2
> > : 13562
> > 2016-01-19 20:04:23,335 INFO [AsyncDispatcher event handler]
> > org.apache.hadoop.mapreduce.v2.app.job.impl.TaskAttemptImpl: TaskAttempt:
> > [attempt_1453244277886_0001_m_000000_2] using containerId:
> > [container_1453244277886_0001_01_000004 on NM: [localhost:35711]
> > 2016-01-19 20:04:23,335 INFO [AsyncDispatcher event handler]
> > org.apache.hadoop.mapreduce.v2.app.job.impl.TaskAttemptImpl:
> > attempt_1453244277886_0001_m_000000_2 TaskAttempt Transitioned from
> ASSIGNED
> > to RUNNING
> > 2016-01-19 20:04:23,335 INFO [AsyncDispatcher event handler]
> > org.apache.hadoop.mapreduce.v2.app.speculate.DefaultSpeculator:
> > ATTEMPT_START task_1453244277886_0001_m_000000
> > 2016-01-19 20:04:24,326 INFO [RMCommunicator Allocator]
> > org.apache.hadoop.mapreduce.v2.app.rm.RMContainerRequestor:
> getResources()
> > for application_1453244277886_0001: ask=1 release= 0 newContainers=0
> > finishedContainers=1 resourcelimit=<memory:768, vCores:-1> knownNMs=1
> > 2016-01-19 20:04:24,327 INFO [RMCommunicator Allocator]
> > org.apache.hadoop.mapreduce.v2.app.rm.RMContainerAllocator: Received
> > completed container container_1453244277886_0001_01_000004
> > 2016-01-19 20:04:24,327 INFO [RMCommunicator Allocator]
> > org.apache.hadoop.mapreduce.v2.app.rm.RMContainerAllocator: Recalculating
> > schedule, headroom=768
> > 2016-01-19 20:04:24,327 INFO [RMCommunicator Allocator]
> > org.apache.hadoop.mapreduce.v2.app.rm.RMContainerAllocator: Reduce slow
> > start threshold not met. completedMapsForReduceSlowstart 1
> > 2016-01-19 20:04:24,328 INFO [RMCommunicator Allocator]
> > org.apache.hadoop.mapreduce.v2.app.rm.RMContainerAllocator: After
> > Scheduling: PendingReds:1 ScheduledMaps:0 ScheduledReds:0 AssignedMaps:0
> > AssignedReds:0 CompletedMaps:0 CompletedReds:0 ContAlloc:3 ContRel:0
> > HostLocal:1 RackLocal:0
> > 2016-01-19 20:04:24,328 INFO [AsyncDispatcher event handler]
> > org.apache.hadoop.mapreduce.v2.app.job.impl.TaskAttemptImpl:
> > attempt_1453244277886_0001_m_000000_2 TaskAttempt Transitioned from
> RUNNING
> > to FAIL_CONTAINER_CLEANUP
> > 2016-01-19 20:04:24,328 INFO [AsyncDispatcher event handler]
> > org.apache.hadoop.mapreduce.v2.app.job.impl.TaskAttemptImpl: Diagnostics
> > report from attempt_1453244277886_0001_m_000000_2: Exception from
> > container-launch: org.apache.hadoop.util.Shell$ExitCodeException:
> > org.apache.hadoop.util.Shell$ExitCodeException:
> >     at org.apache.hadoop.util.Shell.runCommand(Shell.java:505)
> >     at org.apache.hadoop.util.Shell.run(Shell.java:418)
> >     at
> > org.apache.hadoop.util.Shell$ShellCommandExecutor.execute(Shell.java:650)
> >     at
> >
> org.apache.hadoop.yarn.server.nodemanager.DefaultContainerExecutor.launchContainer(DefaultContainerExecutor.java:195)
> >     at
> >
> org.apache.hadoop.yarn.server.nodemanager.containermanager.launcher.ContainerLaunch.call(ContainerLaunch.java:300)
> >     at
> >
> org.apache.hadoop.yarn.server.nodemanager.containermanager.launcher.ContainerLaunch.call(ContainerLaunch.java:81)
> >     at java.util.concurrent.FutureTask.run(FutureTask.java:262)
> >     at
> >
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
> >     at
> >
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
> >     at java.lang.Thread.run(Thread.java:745)
> >
> >
> > Container exited with a non-zero exit code 1
> >
> > 2016-01-19 20:04:24,331 INFO [ContainerLauncher #5]
> > org.apache.hadoop.mapreduce.v2.app.launcher.ContainerLauncherImpl:
> > Processing the event EventType: CONTAINER_REMOTE_CLEANUP for container
> > container_1453244277886_0001_01_000004 taskAttempt
> > attempt_1453244277886_0001_m_000000_2
> > 2016-01-19 20:04:24,331 INFO [ContainerLauncher #5]
> > org.apache.hadoop.mapreduce.v2.app.launcher.ContainerLauncherImpl:
> KILLING
> > attempt_1453244277886_0001_m_000000_2
> > 2016-01-19 20:04:24,335 INFO [AsyncDispatcher event handler]
> > org.apache.hadoop.mapreduce.v2.app.job.impl.TaskAttemptImpl:
> > attempt_1453244277886_0001_m_000000_2 TaskAttempt Transitioned from
> > FAIL_CONTAINER_CLEANUP to FAIL_TASK_CLEANUP
> > 2016-01-19 20:04:24,336 INFO [CommitterEvent Processor #3]
> > org.apache.hadoop.mapreduce.v2.app.commit.CommitterEventHandler:
> Processing
> > the event EventType: TASK_ABORT
> > 2016-01-19 20:04:24,340 WARN [CommitterEvent Processor #3]
> > org.apache.hadoop.mapreduce.lib.output.FileOutputCommitter: Could not
> delete
> >
> hdfs://hdnode01:54310/user/hduser/output/wordcount/_temporary/1/_temporary/attempt_1453244277886_0001_m_000000_2
> > 2016-01-19 20:04:24,341 INFO [AsyncDispatcher event handler]
> > org.apache.hadoop.mapreduce.v2.app.job.impl.TaskAttemptImpl:
> > attempt_1453244277886_0001_m_000000_2 TaskAttempt Transitioned from
> > FAIL_TASK_CLEANUP to FAILED
> > 2016-01-19 20:04:24,341 INFO [AsyncDispatcher event handler]
> > org.apache.hadoop.yarn.util.RackResolver: Resolved localhost to
> > /default-rack
> > 2016-01-19 20:04:24,342 INFO [Thread-50]
> > org.apache.hadoop.mapreduce.v2.app.rm.RMContainerRequestor: 3 failures on
> > node localhost
> > 2016-01-19 20:04:24,342 INFO [Thread-50]
> > org.apache.hadoop.mapreduce.v2.app.rm.RMContainerRequestor: Blacklisted
> host
> > localhost
> > 2016-01-19 20:04:24,342 INFO [AsyncDispatcher event handler]
> > org.apache.hadoop.mapreduce.v2.app.job.impl.TaskAttemptImpl:
> > attempt_1453244277886_0001_m_000000_3 TaskAttempt Transitioned from NEW
> to
> > UNASSIGNED
> > 2016-01-19 20:04:24,343 INFO [Thread-50]
> > org.apache.hadoop.mapreduce.v2.app.rm.RMContainerAllocator: Added
> > attempt_1453244277886_0001_m_000000_3 to list of failed maps
> > 2016-01-19 20:04:25,328 INFO [RMCommunicator Allocator]
> > org.apache.hadoop.mapreduce.v2.app.rm.RMContainerAllocator: Before
> > Scheduling: PendingReds:1 ScheduledMaps:1 ScheduledReds:0 AssignedMaps:0
> > AssignedReds:0 CompletedMaps:0 CompletedReds:0 ContAlloc:3 ContRel:0
> > HostLocal:1 RackLocal:0
> > 2016-01-19 20:04:25,336 INFO [RMCommunicator Allocator]
> > org.apache.hadoop.mapreduce.v2.app.rm.RMContainerRequestor:
> getResources()
> > for application_1453244277886_0001: ask=1 release= 0 newContainers=0
> > finishedContainers=0 resourcelimit=<memory:768, vCores:-1> knownNMs=1
> > 2016-01-19 20:04:25,336 INFO [RMCommunicator Allocator]
> > org.apache.hadoop.mapreduce.v2.app.rm.RMContainerRequestor: Update the
> > blacklist for application_1453244277886_0001: blacklistAdditions=1
> > blacklistRemovals=0
> > 2016-01-19 20:04:25,337 INFO [RMCommunicator Allocator]
> > org.apache.hadoop.mapreduce.v2.app.rm.RMContainerRequestor: Ignore
> > blacklisting set to true. Known: 1, Blacklisted: 1, 100%
> > 2016-01-19 20:04:25,337 INFO [RMCommunicator Allocator]
> > org.apache.hadoop.mapreduce.v2.app.rm.RMContainerAllocator: Recalculating
> > schedule, headroom=768
> > 2016-01-19 20:04:25,337 INFO [RMCommunicator Allocator]
> > org.apache.hadoop.mapreduce.v2.app.rm.RMContainerAllocator: Reduce slow
> > start threshold not met. completedMapsForReduceSlowstart 1
> > 2016-01-19 20:04:26,342 INFO [RMCommunicator Allocator]
> > org.apache.hadoop.mapreduce.v2.app.rm.RMContainerRequestor: Update the
> > blacklist for application_1453244277886_0001: blacklistAdditions=0
> > blacklistRemovals=1
> > 2016-01-19 20:04:26,342 INFO [RMCommunicator Allocator]
> > org.apache.hadoop.mapreduce.v2.app.rm.RMContainerAllocator: Recalculating
> > schedule, headroom=768
> > 2016-01-19 20:04:26,342 INFO [RMCommunicator Allocator]
> > org.apache.hadoop.mapreduce.v2.app.rm.RMContainerAllocator: Reduce slow
> > start threshold not met. completedMapsForReduceSlowstart 1
> > 2016-01-19 20:04:27,351 INFO [RMCommunicator Allocator]
> > org.apache.hadoop.mapreduce.v2.app.rm.RMContainerAllocator: Got allocated
> > containers 1
> > 2016-01-19 20:04:27,352 INFO [RMCommunicator Allocator]
> > org.apache.hadoop.mapreduce.v2.app.rm.RMContainerAllocator: Assigning
> > container Container: [ContainerId:
> container_1453244277886_0001_01_000005,
> > NodeId: localhost:35711, NodeHttpAddress: localhost:8042, Resource:
> > <memory:512, vCores:1>, Priority: 5, Token: Token { kind: ContainerToken,
> > service: 127.0.0.1:35711 }, ] to fast fail map
> > 2016-01-19 20:04:27,352 INFO [RMCommunicator Allocator]
> > org.apache.hadoop.mapreduce.v2.app.rm.RMContainerAllocator: Assigned from
> > earlierFailedMaps
> > 2016-01-19 20:04:27,352 INFO [RMCommunicator Allocator]
> > org.apache.hadoop.mapreduce.v2.app.rm.RMContainerAllocator: Assigned
> > container container_1453244277886_0001_01_000005 to
> > attempt_1453244277886_0001_m_000000_3
> > 2016-01-19 20:04:27,353 INFO [RMCommunicator Allocator]
> > org.apache.hadoop.mapreduce.v2.app.rm.RMContainerAllocator: Recalculating
> > schedule, headroom=768
> > 2016-01-19 20:04:27,353 INFO [RMCommunicator Allocator]
> > org.apache.hadoop.mapreduce.v2.app.rm.RMContainerAllocator: Reduce slow
> > start threshold not met. completedMapsForReduceSlowstart 1
> > 2016-01-19 20:04:27,353 INFO [AsyncDispatcher event handler]
> > org.apache.hadoop.yarn.util.RackResolver: Resolved localhost to
> > /default-rack
> > 2016-01-19 20:04:27,353 INFO [RMCommunicator Allocator]
> > org.apache.hadoop.mapreduce.v2.app.rm.RMContainerAllocator: After
> > Scheduling: PendingReds:1 ScheduledMaps:0 ScheduledReds:0 AssignedMaps:1
> > AssignedReds:0 CompletedMaps:0 CompletedReds:0 ContAlloc:4 ContRel:0
> > HostLocal:1 RackLocal:0
> > 2016-01-19 20:04:27,354 INFO [AsyncDispatcher event handler]
> > org.apache.hadoop.mapreduce.v2.app.job.impl.TaskAttemptImpl:
> > attempt_1453244277886_0001_m_000000_3 TaskAttempt Transitioned from
> > UNASSIGNED to ASSIGNED
> > 2016-01-19 20:04:27,355 INFO [ContainerLauncher #6]
> > org.apache.hadoop.mapreduce.v2.app.launcher.ContainerLauncherImpl:
> > Processing the event EventType: CONTAINER_REMOTE_LAUNCH for container
> > container_1453244277886_0001_01_000005 taskAttempt
> > attempt_1453244277886_0001_m_000000_3
> > 2016-01-19 20:04:27,355 INFO [ContainerLauncher #6]
> > org.apache.hadoop.mapreduce.v2.app.launcher.ContainerLauncherImpl:
> Launching
> > attempt_1453244277886_0001_m_000000_3
> > 2016-01-19 20:04:27,365 INFO [ContainerLauncher #6]
> > org.apache.hadoop.mapreduce.v2.app.launcher.ContainerLauncherImpl:
> Shuffle
> > port returned by ContainerManager for
> attempt_1453244277886_0001_m_000000_3
> > : 13562
> > 2016-01-19 20:04:27,365 INFO [AsyncDispatcher event handler]
> > org.apache.hadoop.mapreduce.v2.app.job.impl.TaskAttemptImpl: TaskAttempt:
> > [attempt_1453244277886_0001_m_000000_3] using containerId:
> > [container_1453244277886_0001_01_000005 on NM: [localhost:35711]
> > 2016-01-19 20:04:27,366 INFO [AsyncDispatcher event handler]
> > org.apache.hadoop.mapreduce.v2.app.job.impl.TaskAttemptImpl:
> > attempt_1453244277886_0001_m_000000_3 TaskAttempt Transitioned from
> ASSIGNED
> > to RUNNING
> > 2016-01-19 20:04:27,366 INFO [AsyncDispatcher event handler]
> > org.apache.hadoop.mapreduce.v2.app.speculate.DefaultSpeculator:
> > ATTEMPT_START task_1453244277886_0001_m_000000
> > 2016-01-19 20:04:28,361 INFO [RMCommunicator Allocator]
> > org.apache.hadoop.mapreduce.v2.app.rm.RMContainerRequestor:
> getResources()
> > for application_1453244277886_0001: ask=1 release= 0 newContainers=0
> > finishedContainers=1 resourcelimit=<memory:768, vCores:-1> knownNMs=1
> > 2016-01-19 20:04:28,361 INFO [RMCommunicator Allocator]
> > org.apache.hadoop.mapreduce.v2.app.rm.RMContainerAllocator: Received
> > completed container container_1453244277886_0001_01_000005
> > 2016-01-19 20:04:28,361 INFO [RMCommunicator Allocator]
> > org.apache.hadoop.mapreduce.v2.app.rm.RMContainerAllocator: Recalculating
> > schedule, headroom=768
> > 2016-01-19 20:04:28,361 INFO [RMCommunicator Allocator]
> > org.apache.hadoop.mapreduce.v2.app.rm.RMContainerAllocator: Reduce slow
> > start threshold not met. completedMapsForReduceSlowstart 1
> > 2016-01-19 20:04:28,362 INFO [RMCommunicator Allocator]
> > org.apache.hadoop.mapreduce.v2.app.rm.RMContainerAllocator: After
> > Scheduling: PendingReds:1 ScheduledMaps:0 ScheduledReds:0 AssignedMaps:0
> > AssignedReds:0 CompletedMaps:0 CompletedReds:0 ContAlloc:4 ContRel:0
> > HostLocal:1 RackLocal:0
> > 2016-01-19 20:04:28,362 INFO [AsyncDispatcher event handler]
> > org.apache.hadoop.mapreduce.v2.app.job.impl.TaskAttemptImpl:
> > attempt_1453244277886_0001_m_000000_3 TaskAttempt Transitioned from
> RUNNING
> > to FAIL_CONTAINER_CLEANUP
> > 2016-01-19 20:04:28,362 INFO [AsyncDispatcher event handler]
> > org.apache.hadoop.mapreduce.v2.app.job.impl.TaskAttemptImpl: Diagnostics
> > report from attempt_1453244277886_0001_m_000000_3: Exception from
> > container-launch: org.apache.hadoop.util.Shell$ExitCodeException:
> > org.apache.hadoop.util.Shell$ExitCodeException:
> >     at org.apache.hadoop.util.Shell.runCommand(Shell.java:505)
> >     at org.apache.hadoop.util.Shell.run(Shell.java:418)
> >     at
> > org.apache.hadoop.util.Shell$ShellCommandExecutor.execute(Shell.java:650)
> >     at
> >
> org.apache.hadoop.yarn.server.nodemanager.DefaultContainerExecutor.launchContainer(DefaultContainerExecutor.java:195)
> >     at
> >
> org.apache.hadoop.yarn.server.nodemanager.containermanager.launcher.ContainerLaunch.call(ContainerLaunch.java:300)
> >     at
> >
> org.apache.hadoop.yarn.server.nodemanager.containermanager.launcher.ContainerLaunch.call(ContainerLaunch.java:81)
> >     at java.util.concurrent.FutureTask.run(FutureTask.java:262)
> >     at
> >
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
> >     at
> >
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
> >     at java.lang.Thread.run(Thread.java:745)
> >
> >
> > Container exited with a non-zero exit code 1
> >
> > 2016-01-19 20:04:28,364 INFO [ContainerLauncher #7]
> > org.apache.hadoop.mapreduce.v2.app.launcher.ContainerLauncherImpl:
> > Processing the event EventType: CONTAINER_REMOTE_CLEANUP for container
> > container_1453244277886_0001_01_000005 taskAttempt
> > attempt_1453244277886_0001_m_000000_3
> > 2016-01-19 20:04:28,365 INFO [ContainerLauncher #7]
> > org.apache.hadoop.mapreduce.v2.app.launcher.ContainerLauncherImpl:
> KILLING
> > attempt_1453244277886_0001_m_000000_3
> > 2016-01-19 20:04:28,373 INFO [AsyncDispatcher event handler]
> > org.apache.hadoop.mapreduce.v2.app.job.impl.TaskAttemptImpl:
> > attempt_1453244277886_0001_m_000000_3 TaskAttempt Transitioned from
> > FAIL_CONTAINER_CLEANUP to FAIL_TASK_CLEANUP
> > 2016-01-19 20:04:28,374 INFO [CommitterEvent Processor #4]
> > org.apache.hadoop.mapreduce.v2.app.commit.CommitterEventHandler:
> Processing
> > the event EventType: TASK_ABORT
> > 2016-01-19 20:04:28,377 WARN [CommitterEvent Processor #4]
> > org.apache.hadoop.mapreduce.lib.output.FileOutputCommitter: Could not
> delete
> >
> hdfs://hdnode01:54310/user/hduser/output/wordcount/_temporary/1/_temporary/attempt_1453244277886_0001_m_000000_3
> > 2016-01-19 20:04:28,378 INFO [AsyncDispatcher event handler]
> > org.apache.hadoop.mapreduce.v2.app.job.impl.TaskAttemptImpl:
> > attempt_1453244277886_0001_m_000000_3 TaskAttempt Transitioned from
> > FAIL_TASK_CLEANUP to FAILED
> > 2016-01-19 20:04:28,380 INFO [AsyncDispatcher event handler]
> > org.apache.hadoop.mapreduce.v2.app.job.impl.TaskImpl:
> > task_1453244277886_0001_m_000000 Task Transitioned from RUNNING to FAILED
> > 2016-01-19 20:04:28,381 INFO [AsyncDispatcher event handler]
> > org.apache.hadoop.mapreduce.v2.app.job.impl.JobImpl: Num completed
> Tasks: 1
> > 2016-01-19 20:04:28,381 INFO [AsyncDispatcher event handler]
> > org.apache.hadoop.mapreduce.v2.app.job.impl.JobImpl: Job failed as tasks
> > failed. failedMaps:1 failedReduces:0
> > 2016-01-19 20:04:28,382 INFO [AsyncDispatcher event handler]
> > org.apache.hadoop.mapreduce.v2.app.job.impl.JobImpl:
> > job_1453244277886_0001Job Transitioned from RUNNING to FAIL_WAIT
> > 2016-01-19 20:04:28,383 INFO [AsyncDispatcher event handler]
> > org.apache.hadoop.mapreduce.v2.app.job.impl.TaskImpl:
> > task_1453244277886_0001_r_000000 Task Transitioned from SCHEDULED to
> > KILL_WAIT
> > 2016-01-19 20:04:28,383 INFO [AsyncDispatcher event handler]
> > org.apache.hadoop.mapreduce.v2.app.job.impl.TaskAttemptImpl:
> > attempt_1453244277886_0001_r_000000_0 TaskAttempt Transitioned from
> > UNASSIGNED to KILLED
> > 2016-01-19 20:04:28,383 INFO [Thread-50]
> > org.apache.hadoop.mapreduce.v2.app.rm.RMContainerAllocator: Processing
> the
> > event EventType: CONTAINER_DEALLOCATE
> > 2016-01-19 20:04:28,383 ERROR [Thread-50]
> > org.apache.hadoop.mapreduce.v2.app.rm.RMContainerAllocator: Could not
> > deallocate container for task attemptId
> > attempt_1453244277886_0001_r_000000_0
> > 2016-01-19 20:04:28,383 INFO [AsyncDispatcher event handler]
> > org.apache.hadoop.mapreduce.v2.app.job.impl.TaskImpl:
> > task_1453244277886_0001_r_000000 Task Transitioned from KILL_WAIT to
> KILLED
> > 2016-01-19 20:04:28,384 INFO [AsyncDispatcher event handler]
> > org.apache.hadoop.mapreduce.v2.app.job.impl.JobImpl:
> > job_1453244277886_0001Job Transitioned from FAIL_WAIT to FAIL_ABORT
> > 2016-01-19 20:04:28,390 INFO [CommitterEvent Processor #0]
> > org.apache.hadoop.mapreduce.v2.app.commit.CommitterEventHandler:
> Processing
> > the event EventType: JOB_ABORT
> > 2016-01-19 20:04:28,432 INFO [AsyncDispatcher event handler]
> > org.apache.hadoop.mapreduce.v2.app.job.impl.JobImpl:
> > job_1453244277886_0001Job Transitioned from FAIL_ABORT to FAILED
> > 2016-01-19 20:04:28,433 INFO [Thread-61]
> > org.apache.hadoop.mapreduce.v2.app.MRAppMaster: We are finishing cleanly
> so
> > this is the last retry
> > 2016-01-19 20:04:28,433 INFO [Thread-61]
> > org.apache.hadoop.mapreduce.v2.app.MRAppMaster: Notify RMCommunicator
> > isAMLastRetry: true
> > 2016-01-19 20:04:28,433 INFO [Thread-61]
> > org.apache.hadoop.mapreduce.v2.app.rm.RMContainerAllocator:
> RMCommunicator
> > notified that shouldUnregistered is: true
> > 2016-01-19 20:04:28,433 INFO [Thread-61]
> > org.apache.hadoop.mapreduce.v2.app.MRAppMaster: Notify JHEH
> isAMLastRetry:
> > true
> > 2016-01-19 20:04:28,433 INFO [Thread-61]
> > org.apache.hadoop.mapreduce.jobhistory.JobHistoryEventHandler:
> > JobHistoryEventHandler notified that forceJobCompletion is true
> > 2016-01-19 20:04:28,434 INFO [Thread-61]
> > org.apache.hadoop.mapreduce.v2.app.MRAppMaster: Calling stop for all the
> > services
> > 2016-01-19 20:04:28,435 INFO [Thread-61]
> > org.apache.hadoop.mapreduce.jobhistory.JobHistoryEventHandler: Stopping
> > JobHistoryEventHandler. Size of the outstanding queue size is 0
> > 2016-01-19 20:04:29,362 INFO [RMCommunicator Allocator]
> > org.apache.hadoop.mapreduce.v2.app.rm.RMContainerAllocator: Before
> > Scheduling: PendingReds:1 ScheduledMaps:0 ScheduledReds:0 AssignedMaps:0
> > AssignedReds:0 CompletedMaps:1 CompletedReds:0 ContAlloc:4 ContRel:0
> > HostLocal:1 RackLocal:0
> > 2016-01-19 20:04:29,362 INFO [eventHandlingThread]
> > org.apache.hadoop.mapreduce.jobhistory.JobHistoryEventHandler: Copying
> >
> hdfs://hdnode01:54310/tmp/hadoop-yarn/staging/hduser/.staging/job_1453244277886_0001/job_1453244277886_0001_1.jhist
> > to
> >
> hdfs://hdnode01:54310/tmp/hadoop-yarn/staging/history/done_intermediate/hduser/job_1453244277886_0001-1453244648033-hduser-word+count-1453244668381-0-0-FAILED-default-1453244654166.jhist_tmp
> > 2016-01-19 20:04:29,366 INFO [RMCommunicator Allocator]
> > org.apache.hadoop.mapreduce.v2.app.rm.RMContainerAllocator: Recalculating
> > schedule, headroom=768
> > 2016-01-19 20:04:29,366 INFO [RMCommunicator Allocator]
> > org.apache.hadoop.mapreduce.v2.app.rm.RMContainerAllocator: Reduce slow
> > start threshold reached. Scheduling reduces.
> > 2016-01-19 20:04:29,366 INFO [RMCommunicator Allocator]
> > org.apache.hadoop.mapreduce.v2.app.rm.RMContainerAllocator: All maps
> > assigned. Ramping up all remaining reduces:1
> > 2016-01-19 20:04:29,366 INFO [RMCommunicator Allocator]
> > org.apache.hadoop.mapreduce.v2.app.rm.RMContainerAllocator: After
> > Scheduling: PendingReds:0 ScheduledMaps:0 ScheduledReds:1 AssignedMaps:0
> > AssignedReds:0 CompletedMaps:1 CompletedReds:0 ContAlloc:4 ContRel:0
> > HostLocal:1 RackLocal:0
> > 2016-01-19 20:04:29,544 INFO [eventHandlingThread]
> > org.apache.hadoop.mapreduce.jobhistory.JobHistoryEventHandler: Copied to
> > done location:
> >
> hdfs://hdnode01:54310/tmp/hadoop-yarn/staging/history/done_intermediate/hduser/job_1453244277886_0001-1453244648033-hduser-word+count-1453244668381-0-0-FAILED-default-1453244654166.jhist_tmp
> > 2016-01-19 20:04:29,598 INFO [eventHandlingThread]
> > org.apache.hadoop.mapreduce.jobhistory.JobHistoryEventHandler: Copying
> >
> hdfs://hdnode01:54310/tmp/hadoop-yarn/staging/hduser/.staging/job_1453244277886_0001/job_1453244277886_0001_1_conf.xml
> > to
> >
> hdfs://hdnode01:54310/tmp/hadoop-yarn/staging/history/done_intermediate/hduser/job_1453244277886_0001_conf.xml_tmp
> > 2016-01-19 20:04:29,801 INFO [eventHandlingThread]
> > org.apache.hadoop.mapreduce.jobhistory.JobHistoryEventHandler: Copied to
> > done location:
> >
> hdfs://hdnode01:54310/tmp/hadoop-yarn/staging/history/done_intermediate/hduser/job_1453244277886_0001_conf.xml_tmp
> > 2016-01-19 20:04:29,907 INFO [eventHandlingThread]
> > org.apache.hadoop.mapreduce.jobhistory.JobHistoryEventHandler: Moved tmp
> to
> > done:
> >
> hdfs://hdnode01:54310/tmp/hadoop-yarn/staging/history/done_intermediate/hduser/job_1453244277886_0001.summary_tmp
> > to
> >
> hdfs://hdnode01:54310/tmp/hadoop-yarn/staging/history/done_intermediate/hduser/job_1453244277886_0001.summary
> > 2016-01-19 20:04:30,008 INFO [eventHandlingThread]
> > org.apache.hadoop.mapreduce.jobhistory.JobHistoryEventHandler: Moved tmp
> to
> > done:
> >
> hdfs://hdnode01:54310/tmp/hadoop-yarn/staging/history/done_intermediate/hduser/job_1453244277886_0001_conf.xml_tmp
> > to
> >
> hdfs://hdnode01:54310/tmp/hadoop-yarn/staging/history/done_intermediate/hduser/job_1453244277886_0001_conf.xml
> > 2016-01-19 20:04:30,070 INFO [eventHandlingThread]
> > org.apache.hadoop.mapreduce.jobhistory.JobHistoryEventHandler: Moved tmp
> to
> > done:
> >
> hdfs://hdnode01:54310/tmp/hadoop-yarn/staging/history/done_intermediate/hduser/job_1453244277886_0001-1453244648033-hduser-word+count-1453244668381-0-0-FAILED-default-1453244654166.jhist_tmp
> > to
> >
> hdfs://hdnode01:54310/tmp/hadoop-yarn/staging/history/done_intermediate/hduser/job_1453244277886_0001-1453244648033-hduser-word+count-1453244668381-0-0-FAILED-default-1453244654166.jhist
> > 2016-01-19 20:04:30,071 INFO [Thread-61]
> > org.apache.hadoop.mapreduce.jobhistory.JobHistoryEventHandler: Stopped
> > JobHistoryEventHandler. super.stop()
> > 2016-01-19 20:04:30,078 INFO [Thread-61]
> > org.apache.hadoop.mapreduce.v2.app.rm.RMContainerAllocator: Setting job
> > diagnostics to Task failed task_1453244277886_0001_m_000000
> > Job failed as tasks failed. failedMaps:1 failedReduces:0
> >
> > 2016-01-19 20:04:30,080 INFO [Thread-61]
> > org.apache.hadoop.mapreduce.v2.app.rm.RMContainerAllocator: History url
> is
> > http://localhost:19888/jobhistory/job/job_1453244277886_0001
> > 2016-01-19 20:04:30,094 INFO [Thread-61]
> > org.apache.hadoop.mapreduce.v2.app.rm.RMContainerAllocator: Waiting for
> > application to be successfully unregistered.
> > 2016-01-19 20:04:31,099 INFO [Thread-61]
> > org.apache.hadoop.mapreduce.v2.app.rm.RMContainerAllocator: Final Stats:
> > PendingReds:0 ScheduledMaps:0 ScheduledReds:1 AssignedMaps:0
> AssignedReds:0
> > CompletedMaps:1 CompletedReds:0 ContAlloc:4 ContRel:0 HostLocal:1
> > RackLocal:0
> > 2016-01-19 20:04:31,104 INFO [Thread-61]
> > org.apache.hadoop.mapreduce.v2.app.MRAppMaster: Deleting staging
> directory
> > hdfs://hdnode01:54310
> > /tmp/hadoop-yarn/staging/hduser/.staging/job_1453244277886_0001
> > 2016-01-19 20:04:31,133 INFO [Thread-61] org.apache.hadoop.ipc.Server:
> > Stopping server on 45584
> > 2016-01-19 20:04:31,135 INFO [IPC Server listener on 45584]
> > org.apache.hadoop.ipc.Server: Stopping IPC Server listener on 45584
> > 2016-01-19 20:04:31,135 INFO [IPC Server Responder]
> > org.apache.hadoop.ipc.Server: Stopping IPC Server Responder
> > 2016-01-19 20:04:31,136 INFO [TaskHeartbeatHandler PingChecker]
> > org.apache.hadoop.mapreduce.v2.app.TaskHeartbeatHandler:
> > TaskHeartbeatHandler thread interrupted
> >
> >
> > Jps results, I believe everything is ok, right?:
> > 21267 DataNode
> > 21609 ResourceManager
> > 21974 JobHistoryServer
> > 21735 NodeManager
> > 24546 Jps
> > 16532 org.eclipse.equinox.launcher_1.3.0.v20140415-2008.jar
> > 21121 NameNode
> > 22098 QuorumPeerMain
> > 21456 SecondaryNameNode
> >
> >
>

Re: Can't run hadoop examples with YARN Single node cluster

Posted by José Luis Larroque <la...@gmail.com>.
Thanks Namikaze for keeping at it, don't give up!! :D

- I have these lines in *$HOME/.bashrc*


export HADOOP_PREFIX=/usr/local/hadoop

# Others variables

export HADOOP_COMMON_HOME=${HADOOP_PREFIX}

export HADOOP_MAPRED_HOME=${HADOOP_PREFIX}

export HADOOP_HDFS_HOME=${HADOOP_PREFIX}

export HADOOP_YARN_HOME=${HADOOP_PREFIX}


  - In *hadoop-env.sh* I have:

export HADOOP_CONF_DIR=${HADOOP_CONF_DIR:-"/usr/local/hadoop/etc/hadoop"}


  - I read that SO question and all of its answers. The only useful one, in my
opinion, was checking the yarn classpath (the exact command is shown below).
The following line appears three times in that output:

/usr/local/hadoop/etc/hadoop:
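
To be precise, the command I used to check it is just the classpath subcommand
of yarn (a minimal sketch, assuming $HADOOP_HOME points to /usr/local/hadoop as
in my other commands):

    # print the classpath that the yarn launcher script builds
    $HADOOP_HOME/bin/yarn classpath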


I put yarn.application.classpath in yarn-site.xml because I don't know any
other way to fix it, using the default value recommended in
<https://hadoop.apache.org/docs/r2.4.1/hadoop-yarn/hadoop-yarn-common/yarn-default.xml>
(see yarn.application.classpath; I sketch the full property block below):


$HADOOP_CONF_DIR, $HADOOP_COMMON_HOME/share/hadoop/common/*,
$HADOOP_COMMON_HOME/share/hadoop/common/lib/*,
$HADOOP_HDFS_HOME/share/hadoop/hdfs/*,
$HADOOP_HDFS_HOME/share/hadoop/hdfs/lib/*,
$HADOOP_YARN_HOME/share/hadoop/yarn/*,
$HADOOP_YARN_HOME/share/hadoop/yarn/lib/*
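
To be explicit, this is roughly how that property sits in my yarn-site.xml (a
sketch, using exactly the default value listed above):

    <configuration>
      <!-- classpath made available to YARN containers (yarn.application.classpath) -->
      <property>
        <name>yarn.application.classpath</name>
        <value>
          $HADOOP_CONF_DIR, $HADOOP_COMMON_HOME/share/hadoop/common/*,
          $HADOOP_COMMON_HOME/share/hadoop/common/lib/*,
          $HADOOP_HDFS_HOME/share/hadoop/hdfs/*,
          $HADOOP_HDFS_HOME/share/hadoop/hdfs/lib/*,
          $HADOOP_YARN_HOME/share/hadoop/yarn/*,
          $HADOOP_YARN_HOME/share/hadoop/yarn/lib/*
        </value>
      </property>
    </configuration>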


But the classpath remains the same, and I can't find any other way to fix
it. Maybe this is the problem?


 - yarn.log-aggregation-enable was always set to true. I couldn't find
anything relevant in the *datanode logs*; here they are:

2016-01-25 21:13:07,006 INFO
org.apache.hadoop.hdfs.server.datanode.DataNode: STARTUP_MSG:
/************************************************************
STARTUP_MSG: Starting DataNode
STARTUP_MSG:   host = jose-ubuntu/127.0.0.1
STARTUP_MSG:   args = []
STARTUP_MSG:   version = 2.4.0
STARTUP_MSG:   classpath =
/usr/local/hadoop/etc/hadoop:/usr/local/hadoop/share/hadoop/common/lib/jackson-jaxrs-1.8.8.jar:/usr/local/hadoop/share/hadoop/common/lib/jersey-json-1.9.jar:/usr/local/hadoop/share/hadoop/common/lib/commons-compress-1.4.1.jar:/usr/local/hadoop/share/hadoop/common/lib/httpcore-4.2.5.jar:/usr/local/hadoop/share/hadoop/common/lib/httpclient-4.2.5.jar:/usr/local/hadoop/share/hadoop/common/lib/commons-beanutils-1.7.0.jar:/usr/local/hadoop/share/hadoop/common/lib/jettison-1.1.jar:/usr/local/hadoop/share/hadoop/common/lib/jaxb-impl-2.2.3-1.jar:/usr/local/hadoop/share/hadoop/common/lib/activation-1.1.jar:/usr/local/hadoop/share/hadoop/common/lib/jackson-core-asl-1.8.8.jar:/usr/local/hadoop/share/hadoop/common/lib/commons-net-3.1.jar:/usr/local/hadoop/share/hadoop/common/lib/commons-cli-1.2.jar:/usr/local/hadoop/share/hadoop/common/lib/commons-codec-1.4.jar:/usr/local/hadoop/share/hadoop/common/lib/junit-4.8.2.jar:/usr/local/hadoop/share/hadoop/common/lib/paranamer-2.3.jar:/usr/local/hadoop/share/hadoop/common/lib/jasper-compiler-5.5.23.jar:/usr/local/hadoop/share/hadoop/common/lib/protobuf-java-2.5.0.jar:/usr/local/hadoop/share/hadoop/common/lib/commons-digester-1.8.jar:/usr/local/hadoop/share/hadoop/common/lib/jets3t-0.9.0.jar:/usr/local/hadoop/share/hadoop/common/lib/jackson-xc-1.8.8.jar:/usr/local/hadoop/share/hadoop/common/lib/jersey-server-1.9.jar:/usr/local/hadoop/share/hadoop/common/lib/jetty-util-6.1.26.jar:/usr/local/hadoop/share/hadoop/common/lib/mockito-all-1.8.5.jar:/usr/local/hadoop/share/hadoop/common/lib/xmlenc-0.52.jar:/usr/local/hadoop/share/hadoop/common/lib/slf4j-api-1.7.5.jar:/usr/local/hadoop/share/hadoop/common/lib/stax-api-1.0-2.jar:/usr/local/hadoop/share/hadoop/common/lib/avro-1.7.4.jar:/usr/local/hadoop/share/hadoop/common/lib/hadoop-annotations-2.4.0.jar:/usr/local/hadoop/share/hadoop/common/lib/hadoop-auth-2.4.0.jar:/usr/local/hadoop/share/hadoop/common/lib/netty-3.6.2.Final.jar:/usr/local/hadoop/share/hadoop/common/lib/jasper-runtime-5.5.23.jar:/usr/local/hadoop/share/hadoop/common/lib/slf4j-log4j12-1.7.5.jar:/usr/local/hadoop/share/hadoop/common/lib/commons-logging-1.1.3.jar:/usr/local/hadoop/share/hadoop/common/lib/guava-11.0.2.jar:/usr/local/hadoop/share/hadoop/common/lib/jackson-mapper-asl-1.8.8.jar:/usr/local/hadoop/share/hadoop/common/lib/jsp-api-2.1.jar:/usr/local/hadoop/share/hadoop/common/lib/commons-configuration-1.6.jar:/usr/local/hadoop/share/hadoop/common/lib/snappy-java-1.0.4.1.jar:/usr/local/hadoop/share/hadoop/common/lib/log4j-1.2.17.jar:/usr/local/hadoop/share/hadoop/common/lib/jsch-0.1.42.jar:/usr/local/hadoop/share/hadoop/common/lib/commons-io-2.4.jar:/usr/local/hadoop/share/hadoop/common/lib/commons-beanutils-core-1.8.0.jar:/usr/local/hadoop/share/hadoop/common/lib/jsr305-1.3.9.jar:/usr/local/hadoop/share/hadoop/common/lib/jersey-core-1.9.jar:/usr/local/hadoop/share/hadoop/common/lib/asm-3.2.jar:/usr/local/hadoop/share/hadoop/common/lib/servlet-api-2.5.jar:/usr/local/hadoop/share/hadoop/common/lib/jaxb-api-2.2.2.jar:/usr/local/hadoop/share/hadoop/common/lib/jetty-6.1.26.jar:/usr/local/hadoop/share/hadoop/common/lib/commons-lang-2.6.jar:/usr/local/hadoop/share/hadoop/common/lib/java-xmlbuilder-0.4.jar:/usr/local/hadoop/share/hadoop/common/lib/commons-collections-3.2.1.jar:/usr/local/hadoop/share/hadoop/common/lib/commons-el-1.0.jar:/usr/local/hadoop/share/hadoop/common/lib/zookeeper-3.4.5.jar:/usr/local/hadoop/share/hadoop/common/lib/commons-httpclient-3.1.jar:/usr/local/hadoop/share/hadoop/common/lib/xz-1.0.jar:/usr/local/hadoop/share/hadoop/common/l
ib/commons-math3-3.1.1.jar:/usr/local/hadoop/share/hadoop/common/hadoop-common-2.4.0-tests.jar:/usr/local/hadoop/share/hadoop/common/hadoop-nfs-2.4.0.jar:/usr/local/hadoop/share/hadoop/common/hadoop-common-2.4.0.jar:/usr/local/hadoop/share/hadoop/hdfs:/usr/local/hadoop/share/hadoop/hdfs/lib/jackson-core-asl-1.8.8.jar:/usr/local/hadoop/share/hadoop/hdfs/lib/commons-cli-1.2.jar:/usr/local/hadoop/share/hadoop/hdfs/lib/commons-codec-1.4.jar:/usr/local/hadoop/share/hadoop/hdfs/lib/protobuf-java-2.5.0.jar:/usr/local/hadoop/share/hadoop/hdfs/lib/jersey-server-1.9.jar:/usr/local/hadoop/share/hadoop/hdfs/lib/jetty-util-6.1.26.jar:/usr/local/hadoop/share/hadoop/hdfs/lib/xmlenc-0.52.jar:/usr/local/hadoop/share/hadoop/hdfs/lib/commons-daemon-1.0.13.jar:/usr/local/hadoop/share/hadoop/hdfs/lib/netty-3.6.2.Final.jar:/usr/local/hadoop/share/hadoop/hdfs/lib/jasper-runtime-5.5.23.jar:/usr/local/hadoop/share/hadoop/hdfs/lib/commons-logging-1.1.3.jar:/usr/local/hadoop/share/hadoop/hdfs/lib/guava-11.0.2.jar:/usr/local/hadoop/share/hadoop/hdfs/lib/jackson-mapper-asl-1.8.8.jar:/usr/local/hadoop/share/hadoop/hdfs/lib/jsp-api-2.1.jar:/usr/local/hadoop/share/hadoop/hdfs/lib/log4j-1.2.17.jar:/usr/local/hadoop/share/hadoop/hdfs/lib/commons-io-2.4.jar:/usr/local/hadoop/share/hadoop/hdfs/lib/jsr305-1.3.9.jar:/usr/local/hadoop/share/hadoop/hdfs/lib/jersey-core-1.9.jar:/usr/local/hadoop/share/hadoop/hdfs/lib/asm-3.2.jar:/usr/local/hadoop/share/hadoop/hdfs/lib/servlet-api-2.5.jar:/usr/local/hadoop/share/hadoop/hdfs/lib/jetty-6.1.26.jar:/usr/local/hadoop/share/hadoop/hdfs/lib/commons-lang-2.6.jar:/usr/local/hadoop/share/hadoop/hdfs/lib/commons-el-1.0.jar:/usr/local/hadoop/share/hadoop/hdfs/hadoop-hdfs-2.4.0-tests.jar:/usr/local/hadoop/share/hadoop/hdfs/hadoop-hdfs-2.4.0.jar:/usr/local/hadoop/share/hadoop/hdfs/hadoop-hdfs-nfs-2.4.0.jar:/usr/local/hadoop/share/hadoop/yarn/lib/jackson-jaxrs-1.8.8.jar:/usr/local/hadoop/share/hadoop/yarn/lib/jersey-json-1.9.jar:/usr/local/hadoop/share/hadoop/yarn/lib/commons-compress-1.4.1.jar:/usr/local/hadoop/share/hadoop/yarn/lib/jersey-client-1.9.jar:/usr/local/hadoop/share/hadoop/yarn/lib/guice-servlet-3.0.jar:/usr/local/hadoop/share/hadoop/yarn/lib/giraph-examples-1.1.0-for-hadoop-2.4.0-jar-with-dependencies.jar:/usr/local/hadoop/share/hadoop/yarn/lib/hadoop-mapreduce-examples-2.4.0.jar:/usr/local/hadoop/share/hadoop/yarn/lib/jettison-1.1.jar:/usr/local/hadoop/share/hadoop/yarn/lib/jaxb-impl-2.2.3-1.jar:/usr/local/hadoop/share/hadoop/yarn/lib/activation-1.1.jar:/usr/local/hadoop/share/hadoop/yarn/lib/jackson-core-asl-1.8.8.jar:/usr/local/hadoop/share/hadoop/yarn/lib/commons-cli-1.2.jar:/usr/local/hadoop/share/hadoop/yarn/lib/commons-codec-1.4.jar:/usr/local/hadoop/share/hadoop/yarn/lib/aopalliance-1.0.jar:/usr/local/hadoop/share/hadoop/yarn/lib/leveldbjni-all-1.8.jar:/usr/local/hadoop/share/hadoop/yarn/lib/protobuf-java-2.5.0.jar:/usr/local/hadoop/share/hadoop/yarn/lib/jackson-xc-1.8.8.jar:/usr/local/hadoop/share/hadoop/yarn/lib/jersey-server-1.9.jar:/usr/local/hadoop/share/hadoop/yarn/lib/jetty-util-6.1.26.jar:/usr/local/hadoop/share/hadoop/yarn/lib/jline-0.9.94.jar:/usr/local/hadoop/share/hadoop/yarn/lib/stax-api-1.0-2.jar:/usr/local/hadoop/share/hadoop/yarn/lib/guice-3.0.jar:/usr/local/hadoop/share/hadoop/yarn/lib/commons-logging-1.1.3.jar:/usr/local/hadoop/share/hadoop/yarn/lib/guava-11.0.2.jar:/usr/local/hadoop/share/hadoop/yarn/lib/jackson-mapper-asl-1.8.8.jar:/usr/local/hadoop/share/hadoop/yarn/lib/log4j-1.2.17.jar:/usr/local/hadoop/share/hadoop/yarn/lib/commons-io-2.4.jar:/usr/loca
l/hadoop/share/hadoop/yarn/lib/jsr305-1.3.9.jar:/usr/local/hadoop/share/hadoop/yarn/lib/jersey-core-1.9.jar:/usr/local/hadoop/share/hadoop/yarn/lib/asm-3.2.jar:/usr/local/hadoop/share/hadoop/yarn/lib/servlet-api-2.5.jar:/usr/local/hadoop/share/hadoop/yarn/lib/jaxb-api-2.2.2.jar:/usr/local/hadoop/share/hadoop/yarn/lib/jetty-6.1.26.jar:/usr/local/hadoop/share/hadoop/yarn/lib/commons-lang-2.6.jar:/usr/local/hadoop/share/hadoop/yarn/lib/commons-collections-3.2.1.jar:/usr/local/hadoop/share/hadoop/yarn/lib/jersey-guice-1.9.jar:/usr/local/hadoop/share/hadoop/yarn/lib/zookeeper-3.4.5.jar:/usr/local/hadoop/share/hadoop/yarn/lib/commons-httpclient-3.1.jar:/usr/local/hadoop/share/hadoop/yarn/lib/javax.inject-1.jar:/usr/local/hadoop/share/hadoop/yarn/lib/xz-1.0.jar:/usr/local/hadoop/share/hadoop/yarn/hadoop-yarn-server-common-2.4.0.jar:/usr/local/hadoop/share/hadoop/yarn/hadoop-yarn-client-2.4.0.jar:/usr/local/hadoop/share/hadoop/yarn/hadoop-yarn-api-2.4.0.jar:/usr/local/hadoop/share/hadoop/yarn/hadoop-yarn-applications-unmanaged-am-launcher-2.4.0.jar:/usr/local/hadoop/share/hadoop/yarn/hadoop-yarn-server-nodemanager-2.4.0.jar:/usr/local/hadoop/share/hadoop/yarn/hadoop-yarn-server-tests-2.4.0.jar:/usr/local/hadoop/share/hadoop/yarn/hadoop-yarn-server-resourcemanager-2.4.0.jar:/usr/local/hadoop/share/hadoop/yarn/hadoop-yarn-server-web-proxy-2.4.0.jar:/usr/local/hadoop/share/hadoop/yarn/hadoop-yarn-applications-distributedshell-2.4.0.jar:/usr/local/hadoop/share/hadoop/yarn/hadoop-yarn-server-applicationhistoryservice-2.4.0.jar:/usr/local/hadoop/share/hadoop/yarn/hadoop-yarn-common-2.4.0.jar:/usr/local/hadoop/share/hadoop/mapreduce/lib/commons-compress-1.4.1.jar:/usr/local/hadoop/share/hadoop/mapreduce/lib/guice-servlet-3.0.jar:/usr/local/hadoop/share/hadoop/mapreduce/lib/jackson-core-asl-1.8.8.jar:/usr/local/hadoop/share/hadoop/mapreduce/lib/hamcrest-core-1.1.jar:/usr/local/hadoop/share/hadoop/mapreduce/lib/paranamer-2.3.jar:/usr/local/hadoop/share/hadoop/mapreduce/lib/aopalliance-1.0.jar:/usr/local/hadoop/share/hadoop/mapreduce/lib/protobuf-java-2.5.0.jar:/usr/local/hadoop/share/hadoop/mapreduce/lib/jersey-server-1.9.jar:/usr/local/hadoop/share/hadoop/mapreduce/lib/avro-1.7.4.jar:/usr/local/hadoop/share/hadoop/mapreduce/lib/hadoop-annotations-2.4.0.jar:/usr/local/hadoop/share/hadoop/mapreduce/lib/guice-3.0.jar:/usr/local/hadoop/share/hadoop/mapreduce/lib/junit-4.10.jar:/usr/local/hadoop/share/hadoop/mapreduce/lib/netty-3.6.2.Final.jar:/usr/local/hadoop/share/hadoop/mapreduce/lib/jackson-mapper-asl-1.8.8.jar:/usr/local/hadoop/share/hadoop/mapreduce/lib/snappy-java-1.0.4.1.jar:/usr/local/hadoop/share/hadoop/mapreduce/lib/log4j-1.2.17.jar:/usr/local/hadoop/share/hadoop/mapreduce/lib/commons-io-2.4.jar:/usr/local/hadoop/share/hadoop/mapreduce/lib/jersey-core-1.9.jar:/usr/local/hadoop/share/hadoop/mapreduce/lib/asm-3.2.jar:/usr/local/hadoop/share/hadoop/mapreduce/lib/jersey-guice-1.9.jar:/usr/local/hadoop/share/hadoop/mapreduce/lib/javax.inject-1.jar:/usr/local/hadoop/share/hadoop/mapreduce/lib/xz-1.0.jar:/usr/local/hadoop/share/hadoop/mapreduce/giraph-examples-1.1.0-for-hadoop-2.4.0-jar-with-dependencies.jar:/usr/local/hadoop/share/hadoop/mapreduce/hadoop-mapreduce-examples-2.4.0.jar:/usr/local/hadoop/share/hadoop/mapreduce/procesamiento_salida_grafo_caminos_navegacionales-0.0.1.jar:/usr/local/hadoop/share/hadoop/mapreduce/zookeeper-3.4.6.jar:/usr/local/hadoop/share/hadoop/mapreduce/hadoop-mapreduce-client-shuffle-2.4.0.jar:/usr/local/hadoop/share/hadoop/mapreduce/procesamiento_grafo_wikiquote-0.0.1.jar:/u
sr/local/hadoop/share/hadoop/mapreduce/hadoop-mapreduce-client-app-2.4.0.jar:/usr/local/hadoop/share/hadoop/mapreduce/hadoop-mapreduce-client-common-2.4.0.jar:/usr/local/hadoop/share/hadoop/mapreduce/hadoop-mapreduce-client-core-2.4.0.jar:/usr/local/hadoop/share/hadoop/mapreduce/generacion_grafo_wikiquote-0.0.1-SNAPSHOT.jar:/usr/local/hadoop/share/hadoop/mapreduce/hadoop-mapreduce-client-hs-plugins-2.4.0.jar:/usr/local/hadoop/share/hadoop/mapreduce/hadoop-mapreduce-client-jobclient-2.4.0.jar:/usr/local/hadoop/share/hadoop/mapreduce/hadoop-mapreduce-client-hs-2.4.0.jar:/usr/local/hadoop/share/hadoop/mapreduce/hadoop-mapreduce-client-jobclient-2.4.0-tests.jar:/usr/local/hadoop/contrib/capacity-scheduler/*.jar:/usr/local/hadoop/contrib/capacity-scheduler/*.jar:/usr/local/hadoop/contrib/capacity-scheduler/*.jar
STARTUP_MSG:   build = http://svn.apache.org/repos/asf/hadoop/common
-r 1583262; compiled by 'jenkins' on 2014-03-31T08:29Z
STARTUP_MSG:   java = 1.7.0_79
************************************************************/
2016-01-25 21:13:07,015 INFO
org.apache.hadoop.hdfs.server.datanode.DataNode: registered UNIX
signal handlers for [TERM, HUP, INT]
2016-01-25 21:13:07,188 WARN
org.apache.hadoop.hdfs.server.common.Util: Path
/usr/local/hadoop/dfs/name/data should be specified as a URI in
configuration files. Please update hdfs configuration.
2016-01-25 21:13:07,356 WARN org.apache.hadoop.util.NativeCodeLoader:
Unable to load native-hadoop library for your platform... using
builtin-java classes where applicable
2016-01-25 21:13:07,648 INFO
org.apache.hadoop.metrics2.impl.MetricsConfig: loaded properties from
hadoop-metrics2.properties
2016-01-25 21:13:07,723 INFO
org.apache.hadoop.metrics2.impl.MetricsSystemImpl: Scheduled snapshot
period at 10 second(s).
2016-01-25 21:13:07,723 INFO
org.apache.hadoop.metrics2.impl.MetricsSystemImpl: DataNode metrics
system started
2016-01-25 21:13:07,727 INFO
org.apache.hadoop.hdfs.server.datanode.DataNode: Configured hostname
is localhost
2016-01-25 21:13:07,728 INFO
org.apache.hadoop.hdfs.server.datanode.DataNode: Starting DataNode
with maxLockedMemory = 0
2016-01-25 21:13:07,757 INFO
org.apache.hadoop.hdfs.server.datanode.DataNode: Opened streaming
server at /0.0.0.0:50010
2016-01-25 21:13:07,760 INFO
org.apache.hadoop.hdfs.server.datanode.DataNode: Balancing bandwith is
1048576 bytes/s
2016-01-25 21:13:07,839 INFO org.mortbay.log: Logging to
org.slf4j.impl.Log4jLoggerAdapter(org.mortbay.log) via
org.mortbay.log.Slf4jLog
2016-01-25 21:13:07,843 INFO org.apache.hadoop.http.HttpRequestLog:
Http request log for http.requests.datanode is not defined
2016-01-25 21:13:07,853 INFO org.apache.hadoop.http.HttpServer2: Added
global filter 'safety'
(class=org.apache.hadoop.http.HttpServer2$QuotingInputFilter)
2016-01-25 21:13:07,856 INFO org.apache.hadoop.http.HttpServer2: Added
filter static_user_filter
(class=org.apache.hadoop.http.lib.StaticUserWebFilter$StaticUserFilter)
to context datanode
2016-01-25 21:13:07,856 INFO org.apache.hadoop.http.HttpServer2: Added
filter static_user_filter
(class=org.apache.hadoop.http.lib.StaticUserWebFilter$StaticUserFilter)
to context logs
2016-01-25 21:13:07,856 INFO org.apache.hadoop.http.HttpServer2: Added
filter static_user_filter
(class=org.apache.hadoop.http.lib.StaticUserWebFilter$StaticUserFilter)
to context static
2016-01-25 21:13:07,872 INFO org.apache.hadoop.http.HttpServer2:
addJerseyResourcePackage:
packageName=org.apache.hadoop.hdfs.server.datanode.web.resources;org.apache.hadoop.hdfs.web.resources,
pathSpec=/webhdfs/v1/*
2016-01-25 21:13:07,875 INFO org.apache.hadoop.http.HttpServer2: Jetty
bound to port 50075
2016-01-25 21:13:07,875 INFO org.mortbay.log: jetty-6.1.26
2016-01-25 21:13:08,137 INFO org.mortbay.log: Started
SelectChannelConnector@0.0.0.0:50075
2016-01-25 21:13:08,225 INFO org.apache.hadoop.ipc.CallQueueManager:
Using callQueue class java.util.concurrent.LinkedBlockingQueue
2016-01-25 21:13:08,239 INFO org.apache.hadoop.ipc.Server: Starting
Socket Reader #1 for port 50020
2016-01-25 21:13:08,288 INFO
org.apache.hadoop.hdfs.server.datanode.DataNode: Opened IPC server at
/0.0.0.0:50020
2016-01-25 21:13:08,300 INFO
org.apache.hadoop.hdfs.server.datanode.DataNode: Refresh request
received for nameservices: null
2016-01-25 21:13:08,316 INFO
org.apache.hadoop.hdfs.server.datanode.DataNode: Starting
BPOfferServices for nameservices: <default>
2016-01-25 21:13:08,321 WARN
org.apache.hadoop.hdfs.server.common.Util: Path
/usr/local/hadoop/dfs/name/data should be specified as a URI in
configuration files. Please update hdfs configuration.
2016-01-25 21:13:08,325 INFO
org.apache.hadoop.hdfs.server.datanode.DataNode: Block pool
<registering> (Datanode Uuid unassigned) service to
hdnode01/192.168.0.10:54310 starting to offer service
2016-01-25 21:13:08,326 INFO org.apache.hadoop.ipc.Server: IPC Server
Responder: starting
2016-01-25 21:13:08,326 INFO org.apache.hadoop.ipc.Server: IPC Server
listener on 50020: starting
2016-01-25 21:13:08,719 INFO
org.apache.hadoop.hdfs.server.common.Storage: Data-node version: -55
and name-node layout version: -56
2016-01-25 21:13:08,828 INFO
org.apache.hadoop.hdfs.server.common.Storage: Lock on
/usr/local/hadoop/dfs/name/data/in_use.lock acquired by nodename
10365@jose-ubuntu
2016-01-25 21:13:08,833 INFO
org.apache.hadoop.hdfs.server.common.Storage: Storage directory
/usr/local/hadoop/dfs/name/data is not formatted
2016-01-25 21:13:08,833 INFO
org.apache.hadoop.hdfs.server.common.Storage: Formatting ...
2016-01-25 21:13:09,017 INFO
org.apache.hadoop.hdfs.server.common.Storage: Analyzing storage
directories for bpid BP-216406264-127.0.0.1-1453767164845
2016-01-25 21:13:09,017 INFO
org.apache.hadoop.hdfs.server.common.Storage: Locking is disabled
2016-01-25 21:13:09,018 INFO
org.apache.hadoop.hdfs.server.common.Storage: Storage directory
/usr/local/hadoop/dfs/name/data/current/BP-216406264-127.0.0.1-1453767164845
is not formatted.
2016-01-25 21:13:09,018 INFO
org.apache.hadoop.hdfs.server.common.Storage: Formatting ...
2016-01-25 21:13:09,018 INFO
org.apache.hadoop.hdfs.server.common.Storage: Formatting block pool
BP-216406264-127.0.0.1-1453767164845 directory
/usr/local/hadoop/dfs/name/data/current/BP-216406264-127.0.0.1-1453767164845/current
2016-01-25 21:13:09,072 INFO
org.apache.hadoop.hdfs.server.common.Storage: Restored 0 block files
from trash.
2016-01-25 21:13:09,198 INFO
org.apache.hadoop.hdfs.server.datanode.DataNode: Setting up storage:
nsid=1479061672;bpid=BP-216406264-127.0.0.1-1453767164845;lv=-55;nsInfo=lv=-56;cid=CID-8fa0e75b-6942-452a-a4e6-8cd0c24de652;nsid=1479061672;c=0;bpid=BP-216406264-127.0.0.1-1453767164845;dnuuid=null
2016-01-25 21:13:09,248 INFO
org.apache.hadoop.hdfs.server.datanode.DataNode: Generated and
persisted new Datanode UUID 6b4236c8-2183-49ba-84d7-a273298ba37a
2016-01-25 21:13:09,268 WARN
org.apache.hadoop.hdfs.server.common.Util: Path
/usr/local/hadoop/dfs/name/data should be specified as a URI in
configuration files. Please update hdfs configuration.
2016-01-25 21:13:09,270 INFO
org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsDatasetImpl:
Added volume - /usr/local/hadoop/dfs/name/data/current, StorageType:
DISK
2016-01-25 21:13:09,279 INFO
org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsDatasetImpl:
Registered FSDatasetState MBean
2016-01-25 21:13:09,282 INFO
org.apache.hadoop.hdfs.server.datanode.DirectoryScanner: Periodic
Directory Tree Verification scan starting at 1453784080282 with
interval 21600000
2016-01-25 21:13:09,283 INFO
org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsDatasetImpl:
Adding block pool BP-216406264-127.0.0.1-1453767164845
2016-01-25 21:13:09,284 INFO
org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsDatasetImpl:
Scanning block pool BP-216406264-127.0.0.1-1453767164845 on volume
/usr/local/hadoop/dfs/name/data/current...
2016-01-25 21:13:09,299 INFO
org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsDatasetImpl:
Time taken to scan block pool BP-216406264-127.0.0.1-1453767164845 on
/usr/local/hadoop/dfs/name/data/current: 15ms
2016-01-25 21:13:09,300 INFO
org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsDatasetImpl:
Total time to scan all replicas for block pool
BP-216406264-127.0.0.1-1453767164845: 17ms
2016-01-25 21:13:09,301 INFO
org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsDatasetImpl:
Adding replicas to map for block pool
BP-216406264-127.0.0.1-1453767164845 on volume
/usr/local/hadoop/dfs/name/data/current...
2016-01-25 21:13:09,301 INFO
org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsDatasetImpl:
Time to add replicas to map for block pool
BP-216406264-127.0.0.1-1453767164845 on volume
/usr/local/hadoop/dfs/name/data/current: 0ms
2016-01-25 21:13:09,301 INFO
org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsDatasetImpl:
Total time to add all replicas to map: 1ms
2016-01-25 21:13:09,305 INFO
org.apache.hadoop.hdfs.server.datanode.DataNode: Block pool
BP-216406264-127.0.0.1-1453767164845 (Datanode Uuid null) service to
hdnode01/192.168.0.10:54310 beginning handshake with NN
2016-01-25 21:13:09,355 INFO
org.apache.hadoop.hdfs.server.datanode.DataNode: Block pool Block pool
BP-216406264-127.0.0.1-1453767164845 (Datanode Uuid null) service to
hdnode01/192.168.0.10:54310 successfully registered with NN
2016-01-25 21:13:09,356 INFO
org.apache.hadoop.hdfs.server.datanode.DataNode: For namenode
hdnode01/192.168.0.10:54310 using DELETEREPORT_INTERVAL of 300000 msec
 BLOCKREPORT_INTERVAL of 21600000msec CACHEREPORT_INTERVAL of
10000msec Initial delay: 0msec; heartBeatInterval=3000
2016-01-25 21:13:09,444 INFO
org.apache.hadoop.hdfs.server.datanode.DataNode: Namenode Block pool
BP-216406264-127.0.0.1-1453767164845 (Datanode Uuid
6b4236c8-2183-49ba-84d7-a273298ba37a) service to
hdnode01/192.168.0.10:54310 trying to claim ACTIVE state with txid=1
2016-01-25 21:13:09,444 INFO
org.apache.hadoop.hdfs.server.datanode.DataNode: Acknowledging ACTIVE
Namenode Block pool BP-216406264-127.0.0.1-1453767164845 (Datanode
Uuid 6b4236c8-2183-49ba-84d7-a273298ba37a) service to
hdnode01/192.168.0.10:54310
2016-01-25 21:13:09,487 INFO
org.apache.hadoop.hdfs.server.datanode.DataNode: Sent 1 blockreports 0
blocks total. Took 1 msec to generate and 42 msecs for RPC and NN
processing.  Got back commands none
2016-01-25 21:13:09,492 INFO org.apache.hadoop.util.GSet: Computing
capacity for map BlockMap
2016-01-25 21:13:09,493 INFO org.apache.hadoop.util.GSet: VM type       = 64-bit
2016-01-25 21:13:09,494 INFO org.apache.hadoop.util.GSet: 0.5% max
memory 1.8 GB = 9.1 MB
2016-01-25 21:13:09,494 INFO org.apache.hadoop.util.GSet: capacity
 = 2^20 = 1048576 entries
2016-01-25 21:13:09,495 INFO
org.apache.hadoop.hdfs.server.datanode.BlockPoolSliceScanner: Periodic
Block Verification Scanner initialized with interval 504 hours for
block pool BP-216406264-127.0.0.1-1453767164845
2016-01-25 21:13:09,499 INFO
org.apache.hadoop.hdfs.server.datanode.DataBlockScanner: Added
bpid=BP-216406264-127.0.0.1-1453767164845 to blockPoolScannerMap, new
size=1
2016-01-25 21:13:32,355 INFO
org.apache.hadoop.hdfs.server.datanode.DataNode: Receiving
BP-216406264-127.0.0.1-1453767164845:blk_1073741825_1001 src:
/192.168.0.10:58649 dest: /192.168.0.10:50010
2016-01-25 21:13:32,482 INFO
org.apache.hadoop.hdfs.server.datanode.DataNode.clienttrace: src:
/192.168.0.10:58649, dest: /192.168.0.10:50010, bytes: 343691, op:
HDFS_WRITE, cliID: DFSClient_NONMAPREDUCE_538002429_1, offset: 0,
srvID: 6b4236c8-2183-49ba-84d7-a273298ba37a, blockid:
BP-216406264-127.0.0.1-1453767164845:blk_1073741825_1001, duration:
98632367
2016-01-25 21:13:32,482 INFO
org.apache.hadoop.hdfs.server.datanode.DataNode: PacketResponder:
BP-216406264-127.0.0.1-1453767164845:blk_1073741825_1001,
type=LAST_IN_PIPELINE, downstreams=0:[] terminating
2016-01-25 21:13:34,291 INFO
org.apache.hadoop.hdfs.server.datanode.BlockPoolSliceScanner:
Verification succeeded for
BP-216406264-127.0.0.1-1453767164845:blk_1073741825_1001
2016-01-25 21:14:10,176 INFO
org.apache.hadoop.hdfs.server.datanode.DataNode: Receiving
BP-216406264-127.0.0.1-1453767164845:blk_1073741826_1002 src:
/192.168.0.10:58663 dest: /192.168.0.10:50010
2016-01-25 21:14:10,220 INFO
org.apache.hadoop.hdfs.server.datanode.DataNode.clienttrace: src:
/192.168.0.10:58663, dest: /192.168.0.10:50010, bytes: 270263, op:
HDFS_WRITE, cliID: DFSClient_NONMAPREDUCE_342504113_1, offset: 0,
srvID: 6b4236c8-2183-49ba-84d7-a273298ba37a, blockid:
BP-216406264-127.0.0.1-1453767164845:blk_1073741826_1002, duration:
42378742
2016-01-25 21:14:10,221 INFO
org.apache.hadoop.hdfs.server.datanode.DataNode: PacketResponder:
BP-216406264-127.0.0.1-1453767164845:blk_1073741826_1002,
type=LAST_IN_PIPELINE, downstreams=0:[] terminating
2016-01-25 21:14:10,714 INFO
org.apache.hadoop.hdfs.server.datanode.DataNode: Receiving
BP-216406264-127.0.0.1-1453767164845:blk_1073741827_1003 src:
/192.168.0.10:58664 dest: /192.168.0.10:50010
2016-01-25 21:14:10,721 INFO
org.apache.hadoop.hdfs.server.datanode.DataNode.clienttrace: src:
/192.168.0.10:58664, dest: /192.168.0.10:50010, bytes: 121, op:
HDFS_WRITE, cliID: DFSClient_NONMAPREDUCE_342504113_1, offset: 0,
srvID: 6b4236c8-2183-49ba-84d7-a273298ba37a, blockid:
BP-216406264-127.0.0.1-1453767164845:blk_1073741827_1003, duration:
2656758
2016-01-25 21:14:10,721 INFO
org.apache.hadoop.hdfs.server.datanode.DataNode: PacketResponder:
BP-216406264-127.0.0.1-1453767164845:blk_1073741827_1003,
type=LAST_IN_PIPELINE, downstreams=0:[] terminating
2016-01-25 21:14:10,853 INFO
org.apache.hadoop.hdfs.server.datanode.DataNode: Receiving
BP-216406264-127.0.0.1-1453767164845:blk_1073741828_1004 src:
/192.168.0.10:58665 dest: /192.168.0.10:50010
2016-01-25 21:14:10,860 INFO
org.apache.hadoop.hdfs.server.datanode.DataNode.clienttrace: src:
/192.168.0.10:58665, dest: /192.168.0.10:50010, bytes: 26, op:
HDFS_WRITE, cliID: DFSClient_NONMAPREDUCE_342504113_1, offset: 0,
srvID: 6b4236c8-2183-49ba-84d7-a273298ba37a, blockid:
BP-216406264-127.0.0.1-1453767164845:blk_1073741828_1004, duration:
3257396
2016-01-25 21:14:10,861 INFO
org.apache.hadoop.hdfs.server.datanode.DataNode: PacketResponder:
BP-216406264-127.0.0.1-1453767164845:blk_1073741828_1004,
type=LAST_IN_PIPELINE, downstreams=0:[] terminating
2016-01-25 21:14:11,717 INFO
org.apache.hadoop.hdfs.server.datanode.DataNode: Receiving
BP-216406264-127.0.0.1-1453767164845:blk_1073741829_1005 src:
/192.168.0.10:58666 dest: /192.168.0.10:50010
2016-01-25 21:14:11,726 INFO
org.apache.hadoop.hdfs.server.datanode.DataNode.clienttrace: src:
/192.168.0.10:58666, dest: /192.168.0.10:50010, bytes: 77957, op:
HDFS_WRITE, cliID: DFSClient_NONMAPREDUCE_342504113_1, offset: 0,
srvID: 6b4236c8-2183-49ba-84d7-a273298ba37a, blockid:
BP-216406264-127.0.0.1-1453767164845:blk_1073741829_1005, duration:
6180229
2016-01-25 21:14:11,727 INFO
org.apache.hadoop.hdfs.server.datanode.DataNode: PacketResponder:
BP-216406264-127.0.0.1-1453767164845:blk_1073741829_1005,
type=LAST_IN_PIPELINE, downstreams=0:[] terminating
2016-01-25 21:14:14,298 INFO
org.apache.hadoop.hdfs.server.datanode.BlockPoolSliceScanner:
Verification succeeded for
BP-216406264-127.0.0.1-1453767164845:blk_1073741829_1005
2016-01-25 21:14:14,299 INFO
org.apache.hadoop.hdfs.server.datanode.BlockPoolSliceScanner:
Verification succeeded for
BP-216406264-127.0.0.1-1453767164845:blk_1073741827_1003
2016-01-25 21:14:14,305 INFO
org.apache.hadoop.hdfs.server.datanode.BlockPoolSliceScanner:
Verification succeeded for
BP-216406264-127.0.0.1-1453767164845:blk_1073741826_1002
2016-01-25 21:14:14,305 INFO
org.apache.hadoop.hdfs.server.datanode.BlockPoolSliceScanner:
Verification succeeded for
BP-216406264-127.0.0.1-1453767164845:blk_1073741828_1004
2016-01-25 21:14:16,099 INFO
org.apache.hadoop.hdfs.server.datanode.DataNode.clienttrace: src:
/192.168.0.10:50010, dest: /192.168.0.10:58674, bytes: 272375, op:
HDFS_READ, cliID: DFSClient_NONMAPREDUCE_76231625_102, offset: 0,
srvID: 6b4236c8-2183-49ba-84d7-a273298ba37a, blockid:
BP-216406264-127.0.0.1-1453767164845:blk_1073741826_1002, duration:
2878920
2016-01-25 21:14:16,253 INFO
org.apache.hadoop.hdfs.server.datanode.DataNode.clienttrace: src:
/192.168.0.10:50010, dest: /192.168.0.10:58674, bytes: 30, op:
HDFS_READ, cliID: DFSClient_NONMAPREDUCE_76231625_102, offset: 0,
srvID: 6b4236c8-2183-49ba-84d7-a273298ba37a, blockid:
BP-216406264-127.0.0.1-1453767164845:blk_1073741828_1004, duration:
236423
2016-01-25 21:14:16,312 INFO
org.apache.hadoop.hdfs.server.datanode.DataNode.clienttrace: src:
/192.168.0.10:50010, dest: /192.168.0.10:58674, bytes: 125, op:
HDFS_READ, cliID: DFSClient_NONMAPREDUCE_76231625_102, offset: 0,
srvID: 6b4236c8-2183-49ba-84d7-a273298ba37a, blockid:
BP-216406264-127.0.0.1-1453767164845:blk_1073741827_1003, duration:
909236
2016-01-25 21:14:16,364 INFO
org.apache.hadoop.hdfs.server.datanode.DataNode.clienttrace: src:
/192.168.0.10:50010, dest: /192.168.0.10:58674, bytes: 78569, op:
HDFS_READ, cliID: DFSClient_NONMAPREDUCE_76231625_102, offset: 0,
srvID: 6b4236c8-2183-49ba-84d7-a273298ba37a, blockid:
BP-216406264-127.0.0.1-1453767164845:blk_1073741829_1005, duration:
1489437
2016-01-25 21:14:20,174 INFO
org.apache.hadoop.hdfs.server.datanode.DataNode.clienttrace: src:
/192.168.0.10:50010, dest: /192.168.0.10:58676, bytes: 30, op:
HDFS_READ, cliID: DFSClient_NONMAPREDUCE_694066886_1, offset: 0,
srvID: 6b4236c8-2183-49ba-84d7-a273298ba37a, blockid:
BP-216406264-127.0.0.1-1453767164845:blk_1073741828_1004, duration:
899980
2016-01-25 21:14:22,692 INFO
org.apache.hadoop.hdfs.server.datanode.DataNode: Receiving
BP-216406264-127.0.0.1-1453767164845:blk_1073741830_1006 src:
/192.168.0.10:58679 dest: /192.168.0.10:50010
2016-01-25 21:14:22,754 INFO
org.apache.hadoop.hdfs.server.datanode.DataNode.clienttrace: src:
/192.168.0.10:58679, dest: /192.168.0.10:50010, bytes: 92684, op:
HDFS_WRITE, cliID: DFSClient_NONMAPREDUCE_694066886_1, offset: 0,
srvID: 6b4236c8-2183-49ba-84d7-a273298ba37a, blockid:
BP-216406264-127.0.0.1-1453767164845:blk_1073741830_1006, duration:
60114851
2016-01-25 21:14:22,754 INFO
org.apache.hadoop.hdfs.server.datanode.DataNode: PacketResponder:
BP-216406264-127.0.0.1-1453767164845:blk_1073741830_1006,
type=LAST_IN_PIPELINE, downstreams=0:[] terminating
2016-01-25 21:14:24,319 INFO
org.apache.hadoop.hdfs.server.datanode.BlockPoolSliceScanner:
Verification succeeded for
BP-216406264-127.0.0.1-1453767164845:blk_1073741830_1006
2016-01-25 21:14:25,808 INFO
org.apache.hadoop.hdfs.server.datanode.DataNode: Receiving
BP-216406264-127.0.0.1-1453767164845:blk_1073741831_1007 src:
/192.168.0.10:58681 dest: /192.168.0.10:50010
2016-01-25 21:14:35,846 INFO
org.apache.hadoop.hdfs.server.datanode.DataNode.clienttrace: src:
/192.168.0.10:58681, dest: /192.168.0.10:50010, bytes: 21176, op:
HDFS_WRITE, cliID: DFSClient_NONMAPREDUCE_694066886_1, offset: 0,
srvID: 6b4236c8-2183-49ba-84d7-a273298ba37a, blockid:
BP-216406264-127.0.0.1-1453767164845:blk_1073741831_1007, duration:
9975409048
2016-01-25 21:14:35,846 INFO
org.apache.hadoop.hdfs.server.datanode.DataNode: PacketResponder:
BP-216406264-127.0.0.1-1453767164845:blk_1073741831_1007,
type=LAST_IN_PIPELINE, downstreams=0:[] terminating
2016-01-25 21:14:36,066 INFO
org.apache.hadoop.hdfs.server.datanode.DataNode: Receiving
BP-216406264-127.0.0.1-1453767164845:blk_1073741832_1008 src:
/192.168.0.10:58682 dest: /192.168.0.10:50010
2016-01-25 21:14:36,075 INFO
org.apache.hadoop.hdfs.server.datanode.DataNode.clienttrace: src:
/192.168.0.10:58682, dest: /192.168.0.10:50010, bytes: 332, op:
HDFS_WRITE, cliID: DFSClient_NONMAPREDUCE_694066886_1, offset: 0,
srvID: 6b4236c8-2183-49ba-84d7-a273298ba37a, blockid:
BP-216406264-127.0.0.1-1453767164845:blk_1073741832_1008, duration:
4992595
2016-01-25 21:14:36,075 INFO
org.apache.hadoop.hdfs.server.datanode.DataNode: PacketResponder:
BP-216406264-127.0.0.1-1453767164845:blk_1073741832_1008,
type=LAST_IN_PIPELINE, downstreams=0:[] terminating
2016-01-25 21:14:36,548 INFO
org.apache.hadoop.hdfs.server.datanode.DataNode.clienttrace: src:
/192.168.0.10:50010, dest: /192.168.0.10:58683, bytes: 21344, op:
HDFS_READ, cliID: DFSClient_NONMAPREDUCE_694066886_1, offset: 0,
srvID: 6b4236c8-2183-49ba-84d7-a273298ba37a, blockid:
BP-216406264-127.0.0.1-1453767164845:blk_1073741831_1007, duration:
497225
2016-01-25 21:14:36,564 INFO
org.apache.hadoop.hdfs.server.datanode.DataNode: Receiving
BP-216406264-127.0.0.1-1453767164845:blk_1073741833_1009 src:
/192.168.0.10:58684 dest: /192.168.0.10:50010
2016-01-25 21:14:36,572 INFO
org.apache.hadoop.hdfs.server.datanode.DataNode.clienttrace: src:
/192.168.0.10:58684, dest: /192.168.0.10:50010, bytes: 21176, op:
HDFS_WRITE, cliID: DFSClient_NONMAPREDUCE_694066886_1, offset: 0,
srvID: 6b4236c8-2183-49ba-84d7-a273298ba37a, blockid:
BP-216406264-127.0.0.1-1453767164845:blk_1073741833_1009, duration:
2649337
2016-01-25 21:14:36,573 INFO
org.apache.hadoop.hdfs.server.datanode.DataNode: PacketResponder:
BP-216406264-127.0.0.1-1453767164845:blk_1073741833_1009,
type=LAST_IN_PIPELINE, downstreams=0:[] terminating
2016-01-25 21:14:36,622 INFO
org.apache.hadoop.hdfs.server.datanode.DataNode.clienttrace: src:
/192.168.0.10:50010, dest: /192.168.0.10:58683, bytes: 93412, op:
HDFS_READ, cliID: DFSClient_NONMAPREDUCE_694066886_1, offset: 0,
srvID: 6b4236c8-2183-49ba-84d7-a273298ba37a, blockid:
BP-216406264-127.0.0.1-1453767164845:blk_1073741830_1006, duration:
379439
2016-01-25 21:14:36,638 INFO
org.apache.hadoop.hdfs.server.datanode.DataNode: Receiving
BP-216406264-127.0.0.1-1453767164845:blk_1073741834_1010 src:
/192.168.0.10:58685 dest: /192.168.0.10:50010
2016-01-25 21:14:36,646 INFO
org.apache.hadoop.hdfs.server.datanode.DataNode.clienttrace: src:
/192.168.0.10:58685, dest: /192.168.0.10:50010, bytes: 92684, op:
HDFS_WRITE, cliID: DFSClient_NONMAPREDUCE_694066886_1, offset: 0,
srvID: 6b4236c8-2183-49ba-84d7-a273298ba37a, blockid:
BP-216406264-127.0.0.1-1453767164845:blk_1073741834_1010, duration:
3135698
2016-01-25 21:14:36,646 INFO
org.apache.hadoop.hdfs.server.datanode.DataNode: PacketResponder:
BP-216406264-127.0.0.1-1453767164845:blk_1073741834_1010,
type=LAST_IN_PIPELINE, downstreams=0:[] terminating
2016-01-25 21:14:39,335 INFO
org.apache.hadoop.hdfs.server.datanode.BlockPoolSliceScanner:
Verification succeeded for
BP-216406264-127.0.0.1-1453767164845:blk_1073741833_1009
2016-01-25 21:14:39,336 INFO
org.apache.hadoop.hdfs.server.datanode.BlockPoolSliceScanner:
Verification succeeded for
BP-216406264-127.0.0.1-1453767164845:blk_1073741831_1007
2016-01-25 21:14:39,337 INFO
org.apache.hadoop.hdfs.server.datanode.BlockPoolSliceScanner:
Verification succeeded for
BP-216406264-127.0.0.1-1453767164845:blk_1073741832_1008
2016-01-25 21:14:39,338 INFO
org.apache.hadoop.hdfs.server.datanode.BlockPoolSliceScanner:
Verification succeeded for
BP-216406264-127.0.0.1-1453767164845:blk_1073741834_1010
2016-01-25 21:14:39,376 INFO
org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsDatasetAsyncDiskService:
Scheduling blk_1073741826_1002 file
/usr/local/hadoop/dfs/name/data/current/BP-216406264-127.0.0.1-1453767164845/current/finalized/blk_1073741826
for deletion
2016-01-25 21:14:39,379 INFO
org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsDatasetAsyncDiskService:
Scheduling blk_1073741827_1003 file
/usr/local/hadoop/dfs/name/data/current/BP-216406264-127.0.0.1-1453767164845/current/finalized/blk_1073741827
for deletion
2016-01-25 21:14:39,379 INFO
org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsDatasetAsyncDiskService:
Deleted BP-216406264-127.0.0.1-1453767164845 blk_1073741826_1002 file
/usr/local/hadoop/dfs/name/data/current/BP-216406264-127.0.0.1-1453767164845/current/finalized/blk_1073741826
2016-01-25 21:14:39,379 INFO
org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsDatasetAsyncDiskService:
Scheduling blk_1073741828_1004 file
/usr/local/hadoop/dfs/name/data/current/BP-216406264-127.0.0.1-1453767164845/current/finalized/blk_1073741828
for deletion
2016-01-25 21:14:39,380 INFO
org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsDatasetAsyncDiskService:
Deleted BP-216406264-127.0.0.1-1453767164845 blk_1073741827_1003 file
/usr/local/hadoop/dfs/name/data/current/BP-216406264-127.0.0.1-1453767164845/current/finalized/blk_1073741827
2016-01-25 21:14:39,380 INFO
org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsDatasetAsyncDiskService:
Scheduling blk_1073741829_1005 file
/usr/local/hadoop/dfs/name/data/current/BP-216406264-127.0.0.1-1453767164845/current/finalized/blk_1073741829
for deletion
2016-01-25 21:14:39,380 INFO
org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsDatasetAsyncDiskService:
Deleted BP-216406264-127.0.0.1-1453767164845 blk_1073741828_1004 file
/usr/local/hadoop/dfs/name/data/current/BP-216406264-127.0.0.1-1453767164845/current/finalized/blk_1073741828
2016-01-25 21:14:39,380 INFO
org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsDatasetAsyncDiskService:
Scheduling blk_1073741830_1006 file
/usr/local/hadoop/dfs/name/data/current/BP-216406264-127.0.0.1-1453767164845/current/finalized/blk_1073741830
for deletion
2016-01-25 21:14:39,381 INFO
org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsDatasetAsyncDiskService:
Deleted BP-216406264-127.0.0.1-1453767164845 blk_1073741829_1005 file
/usr/local/hadoop/dfs/name/data/current/BP-216406264-127.0.0.1-1453767164845/current/finalized/blk_1073741829
2016-01-25 21:14:39,381 INFO
org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsDatasetAsyncDiskService:
Scheduling blk_1073741831_1007 file
/usr/local/hadoop/dfs/name/data/current/BP-216406264-127.0.0.1-1453767164845/current/finalized/blk_1073741831
for deletion
2016-01-25 21:14:39,382 INFO
org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsDatasetAsyncDiskService:
Deleted BP-216406264-127.0.0.1-1453767164845 blk_1073741830_1006 file
/usr/local/hadoop/dfs/name/data/current/BP-216406264-127.0.0.1-1453767164845/current/finalized/blk_1073741830
2016-01-25 21:14:39,382 INFO
org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsDatasetAsyncDiskService:
Deleted BP-216406264-127.0.0.1-1453767164845 blk_1073741831_1007 file
/usr/local/hadoop/dfs/name/data/current/BP-216406264-127.0.0.1-1453767164845/current/finalized/blk_1073741831
2016-01-25 21:14:44,797 INFO
org.apache.hadoop.hdfs.server.datanode.DataNode: Receiving
BP-216406264-127.0.0.1-1453767164845:blk_1073741835_1011 src:
/192.168.0.10:58688 dest: /192.168.0.10:50010
2016-01-25 21:14:44,834 INFO
org.apache.hadoop.hdfs.server.datanode.DataNode.clienttrace: src:
/192.168.0.10:58688, dest: /192.168.0.10:50010, bytes: 57450, op:
HDFS_WRITE, cliID: DFSClient_NONMAPREDUCE_-433405715_88, offset: 0,
srvID: 6b4236c8-2183-49ba-84d7-a273298ba37a, blockid:
BP-216406264-127.0.0.1-1453767164845:blk_1073741835_1011, duration:
34522284
2016-01-25 21:14:44,834 INFO
org.apache.hadoop.hdfs.server.datanode.DataNode: PacketResponder:
BP-216406264-127.0.0.1-1453767164845:blk_1073741835_1011,
type=LAST_IN_PIPELINE, downstreams=0:[] terminating
2016-01-25 21:14:49,343 INFO
org.apache.hadoop.hdfs.server.datanode.BlockPoolSliceScanner:
Verification succeeded for
BP-216406264-127.0.0.1-1453767164845:blk_1073741835_1011
2016-01-25 21:16:33,785 INFO
org.apache.hadoop.hdfs.server.datanode.DataNode.clienttrace: src:
/192.168.0.10:50010, dest: /192.168.0.10:58694, bytes: 336, op:
HDFS_READ, cliID: DFSClient_NONMAPREDUCE_-1832227986_1, offset: 0,
srvID: 6b4236c8-2183-49ba-84d7-a273298ba37a, blockid:
BP-216406264-127.0.0.1-1453767164845:blk_1073741832_1008, duration:
284719
2016-01-25 21:16:36,371 INFO
org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsDatasetAsyncDiskService:
Scheduling blk_1073741832_1008 file
/usr/local/hadoop/dfs/name/data/current/BP-216406264-127.0.0.1-1453767164845/current/finalized/blk_1073741832
for deletion
2016-01-25 21:16:36,372 INFO
org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsDatasetAsyncDiskService:
Deleted BP-216406264-127.0.0.1-1453767164845 blk_1073741832_1008 file
/usr/local/hadoop/dfs/name/data/current/BP-216406264-127.0.0.1-1453767164845/current/finalized/blk_1073741832




2016-01-21 18:52 GMT-03:00 Namikaze Minato <ll...@gmail.com>:

> It could be a classpath issue (see
> http://stackoverflow.com/a/25090151/4486184); I strongly suspect
> this is the case.
>
> You could drill down to the exact root cause by looking at the
> datanode logs (see
>
> http://mail-archives.apache.org/mod_mbox/hadoop-user/201410.mbox/%3CCAEMetGubzq12LXbLRk6N4ejOoKse9dLEWMW8_WE6aRj=+RQtVw@mail.gmail.com%3E
> )
> But I'm not sure we would see an error different from what we had...
>
> Check if your application has the correct values for the following
> variables:
> HADOOP_CONF_DIR
> HADOOP_COMMON_HOME
> HADOOP_HDFS_HOME
> HADOOP_MAPRED_HOME
> HADOOP_YARN_HOME
>
> I'm afraid I can't help you much more than this myself, sorry...
>
> LLoyd
>
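As a quick way to work through LLoyd's checklist (and the classpath suggestion
above), a minimal sketch like the following can be run on the node. It only
assumes the /usr/local/hadoop layout that appears in the logs, so adjust the
paths for your installation:

    # Print the variables LLoyd mentions; empty output means the variable is unset.
    for v in HADOOP_CONF_DIR HADOOP_COMMON_HOME HADOOP_HDFS_HOME \
             HADOOP_MAPRED_HOME HADOOP_YARN_HOME; do
        printf '%-20s = %s\n' "$v" "$(printenv "$v")"
    done

    # Show the classpath the hadoop/yarn launchers will actually use.
    /usr/local/hadoop/bin/hadoop classpath
    /usr/local/hadoop/bin/yarn classpath

    # If something is unset, the usual place to export it is
    # etc/hadoop/hadoop-env.sh (and yarn-env.sh), e.g.:
    #   export HADOOP_CONF_DIR=/usr/local/hadoop/etc/hadoop

If the printed classpath does not include the share/hadoop/mapreduce
directories, that would point at the classpath problem mentioned above.
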
> On 20 January 2016 at 02:08, José Luis Larroque <la...@gmail.com>
> wrote:
> > Hi guys, thanks for your answers.
> >
> > Wordcount logs:
> >
> > 16/01/19 21:58:32 INFO client.RMProxy: Connecting to ResourceManager at
> > hdnode01/192.168.0.10:8050
> > SLF4J: Class path contains multiple SLF4J bindings.
> > SLF4J: Found binding in
> >
> [jar:file:/usr/local/hadoop/share/hadoop/common/lib/slf4j-log4j12-1.7.5.jar!/org/slf4j/impl/StaticLoggerBinder.class]
> > SLF4J: Found binding in
> >
> [jar:file:/usr/local/hadoop/share/hadoop/yarn/lib/giraph-examples-1.1.0-for-hadoop-2.4.0-jar-with-dependencies.jar!/org/slf4j/impl/StaticLoggerBinder.class]
> > SLF4J: Found binding in
> >
> [jar:file:/usr/local/hadoop/share/hadoop/mapreduce/giraph-examples-1.1.0-for-hadoop-2.4.0-jar-with-dependencies.jar!/org/slf4j/impl/StaticLoggerBinder.class]
> > SLF4J: See http://www.slf4j.org/codes.html#multiple_bindings for an
> > explanation.
> > SLF4J: Actual binding is of type [org.slf4j.impl.Log4jLoggerFactory]
> > 16/01/19 21:58:32 WARN util.NativeCodeLoader: Unable to load
> native-hadoop
> > library for your platform... using builtin-java classes where applicable
> > hduser@jose-ubuntu:/usr/local/hadoop$ nano
> >
> /home/hduser/Desktop/Tesina/casos_de_prueba/resultados/resultado_cluster_modo_yarn_wordcount
> > hduser@jose-ubuntu:/usr/local/hadoop$ cat
> >
> /home/hduser/Desktop/Tesina/casos_de_prueba/resultados/resultado_cluster_modo_yarn_wordcount
> >
> >
> > Container: container_1453244277886_0001_01_000002 on localhost_35711
> > ======================================================================
> > LogType: stderr
> > LogLength: 45
> > Log Contents:
> > Error: Could not find or load main class 256
> >
> > LogType: stdout
> > LogLength: 0
> > Log Contents:
> >
> >
> >
> > Container: container_1453244277886_0001_01_000003 on localhost_35711
> > ======================================================================
> > LogType: stderr
> > LogLength: 45
> > Log Contents:
> > Error: Could not find or load main class 256
> >
> > LogType: stdout
> > LogLength: 0
> > Log Contents:
> >
> >
> >
> > Container: container_1453244277886_0001_01_000004 on localhost_35711
> > ======================================================================
> > LogType: stderr
> > LogLength: 45
> > Log Contents:
> > Error: Could not find or load main class 256
> >
> > LogType: stdout
> > LogLength: 0
> > Log Contents:
> >
> >
> >
> > Container: container_1453244277886_0001_01_000005 on localhost_35711
> > ======================================================================
> > LogType: stderr
> > LogLength: 45
> > Log Contents:
> > Error: Could not find or load main class 256
> >
> > LogType: stdout
> > LogLength: 0
> > Log Contents:
> >
> >
> >
> > Container: container_1453244277886_0001_01_000001 on localhost_35711
> > ======================================================================
> > LogType: stderr
> > LogLength: 929
> > Log Contents:
> > SLF4J: Class path contains multiple SLF4J bindings.
> > SLF4J: Found binding in
> >
> [jar:file:/usr/local/hadoop/share/hadoop/common/lib/slf4j-log4j12-1.7.5.jar!/org/slf4j/impl/StaticLoggerBinder.class]
> > SLF4J: Found binding in
> >
> [jar:file:/usr/local/hadoop/share/hadoop/yarn/lib/giraph-examples-1.1.0-for-hadoop-2.4.0-jar-with-dependencies.jar!/org/slf4j/impl/StaticLoggerBinder.class]
> > SLF4J: Found binding in
> >
> [jar:file:/usr/local/hadoop/share/hadoop/mapreduce/giraph-examples-1.1.0-for-hadoop-2.4.0-jar-with-dependencies.jar!/org/slf4j/impl/StaticLoggerBinder.class]
> > SLF4J: See http://www.slf4j.org/codes.html#multiple_bindings for an
> > explanation.
> > SLF4J: Actual binding is of type [org.slf4j.impl.Log4jLoggerFactory]
> > log4j:WARN No appenders could be found for logger
> > (org.apache.hadoop.ipc.Server).
> > log4j:WARN Please initialize the log4j system properly.
> > log4j:WARN See http://logging.apache.org/log4j/1.2/faq.html#noconfig for
> > more info.
> >
> > LogType: stdout
> > LogLength: 0
> > Log Contents:
> >
> > LogType: syslog
> > LogLength: 56780
> > Log Contents:
> > 2016-01-19 20:04:11,329 INFO [main]
> > org.apache.hadoop.mapreduce.v2.app.MRAppMaster: Created MRAppMaster for
> > application appattempt_1453244277886_0001_000001
> > 2016-01-19 20:04:11,657 WARN [main] org.apache.hadoop.conf.Configuration:
> > job.xml:an attempt to override final parameter:
> > mapreduce.job.end-notification.max.retry.interval;  Ignoring.
> > 2016-01-19 20:04:11,674 WARN [main] org.apache.hadoop.conf.Configuration:
> > job.xml:an attempt to override final parameter:
> > mapreduce.job.end-notification.max.attempts;  Ignoring.
> > 2016-01-19 20:04:11,765 WARN [main]
> org.apache.hadoop.util.NativeCodeLoader:
> > Unable to load native-hadoop library for your platform... using
> builtin-java
> > classes where applicable
> > 2016-01-19 20:04:11,776 INFO [main]
> > org.apache.hadoop.mapreduce.v2.app.MRAppMaster: Executing with tokens:
> > 2016-01-19 20:04:11,776 INFO [main]
> > org.apache.hadoop.mapreduce.v2.app.MRAppMaster: Kind: YARN_AM_RM_TOKEN,
> > Service: , Ident:
> > (org.apache.hadoop.yarn.security.AMRMTokenIdentifier@73e8f4b9)
> > 2016-01-19 20:04:11,801 INFO [main]
> > org.apache.hadoop.mapreduce.v2.app.MRAppMaster: The specific max
> attempts: 2
> > for application: 1. Attempt num: 1 is last retry: false
> > 2016-01-19 20:04:11,806 INFO [main]
> > org.apache.hadoop.mapreduce.v2.app.MRAppMaster: Using mapred
> > newApiCommitter.
> > 2016-01-19 20:04:11,934 WARN [main] org.apache.hadoop.conf.Configuration:
> > job.xml:an attempt to override final parameter: dfs.datanode.data.dir;
> > Ignoring.
> > 2016-01-19 20:04:11,939 WARN [main] org.apache.hadoop.conf.Configuration:
> > job.xml:an attempt to override final parameter:
> > mapreduce.job.end-notification.max.retry.interval;  Ignoring.
> > 2016-01-19 20:04:11,948 WARN [main] org.apache.hadoop.conf.Configuration:
> > job.xml:an attempt to override final parameter: dfs.namenode.name.dir;
> > Ignoring.
> > 2016-01-19 20:04:11,953 WARN [main] org.apache.hadoop.conf.Configuration:
> > job.xml:an attempt to override final parameter:
> > mapreduce.job.end-notification.max.attempts;  Ignoring.
> > 2016-01-19 20:04:12,464 INFO [main]
> > org.apache.hadoop.mapreduce.v2.app.MRAppMaster: OutputCommitter set in
> > config null
> > 2016-01-19 20:04:12,526 INFO [main]
> > org.apache.hadoop.mapreduce.v2.app.MRAppMaster: OutputCommitter is
> > org.apache.hadoop.mapreduce.lib.output.FileOutputCommitter
> > 2016-01-19 20:04:12,548 INFO [main]
> > org.apache.hadoop.yarn.event.AsyncDispatcher: Registering class
> > org.apache.hadoop.mapreduce.jobhistory.EventType for class
> > org.apache.hadoop.mapreduce.jobhistory.JobHistoryEventHandler
> > 2016-01-19 20:04:12,549 INFO [main]
> > org.apache.hadoop.yarn.event.AsyncDispatcher: Registering class
> > org.apache.hadoop.mapreduce.v2.app.job.event.JobEventType for class
> > org.apache.hadoop.mapreduce.v2.app.MRAppMaster$JobEventDispatcher
> > 2016-01-19 20:04:12,550 INFO [main]
> > org.apache.hadoop.yarn.event.AsyncDispatcher: Registering class
> > org.apache.hadoop.mapreduce.v2.app.job.event.TaskEventType for class
> > org.apache.hadoop.mapreduce.v2.app.MRAppMaster$TaskEventDispatcher
> > 2016-01-19 20:04:12,551 INFO [main]
> > org.apache.hadoop.yarn.event.AsyncDispatcher: Registering class
> > org.apache.hadoop.mapreduce.v2.app.job.event.TaskAttemptEventType for
> class
> > org.apache.hadoop.mapreduce.v2.app.MRAppMaster$TaskAttemptEventDispatcher
> > 2016-01-19 20:04:12,552 INFO [main]
> > org.apache.hadoop.yarn.event.AsyncDispatcher: Registering class
> > org.apache.hadoop.mapreduce.v2.app.commit.CommitterEventType for class
> > org.apache.hadoop.mapreduce.v2.app.commit.CommitterEventHandler
> > 2016-01-19 20:04:12,557 INFO [main]
> > org.apache.hadoop.yarn.event.AsyncDispatcher: Registering class
> > org.apache.hadoop.mapreduce.v2.app.speculate.Speculator$EventType for
> class
> > org.apache.hadoop.mapreduce.v2.app.MRAppMaster$SpeculatorEventDispatcher
> > 2016-01-19 20:04:12,558 INFO [main]
> > org.apache.hadoop.yarn.event.AsyncDispatcher: Registering class
> > org.apache.hadoop.mapreduce.v2.app.rm.ContainerAllocator$EventType for
> class
> > org.apache.hadoop.mapreduce.v2.app.MRAppMaster$ContainerAllocatorRouter
> > 2016-01-19 20:04:12,559 INFO [main]
> > org.apache.hadoop.yarn.event.AsyncDispatcher: Registering class
> > org.apache.hadoop.mapreduce.v2.app.launcher.ContainerLauncher$EventType
> for
> > class
> org.apache.hadoop.mapreduce.v2.app.MRAppMaster$ContainerLauncherRouter
> > 2016-01-19 20:04:12,615 INFO [main]
> > org.apache.hadoop.mapreduce.jobhistory.JobHistoryEventHandler: Perms
> after
> > creating 488, Expected: 504
> > 2016-01-19 20:04:12,615 INFO [main]
> > org.apache.hadoop.mapreduce.jobhistory.JobHistoryEventHandler: Explicitly
> > setting permissions to : 504, rwxrwx---
> > 2016-01-19 20:04:12,731 INFO [main]
> > org.apache.hadoop.yarn.event.AsyncDispatcher: Registering class
> > org.apache.hadoop.mapreduce.v2.app.job.event.JobFinishEvent$Type for
> class
> > org.apache.hadoop.mapreduce.v2.app.MRAppMaster$JobFinishEventHandler
> > 2016-01-19 20:04:12,956 INFO [main]
> > org.apache.hadoop.metrics2.impl.MetricsConfig: loaded properties from
> > hadoop-metrics2.properties
> > 2016-01-19 20:04:13,018 INFO [main]
> > org.apache.hadoop.metrics2.impl.MetricsSystemImpl: Scheduled snapshot
> period
> > at 10 second(s).
> > 2016-01-19 20:04:13,018 INFO [main]
> > org.apache.hadoop.metrics2.impl.MetricsSystemImpl: MRAppMaster metrics
> > system started
> > 2016-01-19 20:04:13,026 INFO [main]
> > org.apache.hadoop.mapreduce.v2.app.job.impl.JobImpl: Adding job token for
> > job_1453244277886_0001 to jobTokenSecretManager
> > 2016-01-19 20:04:13,139 INFO [main]
> > org.apache.hadoop.mapreduce.v2.app.job.impl.JobImpl: Not uberizing
> > job_1453244277886_0001 because: not enabled;
> > 2016-01-19 20:04:13,154 INFO [main]
> > org.apache.hadoop.mapreduce.v2.app.job.impl.JobImpl: Input size for job
> > job_1453244277886_0001 = 343691. Number of splits = 1
> > 2016-01-19 20:04:13,156 INFO [main]
> > org.apache.hadoop.mapreduce.v2.app.job.impl.JobImpl: Number of reduces
> for
> > job job_1453244277886_0001 = 1
> > 2016-01-19 20:04:13,156 INFO [main]
> > org.apache.hadoop.mapreduce.v2.app.job.impl.JobImpl:
> > job_1453244277886_0001Job Transitioned from NEW to INITED
> > 2016-01-19 20:04:13,157 INFO [main]
> > org.apache.hadoop.mapreduce.v2.app.MRAppMaster: MRAppMaster launching
> > normal, non-uberized, multi-container job job_1453244277886_0001.
> > 2016-01-19 20:04:13,186 INFO [main]
> org.apache.hadoop.ipc.CallQueueManager:
> > Using callQueue class java.util.concurrent.LinkedBlockingQueue
> > 2016-01-19 20:04:13,195 INFO [Socket Reader #1 for port 56461]
> > org.apache.hadoop.ipc.Server: Starting Socket Reader #1 for port 56461
> > 2016-01-19 20:04:13,237 INFO [main]
> > org.apache.hadoop.yarn.factories.impl.pb.RpcServerFactoryPBImpl: Adding
> > protocol org.apache.hadoop.mapreduce.v2.api.MRClientProtocolPB to the
> server
> > 2016-01-19 20:04:13,238 INFO [IPC Server Responder]
> > org.apache.hadoop.ipc.Server: IPC Server Responder: starting
> > 2016-01-19 20:04:13,238 INFO [IPC Server listener on 56461]
> > org.apache.hadoop.ipc.Server: IPC Server listener on 56461: starting
> > 2016-01-19 20:04:13,239 INFO [main]
> > org.apache.hadoop.mapreduce.v2.app.client.MRClientService: Instantiated
> > MRClientService at jose-ubuntu/127.0.0.1:56461
> > 2016-01-19 20:04:13,300 INFO [main] org.mortbay.log: Logging to
> > org.slf4j.impl.Log4jLoggerAdapter(org.mortbay.log) via
> > org.mortbay.log.Slf4jLog
> > 2016-01-19 20:04:13,304 INFO [main]
> org.apache.hadoop.http.HttpRequestLog:
> > Http request log for http.requests.mapreduce is not defined
> > 2016-01-19 20:04:13,315 INFO [main] org.apache.hadoop.http.HttpServer2:
> > Added global filter 'safety'
> > (class=org.apache.hadoop.http.HttpServer2$QuotingInputFilter)
> > 2016-01-19 20:04:13,320 INFO [main] org.apache.hadoop.http.HttpServer2:
> > Added filter AM_PROXY_FILTER
> > (class=org.apache.hadoop.yarn.server.webproxy.amfilter.AmIpFilter) to
> > context mapreduce
> > 2016-01-19 20:04:13,321 INFO [main] org.apache.hadoop.http.HttpServer2:
> > Added filter AM_PROXY_FILTER
> > (class=org.apache.hadoop.yarn.server.webproxy.amfilter.AmIpFilter) to
> > context static
> > 2016-01-19 20:04:13,324 INFO [main] org.apache.hadoop.http.HttpServer2:
> > adding path spec: /mapreduce/*
> > 2016-01-19 20:04:13,324 INFO [main] org.apache.hadoop.http.HttpServer2:
> > adding path spec: /ws/*
> > 2016-01-19 20:04:13,335 INFO [main] org.apache.hadoop.http.HttpServer2:
> > Jetty bound to port 44070
> > 2016-01-19 20:04:13,335 INFO [main] org.mortbay.log: jetty-6.1.26
> > 2016-01-19 20:04:13,370 INFO [main] org.mortbay.log: Extract
> >
> jar:file:/usr/local/hadoop/share/hadoop/yarn/hadoop-yarn-common-2.4.0.jar!/webapps/mapreduce
> > to /tmp/Jetty_0_0_0_0_44070_mapreduce____rdpvio/webapp
> > 2016-01-19 20:04:13,647 INFO [main] org.mortbay.log: Started
> > SelectChannelConnector@0.0.0.0:44070
> > 2016-01-19 20:04:13,647 INFO [main]
> org.apache.hadoop.yarn.webapp.WebApps:
> > Web app /mapreduce started at 44070
> > 2016-01-19 20:04:13,956 INFO [main]
> org.apache.hadoop.yarn.webapp.WebApps:
> > Registered webapp guice modules
> > 2016-01-19 20:04:13,960 INFO [AsyncDispatcher event handler]
> > org.apache.hadoop.mapreduce.v2.app.speculate.DefaultSpeculator:
> JOB_CREATE
> > job_1453244277886_0001
> > 2016-01-19 20:04:13,961 INFO [main]
> org.apache.hadoop.ipc.CallQueueManager:
> > Using callQueue class java.util.concurrent.LinkedBlockingQueue
> > 2016-01-19 20:04:13,961 INFO [Socket Reader #1 for port 45584]
> > org.apache.hadoop.ipc.Server: Starting Socket Reader #1 for port 45584
> > 2016-01-19 20:04:13,966 INFO [IPC Server Responder]
> > org.apache.hadoop.ipc.Server: IPC Server Responder: starting
> > 2016-01-19 20:04:13,966 INFO [IPC Server listener on 45584]
> > org.apache.hadoop.ipc.Server: IPC Server listener on 45584: starting
> > 2016-01-19 20:04:13,987 INFO [main]
> > org.apache.hadoop.mapreduce.v2.app.rm.RMContainerRequestor:
> > nodeBlacklistingEnabled:true
> > 2016-01-19 20:04:13,987 INFO [main]
> > org.apache.hadoop.mapreduce.v2.app.rm.RMContainerRequestor:
> > maxTaskFailuresPerNode is 3
> > 2016-01-19 20:04:13,988 INFO [main]
> > org.apache.hadoop.mapreduce.v2.app.rm.RMContainerRequestor:
> > blacklistDisablePercent is 33
> > 2016-01-19 20:04:14,052 WARN [main] org.apache.hadoop.conf.Configuration:
> > job.xml:an attempt to override final parameter: dfs.datanode.data.dir;
> > Ignoring.
> > 2016-01-19 20:04:14,054 WARN [main] org.apache.hadoop.conf.Configuration:
> > job.xml:an attempt to override final parameter:
> > mapreduce.job.end-notification.max.retry.interval;  Ignoring.
> > 2016-01-19 20:04:14,057 WARN [main] org.apache.hadoop.conf.Configuration:
> > job.xml:an attempt to override final parameter: dfs.namenode.name.dir;
> > Ignoring.
> > 2016-01-19 20:04:14,059 WARN [main] org.apache.hadoop.conf.Configuration:
> > job.xml:an attempt to override final parameter:
> > mapreduce.job.end-notification.max.attempts;  Ignoring.
> > 2016-01-19 20:04:14,062 INFO [main]
> org.apache.hadoop.yarn.client.RMProxy:
> > Connecting to ResourceManager at hdnode01/192.168.0.10:8030
> > 2016-01-19 20:04:14,158 INFO [main]
> > org.apache.hadoop.mapreduce.v2.app.rm.RMContainerAllocator:
> > maxContainerCapability: 2000
> > 2016-01-19 20:04:14,158 INFO [main]
> > org.apache.hadoop.mapreduce.v2.app.rm.RMContainerAllocator: queue:
> default
> > 2016-01-19 20:04:14,162 INFO [main]
> > org.apache.hadoop.mapreduce.v2.app.launcher.ContainerLauncherImpl: Upper
> > limit on the thread pool size is 500
> > 2016-01-19 20:04:14,164 INFO [main]
> > org.apache.hadoop.yarn.client.api.impl.ContainerManagementProtocolProxy:
> > yarn.client.max-nodemanagers-proxies : 500
> > 2016-01-19 20:04:14,172 INFO [AsyncDispatcher event handler]
> > org.apache.hadoop.mapreduce.v2.app.job.impl.JobImpl:
> > job_1453244277886_0001Job Transitioned from INITED to SETUP
> > 2016-01-19 20:04:14,174 INFO [CommitterEvent Processor #0]
> > org.apache.hadoop.mapreduce.v2.app.commit.CommitterEventHandler:
> Processing
> > the event EventType: JOB_SETUP
> > 2016-01-19 20:04:14,210 INFO [AsyncDispatcher event handler]
> > org.apache.hadoop.mapreduce.v2.app.job.impl.JobImpl:
> > job_1453244277886_0001Job Transitioned from SETUP to RUNNING
> > 2016-01-19 20:04:14,227 INFO [AsyncDispatcher event handler]
> > org.apache.hadoop.yarn.util.RackResolver: Resolved localhost to
> > /default-rack
> > 2016-01-19 20:04:14,230 INFO [AsyncDispatcher event handler]
> > org.apache.hadoop.mapreduce.v2.app.job.impl.TaskImpl:
> > task_1453244277886_0001_m_000000 Task Transitioned from NEW to SCHEDULED
> > 2016-01-19 20:04:14,230 INFO [AsyncDispatcher event handler]
> > org.apache.hadoop.mapreduce.v2.app.job.impl.TaskImpl:
> > task_1453244277886_0001_r_000000 Task Transitioned from NEW to SCHEDULED
> > 2016-01-19 20:04:14,232 INFO [AsyncDispatcher event handler]
> > org.apache.hadoop.mapreduce.v2.app.job.impl.TaskAttemptImpl:
> > attempt_1453244277886_0001_m_000000_0 TaskAttempt Transitioned from NEW
> to
> > UNASSIGNED
> > 2016-01-19 20:04:14,232 INFO [AsyncDispatcher event handler]
> > org.apache.hadoop.mapreduce.v2.app.job.impl.TaskAttemptImpl:
> > attempt_1453244277886_0001_r_000000_0 TaskAttempt Transitioned from NEW
> to
> > UNASSIGNED
> > 2016-01-19 20:04:14,233 INFO [Thread-50]
> > org.apache.hadoop.mapreduce.v2.app.rm.RMContainerAllocator:
> > mapResourceReqt:512
> > 2016-01-19 20:04:14,245 INFO [Thread-50]
> > org.apache.hadoop.mapreduce.v2.app.rm.RMContainerAllocator:
> > reduceResourceReqt:512
> > 2016-01-19 20:04:14,324 INFO [eventHandlingThread]
> > org.apache.hadoop.mapreduce.jobhistory.JobHistoryEventHandler: Event
> Writer
> > setup for JobId: job_1453244277886_0001, File:
> >
> hdfs://hdnode01:54310/tmp/hadoop-yarn/staging/hduser/.staging/job_1453244277886_0001/job_1453244277886_0001_1.jhist
> > 2016-01-19 20:04:15,162 INFO [RMCommunicator Allocator]
> > org.apache.hadoop.mapreduce.v2.app.rm.RMContainerAllocator: Before
> > Scheduling: PendingReds:1 ScheduledMaps:1 ScheduledReds:0 AssignedMaps:0
> > AssignedReds:0 CompletedMaps:0 CompletedReds:0 ContAlloc:0 ContRel:0
> > HostLocal:0 RackLocal:0
> > 2016-01-19 20:04:15,217 INFO [RMCommunicator Allocator]
> > org.apache.hadoop.mapreduce.v2.app.rm.RMContainerRequestor:
> getResources()
> > for application_1453244277886_0001: ask=3 release= 0 newContainers=0
> > finishedContainers=0 resourcelimit=<memory:1280, vCores:0> knownNMs=1
> > 2016-01-19 20:04:15,217 INFO [RMCommunicator Allocator]
> > org.apache.hadoop.mapreduce.v2.app.rm.RMContainerAllocator: Recalculating
> > schedule, headroom=1280
> > 2016-01-19 20:04:15,217 INFO [RMCommunicator Allocator]
> > org.apache.hadoop.mapreduce.v2.app.rm.RMContainerAllocator: Reduce slow
> > start threshold not met. completedMapsForReduceSlowstart 1
> > 2016-01-19 20:04:16,240 INFO [RMCommunicator Allocator]
> > org.apache.hadoop.mapreduce.v2.app.rm.RMContainerAllocator: Got allocated
> > containers 1
> > 2016-01-19 20:04:16,241 INFO [RMCommunicator Allocator]
> > org.apache.hadoop.mapreduce.v2.app.rm.RMContainerAllocator: Assigned
> > container container_1453244277886_0001_01_000002 to
> > attempt_1453244277886_0001_m_000000_0
> > 2016-01-19 20:04:16,243 INFO [RMCommunicator Allocator]
> > org.apache.hadoop.mapreduce.v2.app.rm.RMContainerAllocator: Recalculating
> > schedule, headroom=768
> > 2016-01-19 20:04:16,243 INFO [RMCommunicator Allocator]
> > org.apache.hadoop.mapreduce.v2.app.rm.RMContainerAllocator: Reduce slow
> > start threshold not met. completedMapsForReduceSlowstart 1
> > 2016-01-19 20:04:16,243 INFO [RMCommunicator Allocator]
> > org.apache.hadoop.mapreduce.v2.app.rm.RMContainerAllocator: After
> > Scheduling: PendingReds:1 ScheduledMaps:0 ScheduledReds:0 AssignedMaps:1
> > AssignedReds:0 CompletedMaps:0 CompletedReds:0 ContAlloc:1 ContRel:0
> > HostLocal:1 RackLocal:0
> > 2016-01-19 20:04:16,291 INFO [AsyncDispatcher event handler]
> > org.apache.hadoop.yarn.util.RackResolver: Resolved localhost to
> > /default-rack
> > 2016-01-19 20:04:16,316 INFO [AsyncDispatcher event handler]
> > org.apache.hadoop.mapreduce.v2.app.job.impl.TaskAttemptImpl: The job-jar
> > file on the remote FS is
> >
> hdfs://hdnode01:54310/tmp/hadoop-yarn/staging/hduser/.staging/job_1453244277886_0001/job.jar
> > 2016-01-19 20:04:16,322 INFO [AsyncDispatcher event handler]
> > org.apache.hadoop.mapreduce.v2.app.job.impl.TaskAttemptImpl: The job-conf
> > file on the remote FS is
> > /tmp/hadoop-yarn/staging/hduser/.staging/job_1453244277886_0001/job.xml
> > 2016-01-19 20:04:16,325 INFO [AsyncDispatcher event handler]
> > org.apache.hadoop.mapreduce.v2.app.job.impl.TaskAttemptImpl: Adding #0
> > tokens and #1 secret keys for NM use for launching container
> > 2016-01-19 20:04:16,325 INFO [AsyncDispatcher event handler]
> > org.apache.hadoop.mapreduce.v2.app.job.impl.TaskAttemptImpl: Size of
> > containertokens_dob is 1
> > 2016-01-19 20:04:16,325 INFO [AsyncDispatcher event handler]
> > org.apache.hadoop.mapreduce.v2.app.job.impl.TaskAttemptImpl: Putting
> shuffle
> > token in serviceData
> > 2016-01-19 20:04:16,350 INFO [AsyncDispatcher event handler]
> > org.apache.hadoop.mapreduce.v2.app.job.impl.TaskAttemptImpl:
> > attempt_1453244277886_0001_m_000000_0 TaskAttempt Transitioned from
> > UNASSIGNED to ASSIGNED
> > 2016-01-19 20:04:16,354 INFO [ContainerLauncher #0]
> > org.apache.hadoop.mapreduce.v2.app.launcher.ContainerLauncherImpl:
> > Processing the event EventType: CONTAINER_REMOTE_LAUNCH for container
> > container_1453244277886_0001_01_000002 taskAttempt
> > attempt_1453244277886_0001_m_000000_0
> > 2016-01-19 20:04:16,356 INFO [ContainerLauncher #0]
> > org.apache.hadoop.mapreduce.v2.app.launcher.ContainerLauncherImpl:
> Launching
> > attempt_1453244277886_0001_m_000000_0
> > 2016-01-19 20:04:16,357 INFO [ContainerLauncher #0]
> > org.apache.hadoop.yarn.client.api.impl.ContainerManagementProtocolProxy:
> > Opening proxy : localhost:35711
> > 2016-01-19 20:04:16,411 INFO [ContainerLauncher #0]
> > org.apache.hadoop.mapreduce.v2.app.launcher.ContainerLauncherImpl:
> Shuffle
> > port returned by ContainerManager for
> attempt_1453244277886_0001_m_000000_0
> > : 13562
> > 2016-01-19 20:04:16,413 INFO [AsyncDispatcher event handler]
> > org.apache.hadoop.mapreduce.v2.app.job.impl.TaskAttemptImpl: TaskAttempt:
> > [attempt_1453244277886_0001_m_000000_0] using containerId:
> > [container_1453244277886_0001_01_000002 on NM: [localhost:35711]
> > 2016-01-19 20:04:16,418 INFO [AsyncDispatcher event handler]
> > org.apache.hadoop.mapreduce.v2.app.job.impl.TaskAttemptImpl:
> > attempt_1453244277886_0001_m_000000_0 TaskAttempt Transitioned from
> ASSIGNED
> > to RUNNING
> > 2016-01-19 20:04:16,419 INFO [AsyncDispatcher event handler]
> > org.apache.hadoop.mapreduce.v2.app.speculate.DefaultSpeculator:
> > ATTEMPT_START task_1453244277886_0001_m_000000
> > 2016-01-19 20:04:16,419 INFO [AsyncDispatcher event handler]
> > org.apache.hadoop.mapreduce.v2.app.job.impl.TaskImpl:
> > task_1453244277886_0001_m_000000 Task Transitioned from SCHEDULED to
> RUNNING
> > 2016-01-19 20:04:17,251 INFO [RMCommunicator Allocator]
> > org.apache.hadoop.mapreduce.v2.app.rm.RMContainerRequestor:
> getResources()
> > for application_1453244277886_0001: ask=3 release= 0 newContainers=0
> > finishedContainers=0 resourcelimit=<memory:768, vCores:-1> knownNMs=1
> > 2016-01-19 20:04:18,269 INFO [RMCommunicator Allocator]
> > org.apache.hadoop.mapreduce.v2.app.rm.RMContainerAllocator: Received
> > completed container container_1453244277886_0001_01_000002
> > 2016-01-19 20:04:18,269 INFO [RMCommunicator Allocator]
> > org.apache.hadoop.mapreduce.v2.app.rm.RMContainerAllocator: Recalculating
> > schedule, headroom=768
> > 2016-01-19 20:04:18,269 INFO [RMCommunicator Allocator]
> > org.apache.hadoop.mapreduce.v2.app.rm.RMContainerAllocator: Reduce slow
> > start threshold not met. completedMapsForReduceSlowstart 1
> > 2016-01-19 20:04:18,270 INFO [RMCommunicator Allocator]
> > org.apache.hadoop.mapreduce.v2.app.rm.RMContainerAllocator: After
> > Scheduling: PendingReds:1 ScheduledMaps:0 ScheduledReds:0 AssignedMaps:0
> > AssignedReds:0 CompletedMaps:0 CompletedReds:0 ContAlloc:1 ContRel:0
> > HostLocal:1 RackLocal:0
> > 2016-01-19 20:04:18,280 INFO [AsyncDispatcher event handler]
> > org.apache.hadoop.mapreduce.v2.app.job.impl.TaskAttemptImpl:
> > attempt_1453244277886_0001_m_000000_0 TaskAttempt Transitioned from
> RUNNING
> > to FAIL_CONTAINER_CLEANUP
> > 2016-01-19 20:04:18,280 INFO [AsyncDispatcher event handler]
> > org.apache.hadoop.mapreduce.v2.app.job.impl.TaskAttemptImpl: Diagnostics
> > report from attempt_1453244277886_0001_m_000000_0: Exception from
> > container-launch: org.apache.hadoop.util.Shell$ExitCodeException:
> > org.apache.hadoop.util.Shell$ExitCodeException:
> >     at org.apache.hadoop.util.Shell.runCommand(Shell.java:505)
> >     at org.apache.hadoop.util.Shell.run(Shell.java:418)
> >     at
> > org.apache.hadoop.util.Shell$ShellCommandExecutor.execute(Shell.java:650)
> >     at
> >
> org.apache.hadoop.yarn.server.nodemanager.DefaultContainerExecutor.launchContainer(DefaultContainerExecutor.java:195)
> >     at
> >
> org.apache.hadoop.yarn.server.nodemanager.containermanager.launcher.ContainerLaunch.call(ContainerLaunch.java:300)
> >     at
> >
> org.apache.hadoop.yarn.server.nodemanager.containermanager.launcher.ContainerLaunch.call(ContainerLaunch.java:81)
> >     at java.util.concurrent.FutureTask.run(FutureTask.java:262)
> >     at
> >
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
> >     at
> >
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
> >     at java.lang.Thread.run(Thread.java:745)
> >
> >
> > Container exited with a non-zero exit code 1
> >
> > 2016-01-19 20:04:18,281 INFO [ContainerLauncher #1]
> > org.apache.hadoop.mapreduce.v2.app.launcher.ContainerLauncherImpl:
> > Processing the event EventType: CONTAINER_REMOTE_CLEANUP for container
> > container_1453244277886_0001_01_000002 taskAttempt
> > attempt_1453244277886_0001_m_000000_0
> > 2016-01-19 20:04:18,281 INFO [ContainerLauncher #1]
> > org.apache.hadoop.mapreduce.v2.app.launcher.ContainerLauncherImpl:
> KILLING
> > attempt_1453244277886_0001_m_000000_0
> > 2016-01-19 20:04:18,299 INFO [AsyncDispatcher event handler]
> > org.apache.hadoop.mapreduce.v2.app.job.impl.TaskAttemptImpl:
> > attempt_1453244277886_0001_m_000000_0 TaskAttempt Transitioned from
> > FAIL_CONTAINER_CLEANUP to FAIL_TASK_CLEANUP
> > 2016-01-19 20:04:18,300 INFO [CommitterEvent Processor #1]
> > org.apache.hadoop.mapreduce.v2.app.commit.CommitterEventHandler:
> Processing
> > the event EventType: TASK_ABORT
> > 2016-01-19 20:04:18,313 WARN [CommitterEvent Processor #1]
> > org.apache.hadoop.mapreduce.lib.output.FileOutputCommitter: Could not
> delete
> >
> hdfs://hdnode01:54310/user/hduser/output/wordcount/_temporary/1/_temporary/attempt_1453244277886_0001_m_000000_0
> > 2016-01-19 20:04:18,317 INFO [AsyncDispatcher event handler]
> > org.apache.hadoop.mapreduce.v2.app.job.impl.TaskAttemptImpl:
> > attempt_1453244277886_0001_m_000000_0 TaskAttempt Transitioned from
> > FAIL_TASK_CLEANUP to FAILED
> > 2016-01-19 20:04:18,326 INFO [AsyncDispatcher event handler]
> > org.apache.hadoop.yarn.util.RackResolver: Resolved localhost to
> > /default-rack
> > 2016-01-19 20:04:18,327 INFO [Thread-50]
> > org.apache.hadoop.mapreduce.v2.app.rm.RMContainerRequestor: 1 failures on
> > node localhost
> > 2016-01-19 20:04:18,329 INFO [AsyncDispatcher event handler]
> > org.apache.hadoop.mapreduce.v2.app.job.impl.TaskAttemptImpl:
> > attempt_1453244277886_0001_m_000000_1 TaskAttempt Transitioned from NEW
> to
> > UNASSIGNED
> > 2016-01-19 20:04:18,329 INFO [Thread-50]
> > org.apache.hadoop.mapreduce.v2.app.rm.RMContainerAllocator: Added
> > attempt_1453244277886_0001_m_000000_1 to list of failed maps
> > 2016-01-19 20:04:19,270 INFO [RMCommunicator Allocator]
> > org.apache.hadoop.mapreduce.v2.app.rm.RMContainerAllocator: Before
> > Scheduling: PendingReds:1 ScheduledMaps:1 ScheduledReds:0 AssignedMaps:0
> > AssignedReds:0 CompletedMaps:0 CompletedReds:0 ContAlloc:1 ContRel:0
> > HostLocal:1 RackLocal:0
> > 2016-01-19 20:04:19,277 INFO [RMCommunicator Allocator]
> > org.apache.hadoop.mapreduce.v2.app.rm.RMContainerRequestor:
> getResources()
> > for application_1453244277886_0001: ask=1 release= 0 newContainers=0
> > finishedContainers=0 resourcelimit=<memory:768, vCores:-1> knownNMs=1
> > 2016-01-19 20:04:19,277 INFO [RMCommunicator Allocator]
> > org.apache.hadoop.mapreduce.v2.app.rm.RMContainerAllocator: Recalculating
> > schedule, headroom=768
> > 2016-01-19 20:04:19,278 INFO [RMCommunicator Allocator]
> > org.apache.hadoop.mapreduce.v2.app.rm.RMContainerAllocator: Reduce slow
> > start threshold not met. completedMapsForReduceSlowstart 1
> > 2016-01-19 20:04:20,285 INFO [RMCommunicator Allocator]
> > org.apache.hadoop.mapreduce.v2.app.rm.RMContainerAllocator: Got allocated
> > containers 1
> > 2016-01-19 20:04:20,285 INFO [RMCommunicator Allocator]
> > org.apache.hadoop.mapreduce.v2.app.rm.RMContainerAllocator: Assigning
> > container Container: [ContainerId:
> container_1453244277886_0001_01_000003,
> > NodeId: localhost:35711, NodeHttpAddress: localhost:8042, Resource:
> > <memory:512, vCores:1>, Priority: 5, Token: Token { kind: ContainerToken,
> > service: 127.0.0.1:35711 }, ] to fast fail map
> > 2016-01-19 20:04:20,286 INFO [RMCommunicator Allocator]
> > org.apache.hadoop.mapreduce.v2.app.rm.RMContainerAllocator: Assigned from
> > earlierFailedMaps
> > 2016-01-19 20:04:20,286 INFO [RMCommunicator Allocator]
> > org.apache.hadoop.mapreduce.v2.app.rm.RMContainerAllocator: Assigned
> > container container_1453244277886_0001_01_000003 to
> > attempt_1453244277886_0001_m_000000_1
> > 2016-01-19 20:04:20,287 INFO [RMCommunicator Allocator]
> > org.apache.hadoop.mapreduce.v2.app.rm.RMContainerAllocator: Recalculating
> > schedule, headroom=768
> > 2016-01-19 20:04:20,287 INFO [RMCommunicator Allocator]
> > org.apache.hadoop.mapreduce.v2.app.rm.RMContainerAllocator: Reduce slow
> > start threshold not met. completedMapsForReduceSlowstart 1
> > 2016-01-19 20:04:20,287 INFO [RMCommunicator Allocator]
> > org.apache.hadoop.mapreduce.v2.app.rm.RMContainerAllocator: After
> > Scheduling: PendingReds:1 ScheduledMaps:0 ScheduledReds:0 AssignedMaps:1
> > AssignedReds:0 CompletedMaps:0 CompletedReds:0 ContAlloc:2 ContRel:0
> > HostLocal:1 RackLocal:0
> > 2016-01-19 20:04:20,287 INFO [AsyncDispatcher event handler]
> > org.apache.hadoop.yarn.util.RackResolver: Resolved localhost to
> > /default-rack
> > 2016-01-19 20:04:20,289 INFO [AsyncDispatcher event handler]
> > org.apache.hadoop.mapreduce.v2.app.job.impl.TaskAttemptImpl:
> > attempt_1453244277886_0001_m_000000_1 TaskAttempt Transitioned from
> > UNASSIGNED to ASSIGNED
> > 2016-01-19 20:04:20,292 INFO [ContainerLauncher #2]
> > org.apache.hadoop.mapreduce.v2.app.launcher.ContainerLauncherImpl:
> > Processing the event EventType: CONTAINER_REMOTE_LAUNCH for container
> > container_1453244277886_0001_01_000003 taskAttempt
> > attempt_1453244277886_0001_m_000000_1
> > 2016-01-19 20:04:20,292 INFO [ContainerLauncher #2]
> > org.apache.hadoop.mapreduce.v2.app.launcher.ContainerLauncherImpl:
> Launching
> > attempt_1453244277886_0001_m_000000_1
> > 2016-01-19 20:04:20,301 INFO [ContainerLauncher #2]
> > org.apache.hadoop.mapreduce.v2.app.launcher.ContainerLauncherImpl:
> Shuffle
> > port returned by ContainerManager for
> attempt_1453244277886_0001_m_000000_1
> > : 13562
> > 2016-01-19 20:04:20,302 INFO [AsyncDispatcher event handler]
> > org.apache.hadoop.mapreduce.v2.app.job.impl.TaskAttemptImpl: TaskAttempt:
> > [attempt_1453244277886_0001_m_000000_1] using containerId:
> > [container_1453244277886_0001_01_000003 on NM: [localhost:35711]
> > 2016-01-19 20:04:20,303 INFO [AsyncDispatcher event handler]
> > org.apache.hadoop.mapreduce.v2.app.job.impl.TaskAttemptImpl:
> > attempt_1453244277886_0001_m_000000_1 TaskAttempt Transitioned from
> ASSIGNED
> > to RUNNING
> > 2016-01-19 20:04:20,304 INFO [AsyncDispatcher event handler]
> > org.apache.hadoop.mapreduce.v2.app.speculate.DefaultSpeculator:
> > ATTEMPT_START task_1453244277886_0001_m_000000
> > 2016-01-19 20:04:21,295 INFO [RMCommunicator Allocator]
> > org.apache.hadoop.mapreduce.v2.app.rm.RMContainerRequestor:
> getResources()
> > for application_1453244277886_0001: ask=1 release= 0 newContainers=0
> > finishedContainers=1 resourcelimit=<memory:768, vCores:-1> knownNMs=1
> > 2016-01-19 20:04:21,296 INFO [RMCommunicator Allocator]
> > org.apache.hadoop.mapreduce.v2.app.rm.RMContainerAllocator: Received
> > completed container container_1453244277886_0001_01_000003
> > 2016-01-19 20:04:21,296 INFO [RMCommunicator Allocator]
> > org.apache.hadoop.mapreduce.v2.app.rm.RMContainerAllocator: Recalculating
> > schedule, headroom=768
> > 2016-01-19 20:04:21,296 INFO [RMCommunicator Allocator]
> > org.apache.hadoop.mapreduce.v2.app.rm.RMContainerAllocator: Reduce slow
> > start threshold not met. completedMapsForReduceSlowstart 1
> > 2016-01-19 20:04:21,297 INFO [RMCommunicator Allocator]
> > org.apache.hadoop.mapreduce.v2.app.rm.RMContainerAllocator: After
> > Scheduling: PendingReds:1 ScheduledMaps:0 ScheduledReds:0 AssignedMaps:0
> > AssignedReds:0 CompletedMaps:0 CompletedReds:0 ContAlloc:2 ContRel:0
> > HostLocal:1 RackLocal:0
> > 2016-01-19 20:04:21,297 INFO [AsyncDispatcher event handler]
> > org.apache.hadoop.mapreduce.v2.app.job.impl.TaskAttemptImpl:
> > attempt_1453244277886_0001_m_000000_1 TaskAttempt Transitioned from
> RUNNING
> > to FAIL_CONTAINER_CLEANUP
> > 2016-01-19 20:04:21,298 INFO [AsyncDispatcher event handler]
> > org.apache.hadoop.mapreduce.v2.app.job.impl.TaskAttemptImpl: Diagnostics
> > report from attempt_1453244277886_0001_m_000000_1: Exception from
> > container-launch: org.apache.hadoop.util.Shell$ExitCodeException:
> > org.apache.hadoop.util.Shell$ExitCodeException:
> >     at org.apache.hadoop.util.Shell.runCommand(Shell.java:505)
> >     at org.apache.hadoop.util.Shell.run(Shell.java:418)
> >     at
> > org.apache.hadoop.util.Shell$ShellCommandExecutor.execute(Shell.java:650)
> >     at
> >
> org.apache.hadoop.yarn.server.nodemanager.DefaultContainerExecutor.launchContainer(DefaultContainerExecutor.java:195)
> >     at
> >
> org.apache.hadoop.yarn.server.nodemanager.containermanager.launcher.ContainerLaunch.call(ContainerLaunch.java:300)
> >     at
> >
> org.apache.hadoop.yarn.server.nodemanager.containermanager.launcher.ContainerLaunch.call(ContainerLaunch.java:81)
> >     at java.util.concurrent.FutureTask.run(FutureTask.java:262)
> >     at
> >
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
> >     at
> >
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
> >     at java.lang.Thread.run(Thread.java:745)
> >
> >
> > Container exited with a non-zero exit code 1
> >
> > 2016-01-19 20:04:21,300 INFO [ContainerLauncher #3]
> > org.apache.hadoop.mapreduce.v2.app.launcher.ContainerLauncherImpl:
> > Processing the event EventType: CONTAINER_REMOTE_CLEANUP for container
> > container_1453244277886_0001_01_000003 taskAttempt
> > attempt_1453244277886_0001_m_000000_1
> > 2016-01-19 20:04:21,301 INFO [ContainerLauncher #3]
> > org.apache.hadoop.mapreduce.v2.app.launcher.ContainerLauncherImpl:
> KILLING
> > attempt_1453244277886_0001_m_000000_1
> > 2016-01-19 20:04:21,307 INFO [AsyncDispatcher event handler]
> > org.apache.hadoop.mapreduce.v2.app.job.impl.TaskAttemptImpl:
> > attempt_1453244277886_0001_m_000000_1 TaskAttempt Transitioned from
> > FAIL_CONTAINER_CLEANUP to FAIL_TASK_CLEANUP
> > 2016-01-19 20:04:21,308 INFO [CommitterEvent Processor #2]
> > org.apache.hadoop.mapreduce.v2.app.commit.CommitterEventHandler:
> Processing
> > the event EventType: TASK_ABORT
> > 2016-01-19 20:04:21,312 WARN [CommitterEvent Processor #2]
> > org.apache.hadoop.mapreduce.lib.output.FileOutputCommitter: Could not
> delete
> >
> hdfs://hdnode01:54310/user/hduser/output/wordcount/_temporary/1/_temporary/attempt_1453244277886_0001_m_000000_1
> > 2016-01-19 20:04:21,312 INFO [AsyncDispatcher event handler]
> > org.apache.hadoop.mapreduce.v2.app.job.impl.TaskAttemptImpl:
> > attempt_1453244277886_0001_m_000000_1 TaskAttempt Transitioned from
> > FAIL_TASK_CLEANUP to FAILED
> > 2016-01-19 20:04:21,313 INFO [AsyncDispatcher event handler]
> > org.apache.hadoop.yarn.util.RackResolver: Resolved localhost to
> > /default-rack
> > 2016-01-19 20:04:21,313 INFO [Thread-50]
> > org.apache.hadoop.mapreduce.v2.app.rm.RMContainerRequestor: 2 failures on
> > node localhost
> > 2016-01-19 20:04:21,313 INFO [AsyncDispatcher event handler]
> > org.apache.hadoop.mapreduce.v2.app.job.impl.TaskAttemptImpl:
> > attempt_1453244277886_0001_m_000000_2 TaskAttempt Transitioned from NEW
> to
> > UNASSIGNED
> > 2016-01-19 20:04:21,314 INFO [Thread-50]
> > org.apache.hadoop.mapreduce.v2.app.rm.RMContainerAllocator: Added
> > attempt_1453244277886_0001_m_000000_2 to list of failed maps
> > 2016-01-19 20:04:22,297 INFO [RMCommunicator Allocator]
> > org.apache.hadoop.mapreduce.v2.app.rm.RMContainerAllocator: Before
> > Scheduling: PendingReds:1 ScheduledMaps:1 ScheduledReds:0 AssignedMaps:0
> > AssignedReds:0 CompletedMaps:0 CompletedReds:0 ContAlloc:2 ContRel:0
> > HostLocal:1 RackLocal:0
> > 2016-01-19 20:04:22,304 INFO [RMCommunicator Allocator]
> > org.apache.hadoop.mapreduce.v2.app.rm.RMContainerRequestor:
> getResources()
> > for application_1453244277886_0001: ask=1 release= 0 newContainers=0
> > finishedContainers=0 resourcelimit=<memory:768, vCores:-1> knownNMs=1
> > 2016-01-19 20:04:22,305 INFO [RMCommunicator Allocator]
> > org.apache.hadoop.mapreduce.v2.app.rm.RMContainerAllocator: Recalculating
> > schedule, headroom=768
> > 2016-01-19 20:04:22,305 INFO [RMCommunicator Allocator]
> > org.apache.hadoop.mapreduce.v2.app.rm.RMContainerAllocator: Reduce slow
> > start threshold not met. completedMapsForReduceSlowstart 1
> > 2016-01-19 20:04:23,316 INFO [RMCommunicator Allocator]
> > org.apache.hadoop.mapreduce.v2.app.rm.RMContainerAllocator: Got allocated
> > containers 1
> > 2016-01-19 20:04:23,316 INFO [RMCommunicator Allocator]
> > org.apache.hadoop.mapreduce.v2.app.rm.RMContainerAllocator: Assigning
> > container Container: [ContainerId:
> container_1453244277886_0001_01_000004,
> > NodeId: localhost:35711, NodeHttpAddress: localhost:8042, Resource:
> > <memory:512, vCores:1>, Priority: 5, Token: Token { kind: ContainerToken,
> > service: 127.0.0.1:35711 }, ] to fast fail map
> > 2016-01-19 20:04:23,317 INFO [RMCommunicator Allocator]
> > org.apache.hadoop.mapreduce.v2.app.rm.RMContainerAllocator: Assigned from
> > earlierFailedMaps
> > 2016-01-19 20:04:23,317 INFO [RMCommunicator Allocator]
> > org.apache.hadoop.mapreduce.v2.app.rm.RMContainerAllocator: Assigned
> > container container_1453244277886_0001_01_000004 to
> > attempt_1453244277886_0001_m_000000_2
> > 2016-01-19 20:04:23,318 INFO [RMCommunicator Allocator]
> > org.apache.hadoop.mapreduce.v2.app.rm.RMContainerAllocator: Recalculating
> > schedule, headroom=768
> > 2016-01-19 20:04:23,318 INFO [RMCommunicator Allocator]
> > org.apache.hadoop.mapreduce.v2.app.rm.RMContainerAllocator: Reduce slow
> > start threshold not met. completedMapsForReduceSlowstart 1
> > 2016-01-19 20:04:23,318 INFO [RMCommunicator Allocator]
> > org.apache.hadoop.mapreduce.v2.app.rm.RMContainerAllocator: After
> > Scheduling: PendingReds:1 ScheduledMaps:0 ScheduledReds:0 AssignedMaps:1
> > AssignedReds:0 CompletedMaps:0 CompletedReds:0 ContAlloc:3 ContRel:0
> > HostLocal:1 RackLocal:0
> > 2016-01-19 20:04:23,318 INFO [AsyncDispatcher event handler]
> > org.apache.hadoop.yarn.util.RackResolver: Resolved localhost to
> > /default-rack
> > 2016-01-19 20:04:23,320 INFO [AsyncDispatcher event handler]
> > org.apache.hadoop.mapreduce.v2.app.job.impl.TaskAttemptImpl:
> > attempt_1453244277886_0001_m_000000_2 TaskAttempt Transitioned from
> > UNASSIGNED to ASSIGNED
> > 2016-01-19 20:04:23,323 INFO [ContainerLauncher #4]
> > org.apache.hadoop.mapreduce.v2.app.launcher.ContainerLauncherImpl:
> > Processing the event EventType: CONTAINER_REMOTE_LAUNCH for container
> > container_1453244277886_0001_01_000004 taskAttempt
> > attempt_1453244277886_0001_m_000000_2
> > 2016-01-19 20:04:23,323 INFO [ContainerLauncher #4]
> > org.apache.hadoop.mapreduce.v2.app.launcher.ContainerLauncherImpl:
> Launching
> > attempt_1453244277886_0001_m_000000_2
> > 2016-01-19 20:04:23,334 INFO [ContainerLauncher #4]
> > org.apache.hadoop.mapreduce.v2.app.launcher.ContainerLauncherImpl:
> Shuffle
> > port returned by ContainerManager for
> attempt_1453244277886_0001_m_000000_2
> > : 13562
> > 2016-01-19 20:04:23,335 INFO [AsyncDispatcher event handler]
> > org.apache.hadoop.mapreduce.v2.app.job.impl.TaskAttemptImpl: TaskAttempt:
> > [attempt_1453244277886_0001_m_000000_2] using containerId:
> > [container_1453244277886_0001_01_000004 on NM: [localhost:35711]
> > 2016-01-19 20:04:23,335 INFO [AsyncDispatcher event handler]
> > org.apache.hadoop.mapreduce.v2.app.job.impl.TaskAttemptImpl:
> > attempt_1453244277886_0001_m_000000_2 TaskAttempt Transitioned from
> ASSIGNED
> > to RUNNING
> > 2016-01-19 20:04:23,335 INFO [AsyncDispatcher event handler]
> > org.apache.hadoop.mapreduce.v2.app.speculate.DefaultSpeculator:
> > ATTEMPT_START task_1453244277886_0001_m_000000
> > 2016-01-19 20:04:24,326 INFO [RMCommunicator Allocator]
> > org.apache.hadoop.mapreduce.v2.app.rm.RMContainerRequestor:
> getResources()
> > for application_1453244277886_0001: ask=1 release= 0 newContainers=0
> > finishedContainers=1 resourcelimit=<memory:768, vCores:-1> knownNMs=1
> > 2016-01-19 20:04:24,327 INFO [RMCommunicator Allocator]
> > org.apache.hadoop.mapreduce.v2.app.rm.RMContainerAllocator: Received
> > completed container container_1453244277886_0001_01_000004
> > 2016-01-19 20:04:24,327 INFO [RMCommunicator Allocator]
> > org.apache.hadoop.mapreduce.v2.app.rm.RMContainerAllocator: Recalculating
> > schedule, headroom=768
> > 2016-01-19 20:04:24,327 INFO [RMCommunicator Allocator]
> > org.apache.hadoop.mapreduce.v2.app.rm.RMContainerAllocator: Reduce slow
> > start threshold not met. completedMapsForReduceSlowstart 1
> > 2016-01-19 20:04:24,328 INFO [RMCommunicator Allocator]
> > org.apache.hadoop.mapreduce.v2.app.rm.RMContainerAllocator: After
> > Scheduling: PendingReds:1 ScheduledMaps:0 ScheduledReds:0 AssignedMaps:0
> > AssignedReds:0 CompletedMaps:0 CompletedReds:0 ContAlloc:3 ContRel:0
> > HostLocal:1 RackLocal:0
> > 2016-01-19 20:04:24,328 INFO [AsyncDispatcher event handler]
> > org.apache.hadoop.mapreduce.v2.app.job.impl.TaskAttemptImpl:
> > attempt_1453244277886_0001_m_000000_2 TaskAttempt Transitioned from
> RUNNING
> > to FAIL_CONTAINER_CLEANUP
> > 2016-01-19 20:04:24,328 INFO [AsyncDispatcher event handler]
> > org.apache.hadoop.mapreduce.v2.app.job.impl.TaskAttemptImpl: Diagnostics
> > report from attempt_1453244277886_0001_m_000000_2: Exception from
> > container-launch: org.apache.hadoop.util.Shell$ExitCodeException:
> > org.apache.hadoop.util.Shell$ExitCodeException:
> >     at org.apache.hadoop.util.Shell.runCommand(Shell.java:505)
> >     at org.apache.hadoop.util.Shell.run(Shell.java:418)
> >     at
> > org.apache.hadoop.util.Shell$ShellCommandExecutor.execute(Shell.java:650)
> >     at
> >
> org.apache.hadoop.yarn.server.nodemanager.DefaultContainerExecutor.launchContainer(DefaultContainerExecutor.java:195)
> >     at
> >
> org.apache.hadoop.yarn.server.nodemanager.containermanager.launcher.ContainerLaunch.call(ContainerLaunch.java:300)
> >     at
> >
> org.apache.hadoop.yarn.server.nodemanager.containermanager.launcher.ContainerLaunch.call(ContainerLaunch.java:81)
> >     at java.util.concurrent.FutureTask.run(FutureTask.java:262)
> >     at
> >
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
> >     at
> >
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
> >     at java.lang.Thread.run(Thread.java:745)
> >
> >
> > Container exited with a non-zero exit code 1
> >
> > 2016-01-19 20:04:24,331 INFO [ContainerLauncher #5]
> > org.apache.hadoop.mapreduce.v2.app.launcher.ContainerLauncherImpl:
> > Processing the event EventType: CONTAINER_REMOTE_CLEANUP for container
> > container_1453244277886_0001_01_000004 taskAttempt
> > attempt_1453244277886_0001_m_000000_2
> > 2016-01-19 20:04:24,331 INFO [ContainerLauncher #5]
> > org.apache.hadoop.mapreduce.v2.app.launcher.ContainerLauncherImpl:
> KILLING
> > attempt_1453244277886_0001_m_000000_2
> > 2016-01-19 20:04:24,335 INFO [AsyncDispatcher event handler]
> > org.apache.hadoop.mapreduce.v2.app.job.impl.TaskAttemptImpl:
> > attempt_1453244277886_0001_m_000000_2 TaskAttempt Transitioned from
> > FAIL_CONTAINER_CLEANUP to FAIL_TASK_CLEANUP
> > 2016-01-19 20:04:24,336 INFO [CommitterEvent Processor #3]
> > org.apache.hadoop.mapreduce.v2.app.commit.CommitterEventHandler:
> Processing
> > the event EventType: TASK_ABORT
> > 2016-01-19 20:04:24,340 WARN [CommitterEvent Processor #3]
> > org.apache.hadoop.mapreduce.lib.output.FileOutputCommitter: Could not
> delete
> >
> hdfs://hdnode01:54310/user/hduser/output/wordcount/_temporary/1/_temporary/attempt_1453244277886_0001_m_000000_2
> > 2016-01-19 20:04:24,341 INFO [AsyncDispatcher event handler]
> > org.apache.hadoop.mapreduce.v2.app.job.impl.TaskAttemptImpl:
> > attempt_1453244277886_0001_m_000000_2 TaskAttempt Transitioned from
> > FAIL_TASK_CLEANUP to FAILED
> > 2016-01-19 20:04:24,341 INFO [AsyncDispatcher event handler]
> > org.apache.hadoop.yarn.util.RackResolver: Resolved localhost to
> > /default-rack
> > 2016-01-19 20:04:24,342 INFO [Thread-50]
> > org.apache.hadoop.mapreduce.v2.app.rm.RMContainerRequestor: 3 failures on
> > node localhost
> > 2016-01-19 20:04:24,342 INFO [Thread-50]
> > org.apache.hadoop.mapreduce.v2.app.rm.RMContainerRequestor: Blacklisted
> host
> > localhost
> > 2016-01-19 20:04:24,342 INFO [AsyncDispatcher event handler]
> > org.apache.hadoop.mapreduce.v2.app.job.impl.TaskAttemptImpl:
> > attempt_1453244277886_0001_m_000000_3 TaskAttempt Transitioned from NEW
> to
> > UNASSIGNED
> > 2016-01-19 20:04:24,343 INFO [Thread-50]
> > org.apache.hadoop.mapreduce.v2.app.rm.RMContainerAllocator: Added
> > attempt_1453244277886_0001_m_000000_3 to list of failed maps
> > 2016-01-19 20:04:25,328 INFO [RMCommunicator Allocator]
> > org.apache.hadoop.mapreduce.v2.app.rm.RMContainerAllocator: Before
> > Scheduling: PendingReds:1 ScheduledMaps:1 ScheduledReds:0 AssignedMaps:0
> > AssignedReds:0 CompletedMaps:0 CompletedReds:0 ContAlloc:3 ContRel:0
> > HostLocal:1 RackLocal:0
> > 2016-01-19 20:04:25,336 INFO [RMCommunicator Allocator]
> > org.apache.hadoop.mapreduce.v2.app.rm.RMContainerRequestor:
> getResources()
> > for application_1453244277886_0001: ask=1 release= 0 newContainers=0
> > finishedContainers=0 resourcelimit=<memory:768, vCores:-1> knownNMs=1
> > 2016-01-19 20:04:25,336 INFO [RMCommunicator Allocator]
> > org.apache.hadoop.mapreduce.v2.app.rm.RMContainerRequestor: Update the
> > blacklist for application_1453244277886_0001: blacklistAdditions=1
> > blacklistRemovals=0
> > 2016-01-19 20:04:25,337 INFO [RMCommunicator Allocator]
> > org.apache.hadoop.mapreduce.v2.app.rm.RMContainerRequestor: Ignore
> > blacklisting set to true. Known: 1, Blacklisted: 1, 100%
> > 2016-01-19 20:04:25,337 INFO [RMCommunicator Allocator]
> > org.apache.hadoop.mapreduce.v2.app.rm.RMContainerAllocator: Recalculating
> > schedule, headroom=768
> > 2016-01-19 20:04:25,337 INFO [RMCommunicator Allocator]
> > org.apache.hadoop.mapreduce.v2.app.rm.RMContainerAllocator: Reduce slow
> > start threshold not met. completedMapsForReduceSlowstart 1
> > 2016-01-19 20:04:26,342 INFO [RMCommunicator Allocator]
> > org.apache.hadoop.mapreduce.v2.app.rm.RMContainerRequestor: Update the
> > blacklist for application_1453244277886_0001: blacklistAdditions=0
> > blacklistRemovals=1
> > 2016-01-19 20:04:26,342 INFO [RMCommunicator Allocator]
> > org.apache.hadoop.mapreduce.v2.app.rm.RMContainerAllocator: Recalculating
> > schedule, headroom=768
> > 2016-01-19 20:04:26,342 INFO [RMCommunicator Allocator]
> > org.apache.hadoop.mapreduce.v2.app.rm.RMContainerAllocator: Reduce slow
> > start threshold not met. completedMapsForReduceSlowstart 1
> > 2016-01-19 20:04:27,351 INFO [RMCommunicator Allocator]
> > org.apache.hadoop.mapreduce.v2.app.rm.RMContainerAllocator: Got allocated
> > containers 1
> > 2016-01-19 20:04:27,352 INFO [RMCommunicator Allocator]
> > org.apache.hadoop.mapreduce.v2.app.rm.RMContainerAllocator: Assigning
> > container Container: [ContainerId:
> container_1453244277886_0001_01_000005,
> > NodeId: localhost:35711, NodeHttpAddress: localhost:8042, Resource:
> > <memory:512, vCores:1>, Priority: 5, Token: Token { kind: ContainerToken,
> > service: 127.0.0.1:35711 }, ] to fast fail map
> > 2016-01-19 20:04:27,352 INFO [RMCommunicator Allocator]
> > org.apache.hadoop.mapreduce.v2.app.rm.RMContainerAllocator: Assigned from
> > earlierFailedMaps
> > 2016-01-19 20:04:27,352 INFO [RMCommunicator Allocator]
> > org.apache.hadoop.mapreduce.v2.app.rm.RMContainerAllocator: Assigned
> > container container_1453244277886_0001_01_000005 to
> > attempt_1453244277886_0001_m_000000_3
> > 2016-01-19 20:04:27,353 INFO [RMCommunicator Allocator]
> > org.apache.hadoop.mapreduce.v2.app.rm.RMContainerAllocator: Recalculating
> > schedule, headroom=768
> > 2016-01-19 20:04:27,353 INFO [RMCommunicator Allocator]
> > org.apache.hadoop.mapreduce.v2.app.rm.RMContainerAllocator: Reduce slow
> > start threshold not met. completedMapsForReduceSlowstart 1
> > 2016-01-19 20:04:27,353 INFO [AsyncDispatcher event handler]
> > org.apache.hadoop.yarn.util.RackResolver: Resolved localhost to
> > /default-rack
> > 2016-01-19 20:04:27,353 INFO [RMCommunicator Allocator]
> > org.apache.hadoop.mapreduce.v2.app.rm.RMContainerAllocator: After
> > Scheduling: PendingReds:1 ScheduledMaps:0 ScheduledReds:0 AssignedMaps:1
> > AssignedReds:0 CompletedMaps:0 CompletedReds:0 ContAlloc:4 ContRel:0
> > HostLocal:1 RackLocal:0
> > 2016-01-19 20:04:27,354 INFO [AsyncDispatcher event handler]
> > org.apache.hadoop.mapreduce.v2.app.job.impl.TaskAttemptImpl:
> > attempt_1453244277886_0001_m_000000_3 TaskAttempt Transitioned from
> > UNASSIGNED to ASSIGNED
> > 2016-01-19 20:04:27,355 INFO [ContainerLauncher #6]
> > org.apache.hadoop.mapreduce.v2.app.launcher.ContainerLauncherImpl:
> > Processing the event EventType: CONTAINER_REMOTE_LAUNCH for container
> > container_1453244277886_0001_01_000005 taskAttempt
> > attempt_1453244277886_0001_m_000000_3
> > 2016-01-19 20:04:27,355 INFO [ContainerLauncher #6]
> > org.apache.hadoop.mapreduce.v2.app.launcher.ContainerLauncherImpl:
> Launching
> > attempt_1453244277886_0001_m_000000_3
> > 2016-01-19 20:04:27,365 INFO [ContainerLauncher #6]
> > org.apache.hadoop.mapreduce.v2.app.launcher.ContainerLauncherImpl:
> Shuffle
> > port returned by ContainerManager for
> attempt_1453244277886_0001_m_000000_3
> > : 13562
> > 2016-01-19 20:04:27,365 INFO [AsyncDispatcher event handler]
> > org.apache.hadoop.mapreduce.v2.app.job.impl.TaskAttemptImpl: TaskAttempt:
> > [attempt_1453244277886_0001_m_000000_3] using containerId:
> > [container_1453244277886_0001_01_000005 on NM: [localhost:35711]
> > 2016-01-19 20:04:27,366 INFO [AsyncDispatcher event handler]
> > org.apache.hadoop.mapreduce.v2.app.job.impl.TaskAttemptImpl:
> > attempt_1453244277886_0001_m_000000_3 TaskAttempt Transitioned from
> ASSIGNED
> > to RUNNING
> > 2016-01-19 20:04:27,366 INFO [AsyncDispatcher event handler]
> > org.apache.hadoop.mapreduce.v2.app.speculate.DefaultSpeculator:
> > ATTEMPT_START task_1453244277886_0001_m_000000
> > 2016-01-19 20:04:28,361 INFO [RMCommunicator Allocator]
> > org.apache.hadoop.mapreduce.v2.app.rm.RMContainerRequestor:
> getResources()
> > for application_1453244277886_0001: ask=1 release= 0 newContainers=0
> > finishedContainers=1 resourcelimit=<memory:768, vCores:-1> knownNMs=1
> > 2016-01-19 20:04:28,361 INFO [RMCommunicator Allocator]
> > org.apache.hadoop.mapreduce.v2.app.rm.RMContainerAllocator: Received
> > completed container container_1453244277886_0001_01_000005
> > 2016-01-19 20:04:28,361 INFO [RMCommunicator Allocator]
> > org.apache.hadoop.mapreduce.v2.app.rm.RMContainerAllocator: Recalculating
> > schedule, headroom=768
> > 2016-01-19 20:04:28,361 INFO [RMCommunicator Allocator]
> > org.apache.hadoop.mapreduce.v2.app.rm.RMContainerAllocator: Reduce slow
> > start threshold not met. completedMapsForReduceSlowstart 1
> > 2016-01-19 20:04:28,362 INFO [RMCommunicator Allocator]
> > org.apache.hadoop.mapreduce.v2.app.rm.RMContainerAllocator: After
> > Scheduling: PendingReds:1 ScheduledMaps:0 ScheduledReds:0 AssignedMaps:0
> > AssignedReds:0 CompletedMaps:0 CompletedReds:0 ContAlloc:4 ContRel:0
> > HostLocal:1 RackLocal:0
> > 2016-01-19 20:04:28,362 INFO [AsyncDispatcher event handler]
> > org.apache.hadoop.mapreduce.v2.app.job.impl.TaskAttemptImpl:
> > attempt_1453244277886_0001_m_000000_3 TaskAttempt Transitioned from
> RUNNING
> > to FAIL_CONTAINER_CLEANUP
> > 2016-01-19 20:04:28,362 INFO [AsyncDispatcher event handler]
> > org.apache.hadoop.mapreduce.v2.app.job.impl.TaskAttemptImpl: Diagnostics
> > report from attempt_1453244277886_0001_m_000000_3: Exception from
> > container-launch: org.apache.hadoop.util.Shell$ExitCodeException:
> > org.apache.hadoop.util.Shell$ExitCodeException:
> >     at org.apache.hadoop.util.Shell.runCommand(Shell.java:505)
> >     at org.apache.hadoop.util.Shell.run(Shell.java:418)
> >     at
> > org.apache.hadoop.util.Shell$ShellCommandExecutor.execute(Shell.java:650)
> >     at
> >
> org.apache.hadoop.yarn.server.nodemanager.DefaultContainerExecutor.launchContainer(DefaultContainerExecutor.java:195)
> >     at
> >
> org.apache.hadoop.yarn.server.nodemanager.containermanager.launcher.ContainerLaunch.call(ContainerLaunch.java:300)
> >     at
> >
> org.apache.hadoop.yarn.server.nodemanager.containermanager.launcher.ContainerLaunch.call(ContainerLaunch.java:81)
> >     at java.util.concurrent.FutureTask.run(FutureTask.java:262)
> >     at
> >
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
> >     at
> >
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
> >     at java.lang.Thread.run(Thread.java:745)
> >
> >
> > Container exited with a non-zero exit code 1
> >
> > 2016-01-19 20:04:28,364 INFO [ContainerLauncher #7]
> > org.apache.hadoop.mapreduce.v2.app.launcher.ContainerLauncherImpl:
> > Processing the event EventType: CONTAINER_REMOTE_CLEANUP for container
> > container_1453244277886_0001_01_000005 taskAttempt
> > attempt_1453244277886_0001_m_000000_3
> > 2016-01-19 20:04:28,365 INFO [ContainerLauncher #7]
> > org.apache.hadoop.mapreduce.v2.app.launcher.ContainerLauncherImpl:
> KILLING
> > attempt_1453244277886_0001_m_000000_3
> > 2016-01-19 20:04:28,373 INFO [AsyncDispatcher event handler]
> > org.apache.hadoop.mapreduce.v2.app.job.impl.TaskAttemptImpl:
> > attempt_1453244277886_0001_m_000000_3 TaskAttempt Transitioned from
> > FAIL_CONTAINER_CLEANUP to FAIL_TASK_CLEANUP
> > 2016-01-19 20:04:28,374 INFO [CommitterEvent Processor #4]
> > org.apache.hadoop.mapreduce.v2.app.commit.CommitterEventHandler:
> Processing
> > the event EventType: TASK_ABORT
> > 2016-01-19 20:04:28,377 WARN [CommitterEvent Processor #4]
> > org.apache.hadoop.mapreduce.lib.output.FileOutputCommitter: Could not
> delete
> >
> hdfs://hdnode01:54310/user/hduser/output/wordcount/_temporary/1/_temporary/attempt_1453244277886_0001_m_000000_3
> > 2016-01-19 20:04:28,378 INFO [AsyncDispatcher event handler]
> > org.apache.hadoop.mapreduce.v2.app.job.impl.TaskAttemptImpl:
> > attempt_1453244277886_0001_m_000000_3 TaskAttempt Transitioned from
> > FAIL_TASK_CLEANUP to FAILED
> > 2016-01-19 20:04:28,380 INFO [AsyncDispatcher event handler]
> > org.apache.hadoop.mapreduce.v2.app.job.impl.TaskImpl:
> > task_1453244277886_0001_m_000000 Task Transitioned from RUNNING to FAILED
> > 2016-01-19 20:04:28,381 INFO [AsyncDispatcher event handler]
> > org.apache.hadoop.mapreduce.v2.app.job.impl.JobImpl: Num completed
> Tasks: 1
> > 2016-01-19 20:04:28,381 INFO [AsyncDispatcher event handler]
> > org.apache.hadoop.mapreduce.v2.app.job.impl.JobImpl: Job failed as tasks
> > failed. failedMaps:1 failedReduces:0
> > 2016-01-19 20:04:28,382 INFO [AsyncDispatcher event handler]
> > org.apache.hadoop.mapreduce.v2.app.job.impl.JobImpl:
> > job_1453244277886_0001Job Transitioned from RUNNING to FAIL_WAIT
> > 2016-01-19 20:04:28,383 INFO [AsyncDispatcher event handler]
> > org.apache.hadoop.mapreduce.v2.app.job.impl.TaskImpl:
> > task_1453244277886_0001_r_000000 Task Transitioned from SCHEDULED to
> > KILL_WAIT
> > 2016-01-19 20:04:28,383 INFO [AsyncDispatcher event handler]
> > org.apache.hadoop.mapreduce.v2.app.job.impl.TaskAttemptImpl:
> > attempt_1453244277886_0001_r_000000_0 TaskAttempt Transitioned from
> > UNASSIGNED to KILLED
> > 2016-01-19 20:04:28,383 INFO [Thread-50]
> > org.apache.hadoop.mapreduce.v2.app.rm.RMContainerAllocator: Processing
> the
> > event EventType: CONTAINER_DEALLOCATE
> > 2016-01-19 20:04:28,383 ERROR [Thread-50]
> > org.apache.hadoop.mapreduce.v2.app.rm.RMContainerAllocator: Could not
> > deallocate container for task attemptId
> > attempt_1453244277886_0001_r_000000_0
> > 2016-01-19 20:04:28,383 INFO [AsyncDispatcher event handler]
> > org.apache.hadoop.mapreduce.v2.app.job.impl.TaskImpl:
> > task_1453244277886_0001_r_000000 Task Transitioned from KILL_WAIT to
> KILLED
> > 2016-01-19 20:04:28,384 INFO [AsyncDispatcher event handler]
> > org.apache.hadoop.mapreduce.v2.app.job.impl.JobImpl:
> > job_1453244277886_0001Job Transitioned from FAIL_WAIT to FAIL_ABORT
> > 2016-01-19 20:04:28,390 INFO [CommitterEvent Processor #0]
> > org.apache.hadoop.mapreduce.v2.app.commit.CommitterEventHandler:
> Processing
> > the event EventType: JOB_ABORT
> > 2016-01-19 20:04:28,432 INFO [AsyncDispatcher event handler]
> > org.apache.hadoop.mapreduce.v2.app.job.impl.JobImpl:
> > job_1453244277886_0001Job Transitioned from FAIL_ABORT to FAILED
> > 2016-01-19 20:04:28,433 INFO [Thread-61]
> > org.apache.hadoop.mapreduce.v2.app.MRAppMaster: We are finishing cleanly
> so
> > this is the last retry
> > 2016-01-19 20:04:28,433 INFO [Thread-61]
> > org.apache.hadoop.mapreduce.v2.app.MRAppMaster: Notify RMCommunicator
> > isAMLastRetry: true
> > 2016-01-19 20:04:28,433 INFO [Thread-61]
> > org.apache.hadoop.mapreduce.v2.app.rm.RMContainerAllocator:
> RMCommunicator
> > notified that shouldUnregistered is: true
> > 2016-01-19 20:04:28,433 INFO [Thread-61]
> > org.apache.hadoop.mapreduce.v2.app.MRAppMaster: Notify JHEH
> isAMLastRetry:
> > true
> > 2016-01-19 20:04:28,433 INFO [Thread-61]
> > org.apache.hadoop.mapreduce.jobhistory.JobHistoryEventHandler:
> > JobHistoryEventHandler notified that forceJobCompletion is true
> > 2016-01-19 20:04:28,434 INFO [Thread-61]
> > org.apache.hadoop.mapreduce.v2.app.MRAppMaster: Calling stop for all the
> > services
> > 2016-01-19 20:04:28,435 INFO [Thread-61]
> > org.apache.hadoop.mapreduce.jobhistory.JobHistoryEventHandler: Stopping
> > JobHistoryEventHandler. Size of the outstanding queue size is 0
> > 2016-01-19 20:04:29,362 INFO [RMCommunicator Allocator]
> > org.apache.hadoop.mapreduce.v2.app.rm.RMContainerAllocator: Before
> > Scheduling: PendingReds:1 ScheduledMaps:0 ScheduledReds:0 AssignedMaps:0
> > AssignedReds:0 CompletedMaps:1 CompletedReds:0 ContAlloc:4 ContRel:0
> > HostLocal:1 RackLocal:0
> > 2016-01-19 20:04:29,362 INFO [eventHandlingThread]
> > org.apache.hadoop.mapreduce.jobhistory.JobHistoryEventHandler: Copying
> >
> hdfs://hdnode01:54310/tmp/hadoop-yarn/staging/hduser/.staging/job_1453244277886_0001/job_1453244277886_0001_1.jhist
> > to
> >
> hdfs://hdnode01:54310/tmp/hadoop-yarn/staging/history/done_intermediate/hduser/job_1453244277886_0001-1453244648033-hduser-word+count-1453244668381-0-0-FAILED-default-1453244654166.jhist_tmp
> > 2016-01-19 20:04:29,366 INFO [RMCommunicator Allocator]
> > org.apache.hadoop.mapreduce.v2.app.rm.RMContainerAllocator: Recalculating
> > schedule, headroom=768
> > 2016-01-19 20:04:29,366 INFO [RMCommunicator Allocator]
> > org.apache.hadoop.mapreduce.v2.app.rm.RMContainerAllocator: Reduce slow
> > start threshold reached. Scheduling reduces.
> > 2016-01-19 20:04:29,366 INFO [RMCommunicator Allocator]
> > org.apache.hadoop.mapreduce.v2.app.rm.RMContainerAllocator: All maps
> > assigned. Ramping up all remaining reduces:1
> > 2016-01-19 20:04:29,366 INFO [RMCommunicator Allocator]
> > org.apache.hadoop.mapreduce.v2.app.rm.RMContainerAllocator: After
> > Scheduling: PendingReds:0 ScheduledMaps:0 ScheduledReds:1 AssignedMaps:0
> > AssignedReds:0 CompletedMaps:1 CompletedReds:0 ContAlloc:4 ContRel:0
> > HostLocal:1 RackLocal:0
> > 2016-01-19 20:04:29,544 INFO [eventHandlingThread]
> > org.apache.hadoop.mapreduce.jobhistory.JobHistoryEventHandler: Copied to
> > done location:
> >
> hdfs://hdnode01:54310/tmp/hadoop-yarn/staging/history/done_intermediate/hduser/job_1453244277886_0001-1453244648033-hduser-word+count-1453244668381-0-0-FAILED-default-1453244654166.jhist_tmp
> > 2016-01-19 20:04:29,598 INFO [eventHandlingThread]
> > org.apache.hadoop.mapreduce.jobhistory.JobHistoryEventHandler: Copying
> >
> hdfs://hdnode01:54310/tmp/hadoop-yarn/staging/hduser/.staging/job_1453244277886_0001/job_1453244277886_0001_1_conf.xml
> > to
> >
> hdfs://hdnode01:54310/tmp/hadoop-yarn/staging/history/done_intermediate/hduser/job_1453244277886_0001_conf.xml_tmp
> > 2016-01-19 20:04:29,801 INFO [eventHandlingThread]
> > org.apache.hadoop.mapreduce.jobhistory.JobHistoryEventHandler: Copied to
> > done location:
> >
> hdfs://hdnode01:54310/tmp/hadoop-yarn/staging/history/done_intermediate/hduser/job_1453244277886_0001_conf.xml_tmp
> > 2016-01-19 20:04:29,907 INFO [eventHandlingThread]
> > org.apache.hadoop.mapreduce.jobhistory.JobHistoryEventHandler: Moved tmp
> to
> > done:
> >
> hdfs://hdnode01:54310/tmp/hadoop-yarn/staging/history/done_intermediate/hduser/job_1453244277886_0001.summary_tmp
> > to
> >
> hdfs://hdnode01:54310/tmp/hadoop-yarn/staging/history/done_intermediate/hduser/job_1453244277886_0001.summary
> > 2016-01-19 20:04:30,008 INFO [eventHandlingThread]
> > org.apache.hadoop.mapreduce.jobhistory.JobHistoryEventHandler: Moved tmp
> to
> > done:
> >
> hdfs://hdnode01:54310/tmp/hadoop-yarn/staging/history/done_intermediate/hduser/job_1453244277886_0001_conf.xml_tmp
> > to
> >
> hdfs://hdnode01:54310/tmp/hadoop-yarn/staging/history/done_intermediate/hduser/job_1453244277886_0001_conf.xml
> > 2016-01-19 20:04:30,070 INFO [eventHandlingThread]
> > org.apache.hadoop.mapreduce.jobhistory.JobHistoryEventHandler: Moved tmp
> to
> > done:
> >
> hdfs://hdnode01:54310/tmp/hadoop-yarn/staging/history/done_intermediate/hduser/job_1453244277886_0001-1453244648033-hduser-word+count-1453244668381-0-0-FAILED-default-1453244654166.jhist_tmp
> > to
> >
> hdfs://hdnode01:54310/tmp/hadoop-yarn/staging/history/done_intermediate/hduser/job_1453244277886_0001-1453244648033-hduser-word+count-1453244668381-0-0-FAILED-default-1453244654166.jhist
> > 2016-01-19 20:04:30,071 INFO [Thread-61]
> > org.apache.hadoop.mapreduce.jobhistory.JobHistoryEventHandler: Stopped
> > JobHistoryEventHandler. super.stop()
> > 2016-01-19 20:04:30,078 INFO [Thread-61]
> > org.apache.hadoop.mapreduce.v2.app.rm.RMContainerAllocator: Setting job
> > diagnostics to Task failed task_1453244277886_0001_m_000000
> > Job failed as tasks failed. failedMaps:1 failedReduces:0
> >
> > 2016-01-19 20:04:30,080 INFO [Thread-61]
> > org.apache.hadoop.mapreduce.v2.app.rm.RMContainerAllocator: History url
> is
> > http://localhost:19888/jobhistory/job/job_1453244277886_0001
> > 2016-01-19 20:04:30,094 INFO [Thread-61]
> > org.apache.hadoop.mapreduce.v2.app.rm.RMContainerAllocator: Waiting for
> > application to be successfully unregistered.
> > 2016-01-19 20:04:31,099 INFO [Thread-61]
> > org.apache.hadoop.mapreduce.v2.app.rm.RMContainerAllocator: Final Stats:
> > PendingReds:0 ScheduledMaps:0 ScheduledReds:1 AssignedMaps:0
> AssignedReds:0
> > CompletedMaps:1 CompletedReds:0 ContAlloc:4 ContRel:0 HostLocal:1
> > RackLocal:0
> > 2016-01-19 20:04:31,104 INFO [Thread-61]
> > org.apache.hadoop.mapreduce.v2.app.MRAppMaster: Deleting staging
> directory
> > hdfs://hdnode01:54310
> > /tmp/hadoop-yarn/staging/hduser/.staging/job_1453244277886_0001
> > 2016-01-19 20:04:31,133 INFO [Thread-61] org.apache.hadoop.ipc.Server:
> > Stopping server on 45584
> > 2016-01-19 20:04:31,135 INFO [IPC Server listener on 45584]
> > org.apache.hadoop.ipc.Server: Stopping IPC Server listener on 45584
> > 2016-01-19 20:04:31,135 INFO [IPC Server Responder]
> > org.apache.hadoop.ipc.Server: Stopping IPC Server Responder
> > 2016-01-19 20:04:31,136 INFO [TaskHeartbeatHandler PingChecker]
> > org.apache.hadoop.mapreduce.v2.app.TaskHeartbeatHandler:
> > TaskHeartbeatHandler thread interrupted
> >
> >
> > Jps results, I believe that everything is OK, right?:
> > 21267 DataNode
> > 21609 ResourceManager
> > 21974 JobHistoryServer
> > 21735 NodeManager
> > 24546 Jps
> > 16532 org.eclipse.equinox.launcher_1.3.0.v20140415-2008.jar
> > 21121 NameNode
> > 22098 QuorumPeerMain
> > 21456 SecondaryNameNode
> >
> >
>

Re: Can't run hadoop examples with YARN Single node cluster

Posted by Namikaze Minato <ll...@gmail.com>.
It could be a classpath issue (see
http://stackoverflow.com/a/25090151/4486184); I strongly suspect this
is the case.
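
For example (just a sketch; the /usr/local/hadoop paths are an
assumption based on your earlier mails), you can print the classpath
that the hadoop/yarn scripts actually build and check that the
MapReduce jars show up in it:

    # Show the classpath the client-side scripts put together
    $HADOOP_HOME/bin/hadoop classpath

    # The MapReduce framework jars should appear in the output, e.g.
    # /usr/local/hadoop/share/hadoop/mapreduce/*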

You could drill down to the exact root cause by looking at the
datanode logs (see
http://mail-archives.apache.org/mod_mbox/hadoop-user/201410.mbox/%3CCAEMetGubzq12LXbLRk6N4ejOoKse9dLEWMW8_WE6aRj=+RQtVw@mail.gmail.com%3E),
but I'm not sure we would see a different error from the one we already have...
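
If you want to peek at the daemon logs themselves, they are usually
under $HADOOP_HOME/logs (this assumes you kept the default
HADOOP_LOG_DIR for a tarball install), something like:

    # Default daemon log location for a tarball install
    ls $HADOOP_HOME/logs/
    # Tail the DataNode and NodeManager logs around the time of the failure
    tail -n 100 $HADOOP_HOME/logs/hadoop-*-datanode-*.log
    tail -n 100 $HADOOP_HOME/logs/yarn-*-nodemanager-*.log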

Check if your application has the correct values for the following variables:
HADOOP_CONF_DIR
HADOOP_COMMON_HOME
HADOOP_HDFS_HOME
HADOOP_MAPRED_HOME
HADOOP_YARN_HOME
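
A quick sanity check is to print them from the same shell you submit
the job from (just a sketch; on your box I would expect them to point
somewhere under /usr/local/hadoop):

    echo "HADOOP_CONF_DIR    = $HADOOP_CONF_DIR"
    echo "HADOOP_COMMON_HOME = $HADOOP_COMMON_HOME"
    echo "HADOOP_HDFS_HOME   = $HADOOP_HDFS_HOME"
    echo "HADOOP_MAPRED_HOME = $HADOOP_MAPRED_HOME"
    echo "HADOOP_YARN_HOME   = $HADOOP_YARN_HOME"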

I'm afraid I can't help you much more than this myself, sorry...

LLoyd

On 20 January 2016 at 02:08, José Luis Larroque <la...@gmail.com> wrote:
> Hi guys, thanks for your answers.
>
> Wordcount logs:
>
> 16/01/19 21:58:32 INFO client.RMProxy: Connecting to ResourceManager at
> hdnode01/192.168.0.10:8050
> SLF4J: Class path contains multiple SLF4J bindings.
> SLF4J: Found binding in
> [jar:file:/usr/local/hadoop/share/hadoop/common/lib/slf4j-log4j12-1.7.5.jar!/org/slf4j/impl/StaticLoggerBinder.class]
> SLF4J: Found binding in
> [jar:file:/usr/local/hadoop/share/hadoop/yarn/lib/giraph-examples-1.1.0-for-hadoop-2.4.0-jar-with-dependencies.jar!/org/slf4j/impl/StaticLoggerBinder.class]
> SLF4J: Found binding in
> [jar:file:/usr/local/hadoop/share/hadoop/mapreduce/giraph-examples-1.1.0-for-hadoop-2.4.0-jar-with-dependencies.jar!/org/slf4j/impl/StaticLoggerBinder.class]
> SLF4J: See http://www.slf4j.org/codes.html#multiple_bindings for an
> explanation.
> SLF4J: Actual binding is of type [org.slf4j.impl.Log4jLoggerFactory]
> 16/01/19 21:58:32 WARN util.NativeCodeLoader: Unable to load native-hadoop
> library for your platform... using builtin-java classes where applicable
> hduser@jose-ubuntu:/usr/local/hadoop$ nano
> /home/hduser/Desktop/Tesina/casos_de_prueba/resultados/resultado_cluster_modo_yarn_wordcount
> hduser@jose-ubuntu:/usr/local/hadoop$ nano
> /home/hduser/Desktop/Tesina/casos_de_prueba/resultados/resultado_cluster_modo_yarn_wordcount
> hduser@jose-ubuntu:/usr/local/hadoop$ cat
> /home/hduser/Desktop/Tesina/casos_de_prueba/resultados/resultado_cluster_modo_yarn_wordcount
>
>
> Container: container_1453244277886_0001_01_000002 on localhost_35711
> ======================================================================
> LogType: stderr
> LogLength: 45
> Log Contents:
> Error: Could not find or load main class 256
>
> LogType: stdout
> LogLength: 0
> Log Contents:
>
>
>
> Container: container_1453244277886_0001_01_000003 on localhost_35711
> ======================================================================
> LogType: stderr
> LogLength: 45
> Log Contents:
> Error: Could not find or load main class 256
>
> LogType: stdout
> LogLength: 0
> Log Contents:
>
>
>
> Container: container_1453244277886_0001_01_000004 on localhost_35711
> ======================================================================
> LogType: stderr
> LogLength: 45
> Log Contents:
> Error: Could not find or load main class 256
>
> LogType: stdout
> LogLength: 0
> Log Contents:
>
>
>
> Container: container_1453244277886_0001_01_000005 on localhost_35711
> ======================================================================
> LogType: stderr
> LogLength: 45
> Log Contents:
> Error: Could not find or load main class 256
>
> LogType: stdout
> LogLength: 0
> Log Contents:
>
>
>
> Container: container_1453244277886_0001_01_000001 on localhost_35711
> ======================================================================
> LogType: stderr
> LogLength: 929
> Log Contents:
> SLF4J: Class path contains multiple SLF4J bindings.
> SLF4J: Found binding in
> [jar:file:/usr/local/hadoop/share/hadoop/common/lib/slf4j-log4j12-1.7.5.jar!/org/slf4j/impl/StaticLoggerBinder.class]
> SLF4J: Found binding in
> [jar:file:/usr/local/hadoop/share/hadoop/yarn/lib/giraph-examples-1.1.0-for-hadoop-2.4.0-jar-with-dependencies.jar!/org/slf4j/impl/StaticLoggerBinder.class]
> SLF4J: Found binding in
> [jar:file:/usr/local/hadoop/share/hadoop/mapreduce/giraph-examples-1.1.0-for-hadoop-2.4.0-jar-with-dependencies.jar!/org/slf4j/impl/StaticLoggerBinder.class]
> SLF4J: See http://www.slf4j.org/codes.html#multiple_bindings for an
> explanation.
> SLF4J: Actual binding is of type [org.slf4j.impl.Log4jLoggerFactory]
> log4j:WARN No appenders could be found for logger
> (org.apache.hadoop.ipc.Server).
> log4j:WARN Please initialize the log4j system properly.
> log4j:WARN See http://logging.apache.org/log4j/1.2/faq.html#noconfig for
> more info.
>
> LogType: stdout
> LogLength: 0
> Log Contents:
>
> LogType: syslog
> LogLength: 56780
> Log Contents:
> 2016-01-19 20:04:11,329 INFO [main]
> org.apache.hadoop.mapreduce.v2.app.MRAppMaster: Created MRAppMaster for
> application appattempt_1453244277886_0001_000001
> 2016-01-19 20:04:11,657 WARN [main] org.apache.hadoop.conf.Configuration:
> job.xml:an attempt to override final parameter:
> mapreduce.job.end-notification.max.retry.interval;  Ignoring.
> 2016-01-19 20:04:11,674 WARN [main] org.apache.hadoop.conf.Configuration:
> job.xml:an attempt to override final parameter:
> mapreduce.job.end-notification.max.attempts;  Ignoring.
> 2016-01-19 20:04:11,765 WARN [main] org.apache.hadoop.util.NativeCodeLoader:
> Unable to load native-hadoop library for your platform... using builtin-java
> classes where applicable
> 2016-01-19 20:04:11,776 INFO [main]
> org.apache.hadoop.mapreduce.v2.app.MRAppMaster: Executing with tokens:
> 2016-01-19 20:04:11,776 INFO [main]
> org.apache.hadoop.mapreduce.v2.app.MRAppMaster: Kind: YARN_AM_RM_TOKEN,
> Service: , Ident:
> (org.apache.hadoop.yarn.security.AMRMTokenIdentifier@73e8f4b9)
> 2016-01-19 20:04:11,801 INFO [main]
> org.apache.hadoop.mapreduce.v2.app.MRAppMaster: The specific max attempts: 2
> for application: 1. Attempt num: 1 is last retry: false
> 2016-01-19 20:04:11,806 INFO [main]
> org.apache.hadoop.mapreduce.v2.app.MRAppMaster: Using mapred
> newApiCommitter.
> 2016-01-19 20:04:11,934 WARN [main] org.apache.hadoop.conf.Configuration:
> job.xml:an attempt to override final parameter: dfs.datanode.data.dir;
> Ignoring.
> 2016-01-19 20:04:11,939 WARN [main] org.apache.hadoop.conf.Configuration:
> job.xml:an attempt to override final parameter:
> mapreduce.job.end-notification.max.retry.interval;  Ignoring.
> 2016-01-19 20:04:11,948 WARN [main] org.apache.hadoop.conf.Configuration:
> job.xml:an attempt to override final parameter: dfs.namenode.name.dir;
> Ignoring.
> 2016-01-19 20:04:11,953 WARN [main] org.apache.hadoop.conf.Configuration:
> job.xml:an attempt to override final parameter:
> mapreduce.job.end-notification.max.attempts;  Ignoring.
> 2016-01-19 20:04:12,464 INFO [main]
> org.apache.hadoop.mapreduce.v2.app.MRAppMaster: OutputCommitter set in
> config null
> 2016-01-19 20:04:12,526 INFO [main]
> org.apache.hadoop.mapreduce.v2.app.MRAppMaster: OutputCommitter is
> org.apache.hadoop.mapreduce.lib.output.FileOutputCommitter
> 2016-01-19 20:04:12,548 INFO [main]
> org.apache.hadoop.yarn.event.AsyncDispatcher: Registering class
> org.apache.hadoop.mapreduce.jobhistory.EventType for class
> org.apache.hadoop.mapreduce.jobhistory.JobHistoryEventHandler
> 2016-01-19 20:04:12,549 INFO [main]
> org.apache.hadoop.yarn.event.AsyncDispatcher: Registering class
> org.apache.hadoop.mapreduce.v2.app.job.event.JobEventType for class
> org.apache.hadoop.mapreduce.v2.app.MRAppMaster$JobEventDispatcher
> 2016-01-19 20:04:12,550 INFO [main]
> org.apache.hadoop.yarn.event.AsyncDispatcher: Registering class
> org.apache.hadoop.mapreduce.v2.app.job.event.TaskEventType for class
> org.apache.hadoop.mapreduce.v2.app.MRAppMaster$TaskEventDispatcher
> 2016-01-19 20:04:12,551 INFO [main]
> org.apache.hadoop.yarn.event.AsyncDispatcher: Registering class
> org.apache.hadoop.mapreduce.v2.app.job.event.TaskAttemptEventType for class
> org.apache.hadoop.mapreduce.v2.app.MRAppMaster$TaskAttemptEventDispatcher
> 2016-01-19 20:04:12,552 INFO [main]
> org.apache.hadoop.yarn.event.AsyncDispatcher: Registering class
> org.apache.hadoop.mapreduce.v2.app.commit.CommitterEventType for class
> org.apache.hadoop.mapreduce.v2.app.commit.CommitterEventHandler
> 2016-01-19 20:04:12,557 INFO [main]
> org.apache.hadoop.yarn.event.AsyncDispatcher: Registering class
> org.apache.hadoop.mapreduce.v2.app.speculate.Speculator$EventType for class
> org.apache.hadoop.mapreduce.v2.app.MRAppMaster$SpeculatorEventDispatcher
> 2016-01-19 20:04:12,558 INFO [main]
> org.apache.hadoop.yarn.event.AsyncDispatcher: Registering class
> org.apache.hadoop.mapreduce.v2.app.rm.ContainerAllocator$EventType for class
> org.apache.hadoop.mapreduce.v2.app.MRAppMaster$ContainerAllocatorRouter
> 2016-01-19 20:04:12,559 INFO [main]
> org.apache.hadoop.yarn.event.AsyncDispatcher: Registering class
> org.apache.hadoop.mapreduce.v2.app.launcher.ContainerLauncher$EventType for
> class org.apache.hadoop.mapreduce.v2.app.MRAppMaster$ContainerLauncherRouter
> 2016-01-19 20:04:12,615 INFO [main]
> org.apache.hadoop.mapreduce.jobhistory.JobHistoryEventHandler: Perms after
> creating 488, Expected: 504
> 2016-01-19 20:04:12,615 INFO [main]
> org.apache.hadoop.mapreduce.jobhistory.JobHistoryEventHandler: Explicitly
> setting permissions to : 504, rwxrwx---
> 2016-01-19 20:04:12,731 INFO [main]
> org.apache.hadoop.yarn.event.AsyncDispatcher: Registering class
> org.apache.hadoop.mapreduce.v2.app.job.event.JobFinishEvent$Type for class
> org.apache.hadoop.mapreduce.v2.app.MRAppMaster$JobFinishEventHandler
> 2016-01-19 20:04:12,956 INFO [main]
> org.apache.hadoop.metrics2.impl.MetricsConfig: loaded properties from
> hadoop-metrics2.properties
> 2016-01-19 20:04:13,018 INFO [main]
> org.apache.hadoop.metrics2.impl.MetricsSystemImpl: Scheduled snapshot period
> at 10 second(s).
> 2016-01-19 20:04:13,018 INFO [main]
> org.apache.hadoop.metrics2.impl.MetricsSystemImpl: MRAppMaster metrics
> system started
> 2016-01-19 20:04:13,026 INFO [main]
> org.apache.hadoop.mapreduce.v2.app.job.impl.JobImpl: Adding job token for
> job_1453244277886_0001 to jobTokenSecretManager
> 2016-01-19 20:04:13,139 INFO [main]
> org.apache.hadoop.mapreduce.v2.app.job.impl.JobImpl: Not uberizing
> job_1453244277886_0001 because: not enabled;
> 2016-01-19 20:04:13,154 INFO [main]
> org.apache.hadoop.mapreduce.v2.app.job.impl.JobImpl: Input size for job
> job_1453244277886_0001 = 343691. Number of splits = 1
> 2016-01-19 20:04:13,156 INFO [main]
> org.apache.hadoop.mapreduce.v2.app.job.impl.JobImpl: Number of reduces for
> job job_1453244277886_0001 = 1
> 2016-01-19 20:04:13,156 INFO [main]
> org.apache.hadoop.mapreduce.v2.app.job.impl.JobImpl:
> job_1453244277886_0001Job Transitioned from NEW to INITED
> 2016-01-19 20:04:13,157 INFO [main]
> org.apache.hadoop.mapreduce.v2.app.MRAppMaster: MRAppMaster launching
> normal, non-uberized, multi-container job job_1453244277886_0001.
> 2016-01-19 20:04:13,186 INFO [main] org.apache.hadoop.ipc.CallQueueManager:
> Using callQueue class java.util.concurrent.LinkedBlockingQueue
> 2016-01-19 20:04:13,195 INFO [Socket Reader #1 for port 56461]
> org.apache.hadoop.ipc.Server: Starting Socket Reader #1 for port 56461
> 2016-01-19 20:04:13,237 INFO [main]
> org.apache.hadoop.yarn.factories.impl.pb.RpcServerFactoryPBImpl: Adding
> protocol org.apache.hadoop.mapreduce.v2.api.MRClientProtocolPB to the server
> 2016-01-19 20:04:13,238 INFO [IPC Server Responder]
> org.apache.hadoop.ipc.Server: IPC Server Responder: starting
> 2016-01-19 20:04:13,238 INFO [IPC Server listener on 56461]
> org.apache.hadoop.ipc.Server: IPC Server listener on 56461: starting
> 2016-01-19 20:04:13,239 INFO [main]
> org.apache.hadoop.mapreduce.v2.app.client.MRClientService: Instantiated
> MRClientService at jose-ubuntu/127.0.0.1:56461
> 2016-01-19 20:04:13,300 INFO [main] org.mortbay.log: Logging to
> org.slf4j.impl.Log4jLoggerAdapter(org.mortbay.log) via
> org.mortbay.log.Slf4jLog
> 2016-01-19 20:04:13,304 INFO [main] org.apache.hadoop.http.HttpRequestLog:
> Http request log for http.requests.mapreduce is not defined
> 2016-01-19 20:04:13,315 INFO [main] org.apache.hadoop.http.HttpServer2:
> Added global filter 'safety'
> (class=org.apache.hadoop.http.HttpServer2$QuotingInputFilter)
> 2016-01-19 20:04:13,320 INFO [main] org.apache.hadoop.http.HttpServer2:
> Added filter AM_PROXY_FILTER
> (class=org.apache.hadoop.yarn.server.webproxy.amfilter.AmIpFilter) to
> context mapreduce
> 2016-01-19 20:04:13,321 INFO [main] org.apache.hadoop.http.HttpServer2:
> Added filter AM_PROXY_FILTER
> (class=org.apache.hadoop.yarn.server.webproxy.amfilter.AmIpFilter) to
> context static
> 2016-01-19 20:04:13,324 INFO [main] org.apache.hadoop.http.HttpServer2:
> adding path spec: /mapreduce/*
> 2016-01-19 20:04:13,324 INFO [main] org.apache.hadoop.http.HttpServer2:
> adding path spec: /ws/*
> 2016-01-19 20:04:13,335 INFO [main] org.apache.hadoop.http.HttpServer2:
> Jetty bound to port 44070
> 2016-01-19 20:04:13,335 INFO [main] org.mortbay.log: jetty-6.1.26
> 2016-01-19 20:04:13,370 INFO [main] org.mortbay.log: Extract
> jar:file:/usr/local/hadoop/share/hadoop/yarn/hadoop-yarn-common-2.4.0.jar!/webapps/mapreduce
> to /tmp/Jetty_0_0_0_0_44070_mapreduce____rdpvio/webapp
> 2016-01-19 20:04:13,647 INFO [main] org.mortbay.log: Started
> SelectChannelConnector@0.0.0.0:44070
> 2016-01-19 20:04:13,647 INFO [main] org.apache.hadoop.yarn.webapp.WebApps:
> Web app /mapreduce started at 44070
> 2016-01-19 20:04:13,956 INFO [main] org.apache.hadoop.yarn.webapp.WebApps:
> Registered webapp guice modules
> 2016-01-19 20:04:13,960 INFO [AsyncDispatcher event handler]
> org.apache.hadoop.mapreduce.v2.app.speculate.DefaultSpeculator: JOB_CREATE
> job_1453244277886_0001
> 2016-01-19 20:04:13,961 INFO [main] org.apache.hadoop.ipc.CallQueueManager:
> Using callQueue class java.util.concurrent.LinkedBlockingQueue
> 2016-01-19 20:04:13,961 INFO [Socket Reader #1 for port 45584]
> org.apache.hadoop.ipc.Server: Starting Socket Reader #1 for port 45584
> 2016-01-19 20:04:13,966 INFO [IPC Server Responder]
> org.apache.hadoop.ipc.Server: IPC Server Responder: starting
> 2016-01-19 20:04:13,966 INFO [IPC Server listener on 45584]
> org.apache.hadoop.ipc.Server: IPC Server listener on 45584: starting
> 2016-01-19 20:04:13,987 INFO [main]
> org.apache.hadoop.mapreduce.v2.app.rm.RMContainerRequestor:
> nodeBlacklistingEnabled:true
> 2016-01-19 20:04:13,987 INFO [main]
> org.apache.hadoop.mapreduce.v2.app.rm.RMContainerRequestor:
> maxTaskFailuresPerNode is 3
> 2016-01-19 20:04:13,988 INFO [main]
> org.apache.hadoop.mapreduce.v2.app.rm.RMContainerRequestor:
> blacklistDisablePercent is 33
> 2016-01-19 20:04:14,052 WARN [main] org.apache.hadoop.conf.Configuration:
> job.xml:an attempt to override final parameter: dfs.datanode.data.dir;
> Ignoring.
> 2016-01-19 20:04:14,054 WARN [main] org.apache.hadoop.conf.Configuration:
> job.xml:an attempt to override final parameter:
> mapreduce.job.end-notification.max.retry.interval;  Ignoring.
> 2016-01-19 20:04:14,057 WARN [main] org.apache.hadoop.conf.Configuration:
> job.xml:an attempt to override final parameter: dfs.namenode.name.dir;
> Ignoring.
> 2016-01-19 20:04:14,059 WARN [main] org.apache.hadoop.conf.Configuration:
> job.xml:an attempt to override final parameter:
> mapreduce.job.end-notification.max.attempts;  Ignoring.
> 2016-01-19 20:04:14,062 INFO [main] org.apache.hadoop.yarn.client.RMProxy:
> Connecting to ResourceManager at hdnode01/192.168.0.10:8030
> 2016-01-19 20:04:14,158 INFO [main]
> org.apache.hadoop.mapreduce.v2.app.rm.RMContainerAllocator:
> maxContainerCapability: 2000
> 2016-01-19 20:04:14,158 INFO [main]
> org.apache.hadoop.mapreduce.v2.app.rm.RMContainerAllocator: queue: default
> 2016-01-19 20:04:14,162 INFO [main]
> org.apache.hadoop.mapreduce.v2.app.launcher.ContainerLauncherImpl: Upper
> limit on the thread pool size is 500
> 2016-01-19 20:04:14,164 INFO [main]
> org.apache.hadoop.yarn.client.api.impl.ContainerManagementProtocolProxy:
> yarn.client.max-nodemanagers-proxies : 500
> 2016-01-19 20:04:14,172 INFO [AsyncDispatcher event handler]
> org.apache.hadoop.mapreduce.v2.app.job.impl.JobImpl:
> job_1453244277886_0001Job Transitioned from INITED to SETUP
> 2016-01-19 20:04:14,174 INFO [CommitterEvent Processor #0]
> org.apache.hadoop.mapreduce.v2.app.commit.CommitterEventHandler: Processing
> the event EventType: JOB_SETUP
> 2016-01-19 20:04:14,210 INFO [AsyncDispatcher event handler]
> org.apache.hadoop.mapreduce.v2.app.job.impl.JobImpl:
> job_1453244277886_0001Job Transitioned from SETUP to RUNNING
> 2016-01-19 20:04:14,227 INFO [AsyncDispatcher event handler]
> org.apache.hadoop.yarn.util.RackResolver: Resolved localhost to
> /default-rack
> 2016-01-19 20:04:14,230 INFO [AsyncDispatcher event handler]
> org.apache.hadoop.mapreduce.v2.app.job.impl.TaskImpl:
> task_1453244277886_0001_m_000000 Task Transitioned from NEW to SCHEDULED
> 2016-01-19 20:04:14,230 INFO [AsyncDispatcher event handler]
> org.apache.hadoop.mapreduce.v2.app.job.impl.TaskImpl:
> task_1453244277886_0001_r_000000 Task Transitioned from NEW to SCHEDULED
> 2016-01-19 20:04:14,232 INFO [AsyncDispatcher event handler]
> org.apache.hadoop.mapreduce.v2.app.job.impl.TaskAttemptImpl:
> attempt_1453244277886_0001_m_000000_0 TaskAttempt Transitioned from NEW to
> UNASSIGNED
> 2016-01-19 20:04:14,232 INFO [AsyncDispatcher event handler]
> org.apache.hadoop.mapreduce.v2.app.job.impl.TaskAttemptImpl:
> attempt_1453244277886_0001_r_000000_0 TaskAttempt Transitioned from NEW to
> UNASSIGNED
> 2016-01-19 20:04:14,233 INFO [Thread-50]
> org.apache.hadoop.mapreduce.v2.app.rm.RMContainerAllocator:
> mapResourceReqt:512
> 2016-01-19 20:04:14,245 INFO [Thread-50]
> org.apache.hadoop.mapreduce.v2.app.rm.RMContainerAllocator:
> reduceResourceReqt:512
> 2016-01-19 20:04:14,324 INFO [eventHandlingThread]
> org.apache.hadoop.mapreduce.jobhistory.JobHistoryEventHandler: Event Writer
> setup for JobId: job_1453244277886_0001, File:
> hdfs://hdnode01:54310/tmp/hadoop-yarn/staging/hduser/.staging/job_1453244277886_0001/job_1453244277886_0001_1.jhist
> 2016-01-19 20:04:15,162 INFO [RMCommunicator Allocator]
> org.apache.hadoop.mapreduce.v2.app.rm.RMContainerAllocator: Before
> Scheduling: PendingReds:1 ScheduledMaps:1 ScheduledReds:0 AssignedMaps:0
> AssignedReds:0 CompletedMaps:0 CompletedReds:0 ContAlloc:0 ContRel:0
> HostLocal:0 RackLocal:0
> 2016-01-19 20:04:15,217 INFO [RMCommunicator Allocator]
> org.apache.hadoop.mapreduce.v2.app.rm.RMContainerRequestor: getResources()
> for application_1453244277886_0001: ask=3 release= 0 newContainers=0
> finishedContainers=0 resourcelimit=<memory:1280, vCores:0> knownNMs=1
> 2016-01-19 20:04:15,217 INFO [RMCommunicator Allocator]
> org.apache.hadoop.mapreduce.v2.app.rm.RMContainerAllocator: Recalculating
> schedule, headroom=1280
> 2016-01-19 20:04:15,217 INFO [RMCommunicator Allocator]
> org.apache.hadoop.mapreduce.v2.app.rm.RMContainerAllocator: Reduce slow
> start threshold not met. completedMapsForReduceSlowstart 1
> 2016-01-19 20:04:16,240 INFO [RMCommunicator Allocator]
> org.apache.hadoop.mapreduce.v2.app.rm.RMContainerAllocator: Got allocated
> containers 1
> 2016-01-19 20:04:16,241 INFO [RMCommunicator Allocator]
> org.apache.hadoop.mapreduce.v2.app.rm.RMContainerAllocator: Assigned
> container container_1453244277886_0001_01_000002 to
> attempt_1453244277886_0001_m_000000_0
> 2016-01-19 20:04:16,243 INFO [RMCommunicator Allocator]
> org.apache.hadoop.mapreduce.v2.app.rm.RMContainerAllocator: Recalculating
> schedule, headroom=768
> 2016-01-19 20:04:16,243 INFO [RMCommunicator Allocator]
> org.apache.hadoop.mapreduce.v2.app.rm.RMContainerAllocator: Reduce slow
> start threshold not met. completedMapsForReduceSlowstart 1
> 2016-01-19 20:04:16,243 INFO [RMCommunicator Allocator]
> org.apache.hadoop.mapreduce.v2.app.rm.RMContainerAllocator: After
> Scheduling: PendingReds:1 ScheduledMaps:0 ScheduledReds:0 AssignedMaps:1
> AssignedReds:0 CompletedMaps:0 CompletedReds:0 ContAlloc:1 ContRel:0
> HostLocal:1 RackLocal:0
> 2016-01-19 20:04:16,291 INFO [AsyncDispatcher event handler]
> org.apache.hadoop.yarn.util.RackResolver: Resolved localhost to
> /default-rack
> 2016-01-19 20:04:16,316 INFO [AsyncDispatcher event handler]
> org.apache.hadoop.mapreduce.v2.app.job.impl.TaskAttemptImpl: The job-jar
> file on the remote FS is
> hdfs://hdnode01:54310/tmp/hadoop-yarn/staging/hduser/.staging/job_1453244277886_0001/job.jar
> 2016-01-19 20:04:16,322 INFO [AsyncDispatcher event handler]
> org.apache.hadoop.mapreduce.v2.app.job.impl.TaskAttemptImpl: The job-conf
> file on the remote FS is
> /tmp/hadoop-yarn/staging/hduser/.staging/job_1453244277886_0001/job.xml
> 2016-01-19 20:04:16,325 INFO [AsyncDispatcher event handler]
> org.apache.hadoop.mapreduce.v2.app.job.impl.TaskAttemptImpl: Adding #0
> tokens and #1 secret keys for NM use for launching container
> 2016-01-19 20:04:16,325 INFO [AsyncDispatcher event handler]
> org.apache.hadoop.mapreduce.v2.app.job.impl.TaskAttemptImpl: Size of
> containertokens_dob is 1
> 2016-01-19 20:04:16,325 INFO [AsyncDispatcher event handler]
> org.apache.hadoop.mapreduce.v2.app.job.impl.TaskAttemptImpl: Putting shuffle
> token in serviceData
> 2016-01-19 20:04:16,350 INFO [AsyncDispatcher event handler]
> org.apache.hadoop.mapreduce.v2.app.job.impl.TaskAttemptImpl:
> attempt_1453244277886_0001_m_000000_0 TaskAttempt Transitioned from
> UNASSIGNED to ASSIGNED
> 2016-01-19 20:04:16,354 INFO [ContainerLauncher #0]
> org.apache.hadoop.mapreduce.v2.app.launcher.ContainerLauncherImpl:
> Processing the event EventType: CONTAINER_REMOTE_LAUNCH for container
> container_1453244277886_0001_01_000002 taskAttempt
> attempt_1453244277886_0001_m_000000_0
> 2016-01-19 20:04:16,356 INFO [ContainerLauncher #0]
> org.apache.hadoop.mapreduce.v2.app.launcher.ContainerLauncherImpl: Launching
> attempt_1453244277886_0001_m_000000_0
> 2016-01-19 20:04:16,357 INFO [ContainerLauncher #0]
> org.apache.hadoop.yarn.client.api.impl.ContainerManagementProtocolProxy:
> Opening proxy : localhost:35711
> 2016-01-19 20:04:16,411 INFO [ContainerLauncher #0]
> org.apache.hadoop.mapreduce.v2.app.launcher.ContainerLauncherImpl: Shuffle
> port returned by ContainerManager for attempt_1453244277886_0001_m_000000_0
> : 13562
> 2016-01-19 20:04:16,413 INFO [AsyncDispatcher event handler]
> org.apache.hadoop.mapreduce.v2.app.job.impl.TaskAttemptImpl: TaskAttempt:
> [attempt_1453244277886_0001_m_000000_0] using containerId:
> [container_1453244277886_0001_01_000002 on NM: [localhost:35711]
> 2016-01-19 20:04:16,418 INFO [AsyncDispatcher event handler]
> org.apache.hadoop.mapreduce.v2.app.job.impl.TaskAttemptImpl:
> attempt_1453244277886_0001_m_000000_0 TaskAttempt Transitioned from ASSIGNED
> to RUNNING
> 2016-01-19 20:04:16,419 INFO [AsyncDispatcher event handler]
> org.apache.hadoop.mapreduce.v2.app.speculate.DefaultSpeculator:
> ATTEMPT_START task_1453244277886_0001_m_000000
> 2016-01-19 20:04:16,419 INFO [AsyncDispatcher event handler]
> org.apache.hadoop.mapreduce.v2.app.job.impl.TaskImpl:
> task_1453244277886_0001_m_000000 Task Transitioned from SCHEDULED to RUNNING
> 2016-01-19 20:04:17,251 INFO [RMCommunicator Allocator]
> org.apache.hadoop.mapreduce.v2.app.rm.RMContainerRequestor: getResources()
> for application_1453244277886_0001: ask=3 release= 0 newContainers=0
> finishedContainers=0 resourcelimit=<memory:768, vCores:-1> knownNMs=1
> 2016-01-19 20:04:18,269 INFO [RMCommunicator Allocator]
> org.apache.hadoop.mapreduce.v2.app.rm.RMContainerAllocator: Received
> completed container container_1453244277886_0001_01_000002
> 2016-01-19 20:04:18,269 INFO [RMCommunicator Allocator]
> org.apache.hadoop.mapreduce.v2.app.rm.RMContainerAllocator: Recalculating
> schedule, headroom=768
> 2016-01-19 20:04:18,269 INFO [RMCommunicator Allocator]
> org.apache.hadoop.mapreduce.v2.app.rm.RMContainerAllocator: Reduce slow
> start threshold not met. completedMapsForReduceSlowstart 1
> 2016-01-19 20:04:18,270 INFO [RMCommunicator Allocator]
> org.apache.hadoop.mapreduce.v2.app.rm.RMContainerAllocator: After
> Scheduling: PendingReds:1 ScheduledMaps:0 ScheduledReds:0 AssignedMaps:0
> AssignedReds:0 CompletedMaps:0 CompletedReds:0 ContAlloc:1 ContRel:0
> HostLocal:1 RackLocal:0
> 2016-01-19 20:04:18,280 INFO [AsyncDispatcher event handler]
> org.apache.hadoop.mapreduce.v2.app.job.impl.TaskAttemptImpl:
> attempt_1453244277886_0001_m_000000_0 TaskAttempt Transitioned from RUNNING
> to FAIL_CONTAINER_CLEANUP
> 2016-01-19 20:04:18,280 INFO [AsyncDispatcher event handler]
> org.apache.hadoop.mapreduce.v2.app.job.impl.TaskAttemptImpl: Diagnostics
> report from attempt_1453244277886_0001_m_000000_0: Exception from
> container-launch: org.apache.hadoop.util.Shell$ExitCodeException:
> org.apache.hadoop.util.Shell$ExitCodeException:
>     at org.apache.hadoop.util.Shell.runCommand(Shell.java:505)
>     at org.apache.hadoop.util.Shell.run(Shell.java:418)
>     at
> org.apache.hadoop.util.Shell$ShellCommandExecutor.execute(Shell.java:650)
>     at
> org.apache.hadoop.yarn.server.nodemanager.DefaultContainerExecutor.launchContainer(DefaultContainerExecutor.java:195)
>     at
> org.apache.hadoop.yarn.server.nodemanager.containermanager.launcher.ContainerLaunch.call(ContainerLaunch.java:300)
>     at
> org.apache.hadoop.yarn.server.nodemanager.containermanager.launcher.ContainerLaunch.call(ContainerLaunch.java:81)
>     at java.util.concurrent.FutureTask.run(FutureTask.java:262)
>     at
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
>     at
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
>     at java.lang.Thread.run(Thread.java:745)
>
>
> Container exited with a non-zero exit code 1
>
> 2016-01-19 20:04:18,281 INFO [ContainerLauncher #1]
> org.apache.hadoop.mapreduce.v2.app.launcher.ContainerLauncherImpl:
> Processing the event EventType: CONTAINER_REMOTE_CLEANUP for container
> container_1453244277886_0001_01_000002 taskAttempt
> attempt_1453244277886_0001_m_000000_0
> 2016-01-19 20:04:18,281 INFO [ContainerLauncher #1]
> org.apache.hadoop.mapreduce.v2.app.launcher.ContainerLauncherImpl: KILLING
> attempt_1453244277886_0001_m_000000_0
> 2016-01-19 20:04:18,299 INFO [AsyncDispatcher event handler]
> org.apache.hadoop.mapreduce.v2.app.job.impl.TaskAttemptImpl:
> attempt_1453244277886_0001_m_000000_0 TaskAttempt Transitioned from
> FAIL_CONTAINER_CLEANUP to FAIL_TASK_CLEANUP
> 2016-01-19 20:04:18,300 INFO [CommitterEvent Processor #1]
> org.apache.hadoop.mapreduce.v2.app.commit.CommitterEventHandler: Processing
> the event EventType: TASK_ABORT
> 2016-01-19 20:04:18,313 WARN [CommitterEvent Processor #1]
> org.apache.hadoop.mapreduce.lib.output.FileOutputCommitter: Could not delete
> hdfs://hdnode01:54310/user/hduser/output/wordcount/_temporary/1/_temporary/attempt_1453244277886_0001_m_000000_0
> 2016-01-19 20:04:18,317 INFO [AsyncDispatcher event handler]
> org.apache.hadoop.mapreduce.v2.app.job.impl.TaskAttemptImpl:
> attempt_1453244277886_0001_m_000000_0 TaskAttempt Transitioned from
> FAIL_TASK_CLEANUP to FAILED
> 2016-01-19 20:04:18,326 INFO [AsyncDispatcher event handler]
> org.apache.hadoop.yarn.util.RackResolver: Resolved localhost to
> /default-rack
> 2016-01-19 20:04:18,327 INFO [Thread-50]
> org.apache.hadoop.mapreduce.v2.app.rm.RMContainerRequestor: 1 failures on
> node localhost
> 2016-01-19 20:04:18,329 INFO [AsyncDispatcher event handler]
> org.apache.hadoop.mapreduce.v2.app.job.impl.TaskAttemptImpl:
> attempt_1453244277886_0001_m_000000_1 TaskAttempt Transitioned from NEW to
> UNASSIGNED
> 2016-01-19 20:04:18,329 INFO [Thread-50]
> org.apache.hadoop.mapreduce.v2.app.rm.RMContainerAllocator: Added
> attempt_1453244277886_0001_m_000000_1 to list of failed maps
> 2016-01-19 20:04:19,270 INFO [RMCommunicator Allocator]
> org.apache.hadoop.mapreduce.v2.app.rm.RMContainerAllocator: Before
> Scheduling: PendingReds:1 ScheduledMaps:1 ScheduledReds:0 AssignedMaps:0
> AssignedReds:0 CompletedMaps:0 CompletedReds:0 ContAlloc:1 ContRel:0
> HostLocal:1 RackLocal:0
> 2016-01-19 20:04:19,277 INFO [RMCommunicator Allocator]
> org.apache.hadoop.mapreduce.v2.app.rm.RMContainerRequestor: getResources()
> for application_1453244277886_0001: ask=1 release= 0 newContainers=0
> finishedContainers=0 resourcelimit=<memory:768, vCores:-1> knownNMs=1
> 2016-01-19 20:04:19,277 INFO [RMCommunicator Allocator]
> org.apache.hadoop.mapreduce.v2.app.rm.RMContainerAllocator: Recalculating
> schedule, headroom=768
> 2016-01-19 20:04:19,278 INFO [RMCommunicator Allocator]
> org.apache.hadoop.mapreduce.v2.app.rm.RMContainerAllocator: Reduce slow
> start threshold not met. completedMapsForReduceSlowstart 1
> 2016-01-19 20:04:20,285 INFO [RMCommunicator Allocator]
> org.apache.hadoop.mapreduce.v2.app.rm.RMContainerAllocator: Got allocated
> containers 1
> 2016-01-19 20:04:20,285 INFO [RMCommunicator Allocator]
> org.apache.hadoop.mapreduce.v2.app.rm.RMContainerAllocator: Assigning
> container Container: [ContainerId: container_1453244277886_0001_01_000003,
> NodeId: localhost:35711, NodeHttpAddress: localhost:8042, Resource:
> <memory:512, vCores:1>, Priority: 5, Token: Token { kind: ContainerToken,
> service: 127.0.0.1:35711 }, ] to fast fail map
> 2016-01-19 20:04:20,286 INFO [RMCommunicator Allocator]
> org.apache.hadoop.mapreduce.v2.app.rm.RMContainerAllocator: Assigned from
> earlierFailedMaps
> 2016-01-19 20:04:20,286 INFO [RMCommunicator Allocator]
> org.apache.hadoop.mapreduce.v2.app.rm.RMContainerAllocator: Assigned
> container container_1453244277886_0001_01_000003 to
> attempt_1453244277886_0001_m_000000_1
> 2016-01-19 20:04:20,287 INFO [RMCommunicator Allocator]
> org.apache.hadoop.mapreduce.v2.app.rm.RMContainerAllocator: Recalculating
> schedule, headroom=768
> 2016-01-19 20:04:20,287 INFO [RMCommunicator Allocator]
> org.apache.hadoop.mapreduce.v2.app.rm.RMContainerAllocator: Reduce slow
> start threshold not met. completedMapsForReduceSlowstart 1
> 2016-01-19 20:04:20,287 INFO [RMCommunicator Allocator]
> org.apache.hadoop.mapreduce.v2.app.rm.RMContainerAllocator: After
> Scheduling: PendingReds:1 ScheduledMaps:0 ScheduledReds:0 AssignedMaps:1
> AssignedReds:0 CompletedMaps:0 CompletedReds:0 ContAlloc:2 ContRel:0
> HostLocal:1 RackLocal:0
> 2016-01-19 20:04:20,287 INFO [AsyncDispatcher event handler]
> org.apache.hadoop.yarn.util.RackResolver: Resolved localhost to
> /default-rack
> 2016-01-19 20:04:20,289 INFO [AsyncDispatcher event handler]
> org.apache.hadoop.mapreduce.v2.app.job.impl.TaskAttemptImpl:
> attempt_1453244277886_0001_m_000000_1 TaskAttempt Transitioned from
> UNASSIGNED to ASSIGNED
> 2016-01-19 20:04:20,292 INFO [ContainerLauncher #2]
> org.apache.hadoop.mapreduce.v2.app.launcher.ContainerLauncherImpl:
> Processing the event EventType: CONTAINER_REMOTE_LAUNCH for container
> container_1453244277886_0001_01_000003 taskAttempt
> attempt_1453244277886_0001_m_000000_1
> 2016-01-19 20:04:20,292 INFO [ContainerLauncher #2]
> org.apache.hadoop.mapreduce.v2.app.launcher.ContainerLauncherImpl: Launching
> attempt_1453244277886_0001_m_000000_1
> 2016-01-19 20:04:20,301 INFO [ContainerLauncher #2]
> org.apache.hadoop.mapreduce.v2.app.launcher.ContainerLauncherImpl: Shuffle
> port returned by ContainerManager for attempt_1453244277886_0001_m_000000_1
> : 13562
> 2016-01-19 20:04:20,302 INFO [AsyncDispatcher event handler]
> org.apache.hadoop.mapreduce.v2.app.job.impl.TaskAttemptImpl: TaskAttempt:
> [attempt_1453244277886_0001_m_000000_1] using containerId:
> [container_1453244277886_0001_01_000003 on NM: [localhost:35711]
> 2016-01-19 20:04:20,303 INFO [AsyncDispatcher event handler]
> org.apache.hadoop.mapreduce.v2.app.job.impl.TaskAttemptImpl:
> attempt_1453244277886_0001_m_000000_1 TaskAttempt Transitioned from ASSIGNED
> to RUNNING
> 2016-01-19 20:04:20,304 INFO [AsyncDispatcher event handler]
> org.apache.hadoop.mapreduce.v2.app.speculate.DefaultSpeculator:
> ATTEMPT_START task_1453244277886_0001_m_000000
> 2016-01-19 20:04:21,295 INFO [RMCommunicator Allocator]
> org.apache.hadoop.mapreduce.v2.app.rm.RMContainerRequestor: getResources()
> for application_1453244277886_0001: ask=1 release= 0 newContainers=0
> finishedContainers=1 resourcelimit=<memory:768, vCores:-1> knownNMs=1
> 2016-01-19 20:04:21,296 INFO [RMCommunicator Allocator]
> org.apache.hadoop.mapreduce.v2.app.rm.RMContainerAllocator: Received
> completed container container_1453244277886_0001_01_000003
> 2016-01-19 20:04:21,296 INFO [RMCommunicator Allocator]
> org.apache.hadoop.mapreduce.v2.app.rm.RMContainerAllocator: Recalculating
> schedule, headroom=768
> 2016-01-19 20:04:21,296 INFO [RMCommunicator Allocator]
> org.apache.hadoop.mapreduce.v2.app.rm.RMContainerAllocator: Reduce slow
> start threshold not met. completedMapsForReduceSlowstart 1
> 2016-01-19 20:04:21,297 INFO [RMCommunicator Allocator]
> org.apache.hadoop.mapreduce.v2.app.rm.RMContainerAllocator: After
> Scheduling: PendingReds:1 ScheduledMaps:0 ScheduledReds:0 AssignedMaps:0
> AssignedReds:0 CompletedMaps:0 CompletedReds:0 ContAlloc:2 ContRel:0
> HostLocal:1 RackLocal:0
> 2016-01-19 20:04:21,297 INFO [AsyncDispatcher event handler]
> org.apache.hadoop.mapreduce.v2.app.job.impl.TaskAttemptImpl:
> attempt_1453244277886_0001_m_000000_1 TaskAttempt Transitioned from RUNNING
> to FAIL_CONTAINER_CLEANUP
> 2016-01-19 20:04:21,298 INFO [AsyncDispatcher event handler]
> org.apache.hadoop.mapreduce.v2.app.job.impl.TaskAttemptImpl: Diagnostics
> report from attempt_1453244277886_0001_m_000000_1: Exception from
> container-launch: org.apache.hadoop.util.Shell$ExitCodeException:
> org.apache.hadoop.util.Shell$ExitCodeException:
>     at org.apache.hadoop.util.Shell.runCommand(Shell.java:505)
>     at org.apache.hadoop.util.Shell.run(Shell.java:418)
>     at
> org.apache.hadoop.util.Shell$ShellCommandExecutor.execute(Shell.java:650)
>     at
> org.apache.hadoop.yarn.server.nodemanager.DefaultContainerExecutor.launchContainer(DefaultContainerExecutor.java:195)
>     at
> org.apache.hadoop.yarn.server.nodemanager.containermanager.launcher.ContainerLaunch.call(ContainerLaunch.java:300)
>     at
> org.apache.hadoop.yarn.server.nodemanager.containermanager.launcher.ContainerLaunch.call(ContainerLaunch.java:81)
>     at java.util.concurrent.FutureTask.run(FutureTask.java:262)
>     at
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
>     at
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
>     at java.lang.Thread.run(Thread.java:745)
>
>
> Container exited with a non-zero exit code 1
>
> 2016-01-19 20:04:21,300 INFO [ContainerLauncher #3]
> org.apache.hadoop.mapreduce.v2.app.launcher.ContainerLauncherImpl:
> Processing the event EventType: CONTAINER_REMOTE_CLEANUP for container
> container_1453244277886_0001_01_000003 taskAttempt
> attempt_1453244277886_0001_m_000000_1
> 2016-01-19 20:04:21,301 INFO [ContainerLauncher #3]
> org.apache.hadoop.mapreduce.v2.app.launcher.ContainerLauncherImpl: KILLING
> attempt_1453244277886_0001_m_000000_1
> 2016-01-19 20:04:21,307 INFO [AsyncDispatcher event handler]
> org.apache.hadoop.mapreduce.v2.app.job.impl.TaskAttemptImpl:
> attempt_1453244277886_0001_m_000000_1 TaskAttempt Transitioned from
> FAIL_CONTAINER_CLEANUP to FAIL_TASK_CLEANUP
> 2016-01-19 20:04:21,308 INFO [CommitterEvent Processor #2]
> org.apache.hadoop.mapreduce.v2.app.commit.CommitterEventHandler: Processing
> the event EventType: TASK_ABORT
> 2016-01-19 20:04:21,312 WARN [CommitterEvent Processor #2]
> org.apache.hadoop.mapreduce.lib.output.FileOutputCommitter: Could not delete
> hdfs://hdnode01:54310/user/hduser/output/wordcount/_temporary/1/_temporary/attempt_1453244277886_0001_m_000000_1
> 2016-01-19 20:04:21,312 INFO [AsyncDispatcher event handler]
> org.apache.hadoop.mapreduce.v2.app.job.impl.TaskAttemptImpl:
> attempt_1453244277886_0001_m_000000_1 TaskAttempt Transitioned from
> FAIL_TASK_CLEANUP to FAILED
> 2016-01-19 20:04:21,313 INFO [AsyncDispatcher event handler]
> org.apache.hadoop.yarn.util.RackResolver: Resolved localhost to
> /default-rack
> 2016-01-19 20:04:21,313 INFO [Thread-50]
> org.apache.hadoop.mapreduce.v2.app.rm.RMContainerRequestor: 2 failures on
> node localhost
> 2016-01-19 20:04:21,313 INFO [AsyncDispatcher event handler]
> org.apache.hadoop.mapreduce.v2.app.job.impl.TaskAttemptImpl:
> attempt_1453244277886_0001_m_000000_2 TaskAttempt Transitioned from NEW to
> UNASSIGNED
> 2016-01-19 20:04:21,314 INFO [Thread-50]
> org.apache.hadoop.mapreduce.v2.app.rm.RMContainerAllocator: Added
> attempt_1453244277886_0001_m_000000_2 to list of failed maps
> 2016-01-19 20:04:22,297 INFO [RMCommunicator Allocator]
> org.apache.hadoop.mapreduce.v2.app.rm.RMContainerAllocator: Before
> Scheduling: PendingReds:1 ScheduledMaps:1 ScheduledReds:0 AssignedMaps:0
> AssignedReds:0 CompletedMaps:0 CompletedReds:0 ContAlloc:2 ContRel:0
> HostLocal:1 RackLocal:0
> 2016-01-19 20:04:22,304 INFO [RMCommunicator Allocator]
> org.apache.hadoop.mapreduce.v2.app.rm.RMContainerRequestor: getResources()
> for application_1453244277886_0001: ask=1 release= 0 newContainers=0
> finishedContainers=0 resourcelimit=<memory:768, vCores:-1> knownNMs=1
> 2016-01-19 20:04:22,305 INFO [RMCommunicator Allocator]
> org.apache.hadoop.mapreduce.v2.app.rm.RMContainerAllocator: Recalculating
> schedule, headroom=768
> 2016-01-19 20:04:22,305 INFO [RMCommunicator Allocator]
> org.apache.hadoop.mapreduce.v2.app.rm.RMContainerAllocator: Reduce slow
> start threshold not met. completedMapsForReduceSlowstart 1
> 2016-01-19 20:04:23,316 INFO [RMCommunicator Allocator]
> org.apache.hadoop.mapreduce.v2.app.rm.RMContainerAllocator: Got allocated
> containers 1
> 2016-01-19 20:04:23,316 INFO [RMCommunicator Allocator]
> org.apache.hadoop.mapreduce.v2.app.rm.RMContainerAllocator: Assigning
> container Container: [ContainerId: container_1453244277886_0001_01_000004,
> NodeId: localhost:35711, NodeHttpAddress: localhost:8042, Resource:
> <memory:512, vCores:1>, Priority: 5, Token: Token { kind: ContainerToken,
> service: 127.0.0.1:35711 }, ] to fast fail map
> 2016-01-19 20:04:23,317 INFO [RMCommunicator Allocator]
> org.apache.hadoop.mapreduce.v2.app.rm.RMContainerAllocator: Assigned from
> earlierFailedMaps
> 2016-01-19 20:04:23,317 INFO [RMCommunicator Allocator]
> org.apache.hadoop.mapreduce.v2.app.rm.RMContainerAllocator: Assigned
> container container_1453244277886_0001_01_000004 to
> attempt_1453244277886_0001_m_000000_2
> 2016-01-19 20:04:23,318 INFO [RMCommunicator Allocator]
> org.apache.hadoop.mapreduce.v2.app.rm.RMContainerAllocator: Recalculating
> schedule, headroom=768
> 2016-01-19 20:04:23,318 INFO [RMCommunicator Allocator]
> org.apache.hadoop.mapreduce.v2.app.rm.RMContainerAllocator: Reduce slow
> start threshold not met. completedMapsForReduceSlowstart 1
> 2016-01-19 20:04:23,318 INFO [RMCommunicator Allocator]
> org.apache.hadoop.mapreduce.v2.app.rm.RMContainerAllocator: After
> Scheduling: PendingReds:1 ScheduledMaps:0 ScheduledReds:0 AssignedMaps:1
> AssignedReds:0 CompletedMaps:0 CompletedReds:0 ContAlloc:3 ContRel:0
> HostLocal:1 RackLocal:0
> 2016-01-19 20:04:23,318 INFO [AsyncDispatcher event handler]
> org.apache.hadoop.yarn.util.RackResolver: Resolved localhost to
> /default-rack
> 2016-01-19 20:04:23,320 INFO [AsyncDispatcher event handler]
> org.apache.hadoop.mapreduce.v2.app.job.impl.TaskAttemptImpl:
> attempt_1453244277886_0001_m_000000_2 TaskAttempt Transitioned from
> UNASSIGNED to ASSIGNED
> 2016-01-19 20:04:23,323 INFO [ContainerLauncher #4]
> org.apache.hadoop.mapreduce.v2.app.launcher.ContainerLauncherImpl:
> Processing the event EventType: CONTAINER_REMOTE_LAUNCH for container
> container_1453244277886_0001_01_000004 taskAttempt
> attempt_1453244277886_0001_m_000000_2
> 2016-01-19 20:04:23,323 INFO [ContainerLauncher #4]
> org.apache.hadoop.mapreduce.v2.app.launcher.ContainerLauncherImpl: Launching
> attempt_1453244277886_0001_m_000000_2
> 2016-01-19 20:04:23,334 INFO [ContainerLauncher #4]
> org.apache.hadoop.mapreduce.v2.app.launcher.ContainerLauncherImpl: Shuffle
> port returned by ContainerManager for attempt_1453244277886_0001_m_000000_2
> : 13562
> 2016-01-19 20:04:23,335 INFO [AsyncDispatcher event handler]
> org.apache.hadoop.mapreduce.v2.app.job.impl.TaskAttemptImpl: TaskAttempt:
> [attempt_1453244277886_0001_m_000000_2] using containerId:
> [container_1453244277886_0001_01_000004 on NM: [localhost:35711]
> 2016-01-19 20:04:23,335 INFO [AsyncDispatcher event handler]
> org.apache.hadoop.mapreduce.v2.app.job.impl.TaskAttemptImpl:
> attempt_1453244277886_0001_m_000000_2 TaskAttempt Transitioned from ASSIGNED
> to RUNNING
> 2016-01-19 20:04:23,335 INFO [AsyncDispatcher event handler]
> org.apache.hadoop.mapreduce.v2.app.speculate.DefaultSpeculator:
> ATTEMPT_START task_1453244277886_0001_m_000000
> 2016-01-19 20:04:24,326 INFO [RMCommunicator Allocator]
> org.apache.hadoop.mapreduce.v2.app.rm.RMContainerRequestor: getResources()
> for application_1453244277886_0001: ask=1 release= 0 newContainers=0
> finishedContainers=1 resourcelimit=<memory:768, vCores:-1> knownNMs=1
> 2016-01-19 20:04:24,327 INFO [RMCommunicator Allocator]
> org.apache.hadoop.mapreduce.v2.app.rm.RMContainerAllocator: Received
> completed container container_1453244277886_0001_01_000004
> 2016-01-19 20:04:24,327 INFO [RMCommunicator Allocator]
> org.apache.hadoop.mapreduce.v2.app.rm.RMContainerAllocator: Recalculating
> schedule, headroom=768
> 2016-01-19 20:04:24,327 INFO [RMCommunicator Allocator]
> org.apache.hadoop.mapreduce.v2.app.rm.RMContainerAllocator: Reduce slow
> start threshold not met. completedMapsForReduceSlowstart 1
> 2016-01-19 20:04:24,328 INFO [RMCommunicator Allocator]
> org.apache.hadoop.mapreduce.v2.app.rm.RMContainerAllocator: After
> Scheduling: PendingReds:1 ScheduledMaps:0 ScheduledReds:0 AssignedMaps:0
> AssignedReds:0 CompletedMaps:0 CompletedReds:0 ContAlloc:3 ContRel:0
> HostLocal:1 RackLocal:0
> 2016-01-19 20:04:24,328 INFO [AsyncDispatcher event handler]
> org.apache.hadoop.mapreduce.v2.app.job.impl.TaskAttemptImpl:
> attempt_1453244277886_0001_m_000000_2 TaskAttempt Transitioned from RUNNING
> to FAIL_CONTAINER_CLEANUP
> 2016-01-19 20:04:24,328 INFO [AsyncDispatcher event handler]
> org.apache.hadoop.mapreduce.v2.app.job.impl.TaskAttemptImpl: Diagnostics
> report from attempt_1453244277886_0001_m_000000_2: Exception from
> container-launch: org.apache.hadoop.util.Shell$ExitCodeException:
> org.apache.hadoop.util.Shell$ExitCodeException:
>     at org.apache.hadoop.util.Shell.runCommand(Shell.java:505)
>     at org.apache.hadoop.util.Shell.run(Shell.java:418)
>     at
> org.apache.hadoop.util.Shell$ShellCommandExecutor.execute(Shell.java:650)
>     at
> org.apache.hadoop.yarn.server.nodemanager.DefaultContainerExecutor.launchContainer(DefaultContainerExecutor.java:195)
>     at
> org.apache.hadoop.yarn.server.nodemanager.containermanager.launcher.ContainerLaunch.call(ContainerLaunch.java:300)
>     at
> org.apache.hadoop.yarn.server.nodemanager.containermanager.launcher.ContainerLaunch.call(ContainerLaunch.java:81)
>     at java.util.concurrent.FutureTask.run(FutureTask.java:262)
>     at
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
>     at
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
>     at java.lang.Thread.run(Thread.java:745)
>
>
> Container exited with a non-zero exit code 1
>
> 2016-01-19 20:04:24,331 INFO [ContainerLauncher #5]
> org.apache.hadoop.mapreduce.v2.app.launcher.ContainerLauncherImpl:
> Processing the event EventType: CONTAINER_REMOTE_CLEANUP for container
> container_1453244277886_0001_01_000004 taskAttempt
> attempt_1453244277886_0001_m_000000_2
> 2016-01-19 20:04:24,331 INFO [ContainerLauncher #5]
> org.apache.hadoop.mapreduce.v2.app.launcher.ContainerLauncherImpl: KILLING
> attempt_1453244277886_0001_m_000000_2
> 2016-01-19 20:04:24,335 INFO [AsyncDispatcher event handler]
> org.apache.hadoop.mapreduce.v2.app.job.impl.TaskAttemptImpl:
> attempt_1453244277886_0001_m_000000_2 TaskAttempt Transitioned from
> FAIL_CONTAINER_CLEANUP to FAIL_TASK_CLEANUP
> 2016-01-19 20:04:24,336 INFO [CommitterEvent Processor #3]
> org.apache.hadoop.mapreduce.v2.app.commit.CommitterEventHandler: Processing
> the event EventType: TASK_ABORT
> 2016-01-19 20:04:24,340 WARN [CommitterEvent Processor #3]
> org.apache.hadoop.mapreduce.lib.output.FileOutputCommitter: Could not delete
> hdfs://hdnode01:54310/user/hduser/output/wordcount/_temporary/1/_temporary/attempt_1453244277886_0001_m_000000_2
> 2016-01-19 20:04:24,341 INFO [AsyncDispatcher event handler]
> org.apache.hadoop.mapreduce.v2.app.job.impl.TaskAttemptImpl:
> attempt_1453244277886_0001_m_000000_2 TaskAttempt Transitioned from
> FAIL_TASK_CLEANUP to FAILED
> 2016-01-19 20:04:24,341 INFO [AsyncDispatcher event handler]
> org.apache.hadoop.yarn.util.RackResolver: Resolved localhost to
> /default-rack
> 2016-01-19 20:04:24,342 INFO [Thread-50]
> org.apache.hadoop.mapreduce.v2.app.rm.RMContainerRequestor: 3 failures on
> node localhost
> 2016-01-19 20:04:24,342 INFO [Thread-50]
> org.apache.hadoop.mapreduce.v2.app.rm.RMContainerRequestor: Blacklisted host
> localhost
> 2016-01-19 20:04:24,342 INFO [AsyncDispatcher event handler]
> org.apache.hadoop.mapreduce.v2.app.job.impl.TaskAttemptImpl:
> attempt_1453244277886_0001_m_000000_3 TaskAttempt Transitioned from NEW to
> UNASSIGNED
> 2016-01-19 20:04:24,343 INFO [Thread-50]
> org.apache.hadoop.mapreduce.v2.app.rm.RMContainerAllocator: Added
> attempt_1453244277886_0001_m_000000_3 to list of failed maps
> 2016-01-19 20:04:25,328 INFO [RMCommunicator Allocator]
> org.apache.hadoop.mapreduce.v2.app.rm.RMContainerAllocator: Before
> Scheduling: PendingReds:1 ScheduledMaps:1 ScheduledReds:0 AssignedMaps:0
> AssignedReds:0 CompletedMaps:0 CompletedReds:0 ContAlloc:3 ContRel:0
> HostLocal:1 RackLocal:0
> 2016-01-19 20:04:25,336 INFO [RMCommunicator Allocator]
> org.apache.hadoop.mapreduce.v2.app.rm.RMContainerRequestor: getResources()
> for application_1453244277886_0001: ask=1 release= 0 newContainers=0
> finishedContainers=0 resourcelimit=<memory:768, vCores:-1> knownNMs=1
> 2016-01-19 20:04:25,336 INFO [RMCommunicator Allocator]
> org.apache.hadoop.mapreduce.v2.app.rm.RMContainerRequestor: Update the
> blacklist for application_1453244277886_0001: blacklistAdditions=1
> blacklistRemovals=0
> 2016-01-19 20:04:25,337 INFO [RMCommunicator Allocator]
> org.apache.hadoop.mapreduce.v2.app.rm.RMContainerRequestor: Ignore
> blacklisting set to true. Known: 1, Blacklisted: 1, 100%
> 2016-01-19 20:04:25,337 INFO [RMCommunicator Allocator]
> org.apache.hadoop.mapreduce.v2.app.rm.RMContainerAllocator: Recalculating
> schedule, headroom=768
> 2016-01-19 20:04:25,337 INFO [RMCommunicator Allocator]
> org.apache.hadoop.mapreduce.v2.app.rm.RMContainerAllocator: Reduce slow
> start threshold not met. completedMapsForReduceSlowstart 1
> 2016-01-19 20:04:26,342 INFO [RMCommunicator Allocator]
> org.apache.hadoop.mapreduce.v2.app.rm.RMContainerRequestor: Update the
> blacklist for application_1453244277886_0001: blacklistAdditions=0
> blacklistRemovals=1
> 2016-01-19 20:04:26,342 INFO [RMCommunicator Allocator]
> org.apache.hadoop.mapreduce.v2.app.rm.RMContainerAllocator: Recalculating
> schedule, headroom=768
> 2016-01-19 20:04:26,342 INFO [RMCommunicator Allocator]
> org.apache.hadoop.mapreduce.v2.app.rm.RMContainerAllocator: Reduce slow
> start threshold not met. completedMapsForReduceSlowstart 1
> 2016-01-19 20:04:27,351 INFO [RMCommunicator Allocator]
> org.apache.hadoop.mapreduce.v2.app.rm.RMContainerAllocator: Got allocated
> containers 1
> 2016-01-19 20:04:27,352 INFO [RMCommunicator Allocator]
> org.apache.hadoop.mapreduce.v2.app.rm.RMContainerAllocator: Assigning
> container Container: [ContainerId: container_1453244277886_0001_01_000005,
> NodeId: localhost:35711, NodeHttpAddress: localhost:8042, Resource:
> <memory:512, vCores:1>, Priority: 5, Token: Token { kind: ContainerToken,
> service: 127.0.0.1:35711 }, ] to fast fail map
> 2016-01-19 20:04:27,352 INFO [RMCommunicator Allocator]
> org.apache.hadoop.mapreduce.v2.app.rm.RMContainerAllocator: Assigned from
> earlierFailedMaps
> 2016-01-19 20:04:27,352 INFO [RMCommunicator Allocator]
> org.apache.hadoop.mapreduce.v2.app.rm.RMContainerAllocator: Assigned
> container container_1453244277886_0001_01_000005 to
> attempt_1453244277886_0001_m_000000_3
> 2016-01-19 20:04:27,353 INFO [RMCommunicator Allocator]
> org.apache.hadoop.mapreduce.v2.app.rm.RMContainerAllocator: Recalculating
> schedule, headroom=768
> 2016-01-19 20:04:27,353 INFO [RMCommunicator Allocator]
> org.apache.hadoop.mapreduce.v2.app.rm.RMContainerAllocator: Reduce slow
> start threshold not met. completedMapsForReduceSlowstart 1
> 2016-01-19 20:04:27,353 INFO [AsyncDispatcher event handler]
> org.apache.hadoop.yarn.util.RackResolver: Resolved localhost to
> /default-rack
> 2016-01-19 20:04:27,353 INFO [RMCommunicator Allocator]
> org.apache.hadoop.mapreduce.v2.app.rm.RMContainerAllocator: After
> Scheduling: PendingReds:1 ScheduledMaps:0 ScheduledReds:0 AssignedMaps:1
> AssignedReds:0 CompletedMaps:0 CompletedReds:0 ContAlloc:4 ContRel:0
> HostLocal:1 RackLocal:0
> 2016-01-19 20:04:27,354 INFO [AsyncDispatcher event handler]
> org.apache.hadoop.mapreduce.v2.app.job.impl.TaskAttemptImpl:
> attempt_1453244277886_0001_m_000000_3 TaskAttempt Transitioned from
> UNASSIGNED to ASSIGNED
> 2016-01-19 20:04:27,355 INFO [ContainerLauncher #6]
> org.apache.hadoop.mapreduce.v2.app.launcher.ContainerLauncherImpl:
> Processing the event EventType: CONTAINER_REMOTE_LAUNCH for container
> container_1453244277886_0001_01_000005 taskAttempt
> attempt_1453244277886_0001_m_000000_3
> 2016-01-19 20:04:27,355 INFO [ContainerLauncher #6]
> org.apache.hadoop.mapreduce.v2.app.launcher.ContainerLauncherImpl: Launching
> attempt_1453244277886_0001_m_000000_3
> 2016-01-19 20:04:27,365 INFO [ContainerLauncher #6]
> org.apache.hadoop.mapreduce.v2.app.launcher.ContainerLauncherImpl: Shuffle
> port returned by ContainerManager for attempt_1453244277886_0001_m_000000_3
> : 13562
> 2016-01-19 20:04:27,365 INFO [AsyncDispatcher event handler]
> org.apache.hadoop.mapreduce.v2.app.job.impl.TaskAttemptImpl: TaskAttempt:
> [attempt_1453244277886_0001_m_000000_3] using containerId:
> [container_1453244277886_0001_01_000005 on NM: [localhost:35711]
> 2016-01-19 20:04:27,366 INFO [AsyncDispatcher event handler]
> org.apache.hadoop.mapreduce.v2.app.job.impl.TaskAttemptImpl:
> attempt_1453244277886_0001_m_000000_3 TaskAttempt Transitioned from ASSIGNED
> to RUNNING
> 2016-01-19 20:04:27,366 INFO [AsyncDispatcher event handler]
> org.apache.hadoop.mapreduce.v2.app.speculate.DefaultSpeculator:
> ATTEMPT_START task_1453244277886_0001_m_000000
> 2016-01-19 20:04:28,361 INFO [RMCommunicator Allocator]
> org.apache.hadoop.mapreduce.v2.app.rm.RMContainerRequestor: getResources()
> for application_1453244277886_0001: ask=1 release= 0 newContainers=0
> finishedContainers=1 resourcelimit=<memory:768, vCores:-1> knownNMs=1
> 2016-01-19 20:04:28,361 INFO [RMCommunicator Allocator]
> org.apache.hadoop.mapreduce.v2.app.rm.RMContainerAllocator: Received
> completed container container_1453244277886_0001_01_000005
> 2016-01-19 20:04:28,361 INFO [RMCommunicator Allocator]
> org.apache.hadoop.mapreduce.v2.app.rm.RMContainerAllocator: Recalculating
> schedule, headroom=768
> 2016-01-19 20:04:28,361 INFO [RMCommunicator Allocator]
> org.apache.hadoop.mapreduce.v2.app.rm.RMContainerAllocator: Reduce slow
> start threshold not met. completedMapsForReduceSlowstart 1
> 2016-01-19 20:04:28,362 INFO [RMCommunicator Allocator]
> org.apache.hadoop.mapreduce.v2.app.rm.RMContainerAllocator: After
> Scheduling: PendingReds:1 ScheduledMaps:0 ScheduledReds:0 AssignedMaps:0
> AssignedReds:0 CompletedMaps:0 CompletedReds:0 ContAlloc:4 ContRel:0
> HostLocal:1 RackLocal:0
> 2016-01-19 20:04:28,362 INFO [AsyncDispatcher event handler]
> org.apache.hadoop.mapreduce.v2.app.job.impl.TaskAttemptImpl:
> attempt_1453244277886_0001_m_000000_3 TaskAttempt Transitioned from RUNNING
> to FAIL_CONTAINER_CLEANUP
> 2016-01-19 20:04:28,362 INFO [AsyncDispatcher event handler]
> org.apache.hadoop.mapreduce.v2.app.job.impl.TaskAttemptImpl: Diagnostics
> report from attempt_1453244277886_0001_m_000000_3: Exception from
> container-launch: org.apache.hadoop.util.Shell$ExitCodeException:
> org.apache.hadoop.util.Shell$ExitCodeException:
>     at org.apache.hadoop.util.Shell.runCommand(Shell.java:505)
>     at org.apache.hadoop.util.Shell.run(Shell.java:418)
>     at
> org.apache.hadoop.util.Shell$ShellCommandExecutor.execute(Shell.java:650)
>     at
> org.apache.hadoop.yarn.server.nodemanager.DefaultContainerExecutor.launchContainer(DefaultContainerExecutor.java:195)
>     at
> org.apache.hadoop.yarn.server.nodemanager.containermanager.launcher.ContainerLaunch.call(ContainerLaunch.java:300)
>     at
> org.apache.hadoop.yarn.server.nodemanager.containermanager.launcher.ContainerLaunch.call(ContainerLaunch.java:81)
>     at java.util.concurrent.FutureTask.run(FutureTask.java:262)
>     at
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
>     at
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
>     at java.lang.Thread.run(Thread.java:745)
>
>
> Container exited with a non-zero exit code 1
>
> 2016-01-19 20:04:28,364 INFO [ContainerLauncher #7]
> org.apache.hadoop.mapreduce.v2.app.launcher.ContainerLauncherImpl:
> Processing the event EventType: CONTAINER_REMOTE_CLEANUP for container
> container_1453244277886_0001_01_000005 taskAttempt
> attempt_1453244277886_0001_m_000000_3
> 2016-01-19 20:04:28,365 INFO [ContainerLauncher #7]
> org.apache.hadoop.mapreduce.v2.app.launcher.ContainerLauncherImpl: KILLING
> attempt_1453244277886_0001_m_000000_3
> 2016-01-19 20:04:28,373 INFO [AsyncDispatcher event handler]
> org.apache.hadoop.mapreduce.v2.app.job.impl.TaskAttemptImpl:
> attempt_1453244277886_0001_m_000000_3 TaskAttempt Transitioned from
> FAIL_CONTAINER_CLEANUP to FAIL_TASK_CLEANUP
> 2016-01-19 20:04:28,374 INFO [CommitterEvent Processor #4]
> org.apache.hadoop.mapreduce.v2.app.commit.CommitterEventHandler: Processing
> the event EventType: TASK_ABORT
> 2016-01-19 20:04:28,377 WARN [CommitterEvent Processor #4]
> org.apache.hadoop.mapreduce.lib.output.FileOutputCommitter: Could not delete
> hdfs://hdnode01:54310/user/hduser/output/wordcount/_temporary/1/_temporary/attempt_1453244277886_0001_m_000000_3
> 2016-01-19 20:04:28,378 INFO [AsyncDispatcher event handler]
> org.apache.hadoop.mapreduce.v2.app.job.impl.TaskAttemptImpl:
> attempt_1453244277886_0001_m_000000_3 TaskAttempt Transitioned from
> FAIL_TASK_CLEANUP to FAILED
> 2016-01-19 20:04:28,380 INFO [AsyncDispatcher event handler]
> org.apache.hadoop.mapreduce.v2.app.job.impl.TaskImpl:
> task_1453244277886_0001_m_000000 Task Transitioned from RUNNING to FAILED
> 2016-01-19 20:04:28,381 INFO [AsyncDispatcher event handler]
> org.apache.hadoop.mapreduce.v2.app.job.impl.JobImpl: Num completed Tasks: 1
> 2016-01-19 20:04:28,381 INFO [AsyncDispatcher event handler]
> org.apache.hadoop.mapreduce.v2.app.job.impl.JobImpl: Job failed as tasks
> failed. failedMaps:1 failedReduces:0
> 2016-01-19 20:04:28,382 INFO [AsyncDispatcher event handler]
> org.apache.hadoop.mapreduce.v2.app.job.impl.JobImpl:
> job_1453244277886_0001Job Transitioned from RUNNING to FAIL_WAIT
> 2016-01-19 20:04:28,383 INFO [AsyncDispatcher event handler]
> org.apache.hadoop.mapreduce.v2.app.job.impl.TaskImpl:
> task_1453244277886_0001_r_000000 Task Transitioned from SCHEDULED to
> KILL_WAIT
> 2016-01-19 20:04:28,383 INFO [AsyncDispatcher event handler]
> org.apache.hadoop.mapreduce.v2.app.job.impl.TaskAttemptImpl:
> attempt_1453244277886_0001_r_000000_0 TaskAttempt Transitioned from
> UNASSIGNED to KILLED
> 2016-01-19 20:04:28,383 INFO [Thread-50]
> org.apache.hadoop.mapreduce.v2.app.rm.RMContainerAllocator: Processing the
> event EventType: CONTAINER_DEALLOCATE
> 2016-01-19 20:04:28,383 ERROR [Thread-50]
> org.apache.hadoop.mapreduce.v2.app.rm.RMContainerAllocator: Could not
> deallocate container for task attemptId
> attempt_1453244277886_0001_r_000000_0
> 2016-01-19 20:04:28,383 INFO [AsyncDispatcher event handler]
> org.apache.hadoop.mapreduce.v2.app.job.impl.TaskImpl:
> task_1453244277886_0001_r_000000 Task Transitioned from KILL_WAIT to KILLED
> 2016-01-19 20:04:28,384 INFO [AsyncDispatcher event handler]
> org.apache.hadoop.mapreduce.v2.app.job.impl.JobImpl:
> job_1453244277886_0001Job Transitioned from FAIL_WAIT to FAIL_ABORT
> 2016-01-19 20:04:28,390 INFO [CommitterEvent Processor #0]
> org.apache.hadoop.mapreduce.v2.app.commit.CommitterEventHandler: Processing
> the event EventType: JOB_ABORT
> 2016-01-19 20:04:28,432 INFO [AsyncDispatcher event handler]
> org.apache.hadoop.mapreduce.v2.app.job.impl.JobImpl:
> job_1453244277886_0001Job Transitioned from FAIL_ABORT to FAILED
> 2016-01-19 20:04:28,433 INFO [Thread-61]
> org.apache.hadoop.mapreduce.v2.app.MRAppMaster: We are finishing cleanly so
> this is the last retry
> 2016-01-19 20:04:28,433 INFO [Thread-61]
> org.apache.hadoop.mapreduce.v2.app.MRAppMaster: Notify RMCommunicator
> isAMLastRetry: true
> 2016-01-19 20:04:28,433 INFO [Thread-61]
> org.apache.hadoop.mapreduce.v2.app.rm.RMContainerAllocator: RMCommunicator
> notified that shouldUnregistered is: true
> 2016-01-19 20:04:28,433 INFO [Thread-61]
> org.apache.hadoop.mapreduce.v2.app.MRAppMaster: Notify JHEH isAMLastRetry:
> true
> 2016-01-19 20:04:28,433 INFO [Thread-61]
> org.apache.hadoop.mapreduce.jobhistory.JobHistoryEventHandler:
> JobHistoryEventHandler notified that forceJobCompletion is true
> 2016-01-19 20:04:28,434 INFO [Thread-61]
> org.apache.hadoop.mapreduce.v2.app.MRAppMaster: Calling stop for all the
> services
> 2016-01-19 20:04:28,435 INFO [Thread-61]
> org.apache.hadoop.mapreduce.jobhistory.JobHistoryEventHandler: Stopping
> JobHistoryEventHandler. Size of the outstanding queue size is 0
> 2016-01-19 20:04:29,362 INFO [RMCommunicator Allocator]
> org.apache.hadoop.mapreduce.v2.app.rm.RMContainerAllocator: Before
> Scheduling: PendingReds:1 ScheduledMaps:0 ScheduledReds:0 AssignedMaps:0
> AssignedReds:0 CompletedMaps:1 CompletedReds:0 ContAlloc:4 ContRel:0
> HostLocal:1 RackLocal:0
> 2016-01-19 20:04:29,362 INFO [eventHandlingThread]
> org.apache.hadoop.mapreduce.jobhistory.JobHistoryEventHandler: Copying
> hdfs://hdnode01:54310/tmp/hadoop-yarn/staging/hduser/.staging/job_1453244277886_0001/job_1453244277886_0001_1.jhist
> to
> hdfs://hdnode01:54310/tmp/hadoop-yarn/staging/history/done_intermediate/hduser/job_1453244277886_0001-1453244648033-hduser-word+count-1453244668381-0-0-FAILED-default-1453244654166.jhist_tmp
> 2016-01-19 20:04:29,366 INFO [RMCommunicator Allocator]
> org.apache.hadoop.mapreduce.v2.app.rm.RMContainerAllocator: Recalculating
> schedule, headroom=768
> 2016-01-19 20:04:29,366 INFO [RMCommunicator Allocator]
> org.apache.hadoop.mapreduce.v2.app.rm.RMContainerAllocator: Reduce slow
> start threshold reached. Scheduling reduces.
> 2016-01-19 20:04:29,366 INFO [RMCommunicator Allocator]
> org.apache.hadoop.mapreduce.v2.app.rm.RMContainerAllocator: All maps
> assigned. Ramping up all remaining reduces:1
> 2016-01-19 20:04:29,366 INFO [RMCommunicator Allocator]
> org.apache.hadoop.mapreduce.v2.app.rm.RMContainerAllocator: After
> Scheduling: PendingReds:0 ScheduledMaps:0 ScheduledReds:1 AssignedMaps:0
> AssignedReds:0 CompletedMaps:1 CompletedReds:0 ContAlloc:4 ContRel:0
> HostLocal:1 RackLocal:0
> 2016-01-19 20:04:29,544 INFO [eventHandlingThread]
> org.apache.hadoop.mapreduce.jobhistory.JobHistoryEventHandler: Copied to
> done location:
> hdfs://hdnode01:54310/tmp/hadoop-yarn/staging/history/done_intermediate/hduser/job_1453244277886_0001-1453244648033-hduser-word+count-1453244668381-0-0-FAILED-default-1453244654166.jhist_tmp
> 2016-01-19 20:04:29,598 INFO [eventHandlingThread]
> org.apache.hadoop.mapreduce.jobhistory.JobHistoryEventHandler: Copying
> hdfs://hdnode01:54310/tmp/hadoop-yarn/staging/hduser/.staging/job_1453244277886_0001/job_1453244277886_0001_1_conf.xml
> to
> hdfs://hdnode01:54310/tmp/hadoop-yarn/staging/history/done_intermediate/hduser/job_1453244277886_0001_conf.xml_tmp
> 2016-01-19 20:04:29,801 INFO [eventHandlingThread]
> org.apache.hadoop.mapreduce.jobhistory.JobHistoryEventHandler: Copied to
> done location:
> hdfs://hdnode01:54310/tmp/hadoop-yarn/staging/history/done_intermediate/hduser/job_1453244277886_0001_conf.xml_tmp
> 2016-01-19 20:04:29,907 INFO [eventHandlingThread]
> org.apache.hadoop.mapreduce.jobhistory.JobHistoryEventHandler: Moved tmp to
> done:
> hdfs://hdnode01:54310/tmp/hadoop-yarn/staging/history/done_intermediate/hduser/job_1453244277886_0001.summary_tmp
> to
> hdfs://hdnode01:54310/tmp/hadoop-yarn/staging/history/done_intermediate/hduser/job_1453244277886_0001.summary
> 2016-01-19 20:04:30,008 INFO [eventHandlingThread]
> org.apache.hadoop.mapreduce.jobhistory.JobHistoryEventHandler: Moved tmp to
> done:
> hdfs://hdnode01:54310/tmp/hadoop-yarn/staging/history/done_intermediate/hduser/job_1453244277886_0001_conf.xml_tmp
> to
> hdfs://hdnode01:54310/tmp/hadoop-yarn/staging/history/done_intermediate/hduser/job_1453244277886_0001_conf.xml
> 2016-01-19 20:04:30,070 INFO [eventHandlingThread]
> org.apache.hadoop.mapreduce.jobhistory.JobHistoryEventHandler: Moved tmp to
> done:
> hdfs://hdnode01:54310/tmp/hadoop-yarn/staging/history/done_intermediate/hduser/job_1453244277886_0001-1453244648033-hduser-word+count-1453244668381-0-0-FAILED-default-1453244654166.jhist_tmp
> to
> hdfs://hdnode01:54310/tmp/hadoop-yarn/staging/history/done_intermediate/hduser/job_1453244277886_0001-1453244648033-hduser-word+count-1453244668381-0-0-FAILED-default-1453244654166.jhist
> 2016-01-19 20:04:30,071 INFO [Thread-61]
> org.apache.hadoop.mapreduce.jobhistory.JobHistoryEventHandler: Stopped
> JobHistoryEventHandler. super.stop()
> 2016-01-19 20:04:30,078 INFO [Thread-61]
> org.apache.hadoop.mapreduce.v2.app.rm.RMContainerAllocator: Setting job
> diagnostics to Task failed task_1453244277886_0001_m_000000
> Job failed as tasks failed. failedMaps:1 failedReduces:0
>
> 2016-01-19 20:04:30,080 INFO [Thread-61]
> org.apache.hadoop.mapreduce.v2.app.rm.RMContainerAllocator: History url is
> http://localhost:19888/jobhistory/job/job_1453244277886_0001
> 2016-01-19 20:04:30,094 INFO [Thread-61]
> org.apache.hadoop.mapreduce.v2.app.rm.RMContainerAllocator: Waiting for
> application to be successfully unregistered.
> 2016-01-19 20:04:31,099 INFO [Thread-61]
> org.apache.hadoop.mapreduce.v2.app.rm.RMContainerAllocator: Final Stats:
> PendingReds:0 ScheduledMaps:0 ScheduledReds:1 AssignedMaps:0 AssignedReds:0
> CompletedMaps:1 CompletedReds:0 ContAlloc:4 ContRel:0 HostLocal:1
> RackLocal:0
> 2016-01-19 20:04:31,104 INFO [Thread-61]
> org.apache.hadoop.mapreduce.v2.app.MRAppMaster: Deleting staging directory
> hdfs://hdnode01:54310
> /tmp/hadoop-yarn/staging/hduser/.staging/job_1453244277886_0001
> 2016-01-19 20:04:31,133 INFO [Thread-61] org.apache.hadoop.ipc.Server:
> Stopping server on 45584
> 2016-01-19 20:04:31,135 INFO [IPC Server listener on 45584]
> org.apache.hadoop.ipc.Server: Stopping IPC Server listener on 45584
> 2016-01-19 20:04:31,135 INFO [IPC Server Responder]
> org.apache.hadoop.ipc.Server: Stopping IPC Server Responder
> 2016-01-19 20:04:31,136 INFO [TaskHeartbeatHandler PingChecker]
> org.apache.hadoop.mapreduce.v2.app.TaskHeartbeatHandler:
> TaskHeartbeatHandler thread interrupted
>
>
> Jps results, I believe that everything is OK, right?:
> 21267 DataNode
> 21609 ResourceManager
> 21974 JobHistoryServer
> 21735 NodeManager
> 24546 Jps
> 16532 org.eclipse.equinox.launcher_1.3.0.v20140415-2008.jar
> 21121 NameNode
> 22098 QuorumPeerMain
> 21456 SecondaryNameNode
>
>



Re: Can't run hadoop examples with YARN Single node cluster

Posted by Namikaze Minato <ll...@gmail.com>.
It could be a classpath issue (see
http://stackoverflow.com/a/25090151/4486184); I strongly suspect that
this is the case.

You could drill down to the exact root cause by looking at the
datanode logs (see
http://mail-archives.apache.org/mod_mbox/hadoop-user/201410.mbox/%3CCAEMetGubzq12LXbLRk6N4ejOoKse9dLEWMW8_WE6aRj=+RQtVw@mail.gmail.com%3E),
but I'm not sure we would get a different error from the one we already have...
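
For reference, one way to pull all of the container logs for the failed
run in one shot is the "yarn logs" command. This is only a sketch: it
assumes log aggregation is enabled (yarn.log-aggregation-enable=true in
yarn-site.xml), and the application id is simply the one taken from your
output:

    # Dump the aggregated logs of every container of the failed application.
    $HADOOP_HOME/bin/yarn logs -applicationId application_1453244277886_0001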

Check whether your application has the correct values for the following
variables (a quick way to verify them from the shell follows the list):
HADOOP_CONF_DIR
HADOOP_COMMON_HOME
HADOOP_HDFS_HOME
HADOOP_MAPRED_HOME
HADOOP_YARN_HOME
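
For what it's worth, a minimal way to sanity-check these from the shell
(a sketch only; it assumes bash and that HADOOP_HOME is set, as in the
commands earlier in the thread):

    # Print the Hadoop-related variables as the submitting shell sees them;
    # an empty value means the variable is unset.
    for v in HADOOP_CONF_DIR HADOOP_COMMON_HOME HADOOP_HDFS_HOME \
             HADOOP_MAPRED_HOME HADOOP_YARN_HOME HADOOP_HOME; do
        echo "$v=${!v}"
    done

    # Show the classpath Hadoop itself hands to the JVMs it launches.
    $HADOOP_HOME/bin/hadoop classpath

If any of these point somewhere other than your actual install, the
containers can end up launched with a different (or empty) classpath
than the client that submitted the job.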

I'm afraid I can't help you much more than this myself, sorry...

LLoyd

On 20 January 2016 at 02:08, José Luis Larroque <la...@gmail.com> wrote:
> Hi guys, thanks for your answers.
>
> Wordcount logs:
>
> 16/01/19 21:58:32 INFO client.RMProxy: Connecting to ResourceManager at
> hdnode01/192.168.0.10:8050
> SLF4J: Class path contains multiple SLF4J bindings.
> SLF4J: Found binding in
> [jar:file:/usr/local/hadoop/share/hadoop/common/lib/slf4j-log4j12-1.7.5.jar!/org/slf4j/impl/StaticLoggerBinder.class]
> SLF4J: Found binding in
> [jar:file:/usr/local/hadoop/share/hadoop/yarn/lib/giraph-examples-1.1.0-for-hadoop-2.4.0-jar-with-dependencies.jar!/org/slf4j/impl/StaticLoggerBinder.class]
> SLF4J: Found binding in
> [jar:file:/usr/local/hadoop/share/hadoop/mapreduce/giraph-examples-1.1.0-for-hadoop-2.4.0-jar-with-dependencies.jar!/org/slf4j/impl/StaticLoggerBinder.class]
> SLF4J: See http://www.slf4j.org/codes.html#multiple_bindings for an
> explanation.
> SLF4J: Actual binding is of type [org.slf4j.impl.Log4jLoggerFactory]
> 16/01/19 21:58:32 WARN util.NativeCodeLoader: Unable to load native-hadoop
> library for your platform... using builtin-java classes where applicable
> hduser@jose-ubuntu:/usr/local/hadoop$ nano
> /home/hduser/Desktop/Tesina/casos_de_prueba/resultados/resultado_cluster_modo_yarn_wordcount
> hduser@jose-ubuntu:/usr/local/hadoop$ nano
> /home/hduser/Desktop/Tesina/casos_de_prueba/resultados/resultado_cluster_modo_yarn_wordcount
> hduser@jose-ubuntu:/usr/local/hadoop$ cat
> /home/hduser/Desktop/Tesina/casos_de_prueba/resultados/resultado_cluster_modo_yarn_wordcount
>
>
> Container: container_1453244277886_0001_01_000002 on localhost_35711
> ======================================================================
> LogType: stderr
> LogLength: 45
> Log Contents:
> Error: Could not find or load main class 256
>
> LogType: stdout
> LogLength: 0
> Log Contents:
>
>
>
> Container: container_1453244277886_0001_01_000003 on localhost_35711
> ======================================================================
> LogType: stderr
> LogLength: 45
> Log Contents:
> Error: Could not find or load main class 256
>
> LogType: stdout
> LogLength: 0
> Log Contents:
>
>
>
> Container: container_1453244277886_0001_01_000004 on localhost_35711
> ======================================================================
> LogType: stderr
> LogLength: 45
> Log Contents:
> Error: Could not find or load main class 256
>
> LogType: stdout
> LogLength: 0
> Log Contents:
>
>
>
> Container: container_1453244277886_0001_01_000005 on localhost_35711
> ======================================================================
> LogType: stderr
> LogLength: 45
> Log Contents:
> Error: Could not find or load main class 256
>
> LogType: stdout
> LogLength: 0
> Log Contents:
>
>
>
> Container: container_1453244277886_0001_01_000001 on localhost_35711
> ======================================================================
> LogType: stderr
> LogLength: 929
> Log Contents:
> SLF4J: Class path contains multiple SLF4J bindings.
> SLF4J: Found binding in
> [jar:file:/usr/local/hadoop/share/hadoop/common/lib/slf4j-log4j12-1.7.5.jar!/org/slf4j/impl/StaticLoggerBinder.class]
> SLF4J: Found binding in
> [jar:file:/usr/local/hadoop/share/hadoop/yarn/lib/giraph-examples-1.1.0-for-hadoop-2.4.0-jar-with-dependencies.jar!/org/slf4j/impl/StaticLoggerBinder.class]
> SLF4J: Found binding in
> [jar:file:/usr/local/hadoop/share/hadoop/mapreduce/giraph-examples-1.1.0-for-hadoop-2.4.0-jar-with-dependencies.jar!/org/slf4j/impl/StaticLoggerBinder.class]
> SLF4J: See http://www.slf4j.org/codes.html#multiple_bindings for an
> explanation.
> SLF4J: Actual binding is of type [org.slf4j.impl.Log4jLoggerFactory]
> log4j:WARN No appenders could be found for logger
> (org.apache.hadoop.ipc.Server).
> log4j:WARN Please initialize the log4j system properly.
> log4j:WARN See http://logging.apache.org/log4j/1.2/faq.html#noconfig for
> more info.
>
> LogType: stdout
> LogLength: 0
> Log Contents:
>
> LogType: syslog
> LogLength: 56780
> Log Contents:
> 2016-01-19 20:04:11,329 INFO [main]
> org.apache.hadoop.mapreduce.v2.app.MRAppMaster: Created MRAppMaster for
> application appattempt_1453244277886_0001_000001
> 2016-01-19 20:04:11,657 WARN [main] org.apache.hadoop.conf.Configuration:
> job.xml:an attempt to override final parameter:
> mapreduce.job.end-notification.max.retry.interval;  Ignoring.
> 2016-01-19 20:04:11,674 WARN [main] org.apache.hadoop.conf.Configuration:
> job.xml:an attempt to override final parameter:
> mapreduce.job.end-notification.max.attempts;  Ignoring.
> 2016-01-19 20:04:11,765 WARN [main] org.apache.hadoop.util.NativeCodeLoader:
> Unable to load native-hadoop library for your platform... using builtin-java
> classes where applicable
> 2016-01-19 20:04:11,776 INFO [main]
> org.apache.hadoop.mapreduce.v2.app.MRAppMaster: Executing with tokens:
> 2016-01-19 20:04:11,776 INFO [main]
> org.apache.hadoop.mapreduce.v2.app.MRAppMaster: Kind: YARN_AM_RM_TOKEN,
> Service: , Ident:
> (org.apache.hadoop.yarn.security.AMRMTokenIdentifier@73e8f4b9)
> 2016-01-19 20:04:11,801 INFO [main]
> org.apache.hadoop.mapreduce.v2.app.MRAppMaster: The specific max attempts: 2
> for application: 1. Attempt num: 1 is last retry: false
> 2016-01-19 20:04:11,806 INFO [main]
> org.apache.hadoop.mapreduce.v2.app.MRAppMaster: Using mapred
> newApiCommitter.
> 2016-01-19 20:04:11,934 WARN [main] org.apache.hadoop.conf.Configuration:
> job.xml:an attempt to override final parameter: dfs.datanode.data.dir;
> Ignoring.
> 2016-01-19 20:04:11,939 WARN [main] org.apache.hadoop.conf.Configuration:
> job.xml:an attempt to override final parameter:
> mapreduce.job.end-notification.max.retry.interval;  Ignoring.
> 2016-01-19 20:04:11,948 WARN [main] org.apache.hadoop.conf.Configuration:
> job.xml:an attempt to override final parameter: dfs.namenode.name.dir;
> Ignoring.
> 2016-01-19 20:04:11,953 WARN [main] org.apache.hadoop.conf.Configuration:
> job.xml:an attempt to override final parameter:
> mapreduce.job.end-notification.max.attempts;  Ignoring.
> 2016-01-19 20:04:12,464 INFO [main]
> org.apache.hadoop.mapreduce.v2.app.MRAppMaster: OutputCommitter set in
> config null
> 2016-01-19 20:04:12,526 INFO [main]
> org.apache.hadoop.mapreduce.v2.app.MRAppMaster: OutputCommitter is
> org.apache.hadoop.mapreduce.lib.output.FileOutputCommitter
> 2016-01-19 20:04:12,548 INFO [main]
> org.apache.hadoop.yarn.event.AsyncDispatcher: Registering class
> org.apache.hadoop.mapreduce.jobhistory.EventType for class
> org.apache.hadoop.mapreduce.jobhistory.JobHistoryEventHandler
> 2016-01-19 20:04:12,549 INFO [main]
> org.apache.hadoop.yarn.event.AsyncDispatcher: Registering class
> org.apache.hadoop.mapreduce.v2.app.job.event.JobEventType for class
> org.apache.hadoop.mapreduce.v2.app.MRAppMaster$JobEventDispatcher
> 2016-01-19 20:04:12,550 INFO [main]
> org.apache.hadoop.yarn.event.AsyncDispatcher: Registering class
> org.apache.hadoop.mapreduce.v2.app.job.event.TaskEventType for class
> org.apache.hadoop.mapreduce.v2.app.MRAppMaster$TaskEventDispatcher
> 2016-01-19 20:04:12,551 INFO [main]
> org.apache.hadoop.yarn.event.AsyncDispatcher: Registering class
> org.apache.hadoop.mapreduce.v2.app.job.event.TaskAttemptEventType for class
> org.apache.hadoop.mapreduce.v2.app.MRAppMaster$TaskAttemptEventDispatcher
> 2016-01-19 20:04:12,552 INFO [main]
> org.apache.hadoop.yarn.event.AsyncDispatcher: Registering class
> org.apache.hadoop.mapreduce.v2.app.commit.CommitterEventType for class
> org.apache.hadoop.mapreduce.v2.app.commit.CommitterEventHandler
> 2016-01-19 20:04:12,557 INFO [main]
> org.apache.hadoop.yarn.event.AsyncDispatcher: Registering class
> org.apache.hadoop.mapreduce.v2.app.speculate.Speculator$EventType for class
> org.apache.hadoop.mapreduce.v2.app.MRAppMaster$SpeculatorEventDispatcher
> 2016-01-19 20:04:12,558 INFO [main]
> org.apache.hadoop.yarn.event.AsyncDispatcher: Registering class
> org.apache.hadoop.mapreduce.v2.app.rm.ContainerAllocator$EventType for class
> org.apache.hadoop.mapreduce.v2.app.MRAppMaster$ContainerAllocatorRouter
> 2016-01-19 20:04:12,559 INFO [main]
> org.apache.hadoop.yarn.event.AsyncDispatcher: Registering class
> org.apache.hadoop.mapreduce.v2.app.launcher.ContainerLauncher$EventType for
> class org.apache.hadoop.mapreduce.v2.app.MRAppMaster$ContainerLauncherRouter
> 2016-01-19 20:04:12,615 INFO [main]
> org.apache.hadoop.mapreduce.jobhistory.JobHistoryEventHandler: Perms after
> creating 488, Expected: 504
> 2016-01-19 20:04:12,615 INFO [main]
> org.apache.hadoop.mapreduce.jobhistory.JobHistoryEventHandler: Explicitly
> setting permissions to : 504, rwxrwx---
> 2016-01-19 20:04:12,731 INFO [main]
> org.apache.hadoop.yarn.event.AsyncDispatcher: Registering class
> org.apache.hadoop.mapreduce.v2.app.job.event.JobFinishEvent$Type for class
> org.apache.hadoop.mapreduce.v2.app.MRAppMaster$JobFinishEventHandler
> 2016-01-19 20:04:12,956 INFO [main]
> org.apache.hadoop.metrics2.impl.MetricsConfig: loaded properties from
> hadoop-metrics2.properties
> 2016-01-19 20:04:13,018 INFO [main]
> org.apache.hadoop.metrics2.impl.MetricsSystemImpl: Scheduled snapshot period
> at 10 second(s).
> 2016-01-19 20:04:13,018 INFO [main]
> org.apache.hadoop.metrics2.impl.MetricsSystemImpl: MRAppMaster metrics
> system started
> 2016-01-19 20:04:13,026 INFO [main]
> org.apache.hadoop.mapreduce.v2.app.job.impl.JobImpl: Adding job token for
> job_1453244277886_0001 to jobTokenSecretManager
> 2016-01-19 20:04:13,139 INFO [main]
> org.apache.hadoop.mapreduce.v2.app.job.impl.JobImpl: Not uberizing
> job_1453244277886_0001 because: not enabled;
> 2016-01-19 20:04:13,154 INFO [main]
> org.apache.hadoop.mapreduce.v2.app.job.impl.JobImpl: Input size for job
> job_1453244277886_0001 = 343691. Number of splits = 1
> 2016-01-19 20:04:13,156 INFO [main]
> org.apache.hadoop.mapreduce.v2.app.job.impl.JobImpl: Number of reduces for
> job job_1453244277886_0001 = 1
> 2016-01-19 20:04:13,156 INFO [main]
> org.apache.hadoop.mapreduce.v2.app.job.impl.JobImpl:
> job_1453244277886_0001Job Transitioned from NEW to INITED
> 2016-01-19 20:04:13,157 INFO [main]
> org.apache.hadoop.mapreduce.v2.app.MRAppMaster: MRAppMaster launching
> normal, non-uberized, multi-container job job_1453244277886_0001.
> 2016-01-19 20:04:13,186 INFO [main] org.apache.hadoop.ipc.CallQueueManager:
> Using callQueue class java.util.concurrent.LinkedBlockingQueue
> 2016-01-19 20:04:13,195 INFO [Socket Reader #1 for port 56461]
> org.apache.hadoop.ipc.Server: Starting Socket Reader #1 for port 56461
> 2016-01-19 20:04:13,237 INFO [main]
> org.apache.hadoop.yarn.factories.impl.pb.RpcServerFactoryPBImpl: Adding
> protocol org.apache.hadoop.mapreduce.v2.api.MRClientProtocolPB to the server
> 2016-01-19 20:04:13,238 INFO [IPC Server Responder]
> org.apache.hadoop.ipc.Server: IPC Server Responder: starting
> 2016-01-19 20:04:13,238 INFO [IPC Server listener on 56461]
> org.apache.hadoop.ipc.Server: IPC Server listener on 56461: starting
> 2016-01-19 20:04:13,239 INFO [main]
> org.apache.hadoop.mapreduce.v2.app.client.MRClientService: Instantiated
> MRClientService at jose-ubuntu/127.0.0.1:56461
> 2016-01-19 20:04:13,300 INFO [main] org.mortbay.log: Logging to
> org.slf4j.impl.Log4jLoggerAdapter(org.mortbay.log) via
> org.mortbay.log.Slf4jLog
> 2016-01-19 20:04:13,304 INFO [main] org.apache.hadoop.http.HttpRequestLog:
> Http request log for http.requests.mapreduce is not defined
> 2016-01-19 20:04:13,315 INFO [main] org.apache.hadoop.http.HttpServer2:
> Added global filter 'safety'
> (class=org.apache.hadoop.http.HttpServer2$QuotingInputFilter)
> 2016-01-19 20:04:13,320 INFO [main] org.apache.hadoop.http.HttpServer2:
> Added filter AM_PROXY_FILTER
> (class=org.apache.hadoop.yarn.server.webproxy.amfilter.AmIpFilter) to
> context mapreduce
> 2016-01-19 20:04:13,321 INFO [main] org.apache.hadoop.http.HttpServer2:
> Added filter AM_PROXY_FILTER
> (class=org.apache.hadoop.yarn.server.webproxy.amfilter.AmIpFilter) to
> context static
> 2016-01-19 20:04:13,324 INFO [main] org.apache.hadoop.http.HttpServer2:
> adding path spec: /mapreduce/*
> 2016-01-19 20:04:13,324 INFO [main] org.apache.hadoop.http.HttpServer2:
> adding path spec: /ws/*
> 2016-01-19 20:04:13,335 INFO [main] org.apache.hadoop.http.HttpServer2:
> Jetty bound to port 44070
> 2016-01-19 20:04:13,335 INFO [main] org.mortbay.log: jetty-6.1.26
> 2016-01-19 20:04:13,370 INFO [main] org.mortbay.log: Extract
> jar:file:/usr/local/hadoop/share/hadoop/yarn/hadoop-yarn-common-2.4.0.jar!/webapps/mapreduce
> to /tmp/Jetty_0_0_0_0_44070_mapreduce____rdpvio/webapp
> 2016-01-19 20:04:13,647 INFO [main] org.mortbay.log: Started
> SelectChannelConnector@0.0.0.0:44070
> 2016-01-19 20:04:13,647 INFO [main] org.apache.hadoop.yarn.webapp.WebApps:
> Web app /mapreduce started at 44070
> 2016-01-19 20:04:13,956 INFO [main] org.apache.hadoop.yarn.webapp.WebApps:
> Registered webapp guice modules
> 2016-01-19 20:04:13,960 INFO [AsyncDispatcher event handler]
> org.apache.hadoop.mapreduce.v2.app.speculate.DefaultSpeculator: JOB_CREATE
> job_1453244277886_0001
> 2016-01-19 20:04:13,961 INFO [main] org.apache.hadoop.ipc.CallQueueManager:
> Using callQueue class java.util.concurrent.LinkedBlockingQueue
> 2016-01-19 20:04:13,961 INFO [Socket Reader #1 for port 45584]
> org.apache.hadoop.ipc.Server: Starting Socket Reader #1 for port 45584
> 2016-01-19 20:04:13,966 INFO [IPC Server Responder]
> org.apache.hadoop.ipc.Server: IPC Server Responder: starting
> 2016-01-19 20:04:13,966 INFO [IPC Server listener on 45584]
> org.apache.hadoop.ipc.Server: IPC Server listener on 45584: starting
> 2016-01-19 20:04:13,987 INFO [main]
> org.apache.hadoop.mapreduce.v2.app.rm.RMContainerRequestor:
> nodeBlacklistingEnabled:true
> 2016-01-19 20:04:13,987 INFO [main]
> org.apache.hadoop.mapreduce.v2.app.rm.RMContainerRequestor:
> maxTaskFailuresPerNode is 3
> 2016-01-19 20:04:13,988 INFO [main]
> org.apache.hadoop.mapreduce.v2.app.rm.RMContainerRequestor:
> blacklistDisablePercent is 33
> 2016-01-19 20:04:14,052 WARN [main] org.apache.hadoop.conf.Configuration:
> job.xml:an attempt to override final parameter: dfs.datanode.data.dir;
> Ignoring.
> 2016-01-19 20:04:14,054 WARN [main] org.apache.hadoop.conf.Configuration:
> job.xml:an attempt to override final parameter:
> mapreduce.job.end-notification.max.retry.interval;  Ignoring.
> 2016-01-19 20:04:14,057 WARN [main] org.apache.hadoop.conf.Configuration:
> job.xml:an attempt to override final parameter: dfs.namenode.name.dir;
> Ignoring.
> 2016-01-19 20:04:14,059 WARN [main] org.apache.hadoop.conf.Configuration:
> job.xml:an attempt to override final parameter:
> mapreduce.job.end-notification.max.attempts;  Ignoring.
> 2016-01-19 20:04:14,062 INFO [main] org.apache.hadoop.yarn.client.RMProxy:
> Connecting to ResourceManager at hdnode01/192.168.0.10:8030
> 2016-01-19 20:04:14,158 INFO [main]
> org.apache.hadoop.mapreduce.v2.app.rm.RMContainerAllocator:
> maxContainerCapability: 2000
> 2016-01-19 20:04:14,158 INFO [main]
> org.apache.hadoop.mapreduce.v2.app.rm.RMContainerAllocator: queue: default
> 2016-01-19 20:04:14,162 INFO [main]
> org.apache.hadoop.mapreduce.v2.app.launcher.ContainerLauncherImpl: Upper
> limit on the thread pool size is 500
> 2016-01-19 20:04:14,164 INFO [main]
> org.apache.hadoop.yarn.client.api.impl.ContainerManagementProtocolProxy:
> yarn.client.max-nodemanagers-proxies : 500
> 2016-01-19 20:04:14,172 INFO [AsyncDispatcher event handler]
> org.apache.hadoop.mapreduce.v2.app.job.impl.JobImpl:
> job_1453244277886_0001Job Transitioned from INITED to SETUP
> 2016-01-19 20:04:14,174 INFO [CommitterEvent Processor #0]
> org.apache.hadoop.mapreduce.v2.app.commit.CommitterEventHandler: Processing
> the event EventType: JOB_SETUP
> 2016-01-19 20:04:14,210 INFO [AsyncDispatcher event handler]
> org.apache.hadoop.mapreduce.v2.app.job.impl.JobImpl:
> job_1453244277886_0001Job Transitioned from SETUP to RUNNING
> 2016-01-19 20:04:14,227 INFO [AsyncDispatcher event handler]
> org.apache.hadoop.yarn.util.RackResolver: Resolved localhost to
> /default-rack
> 2016-01-19 20:04:14,230 INFO [AsyncDispatcher event handler]
> org.apache.hadoop.mapreduce.v2.app.job.impl.TaskImpl:
> task_1453244277886_0001_m_000000 Task Transitioned from NEW to SCHEDULED
> 2016-01-19 20:04:14,230 INFO [AsyncDispatcher event handler]
> org.apache.hadoop.mapreduce.v2.app.job.impl.TaskImpl:
> task_1453244277886_0001_r_000000 Task Transitioned from NEW to SCHEDULED
> 2016-01-19 20:04:14,232 INFO [AsyncDispatcher event handler]
> org.apache.hadoop.mapreduce.v2.app.job.impl.TaskAttemptImpl:
> attempt_1453244277886_0001_m_000000_0 TaskAttempt Transitioned from NEW to
> UNASSIGNED
> 2016-01-19 20:04:14,232 INFO [AsyncDispatcher event handler]
> org.apache.hadoop.mapreduce.v2.app.job.impl.TaskAttemptImpl:
> attempt_1453244277886_0001_r_000000_0 TaskAttempt Transitioned from NEW to
> UNASSIGNED
> 2016-01-19 20:04:14,233 INFO [Thread-50]
> org.apache.hadoop.mapreduce.v2.app.rm.RMContainerAllocator:
> mapResourceReqt:512
> 2016-01-19 20:04:14,245 INFO [Thread-50]
> org.apache.hadoop.mapreduce.v2.app.rm.RMContainerAllocator:
> reduceResourceReqt:512
> 2016-01-19 20:04:14,324 INFO [eventHandlingThread]
> org.apache.hadoop.mapreduce.jobhistory.JobHistoryEventHandler: Event Writer
> setup for JobId: job_1453244277886_0001, File:
> hdfs://hdnode01:54310/tmp/hadoop-yarn/staging/hduser/.staging/job_1453244277886_0001/job_1453244277886_0001_1.jhist
> 2016-01-19 20:04:15,162 INFO [RMCommunicator Allocator]
> org.apache.hadoop.mapreduce.v2.app.rm.RMContainerAllocator: Before
> Scheduling: PendingReds:1 ScheduledMaps:1 ScheduledReds:0 AssignedMaps:0
> AssignedReds:0 CompletedMaps:0 CompletedReds:0 ContAlloc:0 ContRel:0
> HostLocal:0 RackLocal:0
> 2016-01-19 20:04:15,217 INFO [RMCommunicator Allocator]
> org.apache.hadoop.mapreduce.v2.app.rm.RMContainerRequestor: getResources()
> for application_1453244277886_0001: ask=3 release= 0 newContainers=0
> finishedContainers=0 resourcelimit=<memory:1280, vCores:0> knownNMs=1
> 2016-01-19 20:04:15,217 INFO [RMCommunicator Allocator]
> org.apache.hadoop.mapreduce.v2.app.rm.RMContainerAllocator: Recalculating
> schedule, headroom=1280
> 2016-01-19 20:04:15,217 INFO [RMCommunicator Allocator]
> org.apache.hadoop.mapreduce.v2.app.rm.RMContainerAllocator: Reduce slow
> start threshold not met. completedMapsForReduceSlowstart 1
> 2016-01-19 20:04:16,240 INFO [RMCommunicator Allocator]
> org.apache.hadoop.mapreduce.v2.app.rm.RMContainerAllocator: Got allocated
> containers 1
> 2016-01-19 20:04:16,241 INFO [RMCommunicator Allocator]
> org.apache.hadoop.mapreduce.v2.app.rm.RMContainerAllocator: Assigned
> container container_1453244277886_0001_01_000002 to
> attempt_1453244277886_0001_m_000000_0
> 2016-01-19 20:04:16,243 INFO [RMCommunicator Allocator]
> org.apache.hadoop.mapreduce.v2.app.rm.RMContainerAllocator: Recalculating
> schedule, headroom=768
> 2016-01-19 20:04:16,243 INFO [RMCommunicator Allocator]
> org.apache.hadoop.mapreduce.v2.app.rm.RMContainerAllocator: Reduce slow
> start threshold not met. completedMapsForReduceSlowstart 1
> 2016-01-19 20:04:16,243 INFO [RMCommunicator Allocator]
> org.apache.hadoop.mapreduce.v2.app.rm.RMContainerAllocator: After
> Scheduling: PendingReds:1 ScheduledMaps:0 ScheduledReds:0 AssignedMaps:1
> AssignedReds:0 CompletedMaps:0 CompletedReds:0 ContAlloc:1 ContRel:0
> HostLocal:1 RackLocal:0
> 2016-01-19 20:04:16,291 INFO [AsyncDispatcher event handler]
> org.apache.hadoop.yarn.util.RackResolver: Resolved localhost to
> /default-rack
> 2016-01-19 20:04:16,316 INFO [AsyncDispatcher event handler]
> org.apache.hadoop.mapreduce.v2.app.job.impl.TaskAttemptImpl: The job-jar
> file on the remote FS is
> hdfs://hdnode01:54310/tmp/hadoop-yarn/staging/hduser/.staging/job_1453244277886_0001/job.jar
> 2016-01-19 20:04:16,322 INFO [AsyncDispatcher event handler]
> org.apache.hadoop.mapreduce.v2.app.job.impl.TaskAttemptImpl: The job-conf
> file on the remote FS is
> /tmp/hadoop-yarn/staging/hduser/.staging/job_1453244277886_0001/job.xml
> 2016-01-19 20:04:16,325 INFO [AsyncDispatcher event handler]
> org.apache.hadoop.mapreduce.v2.app.job.impl.TaskAttemptImpl: Adding #0
> tokens and #1 secret keys for NM use for launching container
> 2016-01-19 20:04:16,325 INFO [AsyncDispatcher event handler]
> org.apache.hadoop.mapreduce.v2.app.job.impl.TaskAttemptImpl: Size of
> containertokens_dob is 1
> 2016-01-19 20:04:16,325 INFO [AsyncDispatcher event handler]
> org.apache.hadoop.mapreduce.v2.app.job.impl.TaskAttemptImpl: Putting shuffle
> token in serviceData
> 2016-01-19 20:04:16,350 INFO [AsyncDispatcher event handler]
> org.apache.hadoop.mapreduce.v2.app.job.impl.TaskAttemptImpl:
> attempt_1453244277886_0001_m_000000_0 TaskAttempt Transitioned from
> UNASSIGNED to ASSIGNED
> 2016-01-19 20:04:16,354 INFO [ContainerLauncher #0]
> org.apache.hadoop.mapreduce.v2.app.launcher.ContainerLauncherImpl:
> Processing the event EventType: CONTAINER_REMOTE_LAUNCH for container
> container_1453244277886_0001_01_000002 taskAttempt
> attempt_1453244277886_0001_m_000000_0
> 2016-01-19 20:04:16,356 INFO [ContainerLauncher #0]
> org.apache.hadoop.mapreduce.v2.app.launcher.ContainerLauncherImpl: Launching
> attempt_1453244277886_0001_m_000000_0
> 2016-01-19 20:04:16,357 INFO [ContainerLauncher #0]
> org.apache.hadoop.yarn.client.api.impl.ContainerManagementProtocolProxy:
> Opening proxy : localhost:35711
> 2016-01-19 20:04:16,411 INFO [ContainerLauncher #0]
> org.apache.hadoop.mapreduce.v2.app.launcher.ContainerLauncherImpl: Shuffle
> port returned by ContainerManager for attempt_1453244277886_0001_m_000000_0
> : 13562
> 2016-01-19 20:04:16,413 INFO [AsyncDispatcher event handler]
> org.apache.hadoop.mapreduce.v2.app.job.impl.TaskAttemptImpl: TaskAttempt:
> [attempt_1453244277886_0001_m_000000_0] using containerId:
> [container_1453244277886_0001_01_000002 on NM: [localhost:35711]
> 2016-01-19 20:04:16,418 INFO [AsyncDispatcher event handler]
> org.apache.hadoop.mapreduce.v2.app.job.impl.TaskAttemptImpl:
> attempt_1453244277886_0001_m_000000_0 TaskAttempt Transitioned from ASSIGNED
> to RUNNING
> 2016-01-19 20:04:16,419 INFO [AsyncDispatcher event handler]
> org.apache.hadoop.mapreduce.v2.app.speculate.DefaultSpeculator:
> ATTEMPT_START task_1453244277886_0001_m_000000
> 2016-01-19 20:04:16,419 INFO [AsyncDispatcher event handler]
> org.apache.hadoop.mapreduce.v2.app.job.impl.TaskImpl:
> task_1453244277886_0001_m_000000 Task Transitioned from SCHEDULED to RUNNING
> 2016-01-19 20:04:17,251 INFO [RMCommunicator Allocator]
> org.apache.hadoop.mapreduce.v2.app.rm.RMContainerRequestor: getResources()
> for application_1453244277886_0001: ask=3 release= 0 newContainers=0
> finishedContainers=0 resourcelimit=<memory:768, vCores:-1> knownNMs=1
> 2016-01-19 20:04:18,269 INFO [RMCommunicator Allocator]
> org.apache.hadoop.mapreduce.v2.app.rm.RMContainerAllocator: Received
> completed container container_1453244277886_0001_01_000002
> 2016-01-19 20:04:18,269 INFO [RMCommunicator Allocator]
> org.apache.hadoop.mapreduce.v2.app.rm.RMContainerAllocator: Recalculating
> schedule, headroom=768
> 2016-01-19 20:04:18,269 INFO [RMCommunicator Allocator]
> org.apache.hadoop.mapreduce.v2.app.rm.RMContainerAllocator: Reduce slow
> start threshold not met. completedMapsForReduceSlowstart 1
> 2016-01-19 20:04:18,270 INFO [RMCommunicator Allocator]
> org.apache.hadoop.mapreduce.v2.app.rm.RMContainerAllocator: After
> Scheduling: PendingReds:1 ScheduledMaps:0 ScheduledReds:0 AssignedMaps:0
> AssignedReds:0 CompletedMaps:0 CompletedReds:0 ContAlloc:1 ContRel:0
> HostLocal:1 RackLocal:0
> 2016-01-19 20:04:18,280 INFO [AsyncDispatcher event handler]
> org.apache.hadoop.mapreduce.v2.app.job.impl.TaskAttemptImpl:
> attempt_1453244277886_0001_m_000000_0 TaskAttempt Transitioned from RUNNING
> to FAIL_CONTAINER_CLEANUP
> 2016-01-19 20:04:18,280 INFO [AsyncDispatcher event handler]
> org.apache.hadoop.mapreduce.v2.app.job.impl.TaskAttemptImpl: Diagnostics
> report from attempt_1453244277886_0001_m_000000_0: Exception from
> container-launch: org.apache.hadoop.util.Shell$ExitCodeException:
> org.apache.hadoop.util.Shell$ExitCodeException:
>     at org.apache.hadoop.util.Shell.runCommand(Shell.java:505)
>     at org.apache.hadoop.util.Shell.run(Shell.java:418)
>     at
> org.apache.hadoop.util.Shell$ShellCommandExecutor.execute(Shell.java:650)
>     at
> org.apache.hadoop.yarn.server.nodemanager.DefaultContainerExecutor.launchContainer(DefaultContainerExecutor.java:195)
>     at
> org.apache.hadoop.yarn.server.nodemanager.containermanager.launcher.ContainerLaunch.call(ContainerLaunch.java:300)
>     at
> org.apache.hadoop.yarn.server.nodemanager.containermanager.launcher.ContainerLaunch.call(ContainerLaunch.java:81)
>     at java.util.concurrent.FutureTask.run(FutureTask.java:262)
>     at
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
>     at
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
>     at java.lang.Thread.run(Thread.java:745)
>
>
> Container exited with a non-zero exit code 1
>
> 2016-01-19 20:04:18,281 INFO [ContainerLauncher #1]
> org.apache.hadoop.mapreduce.v2.app.launcher.ContainerLauncherImpl:
> Processing the event EventType: CONTAINER_REMOTE_CLEANUP for container
> container_1453244277886_0001_01_000002 taskAttempt
> attempt_1453244277886_0001_m_000000_0
> 2016-01-19 20:04:18,281 INFO [ContainerLauncher #1]
> org.apache.hadoop.mapreduce.v2.app.launcher.ContainerLauncherImpl: KILLING
> attempt_1453244277886_0001_m_000000_0
> 2016-01-19 20:04:18,299 INFO [AsyncDispatcher event handler]
> org.apache.hadoop.mapreduce.v2.app.job.impl.TaskAttemptImpl:
> attempt_1453244277886_0001_m_000000_0 TaskAttempt Transitioned from
> FAIL_CONTAINER_CLEANUP to FAIL_TASK_CLEANUP
> 2016-01-19 20:04:18,300 INFO [CommitterEvent Processor #1]
> org.apache.hadoop.mapreduce.v2.app.commit.CommitterEventHandler: Processing
> the event EventType: TASK_ABORT
> 2016-01-19 20:04:18,313 WARN [CommitterEvent Processor #1]
> org.apache.hadoop.mapreduce.lib.output.FileOutputCommitter: Could not delete
> hdfs://hdnode01:54310/user/hduser/output/wordcount/_temporary/1/_temporary/attempt_1453244277886_0001_m_000000_0
> 2016-01-19 20:04:18,317 INFO [AsyncDispatcher event handler]
> org.apache.hadoop.mapreduce.v2.app.job.impl.TaskAttemptImpl:
> attempt_1453244277886_0001_m_000000_0 TaskAttempt Transitioned from
> FAIL_TASK_CLEANUP to FAILED
> 2016-01-19 20:04:18,326 INFO [AsyncDispatcher event handler]
> org.apache.hadoop.yarn.util.RackResolver: Resolved localhost to
> /default-rack
> 2016-01-19 20:04:18,327 INFO [Thread-50]
> org.apache.hadoop.mapreduce.v2.app.rm.RMContainerRequestor: 1 failures on
> node localhost
> 2016-01-19 20:04:18,329 INFO [AsyncDispatcher event handler]
> org.apache.hadoop.mapreduce.v2.app.job.impl.TaskAttemptImpl:
> attempt_1453244277886_0001_m_000000_1 TaskAttempt Transitioned from NEW to
> UNASSIGNED
> 2016-01-19 20:04:18,329 INFO [Thread-50]
> org.apache.hadoop.mapreduce.v2.app.rm.RMContainerAllocator: Added
> attempt_1453244277886_0001_m_000000_1 to list of failed maps
> 2016-01-19 20:04:19,270 INFO [RMCommunicator Allocator]
> org.apache.hadoop.mapreduce.v2.app.rm.RMContainerAllocator: Before
> Scheduling: PendingReds:1 ScheduledMaps:1 ScheduledReds:0 AssignedMaps:0
> AssignedReds:0 CompletedMaps:0 CompletedReds:0 ContAlloc:1 ContRel:0
> HostLocal:1 RackLocal:0
> 2016-01-19 20:04:19,277 INFO [RMCommunicator Allocator]
> org.apache.hadoop.mapreduce.v2.app.rm.RMContainerRequestor: getResources()
> for application_1453244277886_0001: ask=1 release= 0 newContainers=0
> finishedContainers=0 resourcelimit=<memory:768, vCores:-1> knownNMs=1
> 2016-01-19 20:04:19,277 INFO [RMCommunicator Allocator]
> org.apache.hadoop.mapreduce.v2.app.rm.RMContainerAllocator: Recalculating
> schedule, headroom=768
> 2016-01-19 20:04:19,278 INFO [RMCommunicator Allocator]
> org.apache.hadoop.mapreduce.v2.app.rm.RMContainerAllocator: Reduce slow
> start threshold not met. completedMapsForReduceSlowstart 1
> 2016-01-19 20:04:20,285 INFO [RMCommunicator Allocator]
> org.apache.hadoop.mapreduce.v2.app.rm.RMContainerAllocator: Got allocated
> containers 1
> 2016-01-19 20:04:20,285 INFO [RMCommunicator Allocator]
> org.apache.hadoop.mapreduce.v2.app.rm.RMContainerAllocator: Assigning
> container Container: [ContainerId: container_1453244277886_0001_01_000003,
> NodeId: localhost:35711, NodeHttpAddress: localhost:8042, Resource:
> <memory:512, vCores:1>, Priority: 5, Token: Token { kind: ContainerToken,
> service: 127.0.0.1:35711 }, ] to fast fail map
> 2016-01-19 20:04:20,286 INFO [RMCommunicator Allocator]
> org.apache.hadoop.mapreduce.v2.app.rm.RMContainerAllocator: Assigned from
> earlierFailedMaps
> 2016-01-19 20:04:20,286 INFO [RMCommunicator Allocator]
> org.apache.hadoop.mapreduce.v2.app.rm.RMContainerAllocator: Assigned
> container container_1453244277886_0001_01_000003 to
> attempt_1453244277886_0001_m_000000_1
> 2016-01-19 20:04:20,287 INFO [RMCommunicator Allocator]
> org.apache.hadoop.mapreduce.v2.app.rm.RMContainerAllocator: Recalculating
> schedule, headroom=768
> 2016-01-19 20:04:20,287 INFO [RMCommunicator Allocator]
> org.apache.hadoop.mapreduce.v2.app.rm.RMContainerAllocator: Reduce slow
> start threshold not met. completedMapsForReduceSlowstart 1
> 2016-01-19 20:04:20,287 INFO [RMCommunicator Allocator]
> org.apache.hadoop.mapreduce.v2.app.rm.RMContainerAllocator: After
> Scheduling: PendingReds:1 ScheduledMaps:0 ScheduledReds:0 AssignedMaps:1
> AssignedReds:0 CompletedMaps:0 CompletedReds:0 ContAlloc:2 ContRel:0
> HostLocal:1 RackLocal:0
> 2016-01-19 20:04:20,287 INFO [AsyncDispatcher event handler]
> org.apache.hadoop.yarn.util.RackResolver: Resolved localhost to
> /default-rack
> 2016-01-19 20:04:20,289 INFO [AsyncDispatcher event handler]
> org.apache.hadoop.mapreduce.v2.app.job.impl.TaskAttemptImpl:
> attempt_1453244277886_0001_m_000000_1 TaskAttempt Transitioned from
> UNASSIGNED to ASSIGNED
> 2016-01-19 20:04:20,292 INFO [ContainerLauncher #2]
> org.apache.hadoop.mapreduce.v2.app.launcher.ContainerLauncherImpl:
> Processing the event EventType: CONTAINER_REMOTE_LAUNCH for container
> container_1453244277886_0001_01_000003 taskAttempt
> attempt_1453244277886_0001_m_000000_1
> 2016-01-19 20:04:20,292 INFO [ContainerLauncher #2]
> org.apache.hadoop.mapreduce.v2.app.launcher.ContainerLauncherImpl: Launching
> attempt_1453244277886_0001_m_000000_1
> 2016-01-19 20:04:20,301 INFO [ContainerLauncher #2]
> org.apache.hadoop.mapreduce.v2.app.launcher.ContainerLauncherImpl: Shuffle
> port returned by ContainerManager for attempt_1453244277886_0001_m_000000_1
> : 13562
> 2016-01-19 20:04:20,302 INFO [AsyncDispatcher event handler]
> org.apache.hadoop.mapreduce.v2.app.job.impl.TaskAttemptImpl: TaskAttempt:
> [attempt_1453244277886_0001_m_000000_1] using containerId:
> [container_1453244277886_0001_01_000003 on NM: [localhost:35711]
> 2016-01-19 20:04:20,303 INFO [AsyncDispatcher event handler]
> org.apache.hadoop.mapreduce.v2.app.job.impl.TaskAttemptImpl:
> attempt_1453244277886_0001_m_000000_1 TaskAttempt Transitioned from ASSIGNED
> to RUNNING
> 2016-01-19 20:04:20,304 INFO [AsyncDispatcher event handler]
> org.apache.hadoop.mapreduce.v2.app.speculate.DefaultSpeculator:
> ATTEMPT_START task_1453244277886_0001_m_000000
> 2016-01-19 20:04:21,295 INFO [RMCommunicator Allocator]
> org.apache.hadoop.mapreduce.v2.app.rm.RMContainerRequestor: getResources()
> for application_1453244277886_0001: ask=1 release= 0 newContainers=0
> finishedContainers=1 resourcelimit=<memory:768, vCores:-1> knownNMs=1
> 2016-01-19 20:04:21,296 INFO [RMCommunicator Allocator]
> org.apache.hadoop.mapreduce.v2.app.rm.RMContainerAllocator: Received
> completed container container_1453244277886_0001_01_000003
> 2016-01-19 20:04:21,296 INFO [RMCommunicator Allocator]
> org.apache.hadoop.mapreduce.v2.app.rm.RMContainerAllocator: Recalculating
> schedule, headroom=768
> 2016-01-19 20:04:21,296 INFO [RMCommunicator Allocator]
> org.apache.hadoop.mapreduce.v2.app.rm.RMContainerAllocator: Reduce slow
> start threshold not met. completedMapsForReduceSlowstart 1
> 2016-01-19 20:04:21,297 INFO [RMCommunicator Allocator]
> org.apache.hadoop.mapreduce.v2.app.rm.RMContainerAllocator: After
> Scheduling: PendingReds:1 ScheduledMaps:0 ScheduledReds:0 AssignedMaps:0
> AssignedReds:0 CompletedMaps:0 CompletedReds:0 ContAlloc:2 ContRel:0
> HostLocal:1 RackLocal:0
> 2016-01-19 20:04:21,297 INFO [AsyncDispatcher event handler]
> org.apache.hadoop.mapreduce.v2.app.job.impl.TaskAttemptImpl:
> attempt_1453244277886_0001_m_000000_1 TaskAttempt Transitioned from RUNNING
> to FAIL_CONTAINER_CLEANUP
> 2016-01-19 20:04:21,298 INFO [AsyncDispatcher event handler]
> org.apache.hadoop.mapreduce.v2.app.job.impl.TaskAttemptImpl: Diagnostics
> report from attempt_1453244277886_0001_m_000000_1: Exception from
> container-launch: org.apache.hadoop.util.Shell$ExitCodeException:
> org.apache.hadoop.util.Shell$ExitCodeException:
>     at org.apache.hadoop.util.Shell.runCommand(Shell.java:505)
>     at org.apache.hadoop.util.Shell.run(Shell.java:418)
>     at
> org.apache.hadoop.util.Shell$ShellCommandExecutor.execute(Shell.java:650)
>     at
> org.apache.hadoop.yarn.server.nodemanager.DefaultContainerExecutor.launchContainer(DefaultContainerExecutor.java:195)
>     at
> org.apache.hadoop.yarn.server.nodemanager.containermanager.launcher.ContainerLaunch.call(ContainerLaunch.java:300)
>     at
> org.apache.hadoop.yarn.server.nodemanager.containermanager.launcher.ContainerLaunch.call(ContainerLaunch.java:81)
>     at java.util.concurrent.FutureTask.run(FutureTask.java:262)
>     at
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
>     at
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
>     at java.lang.Thread.run(Thread.java:745)
>
>
> Container exited with a non-zero exit code 1
>
> 2016-01-19 20:04:21,300 INFO [ContainerLauncher #3]
> org.apache.hadoop.mapreduce.v2.app.launcher.ContainerLauncherImpl:
> Processing the event EventType: CONTAINER_REMOTE_CLEANUP for container
> container_1453244277886_0001_01_000003 taskAttempt
> attempt_1453244277886_0001_m_000000_1
> 2016-01-19 20:04:21,301 INFO [ContainerLauncher #3]
> org.apache.hadoop.mapreduce.v2.app.launcher.ContainerLauncherImpl: KILLING
> attempt_1453244277886_0001_m_000000_1
> 2016-01-19 20:04:21,307 INFO [AsyncDispatcher event handler]
> org.apache.hadoop.mapreduce.v2.app.job.impl.TaskAttemptImpl:
> attempt_1453244277886_0001_m_000000_1 TaskAttempt Transitioned from
> FAIL_CONTAINER_CLEANUP to FAIL_TASK_CLEANUP
> 2016-01-19 20:04:21,308 INFO [CommitterEvent Processor #2]
> org.apache.hadoop.mapreduce.v2.app.commit.CommitterEventHandler: Processing
> the event EventType: TASK_ABORT
> 2016-01-19 20:04:21,312 WARN [CommitterEvent Processor #2]
> org.apache.hadoop.mapreduce.lib.output.FileOutputCommitter: Could not delete
> hdfs://hdnode01:54310/user/hduser/output/wordcount/_temporary/1/_temporary/attempt_1453244277886_0001_m_000000_1
> 2016-01-19 20:04:21,312 INFO [AsyncDispatcher event handler]
> org.apache.hadoop.mapreduce.v2.app.job.impl.TaskAttemptImpl:
> attempt_1453244277886_0001_m_000000_1 TaskAttempt Transitioned from
> FAIL_TASK_CLEANUP to FAILED
> 2016-01-19 20:04:21,313 INFO [AsyncDispatcher event handler]
> org.apache.hadoop.yarn.util.RackResolver: Resolved localhost to
> /default-rack
> 2016-01-19 20:04:21,313 INFO [Thread-50]
> org.apache.hadoop.mapreduce.v2.app.rm.RMContainerRequestor: 2 failures on
> node localhost
> 2016-01-19 20:04:21,313 INFO [AsyncDispatcher event handler]
> org.apache.hadoop.mapreduce.v2.app.job.impl.TaskAttemptImpl:
> attempt_1453244277886_0001_m_000000_2 TaskAttempt Transitioned from NEW to
> UNASSIGNED
> 2016-01-19 20:04:21,314 INFO [Thread-50]
> org.apache.hadoop.mapreduce.v2.app.rm.RMContainerAllocator: Added
> attempt_1453244277886_0001_m_000000_2 to list of failed maps
> 2016-01-19 20:04:22,297 INFO [RMCommunicator Allocator]
> org.apache.hadoop.mapreduce.v2.app.rm.RMContainerAllocator: Before
> Scheduling: PendingReds:1 ScheduledMaps:1 ScheduledReds:0 AssignedMaps:0
> AssignedReds:0 CompletedMaps:0 CompletedReds:0 ContAlloc:2 ContRel:0
> HostLocal:1 RackLocal:0
> 2016-01-19 20:04:22,304 INFO [RMCommunicator Allocator]
> org.apache.hadoop.mapreduce.v2.app.rm.RMContainerRequestor: getResources()
> for application_1453244277886_0001: ask=1 release= 0 newContainers=0
> finishedContainers=0 resourcelimit=<memory:768, vCores:-1> knownNMs=1
> 2016-01-19 20:04:22,305 INFO [RMCommunicator Allocator]
> org.apache.hadoop.mapreduce.v2.app.rm.RMContainerAllocator: Recalculating
> schedule, headroom=768
> 2016-01-19 20:04:22,305 INFO [RMCommunicator Allocator]
> org.apache.hadoop.mapreduce.v2.app.rm.RMContainerAllocator: Reduce slow
> start threshold not met. completedMapsForReduceSlowstart 1
> 2016-01-19 20:04:23,316 INFO [RMCommunicator Allocator]
> org.apache.hadoop.mapreduce.v2.app.rm.RMContainerAllocator: Got allocated
> containers 1
> 2016-01-19 20:04:23,316 INFO [RMCommunicator Allocator]
> org.apache.hadoop.mapreduce.v2.app.rm.RMContainerAllocator: Assigning
> container Container: [ContainerId: container_1453244277886_0001_01_000004,
> NodeId: localhost:35711, NodeHttpAddress: localhost:8042, Resource:
> <memory:512, vCores:1>, Priority: 5, Token: Token { kind: ContainerToken,
> service: 127.0.0.1:35711 }, ] to fast fail map
> 2016-01-19 20:04:23,317 INFO [RMCommunicator Allocator]
> org.apache.hadoop.mapreduce.v2.app.rm.RMContainerAllocator: Assigned from
> earlierFailedMaps
> 2016-01-19 20:04:23,317 INFO [RMCommunicator Allocator]
> org.apache.hadoop.mapreduce.v2.app.rm.RMContainerAllocator: Assigned
> container container_1453244277886_0001_01_000004 to
> attempt_1453244277886_0001_m_000000_2
> 2016-01-19 20:04:23,318 INFO [RMCommunicator Allocator]
> org.apache.hadoop.mapreduce.v2.app.rm.RMContainerAllocator: Recalculating
> schedule, headroom=768
> 2016-01-19 20:04:23,318 INFO [RMCommunicator Allocator]
> org.apache.hadoop.mapreduce.v2.app.rm.RMContainerAllocator: Reduce slow
> start threshold not met. completedMapsForReduceSlowstart 1
> 2016-01-19 20:04:23,318 INFO [RMCommunicator Allocator]
> org.apache.hadoop.mapreduce.v2.app.rm.RMContainerAllocator: After
> Scheduling: PendingReds:1 ScheduledMaps:0 ScheduledReds:0 AssignedMaps:1
> AssignedReds:0 CompletedMaps:0 CompletedReds:0 ContAlloc:3 ContRel:0
> HostLocal:1 RackLocal:0
> 2016-01-19 20:04:23,318 INFO [AsyncDispatcher event handler]
> org.apache.hadoop.yarn.util.RackResolver: Resolved localhost to
> /default-rack
> 2016-01-19 20:04:23,320 INFO [AsyncDispatcher event handler]
> org.apache.hadoop.mapreduce.v2.app.job.impl.TaskAttemptImpl:
> attempt_1453244277886_0001_m_000000_2 TaskAttempt Transitioned from
> UNASSIGNED to ASSIGNED
> 2016-01-19 20:04:23,323 INFO [ContainerLauncher #4]
> org.apache.hadoop.mapreduce.v2.app.launcher.ContainerLauncherImpl:
> Processing the event EventType: CONTAINER_REMOTE_LAUNCH for container
> container_1453244277886_0001_01_000004 taskAttempt
> attempt_1453244277886_0001_m_000000_2
> 2016-01-19 20:04:23,323 INFO [ContainerLauncher #4]
> org.apache.hadoop.mapreduce.v2.app.launcher.ContainerLauncherImpl: Launching
> attempt_1453244277886_0001_m_000000_2
> 2016-01-19 20:04:23,334 INFO [ContainerLauncher #4]
> org.apache.hadoop.mapreduce.v2.app.launcher.ContainerLauncherImpl: Shuffle
> port returned by ContainerManager for attempt_1453244277886_0001_m_000000_2
> : 13562
> 2016-01-19 20:04:23,335 INFO [AsyncDispatcher event handler]
> org.apache.hadoop.mapreduce.v2.app.job.impl.TaskAttemptImpl: TaskAttempt:
> [attempt_1453244277886_0001_m_000000_2] using containerId:
> [container_1453244277886_0001_01_000004 on NM: [localhost:35711]
> 2016-01-19 20:04:23,335 INFO [AsyncDispatcher event handler]
> org.apache.hadoop.mapreduce.v2.app.job.impl.TaskAttemptImpl:
> attempt_1453244277886_0001_m_000000_2 TaskAttempt Transitioned from ASSIGNED
> to RUNNING
> 2016-01-19 20:04:23,335 INFO [AsyncDispatcher event handler]
> org.apache.hadoop.mapreduce.v2.app.speculate.DefaultSpeculator:
> ATTEMPT_START task_1453244277886_0001_m_000000
> 2016-01-19 20:04:24,326 INFO [RMCommunicator Allocator]
> org.apache.hadoop.mapreduce.v2.app.rm.RMContainerRequestor: getResources()
> for application_1453244277886_0001: ask=1 release= 0 newContainers=0
> finishedContainers=1 resourcelimit=<memory:768, vCores:-1> knownNMs=1
> 2016-01-19 20:04:24,327 INFO [RMCommunicator Allocator]
> org.apache.hadoop.mapreduce.v2.app.rm.RMContainerAllocator: Received
> completed container container_1453244277886_0001_01_000004
> 2016-01-19 20:04:24,327 INFO [RMCommunicator Allocator]
> org.apache.hadoop.mapreduce.v2.app.rm.RMContainerAllocator: Recalculating
> schedule, headroom=768
> 2016-01-19 20:04:24,327 INFO [RMCommunicator Allocator]
> org.apache.hadoop.mapreduce.v2.app.rm.RMContainerAllocator: Reduce slow
> start threshold not met. completedMapsForReduceSlowstart 1
> 2016-01-19 20:04:24,328 INFO [RMCommunicator Allocator]
> org.apache.hadoop.mapreduce.v2.app.rm.RMContainerAllocator: After
> Scheduling: PendingReds:1 ScheduledMaps:0 ScheduledReds:0 AssignedMaps:0
> AssignedReds:0 CompletedMaps:0 CompletedReds:0 ContAlloc:3 ContRel:0
> HostLocal:1 RackLocal:0
> 2016-01-19 20:04:24,328 INFO [AsyncDispatcher event handler]
> org.apache.hadoop.mapreduce.v2.app.job.impl.TaskAttemptImpl:
> attempt_1453244277886_0001_m_000000_2 TaskAttempt Transitioned from RUNNING
> to FAIL_CONTAINER_CLEANUP
> 2016-01-19 20:04:24,328 INFO [AsyncDispatcher event handler]
> org.apache.hadoop.mapreduce.v2.app.job.impl.TaskAttemptImpl: Diagnostics
> report from attempt_1453244277886_0001_m_000000_2: Exception from
> container-launch: org.apache.hadoop.util.Shell$ExitCodeException:
> org.apache.hadoop.util.Shell$ExitCodeException:
>     at org.apache.hadoop.util.Shell.runCommand(Shell.java:505)
>     at org.apache.hadoop.util.Shell.run(Shell.java:418)
>     at
> org.apache.hadoop.util.Shell$ShellCommandExecutor.execute(Shell.java:650)
>     at
> org.apache.hadoop.yarn.server.nodemanager.DefaultContainerExecutor.launchContainer(DefaultContainerExecutor.java:195)
>     at
> org.apache.hadoop.yarn.server.nodemanager.containermanager.launcher.ContainerLaunch.call(ContainerLaunch.java:300)
>     at
> org.apache.hadoop.yarn.server.nodemanager.containermanager.launcher.ContainerLaunch.call(ContainerLaunch.java:81)
>     at java.util.concurrent.FutureTask.run(FutureTask.java:262)
>     at
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
>     at
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
>     at java.lang.Thread.run(Thread.java:745)
>
>
> Container exited with a non-zero exit code 1
>
> 2016-01-19 20:04:24,331 INFO [ContainerLauncher #5]
> org.apache.hadoop.mapreduce.v2.app.launcher.ContainerLauncherImpl:
> Processing the event EventType: CONTAINER_REMOTE_CLEANUP for container
> container_1453244277886_0001_01_000004 taskAttempt
> attempt_1453244277886_0001_m_000000_2
> 2016-01-19 20:04:24,331 INFO [ContainerLauncher #5]
> org.apache.hadoop.mapreduce.v2.app.launcher.ContainerLauncherImpl: KILLING
> attempt_1453244277886_0001_m_000000_2
> 2016-01-19 20:04:24,335 INFO [AsyncDispatcher event handler]
> org.apache.hadoop.mapreduce.v2.app.job.impl.TaskAttemptImpl:
> attempt_1453244277886_0001_m_000000_2 TaskAttempt Transitioned from
> FAIL_CONTAINER_CLEANUP to FAIL_TASK_CLEANUP
> 2016-01-19 20:04:24,336 INFO [CommitterEvent Processor #3]
> org.apache.hadoop.mapreduce.v2.app.commit.CommitterEventHandler: Processing
> the event EventType: TASK_ABORT
> 2016-01-19 20:04:24,340 WARN [CommitterEvent Processor #3]
> org.apache.hadoop.mapreduce.lib.output.FileOutputCommitter: Could not delete
> hdfs://hdnode01:54310/user/hduser/output/wordcount/_temporary/1/_temporary/attempt_1453244277886_0001_m_000000_2
> 2016-01-19 20:04:24,341 INFO [AsyncDispatcher event handler]
> org.apache.hadoop.mapreduce.v2.app.job.impl.TaskAttemptImpl:
> attempt_1453244277886_0001_m_000000_2 TaskAttempt Transitioned from
> FAIL_TASK_CLEANUP to FAILED
> 2016-01-19 20:04:24,341 INFO [AsyncDispatcher event handler]
> org.apache.hadoop.yarn.util.RackResolver: Resolved localhost to
> /default-rack
> 2016-01-19 20:04:24,342 INFO [Thread-50]
> org.apache.hadoop.mapreduce.v2.app.rm.RMContainerRequestor: 3 failures on
> node localhost
> 2016-01-19 20:04:24,342 INFO [Thread-50]
> org.apache.hadoop.mapreduce.v2.app.rm.RMContainerRequestor: Blacklisted host
> localhost
> 2016-01-19 20:04:24,342 INFO [AsyncDispatcher event handler]
> org.apache.hadoop.mapreduce.v2.app.job.impl.TaskAttemptImpl:
> attempt_1453244277886_0001_m_000000_3 TaskAttempt Transitioned from NEW to
> UNASSIGNED
> 2016-01-19 20:04:24,343 INFO [Thread-50]
> org.apache.hadoop.mapreduce.v2.app.rm.RMContainerAllocator: Added
> attempt_1453244277886_0001_m_000000_3 to list of failed maps
> 2016-01-19 20:04:25,328 INFO [RMCommunicator Allocator]
> org.apache.hadoop.mapreduce.v2.app.rm.RMContainerAllocator: Before
> Scheduling: PendingReds:1 ScheduledMaps:1 ScheduledReds:0 AssignedMaps:0
> AssignedReds:0 CompletedMaps:0 CompletedReds:0 ContAlloc:3 ContRel:0
> HostLocal:1 RackLocal:0
> 2016-01-19 20:04:25,336 INFO [RMCommunicator Allocator]
> org.apache.hadoop.mapreduce.v2.app.rm.RMContainerRequestor: getResources()
> for application_1453244277886_0001: ask=1 release= 0 newContainers=0
> finishedContainers=0 resourcelimit=<memory:768, vCores:-1> knownNMs=1
> 2016-01-19 20:04:25,336 INFO [RMCommunicator Allocator]
> org.apache.hadoop.mapreduce.v2.app.rm.RMContainerRequestor: Update the
> blacklist for application_1453244277886_0001: blacklistAdditions=1
> blacklistRemovals=0
> 2016-01-19 20:04:25,337 INFO [RMCommunicator Allocator]
> org.apache.hadoop.mapreduce.v2.app.rm.RMContainerRequestor: Ignore
> blacklisting set to true. Known: 1, Blacklisted: 1, 100%
> 2016-01-19 20:04:25,337 INFO [RMCommunicator Allocator]
> org.apache.hadoop.mapreduce.v2.app.rm.RMContainerAllocator: Recalculating
> schedule, headroom=768
> 2016-01-19 20:04:25,337 INFO [RMCommunicator Allocator]
> org.apache.hadoop.mapreduce.v2.app.rm.RMContainerAllocator: Reduce slow
> start threshold not met. completedMapsForReduceSlowstart 1
> 2016-01-19 20:04:26,342 INFO [RMCommunicator Allocator]
> org.apache.hadoop.mapreduce.v2.app.rm.RMContainerRequestor: Update the
> blacklist for application_1453244277886_0001: blacklistAdditions=0
> blacklistRemovals=1
> 2016-01-19 20:04:26,342 INFO [RMCommunicator Allocator]
> org.apache.hadoop.mapreduce.v2.app.rm.RMContainerAllocator: Recalculating
> schedule, headroom=768
> 2016-01-19 20:04:26,342 INFO [RMCommunicator Allocator]
> org.apache.hadoop.mapreduce.v2.app.rm.RMContainerAllocator: Reduce slow
> start threshold not met. completedMapsForReduceSlowstart 1
> 2016-01-19 20:04:27,351 INFO [RMCommunicator Allocator]
> org.apache.hadoop.mapreduce.v2.app.rm.RMContainerAllocator: Got allocated
> containers 1
> 2016-01-19 20:04:27,352 INFO [RMCommunicator Allocator]
> org.apache.hadoop.mapreduce.v2.app.rm.RMContainerAllocator: Assigning
> container Container: [ContainerId: container_1453244277886_0001_01_000005,
> NodeId: localhost:35711, NodeHttpAddress: localhost:8042, Resource:
> <memory:512, vCores:1>, Priority: 5, Token: Token { kind: ContainerToken,
> service: 127.0.0.1:35711 }, ] to fast fail map
> 2016-01-19 20:04:27,352 INFO [RMCommunicator Allocator]
> org.apache.hadoop.mapreduce.v2.app.rm.RMContainerAllocator: Assigned from
> earlierFailedMaps
> 2016-01-19 20:04:27,352 INFO [RMCommunicator Allocator]
> org.apache.hadoop.mapreduce.v2.app.rm.RMContainerAllocator: Assigned
> container container_1453244277886_0001_01_000005 to
> attempt_1453244277886_0001_m_000000_3
> 2016-01-19 20:04:27,353 INFO [RMCommunicator Allocator]
> org.apache.hadoop.mapreduce.v2.app.rm.RMContainerAllocator: Recalculating
> schedule, headroom=768
> 2016-01-19 20:04:27,353 INFO [RMCommunicator Allocator]
> org.apache.hadoop.mapreduce.v2.app.rm.RMContainerAllocator: Reduce slow
> start threshold not met. completedMapsForReduceSlowstart 1
> 2016-01-19 20:04:27,353 INFO [AsyncDispatcher event handler]
> org.apache.hadoop.yarn.util.RackResolver: Resolved localhost to
> /default-rack
> 2016-01-19 20:04:27,353 INFO [RMCommunicator Allocator]
> org.apache.hadoop.mapreduce.v2.app.rm.RMContainerAllocator: After
> Scheduling: PendingReds:1 ScheduledMaps:0 ScheduledReds:0 AssignedMaps:1
> AssignedReds:0 CompletedMaps:0 CompletedReds:0 ContAlloc:4 ContRel:0
> HostLocal:1 RackLocal:0
> 2016-01-19 20:04:27,354 INFO [AsyncDispatcher event handler]
> org.apache.hadoop.mapreduce.v2.app.job.impl.TaskAttemptImpl:
> attempt_1453244277886_0001_m_000000_3 TaskAttempt Transitioned from
> UNASSIGNED to ASSIGNED
> 2016-01-19 20:04:27,355 INFO [ContainerLauncher #6]
> org.apache.hadoop.mapreduce.v2.app.launcher.ContainerLauncherImpl:
> Processing the event EventType: CONTAINER_REMOTE_LAUNCH for container
> container_1453244277886_0001_01_000005 taskAttempt
> attempt_1453244277886_0001_m_000000_3
> 2016-01-19 20:04:27,355 INFO [ContainerLauncher #6]
> org.apache.hadoop.mapreduce.v2.app.launcher.ContainerLauncherImpl: Launching
> attempt_1453244277886_0001_m_000000_3
> 2016-01-19 20:04:27,365 INFO [ContainerLauncher #6]
> org.apache.hadoop.mapreduce.v2.app.launcher.ContainerLauncherImpl: Shuffle
> port returned by ContainerManager for attempt_1453244277886_0001_m_000000_3
> : 13562
> 2016-01-19 20:04:27,365 INFO [AsyncDispatcher event handler]
> org.apache.hadoop.mapreduce.v2.app.job.impl.TaskAttemptImpl: TaskAttempt:
> [attempt_1453244277886_0001_m_000000_3] using containerId:
> [container_1453244277886_0001_01_000005 on NM: [localhost:35711]
> 2016-01-19 20:04:27,366 INFO [AsyncDispatcher event handler]
> org.apache.hadoop.mapreduce.v2.app.job.impl.TaskAttemptImpl:
> attempt_1453244277886_0001_m_000000_3 TaskAttempt Transitioned from ASSIGNED
> to RUNNING
> 2016-01-19 20:04:27,366 INFO [AsyncDispatcher event handler]
> org.apache.hadoop.mapreduce.v2.app.speculate.DefaultSpeculator:
> ATTEMPT_START task_1453244277886_0001_m_000000
> 2016-01-19 20:04:28,361 INFO [RMCommunicator Allocator]
> org.apache.hadoop.mapreduce.v2.app.rm.RMContainerRequestor: getResources()
> for application_1453244277886_0001: ask=1 release= 0 newContainers=0
> finishedContainers=1 resourcelimit=<memory:768, vCores:-1> knownNMs=1
> 2016-01-19 20:04:28,361 INFO [RMCommunicator Allocator]
> org.apache.hadoop.mapreduce.v2.app.rm.RMContainerAllocator: Received
> completed container container_1453244277886_0001_01_000005
> 2016-01-19 20:04:28,361 INFO [RMCommunicator Allocator]
> org.apache.hadoop.mapreduce.v2.app.rm.RMContainerAllocator: Recalculating
> schedule, headroom=768
> 2016-01-19 20:04:28,361 INFO [RMCommunicator Allocator]
> org.apache.hadoop.mapreduce.v2.app.rm.RMContainerAllocator: Reduce slow
> start threshold not met. completedMapsForReduceSlowstart 1
> 2016-01-19 20:04:28,362 INFO [RMCommunicator Allocator]
> org.apache.hadoop.mapreduce.v2.app.rm.RMContainerAllocator: After
> Scheduling: PendingReds:1 ScheduledMaps:0 ScheduledReds:0 AssignedMaps:0
> AssignedReds:0 CompletedMaps:0 CompletedReds:0 ContAlloc:4 ContRel:0
> HostLocal:1 RackLocal:0
> 2016-01-19 20:04:28,362 INFO [AsyncDispatcher event handler]
> org.apache.hadoop.mapreduce.v2.app.job.impl.TaskAttemptImpl:
> attempt_1453244277886_0001_m_000000_3 TaskAttempt Transitioned from RUNNING
> to FAIL_CONTAINER_CLEANUP
> 2016-01-19 20:04:28,362 INFO [AsyncDispatcher event handler]
> org.apache.hadoop.mapreduce.v2.app.job.impl.TaskAttemptImpl: Diagnostics
> report from attempt_1453244277886_0001_m_000000_3: Exception from
> container-launch: org.apache.hadoop.util.Shell$ExitCodeException:
> org.apache.hadoop.util.Shell$ExitCodeException:
>     at org.apache.hadoop.util.Shell.runCommand(Shell.java:505)
>     at org.apache.hadoop.util.Shell.run(Shell.java:418)
>     at
> org.apache.hadoop.util.Shell$ShellCommandExecutor.execute(Shell.java:650)
>     at
> org.apache.hadoop.yarn.server.nodemanager.DefaultContainerExecutor.launchContainer(DefaultContainerExecutor.java:195)
>     at
> org.apache.hadoop.yarn.server.nodemanager.containermanager.launcher.ContainerLaunch.call(ContainerLaunch.java:300)
>     at
> org.apache.hadoop.yarn.server.nodemanager.containermanager.launcher.ContainerLaunch.call(ContainerLaunch.java:81)
>     at java.util.concurrent.FutureTask.run(FutureTask.java:262)
>     at
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
>     at
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
>     at java.lang.Thread.run(Thread.java:745)
>
>
> Container exited with a non-zero exit code 1
>
> 2016-01-19 20:04:28,364 INFO [ContainerLauncher #7]
> org.apache.hadoop.mapreduce.v2.app.launcher.ContainerLauncherImpl:
> Processing the event EventType: CONTAINER_REMOTE_CLEANUP for container
> container_1453244277886_0001_01_000005 taskAttempt
> attempt_1453244277886_0001_m_000000_3
> 2016-01-19 20:04:28,365 INFO [ContainerLauncher #7]
> org.apache.hadoop.mapreduce.v2.app.launcher.ContainerLauncherImpl: KILLING
> attempt_1453244277886_0001_m_000000_3
> 2016-01-19 20:04:28,373 INFO [AsyncDispatcher event handler]
> org.apache.hadoop.mapreduce.v2.app.job.impl.TaskAttemptImpl:
> attempt_1453244277886_0001_m_000000_3 TaskAttempt Transitioned from
> FAIL_CONTAINER_CLEANUP to FAIL_TASK_CLEANUP
> 2016-01-19 20:04:28,374 INFO [CommitterEvent Processor #4]
> org.apache.hadoop.mapreduce.v2.app.commit.CommitterEventHandler: Processing
> the event EventType: TASK_ABORT
> 2016-01-19 20:04:28,377 WARN [CommitterEvent Processor #4]
> org.apache.hadoop.mapreduce.lib.output.FileOutputCommitter: Could not delete
> hdfs://hdnode01:54310/user/hduser/output/wordcount/_temporary/1/_temporary/attempt_1453244277886_0001_m_000000_3
> 2016-01-19 20:04:28,378 INFO [AsyncDispatcher event handler]
> org.apache.hadoop.mapreduce.v2.app.job.impl.TaskAttemptImpl:
> attempt_1453244277886_0001_m_000000_3 TaskAttempt Transitioned from
> FAIL_TASK_CLEANUP to FAILED
> 2016-01-19 20:04:28,380 INFO [AsyncDispatcher event handler]
> org.apache.hadoop.mapreduce.v2.app.job.impl.TaskImpl:
> task_1453244277886_0001_m_000000 Task Transitioned from RUNNING to FAILED
> 2016-01-19 20:04:28,381 INFO [AsyncDispatcher event handler]
> org.apache.hadoop.mapreduce.v2.app.job.impl.JobImpl: Num completed Tasks: 1
> 2016-01-19 20:04:28,381 INFO [AsyncDispatcher event handler]
> org.apache.hadoop.mapreduce.v2.app.job.impl.JobImpl: Job failed as tasks
> failed. failedMaps:1 failedReduces:0
> 2016-01-19 20:04:28,382 INFO [AsyncDispatcher event handler]
> org.apache.hadoop.mapreduce.v2.app.job.impl.JobImpl:
> job_1453244277886_0001Job Transitioned from RUNNING to FAIL_WAIT
> 2016-01-19 20:04:28,383 INFO [AsyncDispatcher event handler]
> org.apache.hadoop.mapreduce.v2.app.job.impl.TaskImpl:
> task_1453244277886_0001_r_000000 Task Transitioned from SCHEDULED to
> KILL_WAIT
> 2016-01-19 20:04:28,383 INFO [AsyncDispatcher event handler]
> org.apache.hadoop.mapreduce.v2.app.job.impl.TaskAttemptImpl:
> attempt_1453244277886_0001_r_000000_0 TaskAttempt Transitioned from
> UNASSIGNED to KILLED
> 2016-01-19 20:04:28,383 INFO [Thread-50]
> org.apache.hadoop.mapreduce.v2.app.rm.RMContainerAllocator: Processing the
> event EventType: CONTAINER_DEALLOCATE
> 2016-01-19 20:04:28,383 ERROR [Thread-50]
> org.apache.hadoop.mapreduce.v2.app.rm.RMContainerAllocator: Could not
> deallocate container for task attemptId
> attempt_1453244277886_0001_r_000000_0
> 2016-01-19 20:04:28,383 INFO [AsyncDispatcher event handler]
> org.apache.hadoop.mapreduce.v2.app.job.impl.TaskImpl:
> task_1453244277886_0001_r_000000 Task Transitioned from KILL_WAIT to KILLED
> 2016-01-19 20:04:28,384 INFO [AsyncDispatcher event handler]
> org.apache.hadoop.mapreduce.v2.app.job.impl.JobImpl:
> job_1453244277886_0001Job Transitioned from FAIL_WAIT to FAIL_ABORT
> 2016-01-19 20:04:28,390 INFO [CommitterEvent Processor #0]
> org.apache.hadoop.mapreduce.v2.app.commit.CommitterEventHandler: Processing
> the event EventType: JOB_ABORT
> 2016-01-19 20:04:28,432 INFO [AsyncDispatcher event handler]
> org.apache.hadoop.mapreduce.v2.app.job.impl.JobImpl:
> job_1453244277886_0001Job Transitioned from FAIL_ABORT to FAILED
> 2016-01-19 20:04:28,433 INFO [Thread-61]
> org.apache.hadoop.mapreduce.v2.app.MRAppMaster: We are finishing cleanly so
> this is the last retry
> 2016-01-19 20:04:28,433 INFO [Thread-61]
> org.apache.hadoop.mapreduce.v2.app.MRAppMaster: Notify RMCommunicator
> isAMLastRetry: true
> 2016-01-19 20:04:28,433 INFO [Thread-61]
> org.apache.hadoop.mapreduce.v2.app.rm.RMContainerAllocator: RMCommunicator
> notified that shouldUnregistered is: true
> 2016-01-19 20:04:28,433 INFO [Thread-61]
> org.apache.hadoop.mapreduce.v2.app.MRAppMaster: Notify JHEH isAMLastRetry:
> true
> 2016-01-19 20:04:28,433 INFO [Thread-61]
> org.apache.hadoop.mapreduce.jobhistory.JobHistoryEventHandler:
> JobHistoryEventHandler notified that forceJobCompletion is true
> 2016-01-19 20:04:28,434 INFO [Thread-61]
> org.apache.hadoop.mapreduce.v2.app.MRAppMaster: Calling stop for all the
> services
> 2016-01-19 20:04:28,435 INFO [Thread-61]
> org.apache.hadoop.mapreduce.jobhistory.JobHistoryEventHandler: Stopping
> JobHistoryEventHandler. Size of the outstanding queue size is 0
> 2016-01-19 20:04:29,362 INFO [RMCommunicator Allocator]
> org.apache.hadoop.mapreduce.v2.app.rm.RMContainerAllocator: Before
> Scheduling: PendingReds:1 ScheduledMaps:0 ScheduledReds:0 AssignedMaps:0
> AssignedReds:0 CompletedMaps:1 CompletedReds:0 ContAlloc:4 ContRel:0
> HostLocal:1 RackLocal:0
> 2016-01-19 20:04:29,362 INFO [eventHandlingThread]
> org.apache.hadoop.mapreduce.jobhistory.JobHistoryEventHandler: Copying
> hdfs://hdnode01:54310/tmp/hadoop-yarn/staging/hduser/.staging/job_1453244277886_0001/job_1453244277886_0001_1.jhist
> to
> hdfs://hdnode01:54310/tmp/hadoop-yarn/staging/history/done_intermediate/hduser/job_1453244277886_0001-1453244648033-hduser-word+count-1453244668381-0-0-FAILED-default-1453244654166.jhist_tmp
> 2016-01-19 20:04:29,366 INFO [RMCommunicator Allocator]
> org.apache.hadoop.mapreduce.v2.app.rm.RMContainerAllocator: Recalculating
> schedule, headroom=768
> 2016-01-19 20:04:29,366 INFO [RMCommunicator Allocator]
> org.apache.hadoop.mapreduce.v2.app.rm.RMContainerAllocator: Reduce slow
> start threshold reached. Scheduling reduces.
> 2016-01-19 20:04:29,366 INFO [RMCommunicator Allocator]
> org.apache.hadoop.mapreduce.v2.app.rm.RMContainerAllocator: All maps
> assigned. Ramping up all remaining reduces:1
> 2016-01-19 20:04:29,366 INFO [RMCommunicator Allocator]
> org.apache.hadoop.mapreduce.v2.app.rm.RMContainerAllocator: After
> Scheduling: PendingReds:0 ScheduledMaps:0 ScheduledReds:1 AssignedMaps:0
> AssignedReds:0 CompletedMaps:1 CompletedReds:0 ContAlloc:4 ContRel:0
> HostLocal:1 RackLocal:0
> 2016-01-19 20:04:29,544 INFO [eventHandlingThread]
> org.apache.hadoop.mapreduce.jobhistory.JobHistoryEventHandler: Copied to
> done location:
> hdfs://hdnode01:54310/tmp/hadoop-yarn/staging/history/done_intermediate/hduser/job_1453244277886_0001-1453244648033-hduser-word+count-1453244668381-0-0-FAILED-default-1453244654166.jhist_tmp
> 2016-01-19 20:04:29,598 INFO [eventHandlingThread]
> org.apache.hadoop.mapreduce.jobhistory.JobHistoryEventHandler: Copying
> hdfs://hdnode01:54310/tmp/hadoop-yarn/staging/hduser/.staging/job_1453244277886_0001/job_1453244277886_0001_1_conf.xml
> to
> hdfs://hdnode01:54310/tmp/hadoop-yarn/staging/history/done_intermediate/hduser/job_1453244277886_0001_conf.xml_tmp
> 2016-01-19 20:04:29,801 INFO [eventHandlingThread]
> org.apache.hadoop.mapreduce.jobhistory.JobHistoryEventHandler: Copied to
> done location:
> hdfs://hdnode01:54310/tmp/hadoop-yarn/staging/history/done_intermediate/hduser/job_1453244277886_0001_conf.xml_tmp
> 2016-01-19 20:04:29,907 INFO [eventHandlingThread]
> org.apache.hadoop.mapreduce.jobhistory.JobHistoryEventHandler: Moved tmp to
> done:
> hdfs://hdnode01:54310/tmp/hadoop-yarn/staging/history/done_intermediate/hduser/job_1453244277886_0001.summary_tmp
> to
> hdfs://hdnode01:54310/tmp/hadoop-yarn/staging/history/done_intermediate/hduser/job_1453244277886_0001.summary
> 2016-01-19 20:04:30,008 INFO [eventHandlingThread]
> org.apache.hadoop.mapreduce.jobhistory.JobHistoryEventHandler: Moved tmp to
> done:
> hdfs://hdnode01:54310/tmp/hadoop-yarn/staging/history/done_intermediate/hduser/job_1453244277886_0001_conf.xml_tmp
> to
> hdfs://hdnode01:54310/tmp/hadoop-yarn/staging/history/done_intermediate/hduser/job_1453244277886_0001_conf.xml
> 2016-01-19 20:04:30,070 INFO [eventHandlingThread]
> org.apache.hadoop.mapreduce.jobhistory.JobHistoryEventHandler: Moved tmp to
> done:
> hdfs://hdnode01:54310/tmp/hadoop-yarn/staging/history/done_intermediate/hduser/job_1453244277886_0001-1453244648033-hduser-word+count-1453244668381-0-0-FAILED-default-1453244654166.jhist_tmp
> to
> hdfs://hdnode01:54310/tmp/hadoop-yarn/staging/history/done_intermediate/hduser/job_1453244277886_0001-1453244648033-hduser-word+count-1453244668381-0-0-FAILED-default-1453244654166.jhist
> 2016-01-19 20:04:30,071 INFO [Thread-61]
> org.apache.hadoop.mapreduce.jobhistory.JobHistoryEventHandler: Stopped
> JobHistoryEventHandler. super.stop()
> 2016-01-19 20:04:30,078 INFO [Thread-61]
> org.apache.hadoop.mapreduce.v2.app.rm.RMContainerAllocator: Setting job
> diagnostics to Task failed task_1453244277886_0001_m_000000
> Job failed as tasks failed. failedMaps:1 failedReduces:0
>
> 2016-01-19 20:04:30,080 INFO [Thread-61]
> org.apache.hadoop.mapreduce.v2.app.rm.RMContainerAllocator: History url is
> http://localhost:19888/jobhistory/job/job_1453244277886_0001
> 2016-01-19 20:04:30,094 INFO [Thread-61]
> org.apache.hadoop.mapreduce.v2.app.rm.RMContainerAllocator: Waiting for
> application to be successfully unregistered.
> 2016-01-19 20:04:31,099 INFO [Thread-61]
> org.apache.hadoop.mapreduce.v2.app.rm.RMContainerAllocator: Final Stats:
> PendingReds:0 ScheduledMaps:0 ScheduledReds:1 AssignedMaps:0 AssignedReds:0
> CompletedMaps:1 CompletedReds:0 ContAlloc:4 ContRel:0 HostLocal:1
> RackLocal:0
> 2016-01-19 20:04:31,104 INFO [Thread-61]
> org.apache.hadoop.mapreduce.v2.app.MRAppMaster: Deleting staging directory
> hdfs://hdnode01:54310
> /tmp/hadoop-yarn/staging/hduser/.staging/job_1453244277886_0001
> 2016-01-19 20:04:31,133 INFO [Thread-61] org.apache.hadoop.ipc.Server:
> Stopping server on 45584
> 2016-01-19 20:04:31,135 INFO [IPC Server listener on 45584]
> org.apache.hadoop.ipc.Server: Stopping IPC Server listener on 45584
> 2016-01-19 20:04:31,135 INFO [IPC Server Responder]
> org.apache.hadoop.ipc.Server: Stopping IPC Server Responder
> 2016-01-19 20:04:31,136 INFO [TaskHeartbeatHandler PingChecker]
> org.apache.hadoop.mapreduce.v2.app.TaskHeartbeatHandler:
> TaskHeartbeatHandler thread interrupted
>
>
> Jps results, I believe that everything is OK, right?:
> 21267 DataNode
> 21609 ResourceManager
> 21974 JobHistoryServer
> 21735 NodeManager
> 24546 Jps
> 16532 org.eclipse.equinox.launcher_1.3.0.v20140415-2008.jar
> 21121 NameNode
> 22098 QuorumPeerMain
> 21456 SecondaryNameNode
>
>


Re: Can't run hadoop examples with YARN Single node cluster

Posted by Namikaze Minato <ll...@gmail.com>.
It could be a classpath issue (see
http://stackoverflow.com/a/25090151/4486184); I strongly suspect that
is the case.

You could drill down to the exact root cause by looking at the
datanode logs (see
http://mail-archives.apache.org/mod_mbox/hadoop-user/201410.mbox/%3CCAEMetGubzq12LXbLRk6N4ejOoKse9dLEWMW8_WE6aRj=+RQtVw@mail.gmail.com%3E),
but I'm not sure we would see a different error from the one we
already have...
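
If log aggregation is enabled on your cluster
(yarn.log-aggregation-enable in yarn-site.xml -- I'm assuming it is),
something like this pulls the stderr/stdout/syslog of every container
of the failed application in one shot, using the application id from
your output:

    $HADOOP_HOME/bin/yarn logs -applicationId application_1453244277886_0001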

Check if your application has the correct values for the following
variables (a quick shell check is sketched right after this list):
HADOOP_CONF_DIR
HADOOP_COMMON_HOME
HADOOP_HDFS_HOME
HADOOP_MAPRED_HOME
HADOOP_YARN_HOME
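
For example, a minimal sanity check from the shell you submit the job
from (just an illustration; the correct values depend on your install
layout, e.g. everything under /usr/local/hadoop for a plain tarball
install):

    for v in HADOOP_CONF_DIR HADOOP_COMMON_HOME HADOOP_HDFS_HOME \
             HADOOP_MAPRED_HOME HADOOP_YARN_HOME; do
        echo "$v=${!v}"        # empty value means the variable is not set
    done
    $HADOOP_HOME/bin/hadoop classpath    # should list the share/hadoop/* dirs

That at least rules out an empty or obviously wrong variable before
digging further.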

I'm afraid I can't help you much more than this myself, sorry...

LLoyd

On 20 January 2016 at 02:08, José Luis Larroque <la...@gmail.com> wrote:
> Hi guys, thanks for your answers.
>
> Wordcount logs:
>
> 16/01/19 21:58:32 INFO client.RMProxy: Connecting to ResourceManager at
> hdnode01/192.168.0.10:8050
> SLF4J: Class path contains multiple SLF4J bindings.
> SLF4J: Found binding in
> [jar:file:/usr/local/hadoop/share/hadoop/common/lib/slf4j-log4j12-1.7.5.jar!/org/slf4j/impl/StaticLoggerBinder.class]
> SLF4J: Found binding in
> [jar:file:/usr/local/hadoop/share/hadoop/yarn/lib/giraph-examples-1.1.0-for-hadoop-2.4.0-jar-with-dependencies.jar!/org/slf4j/impl/StaticLoggerBinder.class]
> SLF4J: Found binding in
> [jar:file:/usr/local/hadoop/share/hadoop/mapreduce/giraph-examples-1.1.0-for-hadoop-2.4.0-jar-with-dependencies.jar!/org/slf4j/impl/StaticLoggerBinder.class]
> SLF4J: See http://www.slf4j.org/codes.html#multiple_bindings for an
> explanation.
> SLF4J: Actual binding is of type [org.slf4j.impl.Log4jLoggerFactory]
> 16/01/19 21:58:32 WARN util.NativeCodeLoader: Unable to load native-hadoop
> library for your platform... using builtin-java classes where applicable
> hduser@jose-ubuntu:/usr/local/hadoop$ nano
> /home/hduser/Desktop/Tesina/casos_de_prueba/resultados/resultado_cluster_modo_yarn_wordcount
> hduser@jose-ubuntu:/usr/local/hadoop$ cat
> /home/hduser/Desktop/Tesina/casos_de_prueba/resultados/resultado_cluster_modo_yarn_wordcount
>
>
> Container: container_1453244277886_0001_01_000002 on localhost_35711
> ======================================================================
> LogType: stderr
> LogLength: 45
> Log Contents:
> Error: Could not find or load main class 256
>
> LogType: stdout
> LogLength: 0
> Log Contents:
>
>
>
> Container: container_1453244277886_0001_01_000003 on localhost_35711
> ======================================================================
> LogType: stderr
> LogLength: 45
> Log Contents:
> Error: Could not find or load main class 256
>
> LogType: stdout
> LogLength: 0
> Log Contents:
>
>
>
> Container: container_1453244277886_0001_01_000004 on localhost_35711
> ======================================================================
> LogType: stderr
> LogLength: 45
> Log Contents:
> Error: Could not find or load main class 256
>
> LogType: stdout
> LogLength: 0
> Log Contents:
>
>
>
> Container: container_1453244277886_0001_01_000005 on localhost_35711
> ======================================================================
> LogType: stderr
> LogLength: 45
> Log Contents:
> Error: Could not find or load main class 256
>
> LogType: stdout
> LogLength: 0
> Log Contents:
>
>
>
> Container: container_1453244277886_0001_01_000001 on localhost_35711
> ======================================================================
> LogType: stderr
> LogLength: 929
> Log Contents:
> SLF4J: Class path contains multiple SLF4J bindings.
> SLF4J: Found binding in
> [jar:file:/usr/local/hadoop/share/hadoop/common/lib/slf4j-log4j12-1.7.5.jar!/org/slf4j/impl/StaticLoggerBinder.class]
> SLF4J: Found binding in
> [jar:file:/usr/local/hadoop/share/hadoop/yarn/lib/giraph-examples-1.1.0-for-hadoop-2.4.0-jar-with-dependencies.jar!/org/slf4j/impl/StaticLoggerBinder.class]
> SLF4J: Found binding in
> [jar:file:/usr/local/hadoop/share/hadoop/mapreduce/giraph-examples-1.1.0-for-hadoop-2.4.0-jar-with-dependencies.jar!/org/slf4j/impl/StaticLoggerBinder.class]
> SLF4J: See http://www.slf4j.org/codes.html#multiple_bindings for an
> explanation.
> SLF4J: Actual binding is of type [org.slf4j.impl.Log4jLoggerFactory]
> log4j:WARN No appenders could be found for logger
> (org.apache.hadoop.ipc.Server).
> log4j:WARN Please initialize the log4j system properly.
> log4j:WARN See http://logging.apache.org/log4j/1.2/faq.html#noconfig for
> more info.
>
> LogType: stdout
> LogLength: 0
> Log Contents:
>
> LogType: syslog
> LogLength: 56780
> Log Contents:
> 2016-01-19 20:04:11,329 INFO [main]
> org.apache.hadoop.mapreduce.v2.app.MRAppMaster: Created MRAppMaster for
> application appattempt_1453244277886_0001_000001
> 2016-01-19 20:04:11,657 WARN [main] org.apache.hadoop.conf.Configuration:
> job.xml:an attempt to override final parameter:
> mapreduce.job.end-notification.max.retry.interval;  Ignoring.
> 2016-01-19 20:04:11,674 WARN [main] org.apache.hadoop.conf.Configuration:
> job.xml:an attempt to override final parameter:
> mapreduce.job.end-notification.max.attempts;  Ignoring.
> 2016-01-19 20:04:11,765 WARN [main] org.apache.hadoop.util.NativeCodeLoader:
> Unable to load native-hadoop library for your platform... using builtin-java
> classes where applicable
> 2016-01-19 20:04:11,776 INFO [main]
> org.apache.hadoop.mapreduce.v2.app.MRAppMaster: Executing with tokens:
> 2016-01-19 20:04:11,776 INFO [main]
> org.apache.hadoop.mapreduce.v2.app.MRAppMaster: Kind: YARN_AM_RM_TOKEN,
> Service: , Ident:
> (org.apache.hadoop.yarn.security.AMRMTokenIdentifier@73e8f4b9)
> 2016-01-19 20:04:11,801 INFO [main]
> org.apache.hadoop.mapreduce.v2.app.MRAppMaster: The specific max attempts: 2
> for application: 1. Attempt num: 1 is last retry: false
> 2016-01-19 20:04:11,806 INFO [main]
> org.apache.hadoop.mapreduce.v2.app.MRAppMaster: Using mapred
> newApiCommitter.
> 2016-01-19 20:04:11,934 WARN [main] org.apache.hadoop.conf.Configuration:
> job.xml:an attempt to override final parameter: dfs.datanode.data.dir;
> Ignoring.
> 2016-01-19 20:04:11,939 WARN [main] org.apache.hadoop.conf.Configuration:
> job.xml:an attempt to override final parameter:
> mapreduce.job.end-notification.max.retry.interval;  Ignoring.
> 2016-01-19 20:04:11,948 WARN [main] org.apache.hadoop.conf.Configuration:
> job.xml:an attempt to override final parameter: dfs.namenode.name.dir;
> Ignoring.
> 2016-01-19 20:04:11,953 WARN [main] org.apache.hadoop.conf.Configuration:
> job.xml:an attempt to override final parameter:
> mapreduce.job.end-notification.max.attempts;  Ignoring.
> 2016-01-19 20:04:12,464 INFO [main]
> org.apache.hadoop.mapreduce.v2.app.MRAppMaster: OutputCommitter set in
> config null
> 2016-01-19 20:04:12,526 INFO [main]
> org.apache.hadoop.mapreduce.v2.app.MRAppMaster: OutputCommitter is
> org.apache.hadoop.mapreduce.lib.output.FileOutputCommitter
> 2016-01-19 20:04:12,548 INFO [main]
> org.apache.hadoop.yarn.event.AsyncDispatcher: Registering class
> org.apache.hadoop.mapreduce.jobhistory.EventType for class
> org.apache.hadoop.mapreduce.jobhistory.JobHistoryEventHandler
> 2016-01-19 20:04:12,549 INFO [main]
> org.apache.hadoop.yarn.event.AsyncDispatcher: Registering class
> org.apache.hadoop.mapreduce.v2.app.job.event.JobEventType for class
> org.apache.hadoop.mapreduce.v2.app.MRAppMaster$JobEventDispatcher
> 2016-01-19 20:04:12,550 INFO [main]
> org.apache.hadoop.yarn.event.AsyncDispatcher: Registering class
> org.apache.hadoop.mapreduce.v2.app.job.event.TaskEventType for class
> org.apache.hadoop.mapreduce.v2.app.MRAppMaster$TaskEventDispatcher
> 2016-01-19 20:04:12,551 INFO [main]
> org.apache.hadoop.yarn.event.AsyncDispatcher: Registering class
> org.apache.hadoop.mapreduce.v2.app.job.event.TaskAttemptEventType for class
> org.apache.hadoop.mapreduce.v2.app.MRAppMaster$TaskAttemptEventDispatcher
> 2016-01-19 20:04:12,552 INFO [main]
> org.apache.hadoop.yarn.event.AsyncDispatcher: Registering class
> org.apache.hadoop.mapreduce.v2.app.commit.CommitterEventType for class
> org.apache.hadoop.mapreduce.v2.app.commit.CommitterEventHandler
> 2016-01-19 20:04:12,557 INFO [main]
> org.apache.hadoop.yarn.event.AsyncDispatcher: Registering class
> org.apache.hadoop.mapreduce.v2.app.speculate.Speculator$EventType for class
> org.apache.hadoop.mapreduce.v2.app.MRAppMaster$SpeculatorEventDispatcher
> 2016-01-19 20:04:12,558 INFO [main]
> org.apache.hadoop.yarn.event.AsyncDispatcher: Registering class
> org.apache.hadoop.mapreduce.v2.app.rm.ContainerAllocator$EventType for class
> org.apache.hadoop.mapreduce.v2.app.MRAppMaster$ContainerAllocatorRouter
> 2016-01-19 20:04:12,559 INFO [main]
> org.apache.hadoop.yarn.event.AsyncDispatcher: Registering class
> org.apache.hadoop.mapreduce.v2.app.launcher.ContainerLauncher$EventType for
> class org.apache.hadoop.mapreduce.v2.app.MRAppMaster$ContainerLauncherRouter
> 2016-01-19 20:04:12,615 INFO [main]
> org.apache.hadoop.mapreduce.jobhistory.JobHistoryEventHandler: Perms after
> creating 488, Expected: 504
> 2016-01-19 20:04:12,615 INFO [main]
> org.apache.hadoop.mapreduce.jobhistory.JobHistoryEventHandler: Explicitly
> setting permissions to : 504, rwxrwx---
> 2016-01-19 20:04:12,731 INFO [main]
> org.apache.hadoop.yarn.event.AsyncDispatcher: Registering class
> org.apache.hadoop.mapreduce.v2.app.job.event.JobFinishEvent$Type for class
> org.apache.hadoop.mapreduce.v2.app.MRAppMaster$JobFinishEventHandler
> 2016-01-19 20:04:12,956 INFO [main]
> org.apache.hadoop.metrics2.impl.MetricsConfig: loaded properties from
> hadoop-metrics2.properties
> 2016-01-19 20:04:13,018 INFO [main]
> org.apache.hadoop.metrics2.impl.MetricsSystemImpl: Scheduled snapshot period
> at 10 second(s).
> 2016-01-19 20:04:13,018 INFO [main]
> org.apache.hadoop.metrics2.impl.MetricsSystemImpl: MRAppMaster metrics
> system started
> 2016-01-19 20:04:13,026 INFO [main]
> org.apache.hadoop.mapreduce.v2.app.job.impl.JobImpl: Adding job token for
> job_1453244277886_0001 to jobTokenSecretManager
> 2016-01-19 20:04:13,139 INFO [main]
> org.apache.hadoop.mapreduce.v2.app.job.impl.JobImpl: Not uberizing
> job_1453244277886_0001 because: not enabled;
> 2016-01-19 20:04:13,154 INFO [main]
> org.apache.hadoop.mapreduce.v2.app.job.impl.JobImpl: Input size for job
> job_1453244277886_0001 = 343691. Number of splits = 1
> 2016-01-19 20:04:13,156 INFO [main]
> org.apache.hadoop.mapreduce.v2.app.job.impl.JobImpl: Number of reduces for
> job job_1453244277886_0001 = 1
> 2016-01-19 20:04:13,156 INFO [main]
> org.apache.hadoop.mapreduce.v2.app.job.impl.JobImpl:
> job_1453244277886_0001Job Transitioned from NEW to INITED
> 2016-01-19 20:04:13,157 INFO [main]
> org.apache.hadoop.mapreduce.v2.app.MRAppMaster: MRAppMaster launching
> normal, non-uberized, multi-container job job_1453244277886_0001.
> 2016-01-19 20:04:13,186 INFO [main] org.apache.hadoop.ipc.CallQueueManager:
> Using callQueue class java.util.concurrent.LinkedBlockingQueue
> 2016-01-19 20:04:13,195 INFO [Socket Reader #1 for port 56461]
> org.apache.hadoop.ipc.Server: Starting Socket Reader #1 for port 56461
> 2016-01-19 20:04:13,237 INFO [main]
> org.apache.hadoop.yarn.factories.impl.pb.RpcServerFactoryPBImpl: Adding
> protocol org.apache.hadoop.mapreduce.v2.api.MRClientProtocolPB to the server
> 2016-01-19 20:04:13,238 INFO [IPC Server Responder]
> org.apache.hadoop.ipc.Server: IPC Server Responder: starting
> 2016-01-19 20:04:13,238 INFO [IPC Server listener on 56461]
> org.apache.hadoop.ipc.Server: IPC Server listener on 56461: starting
> 2016-01-19 20:04:13,239 INFO [main]
> org.apache.hadoop.mapreduce.v2.app.client.MRClientService: Instantiated
> MRClientService at jose-ubuntu/127.0.0.1:56461
> 2016-01-19 20:04:13,300 INFO [main] org.mortbay.log: Logging to
> org.slf4j.impl.Log4jLoggerAdapter(org.mortbay.log) via
> org.mortbay.log.Slf4jLog
> 2016-01-19 20:04:13,304 INFO [main] org.apache.hadoop.http.HttpRequestLog:
> Http request log for http.requests.mapreduce is not defined
> 2016-01-19 20:04:13,315 INFO [main] org.apache.hadoop.http.HttpServer2:
> Added global filter 'safety'
> (class=org.apache.hadoop.http.HttpServer2$QuotingInputFilter)
> 2016-01-19 20:04:13,320 INFO [main] org.apache.hadoop.http.HttpServer2:
> Added filter AM_PROXY_FILTER
> (class=org.apache.hadoop.yarn.server.webproxy.amfilter.AmIpFilter) to
> context mapreduce
> 2016-01-19 20:04:13,321 INFO [main] org.apache.hadoop.http.HttpServer2:
> Added filter AM_PROXY_FILTER
> (class=org.apache.hadoop.yarn.server.webproxy.amfilter.AmIpFilter) to
> context static
> 2016-01-19 20:04:13,324 INFO [main] org.apache.hadoop.http.HttpServer2:
> adding path spec: /mapreduce/*
> 2016-01-19 20:04:13,324 INFO [main] org.apache.hadoop.http.HttpServer2:
> adding path spec: /ws/*
> 2016-01-19 20:04:13,335 INFO [main] org.apache.hadoop.http.HttpServer2:
> Jetty bound to port 44070
> 2016-01-19 20:04:13,335 INFO [main] org.mortbay.log: jetty-6.1.26
> 2016-01-19 20:04:13,370 INFO [main] org.mortbay.log: Extract
> jar:file:/usr/local/hadoop/share/hadoop/yarn/hadoop-yarn-common-2.4.0.jar!/webapps/mapreduce
> to /tmp/Jetty_0_0_0_0_44070_mapreduce____rdpvio/webapp
> 2016-01-19 20:04:13,647 INFO [main] org.mortbay.log: Started
> SelectChannelConnector@0.0.0.0:44070
> 2016-01-19 20:04:13,647 INFO [main] org.apache.hadoop.yarn.webapp.WebApps:
> Web app /mapreduce started at 44070
> 2016-01-19 20:04:13,956 INFO [main] org.apache.hadoop.yarn.webapp.WebApps:
> Registered webapp guice modules
> 2016-01-19 20:04:13,960 INFO [AsyncDispatcher event handler]
> org.apache.hadoop.mapreduce.v2.app.speculate.DefaultSpeculator: JOB_CREATE
> job_1453244277886_0001
> 2016-01-19 20:04:13,961 INFO [main] org.apache.hadoop.ipc.CallQueueManager:
> Using callQueue class java.util.concurrent.LinkedBlockingQueue
> 2016-01-19 20:04:13,961 INFO [Socket Reader #1 for port 45584]
> org.apache.hadoop.ipc.Server: Starting Socket Reader #1 for port 45584
> 2016-01-19 20:04:13,966 INFO [IPC Server Responder]
> org.apache.hadoop.ipc.Server: IPC Server Responder: starting
> 2016-01-19 20:04:13,966 INFO [IPC Server listener on 45584]
> org.apache.hadoop.ipc.Server: IPC Server listener on 45584: starting
> 2016-01-19 20:04:13,987 INFO [main]
> org.apache.hadoop.mapreduce.v2.app.rm.RMContainerRequestor:
> nodeBlacklistingEnabled:true
> 2016-01-19 20:04:13,987 INFO [main]
> org.apache.hadoop.mapreduce.v2.app.rm.RMContainerRequestor:
> maxTaskFailuresPerNode is 3
> 2016-01-19 20:04:13,988 INFO [main]
> org.apache.hadoop.mapreduce.v2.app.rm.RMContainerRequestor:
> blacklistDisablePercent is 33
> 2016-01-19 20:04:14,052 WARN [main] org.apache.hadoop.conf.Configuration:
> job.xml:an attempt to override final parameter: dfs.datanode.data.dir;
> Ignoring.
> 2016-01-19 20:04:14,054 WARN [main] org.apache.hadoop.conf.Configuration:
> job.xml:an attempt to override final parameter:
> mapreduce.job.end-notification.max.retry.interval;  Ignoring.
> 2016-01-19 20:04:14,057 WARN [main] org.apache.hadoop.conf.Configuration:
> job.xml:an attempt to override final parameter: dfs.namenode.name.dir;
> Ignoring.
> 2016-01-19 20:04:14,059 WARN [main] org.apache.hadoop.conf.Configuration:
> job.xml:an attempt to override final parameter:
> mapreduce.job.end-notification.max.attempts;  Ignoring.
> 2016-01-19 20:04:14,062 INFO [main] org.apache.hadoop.yarn.client.RMProxy:
> Connecting to ResourceManager at hdnode01/192.168.0.10:8030
> 2016-01-19 20:04:14,158 INFO [main]
> org.apache.hadoop.mapreduce.v2.app.rm.RMContainerAllocator:
> maxContainerCapability: 2000
> 2016-01-19 20:04:14,158 INFO [main]
> org.apache.hadoop.mapreduce.v2.app.rm.RMContainerAllocator: queue: default
> 2016-01-19 20:04:14,162 INFO [main]
> org.apache.hadoop.mapreduce.v2.app.launcher.ContainerLauncherImpl: Upper
> limit on the thread pool size is 500
> 2016-01-19 20:04:14,164 INFO [main]
> org.apache.hadoop.yarn.client.api.impl.ContainerManagementProtocolProxy:
> yarn.client.max-nodemanagers-proxies : 500
> 2016-01-19 20:04:14,172 INFO [AsyncDispatcher event handler]
> org.apache.hadoop.mapreduce.v2.app.job.impl.JobImpl:
> job_1453244277886_0001Job Transitioned from INITED to SETUP
> 2016-01-19 20:04:14,174 INFO [CommitterEvent Processor #0]
> org.apache.hadoop.mapreduce.v2.app.commit.CommitterEventHandler: Processing
> the event EventType: JOB_SETUP
> 2016-01-19 20:04:14,210 INFO [AsyncDispatcher event handler]
> org.apache.hadoop.mapreduce.v2.app.job.impl.JobImpl:
> job_1453244277886_0001Job Transitioned from SETUP to RUNNING
> 2016-01-19 20:04:14,227 INFO [AsyncDispatcher event handler]
> org.apache.hadoop.yarn.util.RackResolver: Resolved localhost to
> /default-rack
> 2016-01-19 20:04:14,230 INFO [AsyncDispatcher event handler]
> org.apache.hadoop.mapreduce.v2.app.job.impl.TaskImpl:
> task_1453244277886_0001_m_000000 Task Transitioned from NEW to SCHEDULED
> 2016-01-19 20:04:14,230 INFO [AsyncDispatcher event handler]
> org.apache.hadoop.mapreduce.v2.app.job.impl.TaskImpl:
> task_1453244277886_0001_r_000000 Task Transitioned from NEW to SCHEDULED
> 2016-01-19 20:04:14,232 INFO [AsyncDispatcher event handler]
> org.apache.hadoop.mapreduce.v2.app.job.impl.TaskAttemptImpl:
> attempt_1453244277886_0001_m_000000_0 TaskAttempt Transitioned from NEW to
> UNASSIGNED
> 2016-01-19 20:04:14,232 INFO [AsyncDispatcher event handler]
> org.apache.hadoop.mapreduce.v2.app.job.impl.TaskAttemptImpl:
> attempt_1453244277886_0001_r_000000_0 TaskAttempt Transitioned from NEW to
> UNASSIGNED
> 2016-01-19 20:04:14,233 INFO [Thread-50]
> org.apache.hadoop.mapreduce.v2.app.rm.RMContainerAllocator:
> mapResourceReqt:512
> 2016-01-19 20:04:14,245 INFO [Thread-50]
> org.apache.hadoop.mapreduce.v2.app.rm.RMContainerAllocator:
> reduceResourceReqt:512
> 2016-01-19 20:04:14,324 INFO [eventHandlingThread]
> org.apache.hadoop.mapreduce.jobhistory.JobHistoryEventHandler: Event Writer
> setup for JobId: job_1453244277886_0001, File:
> hdfs://hdnode01:54310/tmp/hadoop-yarn/staging/hduser/.staging/job_1453244277886_0001/job_1453244277886_0001_1.jhist
> 2016-01-19 20:04:15,162 INFO [RMCommunicator Allocator]
> org.apache.hadoop.mapreduce.v2.app.rm.RMContainerAllocator: Before
> Scheduling: PendingReds:1 ScheduledMaps:1 ScheduledReds:0 AssignedMaps:0
> AssignedReds:0 CompletedMaps:0 CompletedReds:0 ContAlloc:0 ContRel:0
> HostLocal:0 RackLocal:0
> 2016-01-19 20:04:15,217 INFO [RMCommunicator Allocator]
> org.apache.hadoop.mapreduce.v2.app.rm.RMContainerRequestor: getResources()
> for application_1453244277886_0001: ask=3 release= 0 newContainers=0
> finishedContainers=0 resourcelimit=<memory:1280, vCores:0> knownNMs=1
> 2016-01-19 20:04:15,217 INFO [RMCommunicator Allocator]
> org.apache.hadoop.mapreduce.v2.app.rm.RMContainerAllocator: Recalculating
> schedule, headroom=1280
> 2016-01-19 20:04:15,217 INFO [RMCommunicator Allocator]
> org.apache.hadoop.mapreduce.v2.app.rm.RMContainerAllocator: Reduce slow
> start threshold not met. completedMapsForReduceSlowstart 1
> 2016-01-19 20:04:16,240 INFO [RMCommunicator Allocator]
> org.apache.hadoop.mapreduce.v2.app.rm.RMContainerAllocator: Got allocated
> containers 1
> 2016-01-19 20:04:16,241 INFO [RMCommunicator Allocator]
> org.apache.hadoop.mapreduce.v2.app.rm.RMContainerAllocator: Assigned
> container container_1453244277886_0001_01_000002 to
> attempt_1453244277886_0001_m_000000_0
> 2016-01-19 20:04:16,243 INFO [RMCommunicator Allocator]
> org.apache.hadoop.mapreduce.v2.app.rm.RMContainerAllocator: Recalculating
> schedule, headroom=768
> 2016-01-19 20:04:16,243 INFO [RMCommunicator Allocator]
> org.apache.hadoop.mapreduce.v2.app.rm.RMContainerAllocator: Reduce slow
> start threshold not met. completedMapsForReduceSlowstart 1
> 2016-01-19 20:04:16,243 INFO [RMCommunicator Allocator]
> org.apache.hadoop.mapreduce.v2.app.rm.RMContainerAllocator: After
> Scheduling: PendingReds:1 ScheduledMaps:0 ScheduledReds:0 AssignedMaps:1
> AssignedReds:0 CompletedMaps:0 CompletedReds:0 ContAlloc:1 ContRel:0
> HostLocal:1 RackLocal:0
> 2016-01-19 20:04:16,291 INFO [AsyncDispatcher event handler]
> org.apache.hadoop.yarn.util.RackResolver: Resolved localhost to
> /default-rack
> 2016-01-19 20:04:16,316 INFO [AsyncDispatcher event handler]
> org.apache.hadoop.mapreduce.v2.app.job.impl.TaskAttemptImpl: The job-jar
> file on the remote FS is
> hdfs://hdnode01:54310/tmp/hadoop-yarn/staging/hduser/.staging/job_1453244277886_0001/job.jar
> 2016-01-19 20:04:16,322 INFO [AsyncDispatcher event handler]
> org.apache.hadoop.mapreduce.v2.app.job.impl.TaskAttemptImpl: The job-conf
> file on the remote FS is
> /tmp/hadoop-yarn/staging/hduser/.staging/job_1453244277886_0001/job.xml
> 2016-01-19 20:04:16,325 INFO [AsyncDispatcher event handler]
> org.apache.hadoop.mapreduce.v2.app.job.impl.TaskAttemptImpl: Adding #0
> tokens and #1 secret keys for NM use for launching container
> 2016-01-19 20:04:16,325 INFO [AsyncDispatcher event handler]
> org.apache.hadoop.mapreduce.v2.app.job.impl.TaskAttemptImpl: Size of
> containertokens_dob is 1
> 2016-01-19 20:04:16,325 INFO [AsyncDispatcher event handler]
> org.apache.hadoop.mapreduce.v2.app.job.impl.TaskAttemptImpl: Putting shuffle
> token in serviceData
> 2016-01-19 20:04:16,350 INFO [AsyncDispatcher event handler]
> org.apache.hadoop.mapreduce.v2.app.job.impl.TaskAttemptImpl:
> attempt_1453244277886_0001_m_000000_0 TaskAttempt Transitioned from
> UNASSIGNED to ASSIGNED
> 2016-01-19 20:04:16,354 INFO [ContainerLauncher #0]
> org.apache.hadoop.mapreduce.v2.app.launcher.ContainerLauncherImpl:
> Processing the event EventType: CONTAINER_REMOTE_LAUNCH for container
> container_1453244277886_0001_01_000002 taskAttempt
> attempt_1453244277886_0001_m_000000_0
> 2016-01-19 20:04:16,356 INFO [ContainerLauncher #0]
> org.apache.hadoop.mapreduce.v2.app.launcher.ContainerLauncherImpl: Launching
> attempt_1453244277886_0001_m_000000_0
> 2016-01-19 20:04:16,357 INFO [ContainerLauncher #0]
> org.apache.hadoop.yarn.client.api.impl.ContainerManagementProtocolProxy:
> Opening proxy : localhost:35711
> 2016-01-19 20:04:16,411 INFO [ContainerLauncher #0]
> org.apache.hadoop.mapreduce.v2.app.launcher.ContainerLauncherImpl: Shuffle
> port returned by ContainerManager for attempt_1453244277886_0001_m_000000_0
> : 13562
> 2016-01-19 20:04:16,413 INFO [AsyncDispatcher event handler]
> org.apache.hadoop.mapreduce.v2.app.job.impl.TaskAttemptImpl: TaskAttempt:
> [attempt_1453244277886_0001_m_000000_0] using containerId:
> [container_1453244277886_0001_01_000002 on NM: [localhost:35711]
> 2016-01-19 20:04:16,418 INFO [AsyncDispatcher event handler]
> org.apache.hadoop.mapreduce.v2.app.job.impl.TaskAttemptImpl:
> attempt_1453244277886_0001_m_000000_0 TaskAttempt Transitioned from ASSIGNED
> to RUNNING
> 2016-01-19 20:04:16,419 INFO [AsyncDispatcher event handler]
> org.apache.hadoop.mapreduce.v2.app.speculate.DefaultSpeculator:
> ATTEMPT_START task_1453244277886_0001_m_000000
> 2016-01-19 20:04:16,419 INFO [AsyncDispatcher event handler]
> org.apache.hadoop.mapreduce.v2.app.job.impl.TaskImpl:
> task_1453244277886_0001_m_000000 Task Transitioned from SCHEDULED to RUNNING
> 2016-01-19 20:04:17,251 INFO [RMCommunicator Allocator]
> org.apache.hadoop.mapreduce.v2.app.rm.RMContainerRequestor: getResources()
> for application_1453244277886_0001: ask=3 release= 0 newContainers=0
> finishedContainers=0 resourcelimit=<memory:768, vCores:-1> knownNMs=1
> 2016-01-19 20:04:18,269 INFO [RMCommunicator Allocator]
> org.apache.hadoop.mapreduce.v2.app.rm.RMContainerAllocator: Received
> completed container container_1453244277886_0001_01_000002
> 2016-01-19 20:04:18,269 INFO [RMCommunicator Allocator]
> org.apache.hadoop.mapreduce.v2.app.rm.RMContainerAllocator: Recalculating
> schedule, headroom=768
> 2016-01-19 20:04:18,269 INFO [RMCommunicator Allocator]
> org.apache.hadoop.mapreduce.v2.app.rm.RMContainerAllocator: Reduce slow
> start threshold not met. completedMapsForReduceSlowstart 1
> 2016-01-19 20:04:18,270 INFO [RMCommunicator Allocator]
> org.apache.hadoop.mapreduce.v2.app.rm.RMContainerAllocator: After
> Scheduling: PendingReds:1 ScheduledMaps:0 ScheduledReds:0 AssignedMaps:0
> AssignedReds:0 CompletedMaps:0 CompletedReds:0 ContAlloc:1 ContRel:0
> HostLocal:1 RackLocal:0
> 2016-01-19 20:04:18,280 INFO [AsyncDispatcher event handler]
> org.apache.hadoop.mapreduce.v2.app.job.impl.TaskAttemptImpl:
> attempt_1453244277886_0001_m_000000_0 TaskAttempt Transitioned from RUNNING
> to FAIL_CONTAINER_CLEANUP
> 2016-01-19 20:04:18,280 INFO [AsyncDispatcher event handler]
> org.apache.hadoop.mapreduce.v2.app.job.impl.TaskAttemptImpl: Diagnostics
> report from attempt_1453244277886_0001_m_000000_0: Exception from
> container-launch: org.apache.hadoop.util.Shell$ExitCodeException:
> org.apache.hadoop.util.Shell$ExitCodeException:
>     at org.apache.hadoop.util.Shell.runCommand(Shell.java:505)
>     at org.apache.hadoop.util.Shell.run(Shell.java:418)
>     at
> org.apache.hadoop.util.Shell$ShellCommandExecutor.execute(Shell.java:650)
>     at
> org.apache.hadoop.yarn.server.nodemanager.DefaultContainerExecutor.launchContainer(DefaultContainerExecutor.java:195)
>     at
> org.apache.hadoop.yarn.server.nodemanager.containermanager.launcher.ContainerLaunch.call(ContainerLaunch.java:300)
>     at
> org.apache.hadoop.yarn.server.nodemanager.containermanager.launcher.ContainerLaunch.call(ContainerLaunch.java:81)
>     at java.util.concurrent.FutureTask.run(FutureTask.java:262)
>     at
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
>     at
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
>     at java.lang.Thread.run(Thread.java:745)
>
>
> Container exited with a non-zero exit code 1
>
> 2016-01-19 20:04:18,281 INFO [ContainerLauncher #1]
> org.apache.hadoop.mapreduce.v2.app.launcher.ContainerLauncherImpl:
> Processing the event EventType: CONTAINER_REMOTE_CLEANUP for container
> container_1453244277886_0001_01_000002 taskAttempt
> attempt_1453244277886_0001_m_000000_0
> 2016-01-19 20:04:18,281 INFO [ContainerLauncher #1]
> org.apache.hadoop.mapreduce.v2.app.launcher.ContainerLauncherImpl: KILLING
> attempt_1453244277886_0001_m_000000_0
> 2016-01-19 20:04:18,299 INFO [AsyncDispatcher event handler]
> org.apache.hadoop.mapreduce.v2.app.job.impl.TaskAttemptImpl:
> attempt_1453244277886_0001_m_000000_0 TaskAttempt Transitioned from
> FAIL_CONTAINER_CLEANUP to FAIL_TASK_CLEANUP
> 2016-01-19 20:04:18,300 INFO [CommitterEvent Processor #1]
> org.apache.hadoop.mapreduce.v2.app.commit.CommitterEventHandler: Processing
> the event EventType: TASK_ABORT
> 2016-01-19 20:04:18,313 WARN [CommitterEvent Processor #1]
> org.apache.hadoop.mapreduce.lib.output.FileOutputCommitter: Could not delete
> hdfs://hdnode01:54310/user/hduser/output/wordcount/_temporary/1/_temporary/attempt_1453244277886_0001_m_000000_0
> 2016-01-19 20:04:18,317 INFO [AsyncDispatcher event handler]
> org.apache.hadoop.mapreduce.v2.app.job.impl.TaskAttemptImpl:
> attempt_1453244277886_0001_m_000000_0 TaskAttempt Transitioned from
> FAIL_TASK_CLEANUP to FAILED
> 2016-01-19 20:04:18,326 INFO [AsyncDispatcher event handler]
> org.apache.hadoop.yarn.util.RackResolver: Resolved localhost to
> /default-rack
> 2016-01-19 20:04:18,327 INFO [Thread-50]
> org.apache.hadoop.mapreduce.v2.app.rm.RMContainerRequestor: 1 failures on
> node localhost
> 2016-01-19 20:04:18,329 INFO [AsyncDispatcher event handler]
> org.apache.hadoop.mapreduce.v2.app.job.impl.TaskAttemptImpl:
> attempt_1453244277886_0001_m_000000_1 TaskAttempt Transitioned from NEW to
> UNASSIGNED
> 2016-01-19 20:04:18,329 INFO [Thread-50]
> org.apache.hadoop.mapreduce.v2.app.rm.RMContainerAllocator: Added
> attempt_1453244277886_0001_m_000000_1 to list of failed maps
> 2016-01-19 20:04:19,270 INFO [RMCommunicator Allocator]
> org.apache.hadoop.mapreduce.v2.app.rm.RMContainerAllocator: Before
> Scheduling: PendingReds:1 ScheduledMaps:1 ScheduledReds:0 AssignedMaps:0
> AssignedReds:0 CompletedMaps:0 CompletedReds:0 ContAlloc:1 ContRel:0
> HostLocal:1 RackLocal:0
> 2016-01-19 20:04:19,277 INFO [RMCommunicator Allocator]
> org.apache.hadoop.mapreduce.v2.app.rm.RMContainerRequestor: getResources()
> for application_1453244277886_0001: ask=1 release= 0 newContainers=0
> finishedContainers=0 resourcelimit=<memory:768, vCores:-1> knownNMs=1
> 2016-01-19 20:04:19,277 INFO [RMCommunicator Allocator]
> org.apache.hadoop.mapreduce.v2.app.rm.RMContainerAllocator: Recalculating
> schedule, headroom=768
> 2016-01-19 20:04:19,278 INFO [RMCommunicator Allocator]
> org.apache.hadoop.mapreduce.v2.app.rm.RMContainerAllocator: Reduce slow
> start threshold not met. completedMapsForReduceSlowstart 1
> 2016-01-19 20:04:20,285 INFO [RMCommunicator Allocator]
> org.apache.hadoop.mapreduce.v2.app.rm.RMContainerAllocator: Got allocated
> containers 1
> 2016-01-19 20:04:20,285 INFO [RMCommunicator Allocator]
> org.apache.hadoop.mapreduce.v2.app.rm.RMContainerAllocator: Assigning
> container Container: [ContainerId: container_1453244277886_0001_01_000003,
> NodeId: localhost:35711, NodeHttpAddress: localhost:8042, Resource:
> <memory:512, vCores:1>, Priority: 5, Token: Token { kind: ContainerToken,
> service: 127.0.0.1:35711 }, ] to fast fail map
> 2016-01-19 20:04:20,286 INFO [RMCommunicator Allocator]
> org.apache.hadoop.mapreduce.v2.app.rm.RMContainerAllocator: Assigned from
> earlierFailedMaps
> 2016-01-19 20:04:20,286 INFO [RMCommunicator Allocator]
> org.apache.hadoop.mapreduce.v2.app.rm.RMContainerAllocator: Assigned
> container container_1453244277886_0001_01_000003 to
> attempt_1453244277886_0001_m_000000_1
> 2016-01-19 20:04:20,287 INFO [RMCommunicator Allocator]
> org.apache.hadoop.mapreduce.v2.app.rm.RMContainerAllocator: Recalculating
> schedule, headroom=768
> 2016-01-19 20:04:20,287 INFO [RMCommunicator Allocator]
> org.apache.hadoop.mapreduce.v2.app.rm.RMContainerAllocator: Reduce slow
> start threshold not met. completedMapsForReduceSlowstart 1
> 2016-01-19 20:04:20,287 INFO [RMCommunicator Allocator]
> org.apache.hadoop.mapreduce.v2.app.rm.RMContainerAllocator: After
> Scheduling: PendingReds:1 ScheduledMaps:0 ScheduledReds:0 AssignedMaps:1
> AssignedReds:0 CompletedMaps:0 CompletedReds:0 ContAlloc:2 ContRel:0
> HostLocal:1 RackLocal:0
> 2016-01-19 20:04:20,287 INFO [AsyncDispatcher event handler]
> org.apache.hadoop.yarn.util.RackResolver: Resolved localhost to
> /default-rack
> 2016-01-19 20:04:20,289 INFO [AsyncDispatcher event handler]
> org.apache.hadoop.mapreduce.v2.app.job.impl.TaskAttemptImpl:
> attempt_1453244277886_0001_m_000000_1 TaskAttempt Transitioned from
> UNASSIGNED to ASSIGNED
> 2016-01-19 20:04:20,292 INFO [ContainerLauncher #2]
> org.apache.hadoop.mapreduce.v2.app.launcher.ContainerLauncherImpl:
> Processing the event EventType: CONTAINER_REMOTE_LAUNCH for container
> container_1453244277886_0001_01_000003 taskAttempt
> attempt_1453244277886_0001_m_000000_1
> 2016-01-19 20:04:20,292 INFO [ContainerLauncher #2]
> org.apache.hadoop.mapreduce.v2.app.launcher.ContainerLauncherImpl: Launching
> attempt_1453244277886_0001_m_000000_1
> 2016-01-19 20:04:20,301 INFO [ContainerLauncher #2]
> org.apache.hadoop.mapreduce.v2.app.launcher.ContainerLauncherImpl: Shuffle
> port returned by ContainerManager for attempt_1453244277886_0001_m_000000_1
> : 13562
> 2016-01-19 20:04:20,302 INFO [AsyncDispatcher event handler]
> org.apache.hadoop.mapreduce.v2.app.job.impl.TaskAttemptImpl: TaskAttempt:
> [attempt_1453244277886_0001_m_000000_1] using containerId:
> [container_1453244277886_0001_01_000003 on NM: [localhost:35711]
> 2016-01-19 20:04:20,303 INFO [AsyncDispatcher event handler]
> org.apache.hadoop.mapreduce.v2.app.job.impl.TaskAttemptImpl:
> attempt_1453244277886_0001_m_000000_1 TaskAttempt Transitioned from ASSIGNED
> to RUNNING
> 2016-01-19 20:04:20,304 INFO [AsyncDispatcher event handler]
> org.apache.hadoop.mapreduce.v2.app.speculate.DefaultSpeculator:
> ATTEMPT_START task_1453244277886_0001_m_000000
> 2016-01-19 20:04:21,295 INFO [RMCommunicator Allocator]
> org.apache.hadoop.mapreduce.v2.app.rm.RMContainerRequestor: getResources()
> for application_1453244277886_0001: ask=1 release= 0 newContainers=0
> finishedContainers=1 resourcelimit=<memory:768, vCores:-1> knownNMs=1
> 2016-01-19 20:04:21,296 INFO [RMCommunicator Allocator]
> org.apache.hadoop.mapreduce.v2.app.rm.RMContainerAllocator: Received
> completed container container_1453244277886_0001_01_000003
> 2016-01-19 20:04:21,296 INFO [RMCommunicator Allocator]
> org.apache.hadoop.mapreduce.v2.app.rm.RMContainerAllocator: Recalculating
> schedule, headroom=768
> 2016-01-19 20:04:21,296 INFO [RMCommunicator Allocator]
> org.apache.hadoop.mapreduce.v2.app.rm.RMContainerAllocator: Reduce slow
> start threshold not met. completedMapsForReduceSlowstart 1
> 2016-01-19 20:04:21,297 INFO [RMCommunicator Allocator]
> org.apache.hadoop.mapreduce.v2.app.rm.RMContainerAllocator: After
> Scheduling: PendingReds:1 ScheduledMaps:0 ScheduledReds:0 AssignedMaps:0
> AssignedReds:0 CompletedMaps:0 CompletedReds:0 ContAlloc:2 ContRel:0
> HostLocal:1 RackLocal:0
> 2016-01-19 20:04:21,297 INFO [AsyncDispatcher event handler]
> org.apache.hadoop.mapreduce.v2.app.job.impl.TaskAttemptImpl:
> attempt_1453244277886_0001_m_000000_1 TaskAttempt Transitioned from RUNNING
> to FAIL_CONTAINER_CLEANUP
> 2016-01-19 20:04:21,298 INFO [AsyncDispatcher event handler]
> org.apache.hadoop.mapreduce.v2.app.job.impl.TaskAttemptImpl: Diagnostics
> report from attempt_1453244277886_0001_m_000000_1: Exception from
> container-launch: org.apache.hadoop.util.Shell$ExitCodeException:
> org.apache.hadoop.util.Shell$ExitCodeException:
>     at org.apache.hadoop.util.Shell.runCommand(Shell.java:505)
>     at org.apache.hadoop.util.Shell.run(Shell.java:418)
>     at
> org.apache.hadoop.util.Shell$ShellCommandExecutor.execute(Shell.java:650)
>     at
> org.apache.hadoop.yarn.server.nodemanager.DefaultContainerExecutor.launchContainer(DefaultContainerExecutor.java:195)
>     at
> org.apache.hadoop.yarn.server.nodemanager.containermanager.launcher.ContainerLaunch.call(ContainerLaunch.java:300)
>     at
> org.apache.hadoop.yarn.server.nodemanager.containermanager.launcher.ContainerLaunch.call(ContainerLaunch.java:81)
>     at java.util.concurrent.FutureTask.run(FutureTask.java:262)
>     at
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
>     at
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
>     at java.lang.Thread.run(Thread.java:745)
>
>
> Container exited with a non-zero exit code 1
>
> 2016-01-19 20:04:21,300 INFO [ContainerLauncher #3]
> org.apache.hadoop.mapreduce.v2.app.launcher.ContainerLauncherImpl:
> Processing the event EventType: CONTAINER_REMOTE_CLEANUP for container
> container_1453244277886_0001_01_000003 taskAttempt
> attempt_1453244277886_0001_m_000000_1
> 2016-01-19 20:04:21,301 INFO [ContainerLauncher #3]
> org.apache.hadoop.mapreduce.v2.app.launcher.ContainerLauncherImpl: KILLING
> attempt_1453244277886_0001_m_000000_1
> 2016-01-19 20:04:21,307 INFO [AsyncDispatcher event handler]
> org.apache.hadoop.mapreduce.v2.app.job.impl.TaskAttemptImpl:
> attempt_1453244277886_0001_m_000000_1 TaskAttempt Transitioned from
> FAIL_CONTAINER_CLEANUP to FAIL_TASK_CLEANUP
> 2016-01-19 20:04:21,308 INFO [CommitterEvent Processor #2]
> org.apache.hadoop.mapreduce.v2.app.commit.CommitterEventHandler: Processing
> the event EventType: TASK_ABORT
> 2016-01-19 20:04:21,312 WARN [CommitterEvent Processor #2]
> org.apache.hadoop.mapreduce.lib.output.FileOutputCommitter: Could not delete
> hdfs://hdnode01:54310/user/hduser/output/wordcount/_temporary/1/_temporary/attempt_1453244277886_0001_m_000000_1
> 2016-01-19 20:04:21,312 INFO [AsyncDispatcher event handler]
> org.apache.hadoop.mapreduce.v2.app.job.impl.TaskAttemptImpl:
> attempt_1453244277886_0001_m_000000_1 TaskAttempt Transitioned from
> FAIL_TASK_CLEANUP to FAILED
> 2016-01-19 20:04:21,313 INFO [AsyncDispatcher event handler]
> org.apache.hadoop.yarn.util.RackResolver: Resolved localhost to
> /default-rack
> 2016-01-19 20:04:21,313 INFO [Thread-50]
> org.apache.hadoop.mapreduce.v2.app.rm.RMContainerRequestor: 2 failures on
> node localhost
> 2016-01-19 20:04:21,313 INFO [AsyncDispatcher event handler]
> org.apache.hadoop.mapreduce.v2.app.job.impl.TaskAttemptImpl:
> attempt_1453244277886_0001_m_000000_2 TaskAttempt Transitioned from NEW to
> UNASSIGNED
> 2016-01-19 20:04:21,314 INFO [Thread-50]
> org.apache.hadoop.mapreduce.v2.app.rm.RMContainerAllocator: Added
> attempt_1453244277886_0001_m_000000_2 to list of failed maps
> 2016-01-19 20:04:22,297 INFO [RMCommunicator Allocator]
> org.apache.hadoop.mapreduce.v2.app.rm.RMContainerAllocator: Before
> Scheduling: PendingReds:1 ScheduledMaps:1 ScheduledReds:0 AssignedMaps:0
> AssignedReds:0 CompletedMaps:0 CompletedReds:0 ContAlloc:2 ContRel:0
> HostLocal:1 RackLocal:0
> 2016-01-19 20:04:22,304 INFO [RMCommunicator Allocator]
> org.apache.hadoop.mapreduce.v2.app.rm.RMContainerRequestor: getResources()
> for application_1453244277886_0001: ask=1 release= 0 newContainers=0
> finishedContainers=0 resourcelimit=<memory:768, vCores:-1> knownNMs=1
> 2016-01-19 20:04:22,305 INFO [RMCommunicator Allocator]
> org.apache.hadoop.mapreduce.v2.app.rm.RMContainerAllocator: Recalculating
> schedule, headroom=768
> 2016-01-19 20:04:22,305 INFO [RMCommunicator Allocator]
> org.apache.hadoop.mapreduce.v2.app.rm.RMContainerAllocator: Reduce slow
> start threshold not met. completedMapsForReduceSlowstart 1
> 2016-01-19 20:04:23,316 INFO [RMCommunicator Allocator]
> org.apache.hadoop.mapreduce.v2.app.rm.RMContainerAllocator: Got allocated
> containers 1
> 2016-01-19 20:04:23,316 INFO [RMCommunicator Allocator]
> org.apache.hadoop.mapreduce.v2.app.rm.RMContainerAllocator: Assigning
> container Container: [ContainerId: container_1453244277886_0001_01_000004,
> NodeId: localhost:35711, NodeHttpAddress: localhost:8042, Resource:
> <memory:512, vCores:1>, Priority: 5, Token: Token { kind: ContainerToken,
> service: 127.0.0.1:35711 }, ] to fast fail map
> 2016-01-19 20:04:23,317 INFO [RMCommunicator Allocator]
> org.apache.hadoop.mapreduce.v2.app.rm.RMContainerAllocator: Assigned from
> earlierFailedMaps
> 2016-01-19 20:04:23,317 INFO [RMCommunicator Allocator]
> org.apache.hadoop.mapreduce.v2.app.rm.RMContainerAllocator: Assigned
> container container_1453244277886_0001_01_000004 to
> attempt_1453244277886_0001_m_000000_2
> 2016-01-19 20:04:23,318 INFO [RMCommunicator Allocator]
> org.apache.hadoop.mapreduce.v2.app.rm.RMContainerAllocator: Recalculating
> schedule, headroom=768
> 2016-01-19 20:04:23,318 INFO [RMCommunicator Allocator]
> org.apache.hadoop.mapreduce.v2.app.rm.RMContainerAllocator: Reduce slow
> start threshold not met. completedMapsForReduceSlowstart 1
> 2016-01-19 20:04:23,318 INFO [RMCommunicator Allocator]
> org.apache.hadoop.mapreduce.v2.app.rm.RMContainerAllocator: After
> Scheduling: PendingReds:1 ScheduledMaps:0 ScheduledReds:0 AssignedMaps:1
> AssignedReds:0 CompletedMaps:0 CompletedReds:0 ContAlloc:3 ContRel:0
> HostLocal:1 RackLocal:0
> 2016-01-19 20:04:23,318 INFO [AsyncDispatcher event handler]
> org.apache.hadoop.yarn.util.RackResolver: Resolved localhost to
> /default-rack
> 2016-01-19 20:04:23,320 INFO [AsyncDispatcher event handler]
> org.apache.hadoop.mapreduce.v2.app.job.impl.TaskAttemptImpl:
> attempt_1453244277886_0001_m_000000_2 TaskAttempt Transitioned from
> UNASSIGNED to ASSIGNED
> 2016-01-19 20:04:23,323 INFO [ContainerLauncher #4]
> org.apache.hadoop.mapreduce.v2.app.launcher.ContainerLauncherImpl:
> Processing the event EventType: CONTAINER_REMOTE_LAUNCH for container
> container_1453244277886_0001_01_000004 taskAttempt
> attempt_1453244277886_0001_m_000000_2
> 2016-01-19 20:04:23,323 INFO [ContainerLauncher #4]
> org.apache.hadoop.mapreduce.v2.app.launcher.ContainerLauncherImpl: Launching
> attempt_1453244277886_0001_m_000000_2
> 2016-01-19 20:04:23,334 INFO [ContainerLauncher #4]
> org.apache.hadoop.mapreduce.v2.app.launcher.ContainerLauncherImpl: Shuffle
> port returned by ContainerManager for attempt_1453244277886_0001_m_000000_2
> : 13562
> 2016-01-19 20:04:23,335 INFO [AsyncDispatcher event handler]
> org.apache.hadoop.mapreduce.v2.app.job.impl.TaskAttemptImpl: TaskAttempt:
> [attempt_1453244277886_0001_m_000000_2] using containerId:
> [container_1453244277886_0001_01_000004 on NM: [localhost:35711]
> 2016-01-19 20:04:23,335 INFO [AsyncDispatcher event handler]
> org.apache.hadoop.mapreduce.v2.app.job.impl.TaskAttemptImpl:
> attempt_1453244277886_0001_m_000000_2 TaskAttempt Transitioned from ASSIGNED
> to RUNNING
> 2016-01-19 20:04:23,335 INFO [AsyncDispatcher event handler]
> org.apache.hadoop.mapreduce.v2.app.speculate.DefaultSpeculator:
> ATTEMPT_START task_1453244277886_0001_m_000000
> 2016-01-19 20:04:24,326 INFO [RMCommunicator Allocator]
> org.apache.hadoop.mapreduce.v2.app.rm.RMContainerRequestor: getResources()
> for application_1453244277886_0001: ask=1 release= 0 newContainers=0
> finishedContainers=1 resourcelimit=<memory:768, vCores:-1> knownNMs=1
> 2016-01-19 20:04:24,327 INFO [RMCommunicator Allocator]
> org.apache.hadoop.mapreduce.v2.app.rm.RMContainerAllocator: Received
> completed container container_1453244277886_0001_01_000004
> 2016-01-19 20:04:24,327 INFO [RMCommunicator Allocator]
> org.apache.hadoop.mapreduce.v2.app.rm.RMContainerAllocator: Recalculating
> schedule, headroom=768
> 2016-01-19 20:04:24,327 INFO [RMCommunicator Allocator]
> org.apache.hadoop.mapreduce.v2.app.rm.RMContainerAllocator: Reduce slow
> start threshold not met. completedMapsForReduceSlowstart 1
> 2016-01-19 20:04:24,328 INFO [RMCommunicator Allocator]
> org.apache.hadoop.mapreduce.v2.app.rm.RMContainerAllocator: After
> Scheduling: PendingReds:1 ScheduledMaps:0 ScheduledReds:0 AssignedMaps:0
> AssignedReds:0 CompletedMaps:0 CompletedReds:0 ContAlloc:3 ContRel:0
> HostLocal:1 RackLocal:0
> 2016-01-19 20:04:24,328 INFO [AsyncDispatcher event handler]
> org.apache.hadoop.mapreduce.v2.app.job.impl.TaskAttemptImpl:
> attempt_1453244277886_0001_m_000000_2 TaskAttempt Transitioned from RUNNING
> to FAIL_CONTAINER_CLEANUP
> 2016-01-19 20:04:24,328 INFO [AsyncDispatcher event handler]
> org.apache.hadoop.mapreduce.v2.app.job.impl.TaskAttemptImpl: Diagnostics
> report from attempt_1453244277886_0001_m_000000_2: Exception from
> container-launch: org.apache.hadoop.util.Shell$ExitCodeException:
> org.apache.hadoop.util.Shell$ExitCodeException:
>     at org.apache.hadoop.util.Shell.runCommand(Shell.java:505)
>     at org.apache.hadoop.util.Shell.run(Shell.java:418)
>     at
> org.apache.hadoop.util.Shell$ShellCommandExecutor.execute(Shell.java:650)
>     at
> org.apache.hadoop.yarn.server.nodemanager.DefaultContainerExecutor.launchContainer(DefaultContainerExecutor.java:195)
>     at
> org.apache.hadoop.yarn.server.nodemanager.containermanager.launcher.ContainerLaunch.call(ContainerLaunch.java:300)
>     at
> org.apache.hadoop.yarn.server.nodemanager.containermanager.launcher.ContainerLaunch.call(ContainerLaunch.java:81)
>     at java.util.concurrent.FutureTask.run(FutureTask.java:262)
>     at
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
>     at
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
>     at java.lang.Thread.run(Thread.java:745)
>
>
> Container exited with a non-zero exit code 1
>
> 2016-01-19 20:04:24,331 INFO [ContainerLauncher #5]
> org.apache.hadoop.mapreduce.v2.app.launcher.ContainerLauncherImpl:
> Processing the event EventType: CONTAINER_REMOTE_CLEANUP for container
> container_1453244277886_0001_01_000004 taskAttempt
> attempt_1453244277886_0001_m_000000_2
> 2016-01-19 20:04:24,331 INFO [ContainerLauncher #5]
> org.apache.hadoop.mapreduce.v2.app.launcher.ContainerLauncherImpl: KILLING
> attempt_1453244277886_0001_m_000000_2
> 2016-01-19 20:04:24,335 INFO [AsyncDispatcher event handler]
> org.apache.hadoop.mapreduce.v2.app.job.impl.TaskAttemptImpl:
> attempt_1453244277886_0001_m_000000_2 TaskAttempt Transitioned from
> FAIL_CONTAINER_CLEANUP to FAIL_TASK_CLEANUP
> 2016-01-19 20:04:24,336 INFO [CommitterEvent Processor #3]
> org.apache.hadoop.mapreduce.v2.app.commit.CommitterEventHandler: Processing
> the event EventType: TASK_ABORT
> 2016-01-19 20:04:24,340 WARN [CommitterEvent Processor #3]
> org.apache.hadoop.mapreduce.lib.output.FileOutputCommitter: Could not delete
> hdfs://hdnode01:54310/user/hduser/output/wordcount/_temporary/1/_temporary/attempt_1453244277886_0001_m_000000_2
> 2016-01-19 20:04:24,341 INFO [AsyncDispatcher event handler]
> org.apache.hadoop.mapreduce.v2.app.job.impl.TaskAttemptImpl:
> attempt_1453244277886_0001_m_000000_2 TaskAttempt Transitioned from
> FAIL_TASK_CLEANUP to FAILED
> 2016-01-19 20:04:24,341 INFO [AsyncDispatcher event handler]
> org.apache.hadoop.yarn.util.RackResolver: Resolved localhost to
> /default-rack
> 2016-01-19 20:04:24,342 INFO [Thread-50]
> org.apache.hadoop.mapreduce.v2.app.rm.RMContainerRequestor: 3 failures on
> node localhost
> 2016-01-19 20:04:24,342 INFO [Thread-50]
> org.apache.hadoop.mapreduce.v2.app.rm.RMContainerRequestor: Blacklisted host
> localhost
> 2016-01-19 20:04:24,342 INFO [AsyncDispatcher event handler]
> org.apache.hadoop.mapreduce.v2.app.job.impl.TaskAttemptImpl:
> attempt_1453244277886_0001_m_000000_3 TaskAttempt Transitioned from NEW to
> UNASSIGNED
> 2016-01-19 20:04:24,343 INFO [Thread-50]
> org.apache.hadoop.mapreduce.v2.app.rm.RMContainerAllocator: Added
> attempt_1453244277886_0001_m_000000_3 to list of failed maps
> 2016-01-19 20:04:25,328 INFO [RMCommunicator Allocator]
> org.apache.hadoop.mapreduce.v2.app.rm.RMContainerAllocator: Before
> Scheduling: PendingReds:1 ScheduledMaps:1 ScheduledReds:0 AssignedMaps:0
> AssignedReds:0 CompletedMaps:0 CompletedReds:0 ContAlloc:3 ContRel:0
> HostLocal:1 RackLocal:0
> 2016-01-19 20:04:25,336 INFO [RMCommunicator Allocator]
> org.apache.hadoop.mapreduce.v2.app.rm.RMContainerRequestor: getResources()
> for application_1453244277886_0001: ask=1 release= 0 newContainers=0
> finishedContainers=0 resourcelimit=<memory:768, vCores:-1> knownNMs=1
> 2016-01-19 20:04:25,336 INFO [RMCommunicator Allocator]
> org.apache.hadoop.mapreduce.v2.app.rm.RMContainerRequestor: Update the
> blacklist for application_1453244277886_0001: blacklistAdditions=1
> blacklistRemovals=0
> 2016-01-19 20:04:25,337 INFO [RMCommunicator Allocator]
> org.apache.hadoop.mapreduce.v2.app.rm.RMContainerRequestor: Ignore
> blacklisting set to true. Known: 1, Blacklisted: 1, 100%
> 2016-01-19 20:04:25,337 INFO [RMCommunicator Allocator]
> org.apache.hadoop.mapreduce.v2.app.rm.RMContainerAllocator: Recalculating
> schedule, headroom=768
> 2016-01-19 20:04:25,337 INFO [RMCommunicator Allocator]
> org.apache.hadoop.mapreduce.v2.app.rm.RMContainerAllocator: Reduce slow
> start threshold not met. completedMapsForReduceSlowstart 1
> 2016-01-19 20:04:26,342 INFO [RMCommunicator Allocator]
> org.apache.hadoop.mapreduce.v2.app.rm.RMContainerRequestor: Update the
> blacklist for application_1453244277886_0001: blacklistAdditions=0
> blacklistRemovals=1
> 2016-01-19 20:04:26,342 INFO [RMCommunicator Allocator]
> org.apache.hadoop.mapreduce.v2.app.rm.RMContainerAllocator: Recalculating
> schedule, headroom=768
> 2016-01-19 20:04:26,342 INFO [RMCommunicator Allocator]
> org.apache.hadoop.mapreduce.v2.app.rm.RMContainerAllocator: Reduce slow
> start threshold not met. completedMapsForReduceSlowstart 1
> 2016-01-19 20:04:27,351 INFO [RMCommunicator Allocator]
> org.apache.hadoop.mapreduce.v2.app.rm.RMContainerAllocator: Got allocated
> containers 1
> 2016-01-19 20:04:27,352 INFO [RMCommunicator Allocator]
> org.apache.hadoop.mapreduce.v2.app.rm.RMContainerAllocator: Assigning
> container Container: [ContainerId: container_1453244277886_0001_01_000005,
> NodeId: localhost:35711, NodeHttpAddress: localhost:8042, Resource:
> <memory:512, vCores:1>, Priority: 5, Token: Token { kind: ContainerToken,
> service: 127.0.0.1:35711 }, ] to fast fail map
> 2016-01-19 20:04:27,352 INFO [RMCommunicator Allocator]
> org.apache.hadoop.mapreduce.v2.app.rm.RMContainerAllocator: Assigned from
> earlierFailedMaps
> 2016-01-19 20:04:27,352 INFO [RMCommunicator Allocator]
> org.apache.hadoop.mapreduce.v2.app.rm.RMContainerAllocator: Assigned
> container container_1453244277886_0001_01_000005 to
> attempt_1453244277886_0001_m_000000_3
> 2016-01-19 20:04:27,353 INFO [RMCommunicator Allocator]
> org.apache.hadoop.mapreduce.v2.app.rm.RMContainerAllocator: Recalculating
> schedule, headroom=768
> 2016-01-19 20:04:27,353 INFO [RMCommunicator Allocator]
> org.apache.hadoop.mapreduce.v2.app.rm.RMContainerAllocator: Reduce slow
> start threshold not met. completedMapsForReduceSlowstart 1
> 2016-01-19 20:04:27,353 INFO [AsyncDispatcher event handler]
> org.apache.hadoop.yarn.util.RackResolver: Resolved localhost to
> /default-rack
> 2016-01-19 20:04:27,353 INFO [RMCommunicator Allocator]
> org.apache.hadoop.mapreduce.v2.app.rm.RMContainerAllocator: After
> Scheduling: PendingReds:1 ScheduledMaps:0 ScheduledReds:0 AssignedMaps:1
> AssignedReds:0 CompletedMaps:0 CompletedReds:0 ContAlloc:4 ContRel:0
> HostLocal:1 RackLocal:0
> 2016-01-19 20:04:27,354 INFO [AsyncDispatcher event handler]
> org.apache.hadoop.mapreduce.v2.app.job.impl.TaskAttemptImpl:
> attempt_1453244277886_0001_m_000000_3 TaskAttempt Transitioned from
> UNASSIGNED to ASSIGNED
> 2016-01-19 20:04:27,355 INFO [ContainerLauncher #6]
> org.apache.hadoop.mapreduce.v2.app.launcher.ContainerLauncherImpl:
> Processing the event EventType: CONTAINER_REMOTE_LAUNCH for container
> container_1453244277886_0001_01_000005 taskAttempt
> attempt_1453244277886_0001_m_000000_3
> 2016-01-19 20:04:27,355 INFO [ContainerLauncher #6]
> org.apache.hadoop.mapreduce.v2.app.launcher.ContainerLauncherImpl: Launching
> attempt_1453244277886_0001_m_000000_3
> 2016-01-19 20:04:27,365 INFO [ContainerLauncher #6]
> org.apache.hadoop.mapreduce.v2.app.launcher.ContainerLauncherImpl: Shuffle
> port returned by ContainerManager for attempt_1453244277886_0001_m_000000_3
> : 13562
> 2016-01-19 20:04:27,365 INFO [AsyncDispatcher event handler]
> org.apache.hadoop.mapreduce.v2.app.job.impl.TaskAttemptImpl: TaskAttempt:
> [attempt_1453244277886_0001_m_000000_3] using containerId:
> [container_1453244277886_0001_01_000005 on NM: [localhost:35711]
> 2016-01-19 20:04:27,366 INFO [AsyncDispatcher event handler]
> org.apache.hadoop.mapreduce.v2.app.job.impl.TaskAttemptImpl:
> attempt_1453244277886_0001_m_000000_3 TaskAttempt Transitioned from ASSIGNED
> to RUNNING
> 2016-01-19 20:04:27,366 INFO [AsyncDispatcher event handler]
> org.apache.hadoop.mapreduce.v2.app.speculate.DefaultSpeculator:
> ATTEMPT_START task_1453244277886_0001_m_000000
> 2016-01-19 20:04:28,361 INFO [RMCommunicator Allocator]
> org.apache.hadoop.mapreduce.v2.app.rm.RMContainerRequestor: getResources()
> for application_1453244277886_0001: ask=1 release= 0 newContainers=0
> finishedContainers=1 resourcelimit=<memory:768, vCores:-1> knownNMs=1
> 2016-01-19 20:04:28,361 INFO [RMCommunicator Allocator]
> org.apache.hadoop.mapreduce.v2.app.rm.RMContainerAllocator: Received
> completed container container_1453244277886_0001_01_000005
> 2016-01-19 20:04:28,361 INFO [RMCommunicator Allocator]
> org.apache.hadoop.mapreduce.v2.app.rm.RMContainerAllocator: Recalculating
> schedule, headroom=768
> 2016-01-19 20:04:28,361 INFO [RMCommunicator Allocator]
> org.apache.hadoop.mapreduce.v2.app.rm.RMContainerAllocator: Reduce slow
> start threshold not met. completedMapsForReduceSlowstart 1
> 2016-01-19 20:04:28,362 INFO [RMCommunicator Allocator]
> org.apache.hadoop.mapreduce.v2.app.rm.RMContainerAllocator: After
> Scheduling: PendingReds:1 ScheduledMaps:0 ScheduledReds:0 AssignedMaps:0
> AssignedReds:0 CompletedMaps:0 CompletedReds:0 ContAlloc:4 ContRel:0
> HostLocal:1 RackLocal:0
> 2016-01-19 20:04:28,362 INFO [AsyncDispatcher event handler]
> org.apache.hadoop.mapreduce.v2.app.job.impl.TaskAttemptImpl:
> attempt_1453244277886_0001_m_000000_3 TaskAttempt Transitioned from RUNNING
> to FAIL_CONTAINER_CLEANUP
> 2016-01-19 20:04:28,362 INFO [AsyncDispatcher event handler]
> org.apache.hadoop.mapreduce.v2.app.job.impl.TaskAttemptImpl: Diagnostics
> report from attempt_1453244277886_0001_m_000000_3: Exception from
> container-launch: org.apache.hadoop.util.Shell$ExitCodeException:
> org.apache.hadoop.util.Shell$ExitCodeException:
>     at org.apache.hadoop.util.Shell.runCommand(Shell.java:505)
>     at org.apache.hadoop.util.Shell.run(Shell.java:418)
>     at
> org.apache.hadoop.util.Shell$ShellCommandExecutor.execute(Shell.java:650)
>     at
> org.apache.hadoop.yarn.server.nodemanager.DefaultContainerExecutor.launchContainer(DefaultContainerExecutor.java:195)
>     at
> org.apache.hadoop.yarn.server.nodemanager.containermanager.launcher.ContainerLaunch.call(ContainerLaunch.java:300)
>     at
> org.apache.hadoop.yarn.server.nodemanager.containermanager.launcher.ContainerLaunch.call(ContainerLaunch.java:81)
>     at java.util.concurrent.FutureTask.run(FutureTask.java:262)
>     at
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
>     at
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
>     at java.lang.Thread.run(Thread.java:745)
>
>
> Container exited with a non-zero exit code 1
>
> 2016-01-19 20:04:28,364 INFO [ContainerLauncher #7]
> org.apache.hadoop.mapreduce.v2.app.launcher.ContainerLauncherImpl:
> Processing the event EventType: CONTAINER_REMOTE_CLEANUP for container
> container_1453244277886_0001_01_000005 taskAttempt
> attempt_1453244277886_0001_m_000000_3
> 2016-01-19 20:04:28,365 INFO [ContainerLauncher #7]
> org.apache.hadoop.mapreduce.v2.app.launcher.ContainerLauncherImpl: KILLING
> attempt_1453244277886_0001_m_000000_3
> 2016-01-19 20:04:28,373 INFO [AsyncDispatcher event handler]
> org.apache.hadoop.mapreduce.v2.app.job.impl.TaskAttemptImpl:
> attempt_1453244277886_0001_m_000000_3 TaskAttempt Transitioned from
> FAIL_CONTAINER_CLEANUP to FAIL_TASK_CLEANUP
> 2016-01-19 20:04:28,374 INFO [CommitterEvent Processor #4]
> org.apache.hadoop.mapreduce.v2.app.commit.CommitterEventHandler: Processing
> the event EventType: TASK_ABORT
> 2016-01-19 20:04:28,377 WARN [CommitterEvent Processor #4]
> org.apache.hadoop.mapreduce.lib.output.FileOutputCommitter: Could not delete
> hdfs://hdnode01:54310/user/hduser/output/wordcount/_temporary/1/_temporary/attempt_1453244277886_0001_m_000000_3
> 2016-01-19 20:04:28,378 INFO [AsyncDispatcher event handler]
> org.apache.hadoop.mapreduce.v2.app.job.impl.TaskAttemptImpl:
> attempt_1453244277886_0001_m_000000_3 TaskAttempt Transitioned from
> FAIL_TASK_CLEANUP to FAILED
> 2016-01-19 20:04:28,380 INFO [AsyncDispatcher event handler]
> org.apache.hadoop.mapreduce.v2.app.job.impl.TaskImpl:
> task_1453244277886_0001_m_000000 Task Transitioned from RUNNING to FAILED
> 2016-01-19 20:04:28,381 INFO [AsyncDispatcher event handler]
> org.apache.hadoop.mapreduce.v2.app.job.impl.JobImpl: Num completed Tasks: 1
> 2016-01-19 20:04:28,381 INFO [AsyncDispatcher event handler]
> org.apache.hadoop.mapreduce.v2.app.job.impl.JobImpl: Job failed as tasks
> failed. failedMaps:1 failedReduces:0
> 2016-01-19 20:04:28,382 INFO [AsyncDispatcher event handler]
> org.apache.hadoop.mapreduce.v2.app.job.impl.JobImpl:
> job_1453244277886_0001Job Transitioned from RUNNING to FAIL_WAIT
> 2016-01-19 20:04:28,383 INFO [AsyncDispatcher event handler]
> org.apache.hadoop.mapreduce.v2.app.job.impl.TaskImpl:
> task_1453244277886_0001_r_000000 Task Transitioned from SCHEDULED to
> KILL_WAIT
> 2016-01-19 20:04:28,383 INFO [AsyncDispatcher event handler]
> org.apache.hadoop.mapreduce.v2.app.job.impl.TaskAttemptImpl:
> attempt_1453244277886_0001_r_000000_0 TaskAttempt Transitioned from
> UNASSIGNED to KILLED
> 2016-01-19 20:04:28,383 INFO [Thread-50]
> org.apache.hadoop.mapreduce.v2.app.rm.RMContainerAllocator: Processing the
> event EventType: CONTAINER_DEALLOCATE
> 2016-01-19 20:04:28,383 ERROR [Thread-50]
> org.apache.hadoop.mapreduce.v2.app.rm.RMContainerAllocator: Could not
> deallocate container for task attemptId
> attempt_1453244277886_0001_r_000000_0
> 2016-01-19 20:04:28,383 INFO [AsyncDispatcher event handler]
> org.apache.hadoop.mapreduce.v2.app.job.impl.TaskImpl:
> task_1453244277886_0001_r_000000 Task Transitioned from KILL_WAIT to KILLED
> 2016-01-19 20:04:28,384 INFO [AsyncDispatcher event handler]
> org.apache.hadoop.mapreduce.v2.app.job.impl.JobImpl:
> job_1453244277886_0001Job Transitioned from FAIL_WAIT to FAIL_ABORT
> 2016-01-19 20:04:28,390 INFO [CommitterEvent Processor #0]
> org.apache.hadoop.mapreduce.v2.app.commit.CommitterEventHandler: Processing
> the event EventType: JOB_ABORT
> 2016-01-19 20:04:28,432 INFO [AsyncDispatcher event handler]
> org.apache.hadoop.mapreduce.v2.app.job.impl.JobImpl:
> job_1453244277886_0001Job Transitioned from FAIL_ABORT to FAILED
> 2016-01-19 20:04:28,433 INFO [Thread-61]
> org.apache.hadoop.mapreduce.v2.app.MRAppMaster: We are finishing cleanly so
> this is the last retry
> 2016-01-19 20:04:28,433 INFO [Thread-61]
> org.apache.hadoop.mapreduce.v2.app.MRAppMaster: Notify RMCommunicator
> isAMLastRetry: true
> 2016-01-19 20:04:28,433 INFO [Thread-61]
> org.apache.hadoop.mapreduce.v2.app.rm.RMContainerAllocator: RMCommunicator
> notified that shouldUnregistered is: true
> 2016-01-19 20:04:28,433 INFO [Thread-61]
> org.apache.hadoop.mapreduce.v2.app.MRAppMaster: Notify JHEH isAMLastRetry:
> true
> 2016-01-19 20:04:28,433 INFO [Thread-61]
> org.apache.hadoop.mapreduce.jobhistory.JobHistoryEventHandler:
> JobHistoryEventHandler notified that forceJobCompletion is true
> 2016-01-19 20:04:28,434 INFO [Thread-61]
> org.apache.hadoop.mapreduce.v2.app.MRAppMaster: Calling stop for all the
> services
> 2016-01-19 20:04:28,435 INFO [Thread-61]
> org.apache.hadoop.mapreduce.jobhistory.JobHistoryEventHandler: Stopping
> JobHistoryEventHandler. Size of the outstanding queue size is 0
> 2016-01-19 20:04:29,362 INFO [RMCommunicator Allocator]
> org.apache.hadoop.mapreduce.v2.app.rm.RMContainerAllocator: Before
> Scheduling: PendingReds:1 ScheduledMaps:0 ScheduledReds:0 AssignedMaps:0
> AssignedReds:0 CompletedMaps:1 CompletedReds:0 ContAlloc:4 ContRel:0
> HostLocal:1 RackLocal:0
> 2016-01-19 20:04:29,362 INFO [eventHandlingThread]
> org.apache.hadoop.mapreduce.jobhistory.JobHistoryEventHandler: Copying
> hdfs://hdnode01:54310/tmp/hadoop-yarn/staging/hduser/.staging/job_1453244277886_0001/job_1453244277886_0001_1.jhist
> to
> hdfs://hdnode01:54310/tmp/hadoop-yarn/staging/history/done_intermediate/hduser/job_1453244277886_0001-1453244648033-hduser-word+count-1453244668381-0-0-FAILED-default-1453244654166.jhist_tmp
> 2016-01-19 20:04:29,366 INFO [RMCommunicator Allocator]
> org.apache.hadoop.mapreduce.v2.app.rm.RMContainerAllocator: Recalculating
> schedule, headroom=768
> 2016-01-19 20:04:29,366 INFO [RMCommunicator Allocator]
> org.apache.hadoop.mapreduce.v2.app.rm.RMContainerAllocator: Reduce slow
> start threshold reached. Scheduling reduces.
> 2016-01-19 20:04:29,366 INFO [RMCommunicator Allocator]
> org.apache.hadoop.mapreduce.v2.app.rm.RMContainerAllocator: All maps
> assigned. Ramping up all remaining reduces:1
> 2016-01-19 20:04:29,366 INFO [RMCommunicator Allocator]
> org.apache.hadoop.mapreduce.v2.app.rm.RMContainerAllocator: After
> Scheduling: PendingReds:0 ScheduledMaps:0 ScheduledReds:1 AssignedMaps:0
> AssignedReds:0 CompletedMaps:1 CompletedReds:0 ContAlloc:4 ContRel:0
> HostLocal:1 RackLocal:0
> 2016-01-19 20:04:29,544 INFO [eventHandlingThread]
> org.apache.hadoop.mapreduce.jobhistory.JobHistoryEventHandler: Copied to
> done location:
> hdfs://hdnode01:54310/tmp/hadoop-yarn/staging/history/done_intermediate/hduser/job_1453244277886_0001-1453244648033-hduser-word+count-1453244668381-0-0-FAILED-default-1453244654166.jhist_tmp
> 2016-01-19 20:04:29,598 INFO [eventHandlingThread]
> org.apache.hadoop.mapreduce.jobhistory.JobHistoryEventHandler: Copying
> hdfs://hdnode01:54310/tmp/hadoop-yarn/staging/hduser/.staging/job_1453244277886_0001/job_1453244277886_0001_1_conf.xml
> to
> hdfs://hdnode01:54310/tmp/hadoop-yarn/staging/history/done_intermediate/hduser/job_1453244277886_0001_conf.xml_tmp
> 2016-01-19 20:04:29,801 INFO [eventHandlingThread]
> org.apache.hadoop.mapreduce.jobhistory.JobHistoryEventHandler: Copied to
> done location:
> hdfs://hdnode01:54310/tmp/hadoop-yarn/staging/history/done_intermediate/hduser/job_1453244277886_0001_conf.xml_tmp
> 2016-01-19 20:04:29,907 INFO [eventHandlingThread]
> org.apache.hadoop.mapreduce.jobhistory.JobHistoryEventHandler: Moved tmp to
> done:
> hdfs://hdnode01:54310/tmp/hadoop-yarn/staging/history/done_intermediate/hduser/job_1453244277886_0001.summary_tmp
> to
> hdfs://hdnode01:54310/tmp/hadoop-yarn/staging/history/done_intermediate/hduser/job_1453244277886_0001.summary
> 2016-01-19 20:04:30,008 INFO [eventHandlingThread]
> org.apache.hadoop.mapreduce.jobhistory.JobHistoryEventHandler: Moved tmp to
> done:
> hdfs://hdnode01:54310/tmp/hadoop-yarn/staging/history/done_intermediate/hduser/job_1453244277886_0001_conf.xml_tmp
> to
> hdfs://hdnode01:54310/tmp/hadoop-yarn/staging/history/done_intermediate/hduser/job_1453244277886_0001_conf.xml
> 2016-01-19 20:04:30,070 INFO [eventHandlingThread]
> org.apache.hadoop.mapreduce.jobhistory.JobHistoryEventHandler: Moved tmp to
> done:
> hdfs://hdnode01:54310/tmp/hadoop-yarn/staging/history/done_intermediate/hduser/job_1453244277886_0001-1453244648033-hduser-word+count-1453244668381-0-0-FAILED-default-1453244654166.jhist_tmp
> to
> hdfs://hdnode01:54310/tmp/hadoop-yarn/staging/history/done_intermediate/hduser/job_1453244277886_0001-1453244648033-hduser-word+count-1453244668381-0-0-FAILED-default-1453244654166.jhist
> 2016-01-19 20:04:30,071 INFO [Thread-61]
> org.apache.hadoop.mapreduce.jobhistory.JobHistoryEventHandler: Stopped
> JobHistoryEventHandler. super.stop()
> 2016-01-19 20:04:30,078 INFO [Thread-61]
> org.apache.hadoop.mapreduce.v2.app.rm.RMContainerAllocator: Setting job
> diagnostics to Task failed task_1453244277886_0001_m_000000
> Job failed as tasks failed. failedMaps:1 failedReduces:0
>
> 2016-01-19 20:04:30,080 INFO [Thread-61]
> org.apache.hadoop.mapreduce.v2.app.rm.RMContainerAllocator: History url is
> http://localhost:19888/jobhistory/job/job_1453244277886_0001
> 2016-01-19 20:04:30,094 INFO [Thread-61]
> org.apache.hadoop.mapreduce.v2.app.rm.RMContainerAllocator: Waiting for
> application to be successfully unregistered.
> 2016-01-19 20:04:31,099 INFO [Thread-61]
> org.apache.hadoop.mapreduce.v2.app.rm.RMContainerAllocator: Final Stats:
> PendingReds:0 ScheduledMaps:0 ScheduledReds:1 AssignedMaps:0 AssignedReds:0
> CompletedMaps:1 CompletedReds:0 ContAlloc:4 ContRel:0 HostLocal:1
> RackLocal:0
> 2016-01-19 20:04:31,104 INFO [Thread-61]
> org.apache.hadoop.mapreduce.v2.app.MRAppMaster: Deleting staging directory
> hdfs://hdnode01:54310
> /tmp/hadoop-yarn/staging/hduser/.staging/job_1453244277886_0001
> 2016-01-19 20:04:31,133 INFO [Thread-61] org.apache.hadoop.ipc.Server:
> Stopping server on 45584
> 2016-01-19 20:04:31,135 INFO [IPC Server listener on 45584]
> org.apache.hadoop.ipc.Server: Stopping IPC Server listener on 45584
> 2016-01-19 20:04:31,135 INFO [IPC Server Responder]
> org.apache.hadoop.ipc.Server: Stopping IPC Server Responder
> 2016-01-19 20:04:31,136 INFO [TaskHeartbeatHandler PingChecker]
> org.apache.hadoop.mapreduce.v2.app.TaskHeartbeatHandler:
> TaskHeartbeatHandler thread interrupted
>
>
> Jps results; I believe everything is OK, right?:
> 21267 DataNode
> 21609 ResourceManager
> 21974 JobHistoryServer
> 21735 NodeManager
> 24546 Jps
> 16532 org.eclipse.equinox.launcher_1.3.0.v20140415-2008.jar
> 21121 NameNode
> 22098 QuorumPeerMain
> 21456 SecondaryNameNode
>
>



Re: Can't run hadoop examples with YARN Single node cluster

Posted by Namikaze Minato <ll...@gmail.com>.
It could be a classpath issue (see
http://stackoverflow.com/a/25090151/4486184); I strongly suspect this is
the case.
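
For what it's worth, here is a minimal sketch of how the effective classpath could be inspected on the node (assuming the /usr/local/hadoop layout visible in your logs; adjust paths to your install):

    # Classpath as resolved by the launcher scripts
    $HADOOP_HOME/bin/hadoop classpath
    $HADOOP_HOME/bin/yarn classpath

    # Check whether the site configuration overrides the application classpath
    grep -A2 'yarn.application.classpath' $HADOOP_HOME/etc/hadoop/yarn-site.xml
    grep -A2 'mapreduce.application.classpath' $HADOOP_HOME/etc/hadoop/mapred-site.xml

Comparing that output against the jars mentioned in the SLF4J warnings of your job output may help narrow it down.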

You could drill down to the exact root cause by looking at the
datanode logs (see
http://mail-archives.apache.org/mod_mbox/hadoop-user/201410.mbox/%3CCAEMetGubzq12LXbLRk6N4ejOoKse9dLEWMW8_WE6aRj=+RQtVw@mail.gmail.com%3E),
but I'm not sure we would see a different error from the one we already have...
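
If log aggregation is turned on, a sketch for pulling all container logs of the failed run at once (application id taken from your output; if aggregation is off, the same files sit under the NodeManager's local log directories):

    # stdout/stderr/syslog of every container of the failed job
    $HADOOP_HOME/bin/yarn logs -applicationId application_1453244277886_0001

    # DataNode / NodeManager daemon logs (default log dir of a tarball install)
    ls $HADOOP_HOME/logs | grep -Ei 'datanode|nodemanager'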

Check whether your application has the correct values for the following variables (a quick way to verify them from the shell is sketched after the list):
HADOOP_CONF_DIR
HADOOP_COMMON_HOME
HADOOP_HDFS_HOME
HADOOP_MAPRED_HOME
HADOOP_YARN_HOME
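
A minimal sketch for checking them on the node that runs the containers (the example values are only what a typical single-node /usr/local/hadoop install would use, not necessarily yours):

    # Print whatever the current shell (and hadoop-env.sh / yarn-env.sh) exports
    for v in HADOOP_CONF_DIR HADOOP_COMMON_HOME HADOOP_HDFS_HOME HADOOP_MAPRED_HOME HADOOP_YARN_HOME; do
        echo "$v=${!v:-<unset>}"
    done

    # A common single-node setup points them all at the same prefix, e.g.:
    # export HADOOP_CONF_DIR=/usr/local/hadoop/etc/hadoop
    # export HADOOP_COMMON_HOME=/usr/local/hadoop
    # export HADOOP_HDFS_HOME=/usr/local/hadoop
    # export HADOOP_MAPRED_HOME=/usr/local/hadoop
    # export HADOOP_YARN_HOME=/usr/local/hadoop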

I'm afraid I can't help you much more than this myself, sorry...

LLoyd

On 20 January 2016 at 02:08, José Luis Larroque <la...@gmail.com> wrote:
> Hi guys, thanks for your answers.
>
> Wordcount logs:
>
> 16/01/19 21:58:32 INFO client.RMProxy: Connecting to ResourceManager at
> hdnode01/192.168.0.10:8050
> SLF4J: Class path contains multiple SLF4J bindings.
> SLF4J: Found binding in
> [jar:file:/usr/local/hadoop/share/hadoop/common/lib/slf4j-log4j12-1.7.5.jar!/org/slf4j/impl/StaticLoggerBinder.class]
> SLF4J: Found binding in
> [jar:file:/usr/local/hadoop/share/hadoop/yarn/lib/giraph-examples-1.1.0-for-hadoop-2.4.0-jar-with-dependencies.jar!/org/slf4j/impl/StaticLoggerBinder.class]
> SLF4J: Found binding in
> [jar:file:/usr/local/hadoop/share/hadoop/mapreduce/giraph-examples-1.1.0-for-hadoop-2.4.0-jar-with-dependencies.jar!/org/slf4j/impl/StaticLoggerBinder.class]
> SLF4J: See http://www.slf4j.org/codes.html#multiple_bindings for an
> explanation.
> SLF4J: Actual binding is of type [org.slf4j.impl.Log4jLoggerFactory]
> 16/01/19 21:58:32 WARN util.NativeCodeLoader: Unable to load native-hadoop
> library for your platform... using builtin-java classes where applicable
> hduser@jose-ubuntu:/usr/local/hadoop$ nano
> /home/hduser/Desktop/Tesina/casos_de_prueba/resultados/resultado_cluster_modo_yarn_wordcount
> hduser@jose-ubuntu:/usr/local/hadoop$ nano
> /home/hduser/Desktop/Tesina/casos_de_prueba/resultados/resultado_cluster_modo_yarn_wordcount
> hduser@jose-ubuntu:/usr/local/hadoop$ cat
> /home/hduser/Desktop/Tesina/casos_de_prueba/resultados/resultado_cluster_modo_yarn_wordcount
>
>
> Container: container_1453244277886_0001_01_000002 on localhost_35711
> ======================================================================
> LogType: stderr
> LogLength: 45
> Log Contents:
> Error: Could not find or load main class 256
>
> LogType: stdout
> LogLength: 0
> Log Contents:
>
>
>
> Container: container_1453244277886_0001_01_000003 on localhost_35711
> ======================================================================
> LogType: stderr
> LogLength: 45
> Log Contents:
> Error: Could not find or load main class 256
>
> LogType: stdout
> LogLength: 0
> Log Contents:
>
>
>
> Container: container_1453244277886_0001_01_000004 on localhost_35711
> ======================================================================
> LogType: stderr
> LogLength: 45
> Log Contents:
> Error: Could not find or load main class 256
>
> LogType: stdout
> LogLength: 0
> Log Contents:
>
>
>
> Container: container_1453244277886_0001_01_000005 on localhost_35711
> ======================================================================
> LogType: stderr
> LogLength: 45
> Log Contents:
> Error: Could not find or load main class 256
>
> LogType: stdout
> LogLength: 0
> Log Contents:
>
>
>
> Container: container_1453244277886_0001_01_000001 on localhost_35711
> ======================================================================
> LogType: stderr
> LogLength: 929
> Log Contents:
> SLF4J: Class path contains multiple SLF4J bindings.
> SLF4J: Found binding in
> [jar:file:/usr/local/hadoop/share/hadoop/common/lib/slf4j-log4j12-1.7.5.jar!/org/slf4j/impl/StaticLoggerBinder.class]
> SLF4J: Found binding in
> [jar:file:/usr/local/hadoop/share/hadoop/yarn/lib/giraph-examples-1.1.0-for-hadoop-2.4.0-jar-with-dependencies.jar!/org/slf4j/impl/StaticLoggerBinder.class]
> SLF4J: Found binding in
> [jar:file:/usr/local/hadoop/share/hadoop/mapreduce/giraph-examples-1.1.0-for-hadoop-2.4.0-jar-with-dependencies.jar!/org/slf4j/impl/StaticLoggerBinder.class]
> SLF4J: See http://www.slf4j.org/codes.html#multiple_bindings for an
> explanation.
> SLF4J: Actual binding is of type [org.slf4j.impl.Log4jLoggerFactory]
> log4j:WARN No appenders could be found for logger
> (org.apache.hadoop.ipc.Server).
> log4j:WARN Please initialize the log4j system properly.
> log4j:WARN See http://logging.apache.org/log4j/1.2/faq.html#noconfig for
> more info.
>
> LogType: stdout
> LogLength: 0
> Log Contents:
>
> LogType: syslog
> LogLength: 56780
> Log Contents:
> 2016-01-19 20:04:11,329 INFO [main]
> org.apache.hadoop.mapreduce.v2.app.MRAppMaster: Created MRAppMaster for
> application appattempt_1453244277886_0001_000001
> 2016-01-19 20:04:11,657 WARN [main] org.apache.hadoop.conf.Configuration:
> job.xml:an attempt to override final parameter:
> mapreduce.job.end-notification.max.retry.interval;  Ignoring.
> 2016-01-19 20:04:11,674 WARN [main] org.apache.hadoop.conf.Configuration:
> job.xml:an attempt to override final parameter:
> mapreduce.job.end-notification.max.attempts;  Ignoring.
> 2016-01-19 20:04:11,765 WARN [main] org.apache.hadoop.util.NativeCodeLoader:
> Unable to load native-hadoop library for your platform... using builtin-java
> classes where applicable
> 2016-01-19 20:04:11,776 INFO [main]
> org.apache.hadoop.mapreduce.v2.app.MRAppMaster: Executing with tokens:
> 2016-01-19 20:04:11,776 INFO [main]
> org.apache.hadoop.mapreduce.v2.app.MRAppMaster: Kind: YARN_AM_RM_TOKEN,
> Service: , Ident:
> (org.apache.hadoop.yarn.security.AMRMTokenIdentifier@73e8f4b9)
> 2016-01-19 20:04:11,801 INFO [main]
> org.apache.hadoop.mapreduce.v2.app.MRAppMaster: The specific max attempts: 2
> for application: 1. Attempt num: 1 is last retry: false
> 2016-01-19 20:04:11,806 INFO [main]
> org.apache.hadoop.mapreduce.v2.app.MRAppMaster: Using mapred
> newApiCommitter.
> 2016-01-19 20:04:11,934 WARN [main] org.apache.hadoop.conf.Configuration:
> job.xml:an attempt to override final parameter: dfs.datanode.data.dir;
> Ignoring.
> 2016-01-19 20:04:11,939 WARN [main] org.apache.hadoop.conf.Configuration:
> job.xml:an attempt to override final parameter:
> mapreduce.job.end-notification.max.retry.interval;  Ignoring.
> 2016-01-19 20:04:11,948 WARN [main] org.apache.hadoop.conf.Configuration:
> job.xml:an attempt to override final parameter: dfs.namenode.name.dir;
> Ignoring.
> 2016-01-19 20:04:11,953 WARN [main] org.apache.hadoop.conf.Configuration:
> job.xml:an attempt to override final parameter:
> mapreduce.job.end-notification.max.attempts;  Ignoring.
> 2016-01-19 20:04:12,464 INFO [main]
> org.apache.hadoop.mapreduce.v2.app.MRAppMaster: OutputCommitter set in
> config null
> 2016-01-19 20:04:12,526 INFO [main]
> org.apache.hadoop.mapreduce.v2.app.MRAppMaster: OutputCommitter is
> org.apache.hadoop.mapreduce.lib.output.FileOutputCommitter
> 2016-01-19 20:04:12,548 INFO [main]
> org.apache.hadoop.yarn.event.AsyncDispatcher: Registering class
> org.apache.hadoop.mapreduce.jobhistory.EventType for class
> org.apache.hadoop.mapreduce.jobhistory.JobHistoryEventHandler
> 2016-01-19 20:04:12,549 INFO [main]
> org.apache.hadoop.yarn.event.AsyncDispatcher: Registering class
> org.apache.hadoop.mapreduce.v2.app.job.event.JobEventType for class
> org.apache.hadoop.mapreduce.v2.app.MRAppMaster$JobEventDispatcher
> 2016-01-19 20:04:12,550 INFO [main]
> org.apache.hadoop.yarn.event.AsyncDispatcher: Registering class
> org.apache.hadoop.mapreduce.v2.app.job.event.TaskEventType for class
> org.apache.hadoop.mapreduce.v2.app.MRAppMaster$TaskEventDispatcher
> 2016-01-19 20:04:12,551 INFO [main]
> org.apache.hadoop.yarn.event.AsyncDispatcher: Registering class
> org.apache.hadoop.mapreduce.v2.app.job.event.TaskAttemptEventType for class
> org.apache.hadoop.mapreduce.v2.app.MRAppMaster$TaskAttemptEventDispatcher
> 2016-01-19 20:04:12,552 INFO [main]
> org.apache.hadoop.yarn.event.AsyncDispatcher: Registering class
> org.apache.hadoop.mapreduce.v2.app.commit.CommitterEventType for class
> org.apache.hadoop.mapreduce.v2.app.commit.CommitterEventHandler
> 2016-01-19 20:04:12,557 INFO [main]
> org.apache.hadoop.yarn.event.AsyncDispatcher: Registering class
> org.apache.hadoop.mapreduce.v2.app.speculate.Speculator$EventType for class
> org.apache.hadoop.mapreduce.v2.app.MRAppMaster$SpeculatorEventDispatcher
> 2016-01-19 20:04:12,558 INFO [main]
> org.apache.hadoop.yarn.event.AsyncDispatcher: Registering class
> org.apache.hadoop.mapreduce.v2.app.rm.ContainerAllocator$EventType for class
> org.apache.hadoop.mapreduce.v2.app.MRAppMaster$ContainerAllocatorRouter
> 2016-01-19 20:04:12,559 INFO [main]
> org.apache.hadoop.yarn.event.AsyncDispatcher: Registering class
> org.apache.hadoop.mapreduce.v2.app.launcher.ContainerLauncher$EventType for
> class org.apache.hadoop.mapreduce.v2.app.MRAppMaster$ContainerLauncherRouter
> 2016-01-19 20:04:12,615 INFO [main]
> org.apache.hadoop.mapreduce.jobhistory.JobHistoryEventHandler: Perms after
> creating 488, Expected: 504
> 2016-01-19 20:04:12,615 INFO [main]
> org.apache.hadoop.mapreduce.jobhistory.JobHistoryEventHandler: Explicitly
> setting permissions to : 504, rwxrwx---
> 2016-01-19 20:04:12,731 INFO [main]
> org.apache.hadoop.yarn.event.AsyncDispatcher: Registering class
> org.apache.hadoop.mapreduce.v2.app.job.event.JobFinishEvent$Type for class
> org.apache.hadoop.mapreduce.v2.app.MRAppMaster$JobFinishEventHandler
> 2016-01-19 20:04:12,956 INFO [main]
> org.apache.hadoop.metrics2.impl.MetricsConfig: loaded properties from
> hadoop-metrics2.properties
> 2016-01-19 20:04:13,018 INFO [main]
> org.apache.hadoop.metrics2.impl.MetricsSystemImpl: Scheduled snapshot period
> at 10 second(s).
> 2016-01-19 20:04:13,018 INFO [main]
> org.apache.hadoop.metrics2.impl.MetricsSystemImpl: MRAppMaster metrics
> system started
> 2016-01-19 20:04:13,026 INFO [main]
> org.apache.hadoop.mapreduce.v2.app.job.impl.JobImpl: Adding job token for
> job_1453244277886_0001 to jobTokenSecretManager
> 2016-01-19 20:04:13,139 INFO [main]
> org.apache.hadoop.mapreduce.v2.app.job.impl.JobImpl: Not uberizing
> job_1453244277886_0001 because: not enabled;
> 2016-01-19 20:04:13,154 INFO [main]
> org.apache.hadoop.mapreduce.v2.app.job.impl.JobImpl: Input size for job
> job_1453244277886_0001 = 343691. Number of splits = 1
> 2016-01-19 20:04:13,156 INFO [main]
> org.apache.hadoop.mapreduce.v2.app.job.impl.JobImpl: Number of reduces for
> job job_1453244277886_0001 = 1
> 2016-01-19 20:04:13,156 INFO [main]
> org.apache.hadoop.mapreduce.v2.app.job.impl.JobImpl:
> job_1453244277886_0001Job Transitioned from NEW to INITED
> 2016-01-19 20:04:13,157 INFO [main]
> org.apache.hadoop.mapreduce.v2.app.MRAppMaster: MRAppMaster launching
> normal, non-uberized, multi-container job job_1453244277886_0001.
> 2016-01-19 20:04:13,186 INFO [main] org.apache.hadoop.ipc.CallQueueManager:
> Using callQueue class java.util.concurrent.LinkedBlockingQueue
> 2016-01-19 20:04:13,195 INFO [Socket Reader #1 for port 56461]
> org.apache.hadoop.ipc.Server: Starting Socket Reader #1 for port 56461
> 2016-01-19 20:04:13,237 INFO [main]
> org.apache.hadoop.yarn.factories.impl.pb.RpcServerFactoryPBImpl: Adding
> protocol org.apache.hadoop.mapreduce.v2.api.MRClientProtocolPB to the server
> 2016-01-19 20:04:13,238 INFO [IPC Server Responder]
> org.apache.hadoop.ipc.Server: IPC Server Responder: starting
> 2016-01-19 20:04:13,238 INFO [IPC Server listener on 56461]
> org.apache.hadoop.ipc.Server: IPC Server listener on 56461: starting
> 2016-01-19 20:04:13,239 INFO [main]
> org.apache.hadoop.mapreduce.v2.app.client.MRClientService: Instantiated
> MRClientService at jose-ubuntu/127.0.0.1:56461
> 2016-01-19 20:04:13,300 INFO [main] org.mortbay.log: Logging to
> org.slf4j.impl.Log4jLoggerAdapter(org.mortbay.log) via
> org.mortbay.log.Slf4jLog
> 2016-01-19 20:04:13,304 INFO [main] org.apache.hadoop.http.HttpRequestLog:
> Http request log for http.requests.mapreduce is not defined
> 2016-01-19 20:04:13,315 INFO [main] org.apache.hadoop.http.HttpServer2:
> Added global filter 'safety'
> (class=org.apache.hadoop.http.HttpServer2$QuotingInputFilter)
> 2016-01-19 20:04:13,320 INFO [main] org.apache.hadoop.http.HttpServer2:
> Added filter AM_PROXY_FILTER
> (class=org.apache.hadoop.yarn.server.webproxy.amfilter.AmIpFilter) to
> context mapreduce
> 2016-01-19 20:04:13,321 INFO [main] org.apache.hadoop.http.HttpServer2:
> Added filter AM_PROXY_FILTER
> (class=org.apache.hadoop.yarn.server.webproxy.amfilter.AmIpFilter) to
> context static
> 2016-01-19 20:04:13,324 INFO [main] org.apache.hadoop.http.HttpServer2:
> adding path spec: /mapreduce/*
> 2016-01-19 20:04:13,324 INFO [main] org.apache.hadoop.http.HttpServer2:
> adding path spec: /ws/*
> 2016-01-19 20:04:13,335 INFO [main] org.apache.hadoop.http.HttpServer2:
> Jetty bound to port 44070
> 2016-01-19 20:04:13,335 INFO [main] org.mortbay.log: jetty-6.1.26
> 2016-01-19 20:04:13,370 INFO [main] org.mortbay.log: Extract
> jar:file:/usr/local/hadoop/share/hadoop/yarn/hadoop-yarn-common-2.4.0.jar!/webapps/mapreduce
> to /tmp/Jetty_0_0_0_0_44070_mapreduce____rdpvio/webapp
> 2016-01-19 20:04:13,647 INFO [main] org.mortbay.log: Started
> SelectChannelConnector@0.0.0.0:44070
> 2016-01-19 20:04:13,647 INFO [main] org.apache.hadoop.yarn.webapp.WebApps:
> Web app /mapreduce started at 44070
> 2016-01-19 20:04:13,956 INFO [main] org.apache.hadoop.yarn.webapp.WebApps:
> Registered webapp guice modules
> 2016-01-19 20:04:13,960 INFO [AsyncDispatcher event handler]
> org.apache.hadoop.mapreduce.v2.app.speculate.DefaultSpeculator: JOB_CREATE
> job_1453244277886_0001
> 2016-01-19 20:04:13,961 INFO [main] org.apache.hadoop.ipc.CallQueueManager:
> Using callQueue class java.util.concurrent.LinkedBlockingQueue
> 2016-01-19 20:04:13,961 INFO [Socket Reader #1 for port 45584]
> org.apache.hadoop.ipc.Server: Starting Socket Reader #1 for port 45584
> 2016-01-19 20:04:13,966 INFO [IPC Server Responder]
> org.apache.hadoop.ipc.Server: IPC Server Responder: starting
> 2016-01-19 20:04:13,966 INFO [IPC Server listener on 45584]
> org.apache.hadoop.ipc.Server: IPC Server listener on 45584: starting
> 2016-01-19 20:04:13,987 INFO [main]
> org.apache.hadoop.mapreduce.v2.app.rm.RMContainerRequestor:
> nodeBlacklistingEnabled:true
> 2016-01-19 20:04:13,987 INFO [main]
> org.apache.hadoop.mapreduce.v2.app.rm.RMContainerRequestor:
> maxTaskFailuresPerNode is 3
> 2016-01-19 20:04:13,988 INFO [main]
> org.apache.hadoop.mapreduce.v2.app.rm.RMContainerRequestor:
> blacklistDisablePercent is 33
> 2016-01-19 20:04:14,052 WARN [main] org.apache.hadoop.conf.Configuration:
> job.xml:an attempt to override final parameter: dfs.datanode.data.dir;
> Ignoring.
> 2016-01-19 20:04:14,054 WARN [main] org.apache.hadoop.conf.Configuration:
> job.xml:an attempt to override final parameter:
> mapreduce.job.end-notification.max.retry.interval;  Ignoring.
> 2016-01-19 20:04:14,057 WARN [main] org.apache.hadoop.conf.Configuration:
> job.xml:an attempt to override final parameter: dfs.namenode.name.dir;
> Ignoring.
> 2016-01-19 20:04:14,059 WARN [main] org.apache.hadoop.conf.Configuration:
> job.xml:an attempt to override final parameter:
> mapreduce.job.end-notification.max.attempts;  Ignoring.
> 2016-01-19 20:04:14,062 INFO [main] org.apache.hadoop.yarn.client.RMProxy:
> Connecting to ResourceManager at hdnode01/192.168.0.10:8030
> 2016-01-19 20:04:14,158 INFO [main]
> org.apache.hadoop.mapreduce.v2.app.rm.RMContainerAllocator:
> maxContainerCapability: 2000
> 2016-01-19 20:04:14,158 INFO [main]
> org.apache.hadoop.mapreduce.v2.app.rm.RMContainerAllocator: queue: default
> 2016-01-19 20:04:14,162 INFO [main]
> org.apache.hadoop.mapreduce.v2.app.launcher.ContainerLauncherImpl: Upper
> limit on the thread pool size is 500
> 2016-01-19 20:04:14,164 INFO [main]
> org.apache.hadoop.yarn.client.api.impl.ContainerManagementProtocolProxy:
> yarn.client.max-nodemanagers-proxies : 500
> 2016-01-19 20:04:14,172 INFO [AsyncDispatcher event handler]
> org.apache.hadoop.mapreduce.v2.app.job.impl.JobImpl:
> job_1453244277886_0001Job Transitioned from INITED to SETUP
> 2016-01-19 20:04:14,174 INFO [CommitterEvent Processor #0]
> org.apache.hadoop.mapreduce.v2.app.commit.CommitterEventHandler: Processing
> the event EventType: JOB_SETUP
> 2016-01-19 20:04:14,210 INFO [AsyncDispatcher event handler]
> org.apache.hadoop.mapreduce.v2.app.job.impl.JobImpl:
> job_1453244277886_0001Job Transitioned from SETUP to RUNNING
> 2016-01-19 20:04:14,227 INFO [AsyncDispatcher event handler]
> org.apache.hadoop.yarn.util.RackResolver: Resolved localhost to
> /default-rack
> 2016-01-19 20:04:14,230 INFO [AsyncDispatcher event handler]
> org.apache.hadoop.mapreduce.v2.app.job.impl.TaskImpl:
> task_1453244277886_0001_m_000000 Task Transitioned from NEW to SCHEDULED
> 2016-01-19 20:04:14,230 INFO [AsyncDispatcher event handler]
> org.apache.hadoop.mapreduce.v2.app.job.impl.TaskImpl:
> task_1453244277886_0001_r_000000 Task Transitioned from NEW to SCHEDULED
> 2016-01-19 20:04:14,232 INFO [AsyncDispatcher event handler]
> org.apache.hadoop.mapreduce.v2.app.job.impl.TaskAttemptImpl:
> attempt_1453244277886_0001_m_000000_0 TaskAttempt Transitioned from NEW to
> UNASSIGNED
> 2016-01-19 20:04:14,232 INFO [AsyncDispatcher event handler]
> org.apache.hadoop.mapreduce.v2.app.job.impl.TaskAttemptImpl:
> attempt_1453244277886_0001_r_000000_0 TaskAttempt Transitioned from NEW to
> UNASSIGNED
> 2016-01-19 20:04:14,233 INFO [Thread-50]
> org.apache.hadoop.mapreduce.v2.app.rm.RMContainerAllocator:
> mapResourceReqt:512
> 2016-01-19 20:04:14,245 INFO [Thread-50]
> org.apache.hadoop.mapreduce.v2.app.rm.RMContainerAllocator:
> reduceResourceReqt:512
> 2016-01-19 20:04:14,324 INFO [eventHandlingThread]
> org.apache.hadoop.mapreduce.jobhistory.JobHistoryEventHandler: Event Writer
> setup for JobId: job_1453244277886_0001, File:
> hdfs://hdnode01:54310/tmp/hadoop-yarn/staging/hduser/.staging/job_1453244277886_0001/job_1453244277886_0001_1.jhist
> 2016-01-19 20:04:15,162 INFO [RMCommunicator Allocator]
> org.apache.hadoop.mapreduce.v2.app.rm.RMContainerAllocator: Before
> Scheduling: PendingReds:1 ScheduledMaps:1 ScheduledReds:0 AssignedMaps:0
> AssignedReds:0 CompletedMaps:0 CompletedReds:0 ContAlloc:0 ContRel:0
> HostLocal:0 RackLocal:0
> 2016-01-19 20:04:15,217 INFO [RMCommunicator Allocator]
> org.apache.hadoop.mapreduce.v2.app.rm.RMContainerRequestor: getResources()
> for application_1453244277886_0001: ask=3 release= 0 newContainers=0
> finishedContainers=0 resourcelimit=<memory:1280, vCores:0> knownNMs=1
> 2016-01-19 20:04:15,217 INFO [RMCommunicator Allocator]
> org.apache.hadoop.mapreduce.v2.app.rm.RMContainerAllocator: Recalculating
> schedule, headroom=1280
> 2016-01-19 20:04:15,217 INFO [RMCommunicator Allocator]
> org.apache.hadoop.mapreduce.v2.app.rm.RMContainerAllocator: Reduce slow
> start threshold not met. completedMapsForReduceSlowstart 1
> 2016-01-19 20:04:16,240 INFO [RMCommunicator Allocator]
> org.apache.hadoop.mapreduce.v2.app.rm.RMContainerAllocator: Got allocated
> containers 1
> 2016-01-19 20:04:16,241 INFO [RMCommunicator Allocator]
> org.apache.hadoop.mapreduce.v2.app.rm.RMContainerAllocator: Assigned
> container container_1453244277886_0001_01_000002 to
> attempt_1453244277886_0001_m_000000_0
> 2016-01-19 20:04:16,243 INFO [RMCommunicator Allocator]
> org.apache.hadoop.mapreduce.v2.app.rm.RMContainerAllocator: Recalculating
> schedule, headroom=768
> 2016-01-19 20:04:16,243 INFO [RMCommunicator Allocator]
> org.apache.hadoop.mapreduce.v2.app.rm.RMContainerAllocator: Reduce slow
> start threshold not met. completedMapsForReduceSlowstart 1
> 2016-01-19 20:04:16,243 INFO [RMCommunicator Allocator]
> org.apache.hadoop.mapreduce.v2.app.rm.RMContainerAllocator: After
> Scheduling: PendingReds:1 ScheduledMaps:0 ScheduledReds:0 AssignedMaps:1
> AssignedReds:0 CompletedMaps:0 CompletedReds:0 ContAlloc:1 ContRel:0
> HostLocal:1 RackLocal:0
> 2016-01-19 20:04:16,291 INFO [AsyncDispatcher event handler]
> org.apache.hadoop.yarn.util.RackResolver: Resolved localhost to
> /default-rack
> 2016-01-19 20:04:16,316 INFO [AsyncDispatcher event handler]
> org.apache.hadoop.mapreduce.v2.app.job.impl.TaskAttemptImpl: The job-jar
> file on the remote FS is
> hdfs://hdnode01:54310/tmp/hadoop-yarn/staging/hduser/.staging/job_1453244277886_0001/job.jar
> 2016-01-19 20:04:16,322 INFO [AsyncDispatcher event handler]
> org.apache.hadoop.mapreduce.v2.app.job.impl.TaskAttemptImpl: The job-conf
> file on the remote FS is
> /tmp/hadoop-yarn/staging/hduser/.staging/job_1453244277886_0001/job.xml
> 2016-01-19 20:04:16,325 INFO [AsyncDispatcher event handler]
> org.apache.hadoop.mapreduce.v2.app.job.impl.TaskAttemptImpl: Adding #0
> tokens and #1 secret keys for NM use for launching container
> 2016-01-19 20:04:16,325 INFO [AsyncDispatcher event handler]
> org.apache.hadoop.mapreduce.v2.app.job.impl.TaskAttemptImpl: Size of
> containertokens_dob is 1
> 2016-01-19 20:04:16,325 INFO [AsyncDispatcher event handler]
> org.apache.hadoop.mapreduce.v2.app.job.impl.TaskAttemptImpl: Putting shuffle
> token in serviceData
> 2016-01-19 20:04:16,350 INFO [AsyncDispatcher event handler]
> org.apache.hadoop.mapreduce.v2.app.job.impl.TaskAttemptImpl:
> attempt_1453244277886_0001_m_000000_0 TaskAttempt Transitioned from
> UNASSIGNED to ASSIGNED
> 2016-01-19 20:04:16,354 INFO [ContainerLauncher #0]
> org.apache.hadoop.mapreduce.v2.app.launcher.ContainerLauncherImpl:
> Processing the event EventType: CONTAINER_REMOTE_LAUNCH for container
> container_1453244277886_0001_01_000002 taskAttempt
> attempt_1453244277886_0001_m_000000_0
> 2016-01-19 20:04:16,356 INFO [ContainerLauncher #0]
> org.apache.hadoop.mapreduce.v2.app.launcher.ContainerLauncherImpl: Launching
> attempt_1453244277886_0001_m_000000_0
> 2016-01-19 20:04:16,357 INFO [ContainerLauncher #0]
> org.apache.hadoop.yarn.client.api.impl.ContainerManagementProtocolProxy:
> Opening proxy : localhost:35711
> 2016-01-19 20:04:16,411 INFO [ContainerLauncher #0]
> org.apache.hadoop.mapreduce.v2.app.launcher.ContainerLauncherImpl: Shuffle
> port returned by ContainerManager for attempt_1453244277886_0001_m_000000_0
> : 13562
> 2016-01-19 20:04:16,413 INFO [AsyncDispatcher event handler]
> org.apache.hadoop.mapreduce.v2.app.job.impl.TaskAttemptImpl: TaskAttempt:
> [attempt_1453244277886_0001_m_000000_0] using containerId:
> [container_1453244277886_0001_01_000002 on NM: [localhost:35711]
> 2016-01-19 20:04:16,418 INFO [AsyncDispatcher event handler]
> org.apache.hadoop.mapreduce.v2.app.job.impl.TaskAttemptImpl:
> attempt_1453244277886_0001_m_000000_0 TaskAttempt Transitioned from ASSIGNED
> to RUNNING
> 2016-01-19 20:04:16,419 INFO [AsyncDispatcher event handler]
> org.apache.hadoop.mapreduce.v2.app.speculate.DefaultSpeculator:
> ATTEMPT_START task_1453244277886_0001_m_000000
> 2016-01-19 20:04:16,419 INFO [AsyncDispatcher event handler]
> org.apache.hadoop.mapreduce.v2.app.job.impl.TaskImpl:
> task_1453244277886_0001_m_000000 Task Transitioned from SCHEDULED to RUNNING
> 2016-01-19 20:04:17,251 INFO [RMCommunicator Allocator]
> org.apache.hadoop.mapreduce.v2.app.rm.RMContainerRequestor: getResources()
> for application_1453244277886_0001: ask=3 release= 0 newContainers=0
> finishedContainers=0 resourcelimit=<memory:768, vCores:-1> knownNMs=1
> 2016-01-19 20:04:18,269 INFO [RMCommunicator Allocator]
> org.apache.hadoop.mapreduce.v2.app.rm.RMContainerAllocator: Received
> completed container container_1453244277886_0001_01_000002
> 2016-01-19 20:04:18,269 INFO [RMCommunicator Allocator]
> org.apache.hadoop.mapreduce.v2.app.rm.RMContainerAllocator: Recalculating
> schedule, headroom=768
> 2016-01-19 20:04:18,269 INFO [RMCommunicator Allocator]
> org.apache.hadoop.mapreduce.v2.app.rm.RMContainerAllocator: Reduce slow
> start threshold not met. completedMapsForReduceSlowstart 1
> 2016-01-19 20:04:18,270 INFO [RMCommunicator Allocator]
> org.apache.hadoop.mapreduce.v2.app.rm.RMContainerAllocator: After
> Scheduling: PendingReds:1 ScheduledMaps:0 ScheduledReds:0 AssignedMaps:0
> AssignedReds:0 CompletedMaps:0 CompletedReds:0 ContAlloc:1 ContRel:0
> HostLocal:1 RackLocal:0
> 2016-01-19 20:04:18,280 INFO [AsyncDispatcher event handler]
> org.apache.hadoop.mapreduce.v2.app.job.impl.TaskAttemptImpl:
> attempt_1453244277886_0001_m_000000_0 TaskAttempt Transitioned from RUNNING
> to FAIL_CONTAINER_CLEANUP
> 2016-01-19 20:04:18,280 INFO [AsyncDispatcher event handler]
> org.apache.hadoop.mapreduce.v2.app.job.impl.TaskAttemptImpl: Diagnostics
> report from attempt_1453244277886_0001_m_000000_0: Exception from
> container-launch: org.apache.hadoop.util.Shell$ExitCodeException:
> org.apache.hadoop.util.Shell$ExitCodeException:
>     at org.apache.hadoop.util.Shell.runCommand(Shell.java:505)
>     at org.apache.hadoop.util.Shell.run(Shell.java:418)
>     at
> org.apache.hadoop.util.Shell$ShellCommandExecutor.execute(Shell.java:650)
>     at
> org.apache.hadoop.yarn.server.nodemanager.DefaultContainerExecutor.launchContainer(DefaultContainerExecutor.java:195)
>     at
> org.apache.hadoop.yarn.server.nodemanager.containermanager.launcher.ContainerLaunch.call(ContainerLaunch.java:300)
>     at
> org.apache.hadoop.yarn.server.nodemanager.containermanager.launcher.ContainerLaunch.call(ContainerLaunch.java:81)
>     at java.util.concurrent.FutureTask.run(FutureTask.java:262)
>     at
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
>     at
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
>     at java.lang.Thread.run(Thread.java:745)
>
>
> Container exited with a non-zero exit code 1
>
> 2016-01-19 20:04:18,281 INFO [ContainerLauncher #1]
> org.apache.hadoop.mapreduce.v2.app.launcher.ContainerLauncherImpl:
> Processing the event EventType: CONTAINER_REMOTE_CLEANUP for container
> container_1453244277886_0001_01_000002 taskAttempt
> attempt_1453244277886_0001_m_000000_0
> 2016-01-19 20:04:18,281 INFO [ContainerLauncher #1]
> org.apache.hadoop.mapreduce.v2.app.launcher.ContainerLauncherImpl: KILLING
> attempt_1453244277886_0001_m_000000_0
> 2016-01-19 20:04:18,299 INFO [AsyncDispatcher event handler]
> org.apache.hadoop.mapreduce.v2.app.job.impl.TaskAttemptImpl:
> attempt_1453244277886_0001_m_000000_0 TaskAttempt Transitioned from
> FAIL_CONTAINER_CLEANUP to FAIL_TASK_CLEANUP
> 2016-01-19 20:04:18,300 INFO [CommitterEvent Processor #1]
> org.apache.hadoop.mapreduce.v2.app.commit.CommitterEventHandler: Processing
> the event EventType: TASK_ABORT
> 2016-01-19 20:04:18,313 WARN [CommitterEvent Processor #1]
> org.apache.hadoop.mapreduce.lib.output.FileOutputCommitter: Could not delete
> hdfs://hdnode01:54310/user/hduser/output/wordcount/_temporary/1/_temporary/attempt_1453244277886_0001_m_000000_0
> 2016-01-19 20:04:18,317 INFO [AsyncDispatcher event handler]
> org.apache.hadoop.mapreduce.v2.app.job.impl.TaskAttemptImpl:
> attempt_1453244277886_0001_m_000000_0 TaskAttempt Transitioned from
> FAIL_TASK_CLEANUP to FAILED
> 2016-01-19 20:04:18,326 INFO [AsyncDispatcher event handler]
> org.apache.hadoop.yarn.util.RackResolver: Resolved localhost to
> /default-rack
> 2016-01-19 20:04:18,327 INFO [Thread-50]
> org.apache.hadoop.mapreduce.v2.app.rm.RMContainerRequestor: 1 failures on
> node localhost
> 2016-01-19 20:04:18,329 INFO [AsyncDispatcher event handler]
> org.apache.hadoop.mapreduce.v2.app.job.impl.TaskAttemptImpl:
> attempt_1453244277886_0001_m_000000_1 TaskAttempt Transitioned from NEW to
> UNASSIGNED
> 2016-01-19 20:04:18,329 INFO [Thread-50]
> org.apache.hadoop.mapreduce.v2.app.rm.RMContainerAllocator: Added
> attempt_1453244277886_0001_m_000000_1 to list of failed maps
> 2016-01-19 20:04:19,270 INFO [RMCommunicator Allocator]
> org.apache.hadoop.mapreduce.v2.app.rm.RMContainerAllocator: Before
> Scheduling: PendingReds:1 ScheduledMaps:1 ScheduledReds:0 AssignedMaps:0
> AssignedReds:0 CompletedMaps:0 CompletedReds:0 ContAlloc:1 ContRel:0
> HostLocal:1 RackLocal:0
> 2016-01-19 20:04:19,277 INFO [RMCommunicator Allocator]
> org.apache.hadoop.mapreduce.v2.app.rm.RMContainerRequestor: getResources()
> for application_1453244277886_0001: ask=1 release= 0 newContainers=0
> finishedContainers=0 resourcelimit=<memory:768, vCores:-1> knownNMs=1
> 2016-01-19 20:04:19,277 INFO [RMCommunicator Allocator]
> org.apache.hadoop.mapreduce.v2.app.rm.RMContainerAllocator: Recalculating
> schedule, headroom=768
> 2016-01-19 20:04:19,278 INFO [RMCommunicator Allocator]
> org.apache.hadoop.mapreduce.v2.app.rm.RMContainerAllocator: Reduce slow
> start threshold not met. completedMapsForReduceSlowstart 1
> 2016-01-19 20:04:20,285 INFO [RMCommunicator Allocator]
> org.apache.hadoop.mapreduce.v2.app.rm.RMContainerAllocator: Got allocated
> containers 1
> 2016-01-19 20:04:20,285 INFO [RMCommunicator Allocator]
> org.apache.hadoop.mapreduce.v2.app.rm.RMContainerAllocator: Assigning
> container Container: [ContainerId: container_1453244277886_0001_01_000003,
> NodeId: localhost:35711, NodeHttpAddress: localhost:8042, Resource:
> <memory:512, vCores:1>, Priority: 5, Token: Token { kind: ContainerToken,
> service: 127.0.0.1:35711 }, ] to fast fail map
> 2016-01-19 20:04:20,286 INFO [RMCommunicator Allocator]
> org.apache.hadoop.mapreduce.v2.app.rm.RMContainerAllocator: Assigned from
> earlierFailedMaps
> 2016-01-19 20:04:20,286 INFO [RMCommunicator Allocator]
> org.apache.hadoop.mapreduce.v2.app.rm.RMContainerAllocator: Assigned
> container container_1453244277886_0001_01_000003 to
> attempt_1453244277886_0001_m_000000_1
> 2016-01-19 20:04:20,287 INFO [RMCommunicator Allocator]
> org.apache.hadoop.mapreduce.v2.app.rm.RMContainerAllocator: Recalculating
> schedule, headroom=768
> 2016-01-19 20:04:20,287 INFO [RMCommunicator Allocator]
> org.apache.hadoop.mapreduce.v2.app.rm.RMContainerAllocator: Reduce slow
> start threshold not met. completedMapsForReduceSlowstart 1
> 2016-01-19 20:04:20,287 INFO [RMCommunicator Allocator]
> org.apache.hadoop.mapreduce.v2.app.rm.RMContainerAllocator: After
> Scheduling: PendingReds:1 ScheduledMaps:0 ScheduledReds:0 AssignedMaps:1
> AssignedReds:0 CompletedMaps:0 CompletedReds:0 ContAlloc:2 ContRel:0
> HostLocal:1 RackLocal:0
> 2016-01-19 20:04:20,287 INFO [AsyncDispatcher event handler]
> org.apache.hadoop.yarn.util.RackResolver: Resolved localhost to
> /default-rack
> 2016-01-19 20:04:20,289 INFO [AsyncDispatcher event handler]
> org.apache.hadoop.mapreduce.v2.app.job.impl.TaskAttemptImpl:
> attempt_1453244277886_0001_m_000000_1 TaskAttempt Transitioned from
> UNASSIGNED to ASSIGNED
> 2016-01-19 20:04:20,292 INFO [ContainerLauncher #2]
> org.apache.hadoop.mapreduce.v2.app.launcher.ContainerLauncherImpl:
> Processing the event EventType: CONTAINER_REMOTE_LAUNCH for container
> container_1453244277886_0001_01_000003 taskAttempt
> attempt_1453244277886_0001_m_000000_1
> 2016-01-19 20:04:20,292 INFO [ContainerLauncher #2]
> org.apache.hadoop.mapreduce.v2.app.launcher.ContainerLauncherImpl: Launching
> attempt_1453244277886_0001_m_000000_1
> 2016-01-19 20:04:20,301 INFO [ContainerLauncher #2]
> org.apache.hadoop.mapreduce.v2.app.launcher.ContainerLauncherImpl: Shuffle
> port returned by ContainerManager for attempt_1453244277886_0001_m_000000_1
> : 13562
> 2016-01-19 20:04:20,302 INFO [AsyncDispatcher event handler]
> org.apache.hadoop.mapreduce.v2.app.job.impl.TaskAttemptImpl: TaskAttempt:
> [attempt_1453244277886_0001_m_000000_1] using containerId:
> [container_1453244277886_0001_01_000003 on NM: [localhost:35711]
> 2016-01-19 20:04:20,303 INFO [AsyncDispatcher event handler]
> org.apache.hadoop.mapreduce.v2.app.job.impl.TaskAttemptImpl:
> attempt_1453244277886_0001_m_000000_1 TaskAttempt Transitioned from ASSIGNED
> to RUNNING
> 2016-01-19 20:04:20,304 INFO [AsyncDispatcher event handler]
> org.apache.hadoop.mapreduce.v2.app.speculate.DefaultSpeculator:
> ATTEMPT_START task_1453244277886_0001_m_000000
> 2016-01-19 20:04:21,295 INFO [RMCommunicator Allocator]
> org.apache.hadoop.mapreduce.v2.app.rm.RMContainerRequestor: getResources()
> for application_1453244277886_0001: ask=1 release= 0 newContainers=0
> finishedContainers=1 resourcelimit=<memory:768, vCores:-1> knownNMs=1
> 2016-01-19 20:04:21,296 INFO [RMCommunicator Allocator]
> org.apache.hadoop.mapreduce.v2.app.rm.RMContainerAllocator: Received
> completed container container_1453244277886_0001_01_000003
> 2016-01-19 20:04:21,296 INFO [RMCommunicator Allocator]
> org.apache.hadoop.mapreduce.v2.app.rm.RMContainerAllocator: Recalculating
> schedule, headroom=768
> 2016-01-19 20:04:21,296 INFO [RMCommunicator Allocator]
> org.apache.hadoop.mapreduce.v2.app.rm.RMContainerAllocator: Reduce slow
> start threshold not met. completedMapsForReduceSlowstart 1
> 2016-01-19 20:04:21,297 INFO [RMCommunicator Allocator]
> org.apache.hadoop.mapreduce.v2.app.rm.RMContainerAllocator: After
> Scheduling: PendingReds:1 ScheduledMaps:0 ScheduledReds:0 AssignedMaps:0
> AssignedReds:0 CompletedMaps:0 CompletedReds:0 ContAlloc:2 ContRel:0
> HostLocal:1 RackLocal:0
> 2016-01-19 20:04:21,297 INFO [AsyncDispatcher event handler]
> org.apache.hadoop.mapreduce.v2.app.job.impl.TaskAttemptImpl:
> attempt_1453244277886_0001_m_000000_1 TaskAttempt Transitioned from RUNNING
> to FAIL_CONTAINER_CLEANUP
> 2016-01-19 20:04:21,298 INFO [AsyncDispatcher event handler]
> org.apache.hadoop.mapreduce.v2.app.job.impl.TaskAttemptImpl: Diagnostics
> report from attempt_1453244277886_0001_m_000000_1: Exception from
> container-launch: org.apache.hadoop.util.Shell$ExitCodeException:
> org.apache.hadoop.util.Shell$ExitCodeException:
>     at org.apache.hadoop.util.Shell.runCommand(Shell.java:505)
>     at org.apache.hadoop.util.Shell.run(Shell.java:418)
>     at
> org.apache.hadoop.util.Shell$ShellCommandExecutor.execute(Shell.java:650)
>     at
> org.apache.hadoop.yarn.server.nodemanager.DefaultContainerExecutor.launchContainer(DefaultContainerExecutor.java:195)
>     at
> org.apache.hadoop.yarn.server.nodemanager.containermanager.launcher.ContainerLaunch.call(ContainerLaunch.java:300)
>     at
> org.apache.hadoop.yarn.server.nodemanager.containermanager.launcher.ContainerLaunch.call(ContainerLaunch.java:81)
>     at java.util.concurrent.FutureTask.run(FutureTask.java:262)
>     at
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
>     at
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
>     at java.lang.Thread.run(Thread.java:745)
>
>
> Container exited with a non-zero exit code 1
>
> 2016-01-19 20:04:21,300 INFO [ContainerLauncher #3]
> org.apache.hadoop.mapreduce.v2.app.launcher.ContainerLauncherImpl:
> Processing the event EventType: CONTAINER_REMOTE_CLEANUP for container
> container_1453244277886_0001_01_000003 taskAttempt
> attempt_1453244277886_0001_m_000000_1
> 2016-01-19 20:04:21,301 INFO [ContainerLauncher #3]
> org.apache.hadoop.mapreduce.v2.app.launcher.ContainerLauncherImpl: KILLING
> attempt_1453244277886_0001_m_000000_1
> 2016-01-19 20:04:21,307 INFO [AsyncDispatcher event handler]
> org.apache.hadoop.mapreduce.v2.app.job.impl.TaskAttemptImpl:
> attempt_1453244277886_0001_m_000000_1 TaskAttempt Transitioned from
> FAIL_CONTAINER_CLEANUP to FAIL_TASK_CLEANUP
> 2016-01-19 20:04:21,308 INFO [CommitterEvent Processor #2]
> org.apache.hadoop.mapreduce.v2.app.commit.CommitterEventHandler: Processing
> the event EventType: TASK_ABORT
> 2016-01-19 20:04:21,312 WARN [CommitterEvent Processor #2]
> org.apache.hadoop.mapreduce.lib.output.FileOutputCommitter: Could not delete
> hdfs://hdnode01:54310/user/hduser/output/wordcount/_temporary/1/_temporary/attempt_1453244277886_0001_m_000000_1
> 2016-01-19 20:04:21,312 INFO [AsyncDispatcher event handler]
> org.apache.hadoop.mapreduce.v2.app.job.impl.TaskAttemptImpl:
> attempt_1453244277886_0001_m_000000_1 TaskAttempt Transitioned from
> FAIL_TASK_CLEANUP to FAILED
> 2016-01-19 20:04:21,313 INFO [AsyncDispatcher event handler]
> org.apache.hadoop.yarn.util.RackResolver: Resolved localhost to
> /default-rack
> 2016-01-19 20:04:21,313 INFO [Thread-50]
> org.apache.hadoop.mapreduce.v2.app.rm.RMContainerRequestor: 2 failures on
> node localhost
> 2016-01-19 20:04:21,313 INFO [AsyncDispatcher event handler]
> org.apache.hadoop.mapreduce.v2.app.job.impl.TaskAttemptImpl:
> attempt_1453244277886_0001_m_000000_2 TaskAttempt Transitioned from NEW to
> UNASSIGNED
> 2016-01-19 20:04:21,314 INFO [Thread-50]
> org.apache.hadoop.mapreduce.v2.app.rm.RMContainerAllocator: Added
> attempt_1453244277886_0001_m_000000_2 to list of failed maps
> 2016-01-19 20:04:22,297 INFO [RMCommunicator Allocator]
> org.apache.hadoop.mapreduce.v2.app.rm.RMContainerAllocator: Before
> Scheduling: PendingReds:1 ScheduledMaps:1 ScheduledReds:0 AssignedMaps:0
> AssignedReds:0 CompletedMaps:0 CompletedReds:0 ContAlloc:2 ContRel:0
> HostLocal:1 RackLocal:0
> 2016-01-19 20:04:22,304 INFO [RMCommunicator Allocator]
> org.apache.hadoop.mapreduce.v2.app.rm.RMContainerRequestor: getResources()
> for application_1453244277886_0001: ask=1 release= 0 newContainers=0
> finishedContainers=0 resourcelimit=<memory:768, vCores:-1> knownNMs=1
> 2016-01-19 20:04:22,305 INFO [RMCommunicator Allocator]
> org.apache.hadoop.mapreduce.v2.app.rm.RMContainerAllocator: Recalculating
> schedule, headroom=768
> 2016-01-19 20:04:22,305 INFO [RMCommunicator Allocator]
> org.apache.hadoop.mapreduce.v2.app.rm.RMContainerAllocator: Reduce slow
> start threshold not met. completedMapsForReduceSlowstart 1
> 2016-01-19 20:04:23,316 INFO [RMCommunicator Allocator]
> org.apache.hadoop.mapreduce.v2.app.rm.RMContainerAllocator: Got allocated
> containers 1
> 2016-01-19 20:04:23,316 INFO [RMCommunicator Allocator]
> org.apache.hadoop.mapreduce.v2.app.rm.RMContainerAllocator: Assigning
> container Container: [ContainerId: container_1453244277886_0001_01_000004,
> NodeId: localhost:35711, NodeHttpAddress: localhost:8042, Resource:
> <memory:512, vCores:1>, Priority: 5, Token: Token { kind: ContainerToken,
> service: 127.0.0.1:35711 }, ] to fast fail map
> 2016-01-19 20:04:23,317 INFO [RMCommunicator Allocator]
> org.apache.hadoop.mapreduce.v2.app.rm.RMContainerAllocator: Assigned from
> earlierFailedMaps
> 2016-01-19 20:04:23,317 INFO [RMCommunicator Allocator]
> org.apache.hadoop.mapreduce.v2.app.rm.RMContainerAllocator: Assigned
> container container_1453244277886_0001_01_000004 to
> attempt_1453244277886_0001_m_000000_2
> 2016-01-19 20:04:23,318 INFO [RMCommunicator Allocator]
> org.apache.hadoop.mapreduce.v2.app.rm.RMContainerAllocator: Recalculating
> schedule, headroom=768
> 2016-01-19 20:04:23,318 INFO [RMCommunicator Allocator]
> org.apache.hadoop.mapreduce.v2.app.rm.RMContainerAllocator: Reduce slow
> start threshold not met. completedMapsForReduceSlowstart 1
> 2016-01-19 20:04:23,318 INFO [RMCommunicator Allocator]
> org.apache.hadoop.mapreduce.v2.app.rm.RMContainerAllocator: After
> Scheduling: PendingReds:1 ScheduledMaps:0 ScheduledReds:0 AssignedMaps:1
> AssignedReds:0 CompletedMaps:0 CompletedReds:0 ContAlloc:3 ContRel:0
> HostLocal:1 RackLocal:0
> 2016-01-19 20:04:23,318 INFO [AsyncDispatcher event handler]
> org.apache.hadoop.yarn.util.RackResolver: Resolved localhost to
> /default-rack
> 2016-01-19 20:04:23,320 INFO [AsyncDispatcher event handler]
> org.apache.hadoop.mapreduce.v2.app.job.impl.TaskAttemptImpl:
> attempt_1453244277886_0001_m_000000_2 TaskAttempt Transitioned from
> UNASSIGNED to ASSIGNED
> 2016-01-19 20:04:23,323 INFO [ContainerLauncher #4]
> org.apache.hadoop.mapreduce.v2.app.launcher.ContainerLauncherImpl:
> Processing the event EventType: CONTAINER_REMOTE_LAUNCH for container
> container_1453244277886_0001_01_000004 taskAttempt
> attempt_1453244277886_0001_m_000000_2
> 2016-01-19 20:04:23,323 INFO [ContainerLauncher #4]
> org.apache.hadoop.mapreduce.v2.app.launcher.ContainerLauncherImpl: Launching
> attempt_1453244277886_0001_m_000000_2
> 2016-01-19 20:04:23,334 INFO [ContainerLauncher #4]
> org.apache.hadoop.mapreduce.v2.app.launcher.ContainerLauncherImpl: Shuffle
> port returned by ContainerManager for attempt_1453244277886_0001_m_000000_2
> : 13562
> 2016-01-19 20:04:23,335 INFO [AsyncDispatcher event handler]
> org.apache.hadoop.mapreduce.v2.app.job.impl.TaskAttemptImpl: TaskAttempt:
> [attempt_1453244277886_0001_m_000000_2] using containerId:
> [container_1453244277886_0001_01_000004 on NM: [localhost:35711]
> 2016-01-19 20:04:23,335 INFO [AsyncDispatcher event handler]
> org.apache.hadoop.mapreduce.v2.app.job.impl.TaskAttemptImpl:
> attempt_1453244277886_0001_m_000000_2 TaskAttempt Transitioned from ASSIGNED
> to RUNNING
> 2016-01-19 20:04:23,335 INFO [AsyncDispatcher event handler]
> org.apache.hadoop.mapreduce.v2.app.speculate.DefaultSpeculator:
> ATTEMPT_START task_1453244277886_0001_m_000000
> 2016-01-19 20:04:24,326 INFO [RMCommunicator Allocator]
> org.apache.hadoop.mapreduce.v2.app.rm.RMContainerRequestor: getResources()
> for application_1453244277886_0001: ask=1 release= 0 newContainers=0
> finishedContainers=1 resourcelimit=<memory:768, vCores:-1> knownNMs=1
> 2016-01-19 20:04:24,327 INFO [RMCommunicator Allocator]
> org.apache.hadoop.mapreduce.v2.app.rm.RMContainerAllocator: Received
> completed container container_1453244277886_0001_01_000004
> 2016-01-19 20:04:24,327 INFO [RMCommunicator Allocator]
> org.apache.hadoop.mapreduce.v2.app.rm.RMContainerAllocator: Recalculating
> schedule, headroom=768
> 2016-01-19 20:04:24,327 INFO [RMCommunicator Allocator]
> org.apache.hadoop.mapreduce.v2.app.rm.RMContainerAllocator: Reduce slow
> start threshold not met. completedMapsForReduceSlowstart 1
> 2016-01-19 20:04:24,328 INFO [RMCommunicator Allocator]
> org.apache.hadoop.mapreduce.v2.app.rm.RMContainerAllocator: After
> Scheduling: PendingReds:1 ScheduledMaps:0 ScheduledReds:0 AssignedMaps:0
> AssignedReds:0 CompletedMaps:0 CompletedReds:0 ContAlloc:3 ContRel:0
> HostLocal:1 RackLocal:0
> 2016-01-19 20:04:24,328 INFO [AsyncDispatcher event handler]
> org.apache.hadoop.mapreduce.v2.app.job.impl.TaskAttemptImpl:
> attempt_1453244277886_0001_m_000000_2 TaskAttempt Transitioned from RUNNING
> to FAIL_CONTAINER_CLEANUP
> 2016-01-19 20:04:24,328 INFO [AsyncDispatcher event handler]
> org.apache.hadoop.mapreduce.v2.app.job.impl.TaskAttemptImpl: Diagnostics
> report from attempt_1453244277886_0001_m_000000_2: Exception from
> container-launch: org.apache.hadoop.util.Shell$ExitCodeException:
> org.apache.hadoop.util.Shell$ExitCodeException:
>     at org.apache.hadoop.util.Shell.runCommand(Shell.java:505)
>     at org.apache.hadoop.util.Shell.run(Shell.java:418)
>     at
> org.apache.hadoop.util.Shell$ShellCommandExecutor.execute(Shell.java:650)
>     at
> org.apache.hadoop.yarn.server.nodemanager.DefaultContainerExecutor.launchContainer(DefaultContainerExecutor.java:195)
>     at
> org.apache.hadoop.yarn.server.nodemanager.containermanager.launcher.ContainerLaunch.call(ContainerLaunch.java:300)
>     at
> org.apache.hadoop.yarn.server.nodemanager.containermanager.launcher.ContainerLaunch.call(ContainerLaunch.java:81)
>     at java.util.concurrent.FutureTask.run(FutureTask.java:262)
>     at
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
>     at
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
>     at java.lang.Thread.run(Thread.java:745)
>
>
> Container exited with a non-zero exit code 1
>
> 2016-01-19 20:04:24,331 INFO [ContainerLauncher #5]
> org.apache.hadoop.mapreduce.v2.app.launcher.ContainerLauncherImpl:
> Processing the event EventType: CONTAINER_REMOTE_CLEANUP for container
> container_1453244277886_0001_01_000004 taskAttempt
> attempt_1453244277886_0001_m_000000_2
> 2016-01-19 20:04:24,331 INFO [ContainerLauncher #5]
> org.apache.hadoop.mapreduce.v2.app.launcher.ContainerLauncherImpl: KILLING
> attempt_1453244277886_0001_m_000000_2
> 2016-01-19 20:04:24,335 INFO [AsyncDispatcher event handler]
> org.apache.hadoop.mapreduce.v2.app.job.impl.TaskAttemptImpl:
> attempt_1453244277886_0001_m_000000_2 TaskAttempt Transitioned from
> FAIL_CONTAINER_CLEANUP to FAIL_TASK_CLEANUP
> 2016-01-19 20:04:24,336 INFO [CommitterEvent Processor #3]
> org.apache.hadoop.mapreduce.v2.app.commit.CommitterEventHandler: Processing
> the event EventType: TASK_ABORT
> 2016-01-19 20:04:24,340 WARN [CommitterEvent Processor #3]
> org.apache.hadoop.mapreduce.lib.output.FileOutputCommitter: Could not delete
> hdfs://hdnode01:54310/user/hduser/output/wordcount/_temporary/1/_temporary/attempt_1453244277886_0001_m_000000_2
> 2016-01-19 20:04:24,341 INFO [AsyncDispatcher event handler]
> org.apache.hadoop.mapreduce.v2.app.job.impl.TaskAttemptImpl:
> attempt_1453244277886_0001_m_000000_2 TaskAttempt Transitioned from
> FAIL_TASK_CLEANUP to FAILED
> 2016-01-19 20:04:24,341 INFO [AsyncDispatcher event handler]
> org.apache.hadoop.yarn.util.RackResolver: Resolved localhost to
> /default-rack
> 2016-01-19 20:04:24,342 INFO [Thread-50]
> org.apache.hadoop.mapreduce.v2.app.rm.RMContainerRequestor: 3 failures on
> node localhost
> 2016-01-19 20:04:24,342 INFO [Thread-50]
> org.apache.hadoop.mapreduce.v2.app.rm.RMContainerRequestor: Blacklisted host
> localhost
> 2016-01-19 20:04:24,342 INFO [AsyncDispatcher event handler]
> org.apache.hadoop.mapreduce.v2.app.job.impl.TaskAttemptImpl:
> attempt_1453244277886_0001_m_000000_3 TaskAttempt Transitioned from NEW to
> UNASSIGNED
> 2016-01-19 20:04:24,343 INFO [Thread-50]
> org.apache.hadoop.mapreduce.v2.app.rm.RMContainerAllocator: Added
> attempt_1453244277886_0001_m_000000_3 to list of failed maps
> 2016-01-19 20:04:25,328 INFO [RMCommunicator Allocator]
> org.apache.hadoop.mapreduce.v2.app.rm.RMContainerAllocator: Before
> Scheduling: PendingReds:1 ScheduledMaps:1 ScheduledReds:0 AssignedMaps:0
> AssignedReds:0 CompletedMaps:0 CompletedReds:0 ContAlloc:3 ContRel:0
> HostLocal:1 RackLocal:0
> 2016-01-19 20:04:25,336 INFO [RMCommunicator Allocator]
> org.apache.hadoop.mapreduce.v2.app.rm.RMContainerRequestor: getResources()
> for application_1453244277886_0001: ask=1 release= 0 newContainers=0
> finishedContainers=0 resourcelimit=<memory:768, vCores:-1> knownNMs=1
> 2016-01-19 20:04:25,336 INFO [RMCommunicator Allocator]
> org.apache.hadoop.mapreduce.v2.app.rm.RMContainerRequestor: Update the
> blacklist for application_1453244277886_0001: blacklistAdditions=1
> blacklistRemovals=0
> 2016-01-19 20:04:25,337 INFO [RMCommunicator Allocator]
> org.apache.hadoop.mapreduce.v2.app.rm.RMContainerRequestor: Ignore
> blacklisting set to true. Known: 1, Blacklisted: 1, 100%
> 2016-01-19 20:04:25,337 INFO [RMCommunicator Allocator]
> org.apache.hadoop.mapreduce.v2.app.rm.RMContainerAllocator: Recalculating
> schedule, headroom=768
> 2016-01-19 20:04:25,337 INFO [RMCommunicator Allocator]
> org.apache.hadoop.mapreduce.v2.app.rm.RMContainerAllocator: Reduce slow
> start threshold not met. completedMapsForReduceSlowstart 1
> 2016-01-19 20:04:26,342 INFO [RMCommunicator Allocator]
> org.apache.hadoop.mapreduce.v2.app.rm.RMContainerRequestor: Update the
> blacklist for application_1453244277886_0001: blacklistAdditions=0
> blacklistRemovals=1
> 2016-01-19 20:04:26,342 INFO [RMCommunicator Allocator]
> org.apache.hadoop.mapreduce.v2.app.rm.RMContainerAllocator: Recalculating
> schedule, headroom=768
> 2016-01-19 20:04:26,342 INFO [RMCommunicator Allocator]
> org.apache.hadoop.mapreduce.v2.app.rm.RMContainerAllocator: Reduce slow
> start threshold not met. completedMapsForReduceSlowstart 1
> 2016-01-19 20:04:27,351 INFO [RMCommunicator Allocator]
> org.apache.hadoop.mapreduce.v2.app.rm.RMContainerAllocator: Got allocated
> containers 1
> 2016-01-19 20:04:27,352 INFO [RMCommunicator Allocator]
> org.apache.hadoop.mapreduce.v2.app.rm.RMContainerAllocator: Assigning
> container Container: [ContainerId: container_1453244277886_0001_01_000005,
> NodeId: localhost:35711, NodeHttpAddress: localhost:8042, Resource:
> <memory:512, vCores:1>, Priority: 5, Token: Token { kind: ContainerToken,
> service: 127.0.0.1:35711 }, ] to fast fail map
> 2016-01-19 20:04:27,352 INFO [RMCommunicator Allocator]
> org.apache.hadoop.mapreduce.v2.app.rm.RMContainerAllocator: Assigned from
> earlierFailedMaps
> 2016-01-19 20:04:27,352 INFO [RMCommunicator Allocator]
> org.apache.hadoop.mapreduce.v2.app.rm.RMContainerAllocator: Assigned
> container container_1453244277886_0001_01_000005 to
> attempt_1453244277886_0001_m_000000_3
> 2016-01-19 20:04:27,353 INFO [RMCommunicator Allocator]
> org.apache.hadoop.mapreduce.v2.app.rm.RMContainerAllocator: Recalculating
> schedule, headroom=768
> 2016-01-19 20:04:27,353 INFO [RMCommunicator Allocator]
> org.apache.hadoop.mapreduce.v2.app.rm.RMContainerAllocator: Reduce slow
> start threshold not met. completedMapsForReduceSlowstart 1
> 2016-01-19 20:04:27,353 INFO [AsyncDispatcher event handler]
> org.apache.hadoop.yarn.util.RackResolver: Resolved localhost to
> /default-rack
> 2016-01-19 20:04:27,353 INFO [RMCommunicator Allocator]
> org.apache.hadoop.mapreduce.v2.app.rm.RMContainerAllocator: After
> Scheduling: PendingReds:1 ScheduledMaps:0 ScheduledReds:0 AssignedMaps:1
> AssignedReds:0 CompletedMaps:0 CompletedReds:0 ContAlloc:4 ContRel:0
> HostLocal:1 RackLocal:0
> 2016-01-19 20:04:27,354 INFO [AsyncDispatcher event handler]
> org.apache.hadoop.mapreduce.v2.app.job.impl.TaskAttemptImpl:
> attempt_1453244277886_0001_m_000000_3 TaskAttempt Transitioned from
> UNASSIGNED to ASSIGNED
> 2016-01-19 20:04:27,355 INFO [ContainerLauncher #6]
> org.apache.hadoop.mapreduce.v2.app.launcher.ContainerLauncherImpl:
> Processing the event EventType: CONTAINER_REMOTE_LAUNCH for container
> container_1453244277886_0001_01_000005 taskAttempt
> attempt_1453244277886_0001_m_000000_3
> 2016-01-19 20:04:27,355 INFO [ContainerLauncher #6]
> org.apache.hadoop.mapreduce.v2.app.launcher.ContainerLauncherImpl: Launching
> attempt_1453244277886_0001_m_000000_3
> 2016-01-19 20:04:27,365 INFO [ContainerLauncher #6]
> org.apache.hadoop.mapreduce.v2.app.launcher.ContainerLauncherImpl: Shuffle
> port returned by ContainerManager for attempt_1453244277886_0001_m_000000_3
> : 13562
> 2016-01-19 20:04:27,365 INFO [AsyncDispatcher event handler]
> org.apache.hadoop.mapreduce.v2.app.job.impl.TaskAttemptImpl: TaskAttempt:
> [attempt_1453244277886_0001_m_000000_3] using containerId:
> [container_1453244277886_0001_01_000005 on NM: [localhost:35711]
> 2016-01-19 20:04:27,366 INFO [AsyncDispatcher event handler]
> org.apache.hadoop.mapreduce.v2.app.job.impl.TaskAttemptImpl:
> attempt_1453244277886_0001_m_000000_3 TaskAttempt Transitioned from ASSIGNED
> to RUNNING
> 2016-01-19 20:04:27,366 INFO [AsyncDispatcher event handler]
> org.apache.hadoop.mapreduce.v2.app.speculate.DefaultSpeculator:
> ATTEMPT_START task_1453244277886_0001_m_000000
> 2016-01-19 20:04:28,361 INFO [RMCommunicator Allocator]
> org.apache.hadoop.mapreduce.v2.app.rm.RMContainerRequestor: getResources()
> for application_1453244277886_0001: ask=1 release= 0 newContainers=0
> finishedContainers=1 resourcelimit=<memory:768, vCores:-1> knownNMs=1
> 2016-01-19 20:04:28,361 INFO [RMCommunicator Allocator]
> org.apache.hadoop.mapreduce.v2.app.rm.RMContainerAllocator: Received
> completed container container_1453244277886_0001_01_000005
> 2016-01-19 20:04:28,361 INFO [RMCommunicator Allocator]
> org.apache.hadoop.mapreduce.v2.app.rm.RMContainerAllocator: Recalculating
> schedule, headroom=768
> 2016-01-19 20:04:28,361 INFO [RMCommunicator Allocator]
> org.apache.hadoop.mapreduce.v2.app.rm.RMContainerAllocator: Reduce slow
> start threshold not met. completedMapsForReduceSlowstart 1
> 2016-01-19 20:04:28,362 INFO [RMCommunicator Allocator]
> org.apache.hadoop.mapreduce.v2.app.rm.RMContainerAllocator: After
> Scheduling: PendingReds:1 ScheduledMaps:0 ScheduledReds:0 AssignedMaps:0
> AssignedReds:0 CompletedMaps:0 CompletedReds:0 ContAlloc:4 ContRel:0
> HostLocal:1 RackLocal:0
> 2016-01-19 20:04:28,362 INFO [AsyncDispatcher event handler]
> org.apache.hadoop.mapreduce.v2.app.job.impl.TaskAttemptImpl:
> attempt_1453244277886_0001_m_000000_3 TaskAttempt Transitioned from RUNNING
> to FAIL_CONTAINER_CLEANUP
> 2016-01-19 20:04:28,362 INFO [AsyncDispatcher event handler]
> org.apache.hadoop.mapreduce.v2.app.job.impl.TaskAttemptImpl: Diagnostics
> report from attempt_1453244277886_0001_m_000000_3: Exception from
> container-launch: org.apache.hadoop.util.Shell$ExitCodeException:
> org.apache.hadoop.util.Shell$ExitCodeException:
>     at org.apache.hadoop.util.Shell.runCommand(Shell.java:505)
>     at org.apache.hadoop.util.Shell.run(Shell.java:418)
>     at
> org.apache.hadoop.util.Shell$ShellCommandExecutor.execute(Shell.java:650)
>     at
> org.apache.hadoop.yarn.server.nodemanager.DefaultContainerExecutor.launchContainer(DefaultContainerExecutor.java:195)
>     at
> org.apache.hadoop.yarn.server.nodemanager.containermanager.launcher.ContainerLaunch.call(ContainerLaunch.java:300)
>     at
> org.apache.hadoop.yarn.server.nodemanager.containermanager.launcher.ContainerLaunch.call(ContainerLaunch.java:81)
>     at java.util.concurrent.FutureTask.run(FutureTask.java:262)
>     at
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
>     at
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
>     at java.lang.Thread.run(Thread.java:745)
>
>
> Container exited with a non-zero exit code 1
>
> 2016-01-19 20:04:28,364 INFO [ContainerLauncher #7]
> org.apache.hadoop.mapreduce.v2.app.launcher.ContainerLauncherImpl:
> Processing the event EventType: CONTAINER_REMOTE_CLEANUP for container
> container_1453244277886_0001_01_000005 taskAttempt
> attempt_1453244277886_0001_m_000000_3
> 2016-01-19 20:04:28,365 INFO [ContainerLauncher #7]
> org.apache.hadoop.mapreduce.v2.app.launcher.ContainerLauncherImpl: KILLING
> attempt_1453244277886_0001_m_000000_3
> 2016-01-19 20:04:28,373 INFO [AsyncDispatcher event handler]
> org.apache.hadoop.mapreduce.v2.app.job.impl.TaskAttemptImpl:
> attempt_1453244277886_0001_m_000000_3 TaskAttempt Transitioned from
> FAIL_CONTAINER_CLEANUP to FAIL_TASK_CLEANUP
> 2016-01-19 20:04:28,374 INFO [CommitterEvent Processor #4]
> org.apache.hadoop.mapreduce.v2.app.commit.CommitterEventHandler: Processing
> the event EventType: TASK_ABORT
> 2016-01-19 20:04:28,377 WARN [CommitterEvent Processor #4]
> org.apache.hadoop.mapreduce.lib.output.FileOutputCommitter: Could not delete
> hdfs://hdnode01:54310/user/hduser/output/wordcount/_temporary/1/_temporary/attempt_1453244277886_0001_m_000000_3
> 2016-01-19 20:04:28,378 INFO [AsyncDispatcher event handler]
> org.apache.hadoop.mapreduce.v2.app.job.impl.TaskAttemptImpl:
> attempt_1453244277886_0001_m_000000_3 TaskAttempt Transitioned from
> FAIL_TASK_CLEANUP to FAILED
> 2016-01-19 20:04:28,380 INFO [AsyncDispatcher event handler]
> org.apache.hadoop.mapreduce.v2.app.job.impl.TaskImpl:
> task_1453244277886_0001_m_000000 Task Transitioned from RUNNING to FAILED
> 2016-01-19 20:04:28,381 INFO [AsyncDispatcher event handler]
> org.apache.hadoop.mapreduce.v2.app.job.impl.JobImpl: Num completed Tasks: 1
> 2016-01-19 20:04:28,381 INFO [AsyncDispatcher event handler]
> org.apache.hadoop.mapreduce.v2.app.job.impl.JobImpl: Job failed as tasks
> failed. failedMaps:1 failedReduces:0
> 2016-01-19 20:04:28,382 INFO [AsyncDispatcher event handler]
> org.apache.hadoop.mapreduce.v2.app.job.impl.JobImpl:
> job_1453244277886_0001Job Transitioned from RUNNING to FAIL_WAIT
> 2016-01-19 20:04:28,383 INFO [AsyncDispatcher event handler]
> org.apache.hadoop.mapreduce.v2.app.job.impl.TaskImpl:
> task_1453244277886_0001_r_000000 Task Transitioned from SCHEDULED to
> KILL_WAIT
> 2016-01-19 20:04:28,383 INFO [AsyncDispatcher event handler]
> org.apache.hadoop.mapreduce.v2.app.job.impl.TaskAttemptImpl:
> attempt_1453244277886_0001_r_000000_0 TaskAttempt Transitioned from
> UNASSIGNED to KILLED
> 2016-01-19 20:04:28,383 INFO [Thread-50]
> org.apache.hadoop.mapreduce.v2.app.rm.RMContainerAllocator: Processing the
> event EventType: CONTAINER_DEALLOCATE
> 2016-01-19 20:04:28,383 ERROR [Thread-50]
> org.apache.hadoop.mapreduce.v2.app.rm.RMContainerAllocator: Could not
> deallocate container for task attemptId
> attempt_1453244277886_0001_r_000000_0
> 2016-01-19 20:04:28,383 INFO [AsyncDispatcher event handler]
> org.apache.hadoop.mapreduce.v2.app.job.impl.TaskImpl:
> task_1453244277886_0001_r_000000 Task Transitioned from KILL_WAIT to KILLED
> 2016-01-19 20:04:28,384 INFO [AsyncDispatcher event handler]
> org.apache.hadoop.mapreduce.v2.app.job.impl.JobImpl:
> job_1453244277886_0001Job Transitioned from FAIL_WAIT to FAIL_ABORT
> 2016-01-19 20:04:28,390 INFO [CommitterEvent Processor #0]
> org.apache.hadoop.mapreduce.v2.app.commit.CommitterEventHandler: Processing
> the event EventType: JOB_ABORT
> 2016-01-19 20:04:28,432 INFO [AsyncDispatcher event handler]
> org.apache.hadoop.mapreduce.v2.app.job.impl.JobImpl:
> job_1453244277886_0001Job Transitioned from FAIL_ABORT to FAILED
> 2016-01-19 20:04:28,433 INFO [Thread-61]
> org.apache.hadoop.mapreduce.v2.app.MRAppMaster: We are finishing cleanly so
> this is the last retry
> 2016-01-19 20:04:28,433 INFO [Thread-61]
> org.apache.hadoop.mapreduce.v2.app.MRAppMaster: Notify RMCommunicator
> isAMLastRetry: true
> 2016-01-19 20:04:28,433 INFO [Thread-61]
> org.apache.hadoop.mapreduce.v2.app.rm.RMContainerAllocator: RMCommunicator
> notified that shouldUnregistered is: true
> 2016-01-19 20:04:28,433 INFO [Thread-61]
> org.apache.hadoop.mapreduce.v2.app.MRAppMaster: Notify JHEH isAMLastRetry:
> true
> 2016-01-19 20:04:28,433 INFO [Thread-61]
> org.apache.hadoop.mapreduce.jobhistory.JobHistoryEventHandler:
> JobHistoryEventHandler notified that forceJobCompletion is true
> 2016-01-19 20:04:28,434 INFO [Thread-61]
> org.apache.hadoop.mapreduce.v2.app.MRAppMaster: Calling stop for all the
> services
> 2016-01-19 20:04:28,435 INFO [Thread-61]
> org.apache.hadoop.mapreduce.jobhistory.JobHistoryEventHandler: Stopping
> JobHistoryEventHandler. Size of the outstanding queue size is 0
> 2016-01-19 20:04:29,362 INFO [RMCommunicator Allocator]
> org.apache.hadoop.mapreduce.v2.app.rm.RMContainerAllocator: Before
> Scheduling: PendingReds:1 ScheduledMaps:0 ScheduledReds:0 AssignedMaps:0
> AssignedReds:0 CompletedMaps:1 CompletedReds:0 ContAlloc:4 ContRel:0
> HostLocal:1 RackLocal:0
> 2016-01-19 20:04:29,362 INFO [eventHandlingThread]
> org.apache.hadoop.mapreduce.jobhistory.JobHistoryEventHandler: Copying
> hdfs://hdnode01:54310/tmp/hadoop-yarn/staging/hduser/.staging/job_1453244277886_0001/job_1453244277886_0001_1.jhist
> to
> hdfs://hdnode01:54310/tmp/hadoop-yarn/staging/history/done_intermediate/hduser/job_1453244277886_0001-1453244648033-hduser-word+count-1453244668381-0-0-FAILED-default-1453244654166.jhist_tmp
> 2016-01-19 20:04:29,366 INFO [RMCommunicator Allocator]
> org.apache.hadoop.mapreduce.v2.app.rm.RMContainerAllocator: Recalculating
> schedule, headroom=768
> 2016-01-19 20:04:29,366 INFO [RMCommunicator Allocator]
> org.apache.hadoop.mapreduce.v2.app.rm.RMContainerAllocator: Reduce slow
> start threshold reached. Scheduling reduces.
> 2016-01-19 20:04:29,366 INFO [RMCommunicator Allocator]
> org.apache.hadoop.mapreduce.v2.app.rm.RMContainerAllocator: All maps
> assigned. Ramping up all remaining reduces:1
> 2016-01-19 20:04:29,366 INFO [RMCommunicator Allocator]
> org.apache.hadoop.mapreduce.v2.app.rm.RMContainerAllocator: After
> Scheduling: PendingReds:0 ScheduledMaps:0 ScheduledReds:1 AssignedMaps:0
> AssignedReds:0 CompletedMaps:1 CompletedReds:0 ContAlloc:4 ContRel:0
> HostLocal:1 RackLocal:0
> 2016-01-19 20:04:29,544 INFO [eventHandlingThread]
> org.apache.hadoop.mapreduce.jobhistory.JobHistoryEventHandler: Copied to
> done location:
> hdfs://hdnode01:54310/tmp/hadoop-yarn/staging/history/done_intermediate/hduser/job_1453244277886_0001-1453244648033-hduser-word+count-1453244668381-0-0-FAILED-default-1453244654166.jhist_tmp
> 2016-01-19 20:04:29,598 INFO [eventHandlingThread]
> org.apache.hadoop.mapreduce.jobhistory.JobHistoryEventHandler: Copying
> hdfs://hdnode01:54310/tmp/hadoop-yarn/staging/hduser/.staging/job_1453244277886_0001/job_1453244277886_0001_1_conf.xml
> to
> hdfs://hdnode01:54310/tmp/hadoop-yarn/staging/history/done_intermediate/hduser/job_1453244277886_0001_conf.xml_tmp
> 2016-01-19 20:04:29,801 INFO [eventHandlingThread]
> org.apache.hadoop.mapreduce.jobhistory.JobHistoryEventHandler: Copied to
> done location:
> hdfs://hdnode01:54310/tmp/hadoop-yarn/staging/history/done_intermediate/hduser/job_1453244277886_0001_conf.xml_tmp
> 2016-01-19 20:04:29,907 INFO [eventHandlingThread]
> org.apache.hadoop.mapreduce.jobhistory.JobHistoryEventHandler: Moved tmp to
> done:
> hdfs://hdnode01:54310/tmp/hadoop-yarn/staging/history/done_intermediate/hduser/job_1453244277886_0001.summary_tmp
> to
> hdfs://hdnode01:54310/tmp/hadoop-yarn/staging/history/done_intermediate/hduser/job_1453244277886_0001.summary
> 2016-01-19 20:04:30,008 INFO [eventHandlingThread]
> org.apache.hadoop.mapreduce.jobhistory.JobHistoryEventHandler: Moved tmp to
> done:
> hdfs://hdnode01:54310/tmp/hadoop-yarn/staging/history/done_intermediate/hduser/job_1453244277886_0001_conf.xml_tmp
> to
> hdfs://hdnode01:54310/tmp/hadoop-yarn/staging/history/done_intermediate/hduser/job_1453244277886_0001_conf.xml
> 2016-01-19 20:04:30,070 INFO [eventHandlingThread]
> org.apache.hadoop.mapreduce.jobhistory.JobHistoryEventHandler: Moved tmp to
> done:
> hdfs://hdnode01:54310/tmp/hadoop-yarn/staging/history/done_intermediate/hduser/job_1453244277886_0001-1453244648033-hduser-word+count-1453244668381-0-0-FAILED-default-1453244654166.jhist_tmp
> to
> hdfs://hdnode01:54310/tmp/hadoop-yarn/staging/history/done_intermediate/hduser/job_1453244277886_0001-1453244648033-hduser-word+count-1453244668381-0-0-FAILED-default-1453244654166.jhist
> 2016-01-19 20:04:30,071 INFO [Thread-61]
> org.apache.hadoop.mapreduce.jobhistory.JobHistoryEventHandler: Stopped
> JobHistoryEventHandler. super.stop()
> 2016-01-19 20:04:30,078 INFO [Thread-61]
> org.apache.hadoop.mapreduce.v2.app.rm.RMContainerAllocator: Setting job
> diagnostics to Task failed task_1453244277886_0001_m_000000
> Job failed as tasks failed. failedMaps:1 failedReduces:0
>
> 2016-01-19 20:04:30,080 INFO [Thread-61]
> org.apache.hadoop.mapreduce.v2.app.rm.RMContainerAllocator: History url is
> http://localhost:19888/jobhistory/job/job_1453244277886_0001
> 2016-01-19 20:04:30,094 INFO [Thread-61]
> org.apache.hadoop.mapreduce.v2.app.rm.RMContainerAllocator: Waiting for
> application to be successfully unregistered.
> 2016-01-19 20:04:31,099 INFO [Thread-61]
> org.apache.hadoop.mapreduce.v2.app.rm.RMContainerAllocator: Final Stats:
> PendingReds:0 ScheduledMaps:0 ScheduledReds:1 AssignedMaps:0 AssignedReds:0
> CompletedMaps:1 CompletedReds:0 ContAlloc:4 ContRel:0 HostLocal:1
> RackLocal:0
> 2016-01-19 20:04:31,104 INFO [Thread-61]
> org.apache.hadoop.mapreduce.v2.app.MRAppMaster: Deleting staging directory
> hdfs://hdnode01:54310
> /tmp/hadoop-yarn/staging/hduser/.staging/job_1453244277886_0001
> 2016-01-19 20:04:31,133 INFO [Thread-61] org.apache.hadoop.ipc.Server:
> Stopping server on 45584
> 2016-01-19 20:04:31,135 INFO [IPC Server listener on 45584]
> org.apache.hadoop.ipc.Server: Stopping IPC Server listener on 45584
> 2016-01-19 20:04:31,135 INFO [IPC Server Responder]
> org.apache.hadoop.ipc.Server: Stopping IPC Server Responder
> 2016-01-19 20:04:31,136 INFO [TaskHeartbeatHandler PingChecker]
> org.apache.hadoop.mapreduce.v2.app.TaskHeartbeatHandler:
> TaskHeartbeatHandler thread interrupted
>
>
> Jps results; I believe everything is OK, right?:
> 21267 DataNode
> 21609 ResourceManager
> 21974 JobHistoryServer
> 21735 NodeManager
> 24546 Jps
> 16532 org.eclipse.equinox.launcher_1.3.0.v20140415-2008.jar
> 21121 NameNode
> 22098 QuorumPeerMain
> 21456 SecondaryNameNode
>
>



Re: Can't run hadoop examples with YARN Single node cluster

Posted by José Luis Larroque <la...@gmail.com>.
Hi guys, thanks for your answers.

Wordcount logs:

16/01/19 21:58:32 INFO client.RMProxy: Connecting to ResourceManager at
hdnode01/192.168.0.10:8050
SLF4J: Class path contains multiple SLF4J bindings.
SLF4J: Found binding in
[jar:file:/usr/local/hadoop/share/hadoop/common/lib/slf4j-log4j12-1.7.5.jar!/org/slf4j/impl/StaticLoggerBinder.class]
SLF4J: Found binding in
[jar:file:/usr/local/hadoop/share/hadoop/yarn/lib/giraph-examples-1.1.0-for-hadoop-2.4.0-jar-with-dependencies.jar!/org/slf4j/impl/StaticLoggerBinder.class]
SLF4J: Found binding in
[jar:file:/usr/local/hadoop/share/hadoop/mapreduce/giraph-examples-1.1.0-for-hadoop-2.4.0-jar-with-dependencies.jar!/org/slf4j/impl/StaticLoggerBinder.class]
SLF4J: See http://www.slf4j.org/codes.html#multiple_bindings for an
explanation.
SLF4J: Actual binding is of type [org.slf4j.impl.Log4jLoggerFactory]
16/01/19 21:58:32 WARN util.NativeCodeLoader: Unable to load native-hadoop
library for your platform... using builtin-java classes where applicable
hduser@jose-ubuntu:/usr/local/hadoop$ nano
/home/hduser/Desktop/Tesina/casos_de_prueba/resultados/resultado_cluster_modo_yarn_wordcount
hduser@jose-ubuntu:/usr/local/hadoop$ cat
/home/hduser/Desktop/Tesina/casos_de_prueba/resultados/resultado_cluster_modo_yarn_wordcount


Container: container_1453244277886_0001_01_000002 on localhost_35711
======================================================================
LogType: stderr
LogLength: 45
Log Contents:
Error: Could not find or load main class 256

LogType: stdout
LogLength: 0
Log Contents:



Container: container_1453244277886_0001_01_000003 on localhost_35711
======================================================================
LogType: stderr
LogLength: 45
Log Contents:
Error: Could not find or load main class 256

LogType: stdout
LogLength: 0
Log Contents:



Container: container_1453244277886_0001_01_000004 on localhost_35711
======================================================================
LogType: stderr
LogLength: 45
Log Contents:
Error: Could not find or load main class 256

LogType: stdout
LogLength: 0
Log Contents:



Container: container_1453244277886_0001_01_000005 on localhost_35711
======================================================================
LogType: stderr
LogLength: 45
Log Contents:
Error: Could not find or load main class 256

LogType: stdout
LogLength: 0
Log Contents:



Container: container_1453244277886_0001_01_000001 on localhost_35711
======================================================================
LogType: stderr
LogLength: 929
Log Contents:
SLF4J: Class path contains multiple SLF4J bindings.
SLF4J: Found binding in
[jar:file:/usr/local/hadoop/share/hadoop/common/lib/slf4j-log4j12-1.7.5.jar!/org/slf4j/impl/StaticLoggerBinder.class]
SLF4J: Found binding in
[jar:file:/usr/local/hadoop/share/hadoop/yarn/lib/giraph-examples-1.1.0-for-hadoop-2.4.0-jar-with-dependencies.jar!/org/slf4j/impl/StaticLoggerBinder.class]
SLF4J: Found binding in
[jar:file:/usr/local/hadoop/share/hadoop/mapreduce/giraph-examples-1.1.0-for-hadoop-2.4.0-jar-with-dependencies.jar!/org/slf4j/impl/StaticLoggerBinder.class]
SLF4J: See http://www.slf4j.org/codes.html#multiple_bindings for an
explanation.
SLF4J: Actual binding is of type [org.slf4j.impl.Log4jLoggerFactory]
log4j:WARN No appenders could be found for logger
(org.apache.hadoop.ipc.Server).
log4j:WARN Please initialize the log4j system properly.
log4j:WARN See http://logging.apache.org/log4j/1.2/faq.html#noconfig for
more info.

LogType: stdout
LogLength: 0
Log Contents:

LogType: syslog
LogLength: 56780
Log Contents:
2016-01-19 20:04:11,329 INFO [main]
org.apache.hadoop.mapreduce.v2.app.MRAppMaster: Created MRAppMaster for
application appattempt_1453244277886_0001_000001
2016-01-19 20:04:11,657 WARN [main] org.apache.hadoop.conf.Configuration:
job.xml:an attempt to override final parameter:
mapreduce.job.end-notification.max.retry.interval;  Ignoring.
2016-01-19 20:04:11,674 WARN [main] org.apache.hadoop.conf.Configuration:
job.xml:an attempt to override final parameter:
mapreduce.job.end-notification.max.attempts;  Ignoring.
2016-01-19 20:04:11,765 WARN [main]
org.apache.hadoop.util.NativeCodeLoader: Unable to load native-hadoop
library for your platform... using builtin-java classes where applicable
2016-01-19 20:04:11,776 INFO [main]
org.apache.hadoop.mapreduce.v2.app.MRAppMaster: Executing with tokens:
2016-01-19 20:04:11,776 INFO [main]
org.apache.hadoop.mapreduce.v2.app.MRAppMaster: Kind: YARN_AM_RM_TOKEN,
Service: , Ident:
(org.apache.hadoop.yarn.security.AMRMTokenIdentifier@73e8f4b9)
2016-01-19 20:04:11,801 INFO [main]
org.apache.hadoop.mapreduce.v2.app.MRAppMaster: The specific max attempts:
2 for application: 1. Attempt num: 1 is last retry: false
2016-01-19 20:04:11,806 INFO [main]
org.apache.hadoop.mapreduce.v2.app.MRAppMaster: Using mapred
newApiCommitter.
2016-01-19 20:04:11,934 WARN [main] org.apache.hadoop.conf.Configuration:
job.xml:an attempt to override final parameter: dfs.datanode.data.dir;
Ignoring.
2016-01-19 20:04:11,939 WARN [main] org.apache.hadoop.conf.Configuration:
job.xml:an attempt to override final parameter:
mapreduce.job.end-notification.max.retry.interval;  Ignoring.
2016-01-19 20:04:11,948 WARN [main] org.apache.hadoop.conf.Configuration:
job.xml:an attempt to override final parameter: dfs.namenode.name.dir;
Ignoring.
2016-01-19 20:04:11,953 WARN [main] org.apache.hadoop.conf.Configuration:
job.xml:an attempt to override final parameter:
mapreduce.job.end-notification.max.attempts;  Ignoring.
2016-01-19 20:04:12,464 INFO [main]
org.apache.hadoop.mapreduce.v2.app.MRAppMaster: OutputCommitter set in
config null
2016-01-19 20:04:12,526 INFO [main]
org.apache.hadoop.mapreduce.v2.app.MRAppMaster: OutputCommitter is
org.apache.hadoop.mapreduce.lib.output.FileOutputCommitter
2016-01-19 20:04:12,548 INFO [main]
org.apache.hadoop.yarn.event.AsyncDispatcher: Registering class
org.apache.hadoop.mapreduce.jobhistory.EventType for class
org.apache.hadoop.mapreduce.jobhistory.JobHistoryEventHandler
2016-01-19 20:04:12,549 INFO [main]
org.apache.hadoop.yarn.event.AsyncDispatcher: Registering class
org.apache.hadoop.mapreduce.v2.app.job.event.JobEventType for class
org.apache.hadoop.mapreduce.v2.app.MRAppMaster$JobEventDispatcher
2016-01-19 20:04:12,550 INFO [main]
org.apache.hadoop.yarn.event.AsyncDispatcher: Registering class
org.apache.hadoop.mapreduce.v2.app.job.event.TaskEventType for class
org.apache.hadoop.mapreduce.v2.app.MRAppMaster$TaskEventDispatcher
2016-01-19 20:04:12,551 INFO [main]
org.apache.hadoop.yarn.event.AsyncDispatcher: Registering class
org.apache.hadoop.mapreduce.v2.app.job.event.TaskAttemptEventType for class
org.apache.hadoop.mapreduce.v2.app.MRAppMaster$TaskAttemptEventDispatcher
2016-01-19 20:04:12,552 INFO [main]
org.apache.hadoop.yarn.event.AsyncDispatcher: Registering class
org.apache.hadoop.mapreduce.v2.app.commit.CommitterEventType for class
org.apache.hadoop.mapreduce.v2.app.commit.CommitterEventHandler
2016-01-19 20:04:12,557 INFO [main]
org.apache.hadoop.yarn.event.AsyncDispatcher: Registering class
org.apache.hadoop.mapreduce.v2.app.speculate.Speculator$EventType for class
org.apache.hadoop.mapreduce.v2.app.MRAppMaster$SpeculatorEventDispatcher
2016-01-19 20:04:12,558 INFO [main]
org.apache.hadoop.yarn.event.AsyncDispatcher: Registering class
org.apache.hadoop.mapreduce.v2.app.rm.ContainerAllocator$EventType for
class
org.apache.hadoop.mapreduce.v2.app.MRAppMaster$ContainerAllocatorRouter
2016-01-19 20:04:12,559 INFO [main]
org.apache.hadoop.yarn.event.AsyncDispatcher: Registering class
org.apache.hadoop.mapreduce.v2.app.launcher.ContainerLauncher$EventType for
class org.apache.hadoop.mapreduce.v2.app.MRAppMaster$ContainerLauncherRouter
2016-01-19 20:04:12,615 INFO [main]
org.apache.hadoop.mapreduce.jobhistory.JobHistoryEventHandler: Perms after
creating 488, Expected: 504
2016-01-19 20:04:12,615 INFO [main]
org.apache.hadoop.mapreduce.jobhistory.JobHistoryEventHandler: Explicitly
setting permissions to : 504, rwxrwx---
2016-01-19 20:04:12,731 INFO [main]
org.apache.hadoop.yarn.event.AsyncDispatcher: Registering class
org.apache.hadoop.mapreduce.v2.app.job.event.JobFinishEvent$Type for class
org.apache.hadoop.mapreduce.v2.app.MRAppMaster$JobFinishEventHandler
2016-01-19 20:04:12,956 INFO [main]
org.apache.hadoop.metrics2.impl.MetricsConfig: loaded properties from
hadoop-metrics2.properties
2016-01-19 20:04:13,018 INFO [main]
org.apache.hadoop.metrics2.impl.MetricsSystemImpl: Scheduled snapshot
period at 10 second(s).
2016-01-19 20:04:13,018 INFO [main]
org.apache.hadoop.metrics2.impl.MetricsSystemImpl: MRAppMaster metrics
system started
2016-01-19 20:04:13,026 INFO [main]
org.apache.hadoop.mapreduce.v2.app.job.impl.JobImpl: Adding job token for
job_1453244277886_0001 to jobTokenSecretManager
2016-01-19 20:04:13,139 INFO [main]
org.apache.hadoop.mapreduce.v2.app.job.impl.JobImpl: Not uberizing
job_1453244277886_0001 because: not enabled;
2016-01-19 20:04:13,154 INFO [main]
org.apache.hadoop.mapreduce.v2.app.job.impl.JobImpl: Input size for job
job_1453244277886_0001 = 343691. Number of splits = 1
2016-01-19 20:04:13,156 INFO [main]
org.apache.hadoop.mapreduce.v2.app.job.impl.JobImpl: Number of reduces for
job job_1453244277886_0001 = 1
2016-01-19 20:04:13,156 INFO [main]
org.apache.hadoop.mapreduce.v2.app.job.impl.JobImpl:
job_1453244277886_0001Job Transitioned from NEW to INITED
2016-01-19 20:04:13,157 INFO [main]
org.apache.hadoop.mapreduce.v2.app.MRAppMaster: MRAppMaster launching
normal, non-uberized, multi-container job job_1453244277886_0001.
2016-01-19 20:04:13,186 INFO [main] org.apache.hadoop.ipc.CallQueueManager:
Using callQueue class java.util.concurrent.LinkedBlockingQueue
2016-01-19 20:04:13,195 INFO [Socket Reader #1 for port 56461]
org.apache.hadoop.ipc.Server: Starting Socket Reader #1 for port 56461
2016-01-19 20:04:13,237 INFO [main]
org.apache.hadoop.yarn.factories.impl.pb.RpcServerFactoryPBImpl: Adding
protocol org.apache.hadoop.mapreduce.v2.api.MRClientProtocolPB to the server
2016-01-19 20:04:13,238 INFO [IPC Server Responder]
org.apache.hadoop.ipc.Server: IPC Server Responder: starting
2016-01-19 20:04:13,238 INFO [IPC Server listener on 56461]
org.apache.hadoop.ipc.Server: IPC Server listener on 56461: starting
2016-01-19 20:04:13,239 INFO [main]
org.apache.hadoop.mapreduce.v2.app.client.MRClientService: Instantiated
MRClientService at jose-ubuntu/127.0.0.1:56461
2016-01-19 20:04:13,300 INFO [main] org.mortbay.log: Logging to
org.slf4j.impl.Log4jLoggerAdapter(org.mortbay.log) via
org.mortbay.log.Slf4jLog
2016-01-19 20:04:13,304 INFO [main] org.apache.hadoop.http.HttpRequestLog:
Http request log for http.requests.mapreduce is not defined
2016-01-19 20:04:13,315 INFO [main] org.apache.hadoop.http.HttpServer2:
Added global filter 'safety'
(class=org.apache.hadoop.http.HttpServer2$QuotingInputFilter)
2016-01-19 20:04:13,320 INFO [main] org.apache.hadoop.http.HttpServer2:
Added filter AM_PROXY_FILTER
(class=org.apache.hadoop.yarn.server.webproxy.amfilter.AmIpFilter) to
context mapreduce
2016-01-19 20:04:13,321 INFO [main] org.apache.hadoop.http.HttpServer2:
Added filter AM_PROXY_FILTER
(class=org.apache.hadoop.yarn.server.webproxy.amfilter.AmIpFilter) to
context static
2016-01-19 20:04:13,324 INFO [main] org.apache.hadoop.http.HttpServer2:
adding path spec: /mapreduce/*
2016-01-19 20:04:13,324 INFO [main] org.apache.hadoop.http.HttpServer2:
adding path spec: /ws/*
2016-01-19 20:04:13,335 INFO [main] org.apache.hadoop.http.HttpServer2:
Jetty bound to port 44070
2016-01-19 20:04:13,335 INFO [main] org.mortbay.log: jetty-6.1.26
2016-01-19 20:04:13,370 INFO [main] org.mortbay.log: Extract
jar:file:/usr/local/hadoop/share/hadoop/yarn/hadoop-yarn-common-2.4.0.jar!/webapps/mapreduce
to /tmp/Jetty_0_0_0_0_44070_mapreduce____rdpvio/webapp
2016-01-19 20:04:13,647 INFO [main] org.mortbay.log: Started
SelectChannelConnector@0.0.0.0:44070
2016-01-19 20:04:13,647 INFO [main] org.apache.hadoop.yarn.webapp.WebApps:
Web app /mapreduce started at 44070
2016-01-19 20:04:13,956 INFO [main] org.apache.hadoop.yarn.webapp.WebApps:
Registered webapp guice modules
2016-01-19 20:04:13,960 INFO [AsyncDispatcher event handler]
org.apache.hadoop.mapreduce.v2.app.speculate.DefaultSpeculator: JOB_CREATE
job_1453244277886_0001
2016-01-19 20:04:13,961 INFO [main] org.apache.hadoop.ipc.CallQueueManager:
Using callQueue class java.util.concurrent.LinkedBlockingQueue
2016-01-19 20:04:13,961 INFO [Socket Reader #1 for port 45584]
org.apache.hadoop.ipc.Server: Starting Socket Reader #1 for port 45584
2016-01-19 20:04:13,966 INFO [IPC Server Responder]
org.apache.hadoop.ipc.Server: IPC Server Responder: starting
2016-01-19 20:04:13,966 INFO [IPC Server listener on 45584]
org.apache.hadoop.ipc.Server: IPC Server listener on 45584: starting
2016-01-19 20:04:13,987 INFO [main]
org.apache.hadoop.mapreduce.v2.app.rm.RMContainerRequestor:
nodeBlacklistingEnabled:true
2016-01-19 20:04:13,987 INFO [main]
org.apache.hadoop.mapreduce.v2.app.rm.RMContainerRequestor:
maxTaskFailuresPerNode is 3
2016-01-19 20:04:13,988 INFO [main]
org.apache.hadoop.mapreduce.v2.app.rm.RMContainerRequestor:
blacklistDisablePercent is 33
2016-01-19 20:04:14,052 WARN [main] org.apache.hadoop.conf.Configuration:
job.xml:an attempt to override final parameter: dfs.datanode.data.dir;
Ignoring.
2016-01-19 20:04:14,054 WARN [main] org.apache.hadoop.conf.Configuration:
job.xml:an attempt to override final parameter:
mapreduce.job.end-notification.max.retry.interval;  Ignoring.
2016-01-19 20:04:14,057 WARN [main] org.apache.hadoop.conf.Configuration:
job.xml:an attempt to override final parameter: dfs.namenode.name.dir;
Ignoring.
2016-01-19 20:04:14,059 WARN [main] org.apache.hadoop.conf.Configuration:
job.xml:an attempt to override final parameter:
mapreduce.job.end-notification.max.attempts;  Ignoring.
2016-01-19 20:04:14,062 INFO [main] org.apache.hadoop.yarn.client.RMProxy:
Connecting to ResourceManager at hdnode01/192.168.0.10:8030
2016-01-19 20:04:14,158 INFO [main]
org.apache.hadoop.mapreduce.v2.app.rm.RMContainerAllocator:
maxContainerCapability: 2000
2016-01-19 20:04:14,158 INFO [main]
org.apache.hadoop.mapreduce.v2.app.rm.RMContainerAllocator: queue: default
2016-01-19 20:04:14,162 INFO [main]
org.apache.hadoop.mapreduce.v2.app.launcher.ContainerLauncherImpl: Upper
limit on the thread pool size is 500
2016-01-19 20:04:14,164 INFO [main]
org.apache.hadoop.yarn.client.api.impl.ContainerManagementProtocolProxy:
yarn.client.max-nodemanagers-proxies : 500
2016-01-19 20:04:14,172 INFO [AsyncDispatcher event handler]
org.apache.hadoop.mapreduce.v2.app.job.impl.JobImpl:
job_1453244277886_0001Job Transitioned from INITED to SETUP
2016-01-19 20:04:14,174 INFO [CommitterEvent Processor #0]
org.apache.hadoop.mapreduce.v2.app.commit.CommitterEventHandler: Processing
the event EventType: JOB_SETUP
2016-01-19 20:04:14,210 INFO [AsyncDispatcher event handler]
org.apache.hadoop.mapreduce.v2.app.job.impl.JobImpl:
job_1453244277886_0001Job Transitioned from SETUP to RUNNING
2016-01-19 20:04:14,227 INFO [AsyncDispatcher event handler]
org.apache.hadoop.yarn.util.RackResolver: Resolved localhost to
/default-rack
2016-01-19 20:04:14,230 INFO [AsyncDispatcher event handler]
org.apache.hadoop.mapreduce.v2.app.job.impl.TaskImpl:
task_1453244277886_0001_m_000000 Task Transitioned from NEW to SCHEDULED
2016-01-19 20:04:14,230 INFO [AsyncDispatcher event handler]
org.apache.hadoop.mapreduce.v2.app.job.impl.TaskImpl:
task_1453244277886_0001_r_000000 Task Transitioned from NEW to SCHEDULED
2016-01-19 20:04:14,232 INFO [AsyncDispatcher event handler]
org.apache.hadoop.mapreduce.v2.app.job.impl.TaskAttemptImpl:
attempt_1453244277886_0001_m_000000_0 TaskAttempt Transitioned from NEW to
UNASSIGNED
2016-01-19 20:04:14,232 INFO [AsyncDispatcher event handler]
org.apache.hadoop.mapreduce.v2.app.job.impl.TaskAttemptImpl:
attempt_1453244277886_0001_r_000000_0 TaskAttempt Transitioned from NEW to
UNASSIGNED
2016-01-19 20:04:14,233 INFO [Thread-50]
org.apache.hadoop.mapreduce.v2.app.rm.RMContainerAllocator:
mapResourceReqt:512
2016-01-19 20:04:14,245 INFO [Thread-50]
org.apache.hadoop.mapreduce.v2.app.rm.RMContainerAllocator:
reduceResourceReqt:512
2016-01-19 20:04:14,324 INFO [eventHandlingThread]
org.apache.hadoop.mapreduce.jobhistory.JobHistoryEventHandler: Event Writer
setup for JobId: job_1453244277886_0001, File:
hdfs://hdnode01:54310/tmp/hadoop-yarn/staging/hduser/.staging/job_1453244277886_0001/job_1453244277886_0001_1.jhist
2016-01-19 20:04:15,162 INFO [RMCommunicator Allocator]
org.apache.hadoop.mapreduce.v2.app.rm.RMContainerAllocator: Before
Scheduling: PendingReds:1 ScheduledMaps:1 ScheduledReds:0 AssignedMaps:0
AssignedReds:0 CompletedMaps:0 CompletedReds:0 ContAlloc:0 ContRel:0
HostLocal:0 RackLocal:0
2016-01-19 20:04:15,217 INFO [RMCommunicator Allocator]
org.apache.hadoop.mapreduce.v2.app.rm.RMContainerRequestor: getResources()
for application_1453244277886_0001: ask=3 release= 0 newContainers=0
finishedContainers=0 resourcelimit=<memory:1280, vCores:0> knownNMs=1
2016-01-19 20:04:15,217 INFO [RMCommunicator Allocator]
org.apache.hadoop.mapreduce.v2.app.rm.RMContainerAllocator: Recalculating
schedule, headroom=1280
2016-01-19 20:04:15,217 INFO [RMCommunicator Allocator]
org.apache.hadoop.mapreduce.v2.app.rm.RMContainerAllocator: Reduce slow
start threshold not met. completedMapsForReduceSlowstart 1
2016-01-19 20:04:16,240 INFO [RMCommunicator Allocator]
org.apache.hadoop.mapreduce.v2.app.rm.RMContainerAllocator: Got allocated
containers 1
2016-01-19 20:04:16,241 INFO [RMCommunicator Allocator]
org.apache.hadoop.mapreduce.v2.app.rm.RMContainerAllocator: Assigned
container container_1453244277886_0001_01_000002 to
attempt_1453244277886_0001_m_000000_0
2016-01-19 20:04:16,243 INFO [RMCommunicator Allocator]
org.apache.hadoop.mapreduce.v2.app.rm.RMContainerAllocator: Recalculating
schedule, headroom=768
2016-01-19 20:04:16,243 INFO [RMCommunicator Allocator]
org.apache.hadoop.mapreduce.v2.app.rm.RMContainerAllocator: Reduce slow
start threshold not met. completedMapsForReduceSlowstart 1
2016-01-19 20:04:16,243 INFO [RMCommunicator Allocator]
org.apache.hadoop.mapreduce.v2.app.rm.RMContainerAllocator: After
Scheduling: PendingReds:1 ScheduledMaps:0 ScheduledReds:0 AssignedMaps:1
AssignedReds:0 CompletedMaps:0 CompletedReds:0 ContAlloc:1 ContRel:0
HostLocal:1 RackLocal:0
2016-01-19 20:04:16,291 INFO [AsyncDispatcher event handler]
org.apache.hadoop.yarn.util.RackResolver: Resolved localhost to
/default-rack
2016-01-19 20:04:16,316 INFO [AsyncDispatcher event handler]
org.apache.hadoop.mapreduce.v2.app.job.impl.TaskAttemptImpl: The job-jar
file on the remote FS is
hdfs://hdnode01:54310/tmp/hadoop-yarn/staging/hduser/.staging/job_1453244277886_0001/job.jar
2016-01-19 20:04:16,322 INFO [AsyncDispatcher event handler]
org.apache.hadoop.mapreduce.v2.app.job.impl.TaskAttemptImpl: The job-conf
file on the remote FS is
/tmp/hadoop-yarn/staging/hduser/.staging/job_1453244277886_0001/job.xml
2016-01-19 20:04:16,325 INFO [AsyncDispatcher event handler]
org.apache.hadoop.mapreduce.v2.app.job.impl.TaskAttemptImpl: Adding #0
tokens and #1 secret keys for NM use for launching container
2016-01-19 20:04:16,325 INFO [AsyncDispatcher event handler]
org.apache.hadoop.mapreduce.v2.app.job.impl.TaskAttemptImpl: Size of
containertokens_dob is 1
2016-01-19 20:04:16,325 INFO [AsyncDispatcher event handler]
org.apache.hadoop.mapreduce.v2.app.job.impl.TaskAttemptImpl: Putting
shuffle token in serviceData
2016-01-19 20:04:16,350 INFO [AsyncDispatcher event handler]
org.apache.hadoop.mapreduce.v2.app.job.impl.TaskAttemptImpl:
attempt_1453244277886_0001_m_000000_0 TaskAttempt Transitioned from
UNASSIGNED to ASSIGNED
2016-01-19 20:04:16,354 INFO [ContainerLauncher #0]
org.apache.hadoop.mapreduce.v2.app.launcher.ContainerLauncherImpl:
Processing the event EventType: CONTAINER_REMOTE_LAUNCH for container
container_1453244277886_0001_01_000002 taskAttempt
attempt_1453244277886_0001_m_000000_0
2016-01-19 20:04:16,356 INFO [ContainerLauncher #0]
org.apache.hadoop.mapreduce.v2.app.launcher.ContainerLauncherImpl:
Launching attempt_1453244277886_0001_m_000000_0
2016-01-19 20:04:16,357 INFO [ContainerLauncher #0]
org.apache.hadoop.yarn.client.api.impl.ContainerManagementProtocolProxy:
Opening proxy : localhost:35711
2016-01-19 20:04:16,411 INFO [ContainerLauncher #0]
org.apache.hadoop.mapreduce.v2.app.launcher.ContainerLauncherImpl: Shuffle
port returned by ContainerManager for attempt_1453244277886_0001_m_000000_0
: 13562
2016-01-19 20:04:16,413 INFO [AsyncDispatcher event handler]
org.apache.hadoop.mapreduce.v2.app.job.impl.TaskAttemptImpl: TaskAttempt:
[attempt_1453244277886_0001_m_000000_0] using containerId:
[container_1453244277886_0001_01_000002 on NM: [localhost:35711]
2016-01-19 20:04:16,418 INFO [AsyncDispatcher event handler]
org.apache.hadoop.mapreduce.v2.app.job.impl.TaskAttemptImpl:
attempt_1453244277886_0001_m_000000_0 TaskAttempt Transitioned from
ASSIGNED to RUNNING
2016-01-19 20:04:16,419 INFO [AsyncDispatcher event handler]
org.apache.hadoop.mapreduce.v2.app.speculate.DefaultSpeculator:
ATTEMPT_START task_1453244277886_0001_m_000000
2016-01-19 20:04:16,419 INFO [AsyncDispatcher event handler]
org.apache.hadoop.mapreduce.v2.app.job.impl.TaskImpl:
task_1453244277886_0001_m_000000 Task Transitioned from SCHEDULED to RUNNING
2016-01-19 20:04:17,251 INFO [RMCommunicator Allocator]
org.apache.hadoop.mapreduce.v2.app.rm.RMContainerRequestor: getResources()
for application_1453244277886_0001: ask=3 release= 0 newContainers=0
finishedContainers=0 resourcelimit=<memory:768, vCores:-1> knownNMs=1
2016-01-19 20:04:18,269 INFO [RMCommunicator Allocator]
org.apache.hadoop.mapreduce.v2.app.rm.RMContainerAllocator: Received
completed container container_1453244277886_0001_01_000002
2016-01-19 20:04:18,269 INFO [RMCommunicator Allocator]
org.apache.hadoop.mapreduce.v2.app.rm.RMContainerAllocator: Recalculating
schedule, headroom=768
2016-01-19 20:04:18,269 INFO [RMCommunicator Allocator]
org.apache.hadoop.mapreduce.v2.app.rm.RMContainerAllocator: Reduce slow
start threshold not met. completedMapsForReduceSlowstart 1
2016-01-19 20:04:18,270 INFO [RMCommunicator Allocator]
org.apache.hadoop.mapreduce.v2.app.rm.RMContainerAllocator: After
Scheduling: PendingReds:1 ScheduledMaps:0 ScheduledReds:0 AssignedMaps:0
AssignedReds:0 CompletedMaps:0 CompletedReds:0 ContAlloc:1 ContRel:0
HostLocal:1 RackLocal:0
2016-01-19 20:04:18,280 INFO [AsyncDispatcher event handler]
org.apache.hadoop.mapreduce.v2.app.job.impl.TaskAttemptImpl:
attempt_1453244277886_0001_m_000000_0 TaskAttempt Transitioned from RUNNING
to FAIL_CONTAINER_CLEANUP
2016-01-19 20:04:18,280 INFO [AsyncDispatcher event handler]
org.apache.hadoop.mapreduce.v2.app.job.impl.TaskAttemptImpl: Diagnostics
report from attempt_1453244277886_0001_m_000000_0: Exception from
container-launch: org.apache.hadoop.util.Shell$ExitCodeException:
org.apache.hadoop.util.Shell$ExitCodeException:
    at org.apache.hadoop.util.Shell.runCommand(Shell.java:505)
    at org.apache.hadoop.util.Shell.run(Shell.java:418)
    at
org.apache.hadoop.util.Shell$ShellCommandExecutor.execute(Shell.java:650)
    at
org.apache.hadoop.yarn.server.nodemanager.DefaultContainerExecutor.launchContainer(DefaultContainerExecutor.java:195)
    at
org.apache.hadoop.yarn.server.nodemanager.containermanager.launcher.ContainerLaunch.call(ContainerLaunch.java:300)
    at
org.apache.hadoop.yarn.server.nodemanager.containermanager.launcher.ContainerLaunch.call(ContainerLaunch.java:81)
    at java.util.concurrent.FutureTask.run(FutureTask.java:262)
    at
java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
    at
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
    at java.lang.Thread.run(Thread.java:745)


Container exited with a non-zero exit code 1

2016-01-19 20:04:18,281 INFO [ContainerLauncher #1]
org.apache.hadoop.mapreduce.v2.app.launcher.ContainerLauncherImpl:
Processing the event EventType: CONTAINER_REMOTE_CLEANUP for container
container_1453244277886_0001_01_000002 taskAttempt
attempt_1453244277886_0001_m_000000_0
2016-01-19 20:04:18,281 INFO [ContainerLauncher #1]
org.apache.hadoop.mapreduce.v2.app.launcher.ContainerLauncherImpl: KILLING
attempt_1453244277886_0001_m_000000_0
2016-01-19 20:04:18,299 INFO [AsyncDispatcher event handler]
org.apache.hadoop.mapreduce.v2.app.job.impl.TaskAttemptImpl:
attempt_1453244277886_0001_m_000000_0 TaskAttempt Transitioned from
FAIL_CONTAINER_CLEANUP to FAIL_TASK_CLEANUP
2016-01-19 20:04:18,300 INFO [CommitterEvent Processor #1]
org.apache.hadoop.mapreduce.v2.app.commit.CommitterEventHandler: Processing
the event EventType: TASK_ABORT
2016-01-19 20:04:18,313 WARN [CommitterEvent Processor #1]
org.apache.hadoop.mapreduce.lib.output.FileOutputCommitter: Could not
delete
hdfs://hdnode01:54310/user/hduser/output/wordcount/_temporary/1/_temporary/attempt_1453244277886_0001_m_000000_0
2016-01-19 20:04:18,317 INFO [AsyncDispatcher event handler]
org.apache.hadoop.mapreduce.v2.app.job.impl.TaskAttemptImpl:
attempt_1453244277886_0001_m_000000_0 TaskAttempt Transitioned from
FAIL_TASK_CLEANUP to FAILED
2016-01-19 20:04:18,326 INFO [AsyncDispatcher event handler]
org.apache.hadoop.yarn.util.RackResolver: Resolved localhost to
/default-rack
2016-01-19 20:04:18,327 INFO [Thread-50]
org.apache.hadoop.mapreduce.v2.app.rm.RMContainerRequestor: 1 failures on
node localhost
2016-01-19 20:04:18,329 INFO [AsyncDispatcher event handler]
org.apache.hadoop.mapreduce.v2.app.job.impl.TaskAttemptImpl:
attempt_1453244277886_0001_m_000000_1 TaskAttempt Transitioned from NEW to
UNASSIGNED
2016-01-19 20:04:18,329 INFO [Thread-50]
org.apache.hadoop.mapreduce.v2.app.rm.RMContainerAllocator: Added
attempt_1453244277886_0001_m_000000_1 to list of failed maps
2016-01-19 20:04:19,270 INFO [RMCommunicator Allocator]
org.apache.hadoop.mapreduce.v2.app.rm.RMContainerAllocator: Before
Scheduling: PendingReds:1 ScheduledMaps:1 ScheduledReds:0 AssignedMaps:0
AssignedReds:0 CompletedMaps:0 CompletedReds:0 ContAlloc:1 ContRel:0
HostLocal:1 RackLocal:0
2016-01-19 20:04:19,277 INFO [RMCommunicator Allocator]
org.apache.hadoop.mapreduce.v2.app.rm.RMContainerRequestor: getResources()
for application_1453244277886_0001: ask=1 release= 0 newContainers=0
finishedContainers=0 resourcelimit=<memory:768, vCores:-1> knownNMs=1
2016-01-19 20:04:19,277 INFO [RMCommunicator Allocator]
org.apache.hadoop.mapreduce.v2.app.rm.RMContainerAllocator: Recalculating
schedule, headroom=768
2016-01-19 20:04:19,278 INFO [RMCommunicator Allocator]
org.apache.hadoop.mapreduce.v2.app.rm.RMContainerAllocator: Reduce slow
start threshold not met. completedMapsForReduceSlowstart 1
2016-01-19 20:04:20,285 INFO [RMCommunicator Allocator]
org.apache.hadoop.mapreduce.v2.app.rm.RMContainerAllocator: Got allocated
containers 1
2016-01-19 20:04:20,285 INFO [RMCommunicator Allocator]
org.apache.hadoop.mapreduce.v2.app.rm.RMContainerAllocator: Assigning
container Container: [ContainerId: container_1453244277886_0001_01_000003,
NodeId: localhost:35711, NodeHttpAddress: localhost:8042, Resource:
<memory:512, vCores:1>, Priority: 5, Token: Token { kind: ContainerToken,
service: 127.0.0.1:35711 }, ] to fast fail map
2016-01-19 20:04:20,286 INFO [RMCommunicator Allocator]
org.apache.hadoop.mapreduce.v2.app.rm.RMContainerAllocator: Assigned from
earlierFailedMaps
2016-01-19 20:04:20,286 INFO [RMCommunicator Allocator]
org.apache.hadoop.mapreduce.v2.app.rm.RMContainerAllocator: Assigned
container container_1453244277886_0001_01_000003 to
attempt_1453244277886_0001_m_000000_1
2016-01-19 20:04:20,287 INFO [RMCommunicator Allocator]
org.apache.hadoop.mapreduce.v2.app.rm.RMContainerAllocator: Recalculating
schedule, headroom=768
2016-01-19 20:04:20,287 INFO [RMCommunicator Allocator]
org.apache.hadoop.mapreduce.v2.app.rm.RMContainerAllocator: Reduce slow
start threshold not met. completedMapsForReduceSlowstart 1
2016-01-19 20:04:20,287 INFO [RMCommunicator Allocator]
org.apache.hadoop.mapreduce.v2.app.rm.RMContainerAllocator: After
Scheduling: PendingReds:1 ScheduledMaps:0 ScheduledReds:0 AssignedMaps:1
AssignedReds:0 CompletedMaps:0 CompletedReds:0 ContAlloc:2 ContRel:0
HostLocal:1 RackLocal:0
2016-01-19 20:04:20,287 INFO [AsyncDispatcher event handler]
org.apache.hadoop.yarn.util.RackResolver: Resolved localhost to
/default-rack
2016-01-19 20:04:20,289 INFO [AsyncDispatcher event handler]
org.apache.hadoop.mapreduce.v2.app.job.impl.TaskAttemptImpl:
attempt_1453244277886_0001_m_000000_1 TaskAttempt Transitioned from
UNASSIGNED to ASSIGNED
2016-01-19 20:04:20,292 INFO [ContainerLauncher #2]
org.apache.hadoop.mapreduce.v2.app.launcher.ContainerLauncherImpl:
Processing the event EventType: CONTAINER_REMOTE_LAUNCH for container
container_1453244277886_0001_01_000003 taskAttempt
attempt_1453244277886_0001_m_000000_1
2016-01-19 20:04:20,292 INFO [ContainerLauncher #2]
org.apache.hadoop.mapreduce.v2.app.launcher.ContainerLauncherImpl:
Launching attempt_1453244277886_0001_m_000000_1
2016-01-19 20:04:20,301 INFO [ContainerLauncher #2]
org.apache.hadoop.mapreduce.v2.app.launcher.ContainerLauncherImpl: Shuffle
port returned by ContainerManager for attempt_1453244277886_0001_m_000000_1
: 13562
2016-01-19 20:04:20,302 INFO [AsyncDispatcher event handler]
org.apache.hadoop.mapreduce.v2.app.job.impl.TaskAttemptImpl: TaskAttempt:
[attempt_1453244277886_0001_m_000000_1] using containerId:
[container_1453244277886_0001_01_000003 on NM: [localhost:35711]
2016-01-19 20:04:20,303 INFO [AsyncDispatcher event handler]
org.apache.hadoop.mapreduce.v2.app.job.impl.TaskAttemptImpl:
attempt_1453244277886_0001_m_000000_1 TaskAttempt Transitioned from
ASSIGNED to RUNNING
2016-01-19 20:04:20,304 INFO [AsyncDispatcher event handler]
org.apache.hadoop.mapreduce.v2.app.speculate.DefaultSpeculator:
ATTEMPT_START task_1453244277886_0001_m_000000
2016-01-19 20:04:21,295 INFO [RMCommunicator Allocator]
org.apache.hadoop.mapreduce.v2.app.rm.RMContainerRequestor: getResources()
for application_1453244277886_0001: ask=1 release= 0 newContainers=0
finishedContainers=1 resourcelimit=<memory:768, vCores:-1> knownNMs=1
2016-01-19 20:04:21,296 INFO [RMCommunicator Allocator]
org.apache.hadoop.mapreduce.v2.app.rm.RMContainerAllocator: Received
completed container container_1453244277886_0001_01_000003
2016-01-19 20:04:21,296 INFO [RMCommunicator Allocator]
org.apache.hadoop.mapreduce.v2.app.rm.RMContainerAllocator: Recalculating
schedule, headroom=768
2016-01-19 20:04:21,296 INFO [RMCommunicator Allocator]
org.apache.hadoop.mapreduce.v2.app.rm.RMContainerAllocator: Reduce slow
start threshold not met. completedMapsForReduceSlowstart 1
2016-01-19 20:04:21,297 INFO [RMCommunicator Allocator]
org.apache.hadoop.mapreduce.v2.app.rm.RMContainerAllocator: After
Scheduling: PendingReds:1 ScheduledMaps:0 ScheduledReds:0 AssignedMaps:0
AssignedReds:0 CompletedMaps:0 CompletedReds:0 ContAlloc:2 ContRel:0
HostLocal:1 RackLocal:0
2016-01-19 20:04:21,297 INFO [AsyncDispatcher event handler]
org.apache.hadoop.mapreduce.v2.app.job.impl.TaskAttemptImpl:
attempt_1453244277886_0001_m_000000_1 TaskAttempt Transitioned from RUNNING
to FAIL_CONTAINER_CLEANUP
2016-01-19 20:04:21,298 INFO [AsyncDispatcher event handler]
org.apache.hadoop.mapreduce.v2.app.job.impl.TaskAttemptImpl: Diagnostics
report from attempt_1453244277886_0001_m_000000_1: Exception from
container-launch: org.apache.hadoop.util.Shell$ExitCodeException:
org.apache.hadoop.util.Shell$ExitCodeException:
    at org.apache.hadoop.util.Shell.runCommand(Shell.java:505)
    at org.apache.hadoop.util.Shell.run(Shell.java:418)
    at
org.apache.hadoop.util.Shell$ShellCommandExecutor.execute(Shell.java:650)
    at
org.apache.hadoop.yarn.server.nodemanager.DefaultContainerExecutor.launchContainer(DefaultContainerExecutor.java:195)
    at
org.apache.hadoop.yarn.server.nodemanager.containermanager.launcher.ContainerLaunch.call(ContainerLaunch.java:300)
    at
org.apache.hadoop.yarn.server.nodemanager.containermanager.launcher.ContainerLaunch.call(ContainerLaunch.java:81)
    at java.util.concurrent.FutureTask.run(FutureTask.java:262)
    at
java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
    at
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
    at java.lang.Thread.run(Thread.java:745)


Container exited with a non-zero exit code 1

2016-01-19 20:04:21,300 INFO [ContainerLauncher #3]
org.apache.hadoop.mapreduce.v2.app.launcher.ContainerLauncherImpl:
Processing the event EventType: CONTAINER_REMOTE_CLEANUP for container
container_1453244277886_0001_01_000003 taskAttempt
attempt_1453244277886_0001_m_000000_1
2016-01-19 20:04:21,301 INFO [ContainerLauncher #3]
org.apache.hadoop.mapreduce.v2.app.launcher.ContainerLauncherImpl: KILLING
attempt_1453244277886_0001_m_000000_1
2016-01-19 20:04:21,307 INFO [AsyncDispatcher event handler]
org.apache.hadoop.mapreduce.v2.app.job.impl.TaskAttemptImpl:
attempt_1453244277886_0001_m_000000_1 TaskAttempt Transitioned from
FAIL_CONTAINER_CLEANUP to FAIL_TASK_CLEANUP
2016-01-19 20:04:21,308 INFO [CommitterEvent Processor #2]
org.apache.hadoop.mapreduce.v2.app.commit.CommitterEventHandler: Processing
the event EventType: TASK_ABORT
2016-01-19 20:04:21,312 WARN [CommitterEvent Processor #2]
org.apache.hadoop.mapreduce.lib.output.FileOutputCommitter: Could not
delete
hdfs://hdnode01:54310/user/hduser/output/wordcount/_temporary/1/_temporary/attempt_1453244277886_0001_m_000000_1
2016-01-19 20:04:21,312 INFO [AsyncDispatcher event handler]
org.apache.hadoop.mapreduce.v2.app.job.impl.TaskAttemptImpl:
attempt_1453244277886_0001_m_000000_1 TaskAttempt Transitioned from
FAIL_TASK_CLEANUP to FAILED
2016-01-19 20:04:21,313 INFO [AsyncDispatcher event handler]
org.apache.hadoop.yarn.util.RackResolver: Resolved localhost to
/default-rack
2016-01-19 20:04:21,313 INFO [Thread-50]
org.apache.hadoop.mapreduce.v2.app.rm.RMContainerRequestor: 2 failures on
node localhost
2016-01-19 20:04:21,313 INFO [AsyncDispatcher event handler]
org.apache.hadoop.mapreduce.v2.app.job.impl.TaskAttemptImpl:
attempt_1453244277886_0001_m_000000_2 TaskAttempt Transitioned from NEW to
UNASSIGNED
2016-01-19 20:04:21,314 INFO [Thread-50]
org.apache.hadoop.mapreduce.v2.app.rm.RMContainerAllocator: Added
attempt_1453244277886_0001_m_000000_2 to list of failed maps
2016-01-19 20:04:22,297 INFO [RMCommunicator Allocator]
org.apache.hadoop.mapreduce.v2.app.rm.RMContainerAllocator: Before
Scheduling: PendingReds:1 ScheduledMaps:1 ScheduledReds:0 AssignedMaps:0
AssignedReds:0 CompletedMaps:0 CompletedReds:0 ContAlloc:2 ContRel:0
HostLocal:1 RackLocal:0
2016-01-19 20:04:22,304 INFO [RMCommunicator Allocator]
org.apache.hadoop.mapreduce.v2.app.rm.RMContainerRequestor: getResources()
for application_1453244277886_0001: ask=1 release= 0 newContainers=0
finishedContainers=0 resourcelimit=<memory:768, vCores:-1> knownNMs=1
2016-01-19 20:04:22,305 INFO [RMCommunicator Allocator]
org.apache.hadoop.mapreduce.v2.app.rm.RMContainerAllocator: Recalculating
schedule, headroom=768
2016-01-19 20:04:22,305 INFO [RMCommunicator Allocator]
org.apache.hadoop.mapreduce.v2.app.rm.RMContainerAllocator: Reduce slow
start threshold not met. completedMapsForReduceSlowstart 1
2016-01-19 20:04:23,316 INFO [RMCommunicator Allocator]
org.apache.hadoop.mapreduce.v2.app.rm.RMContainerAllocator: Got allocated
containers 1
2016-01-19 20:04:23,316 INFO [RMCommunicator Allocator]
org.apache.hadoop.mapreduce.v2.app.rm.RMContainerAllocator: Assigning
container Container: [ContainerId: container_1453244277886_0001_01_000004,
NodeId: localhost:35711, NodeHttpAddress: localhost:8042, Resource:
<memory:512, vCores:1>, Priority: 5, Token: Token { kind: ContainerToken,
service: 127.0.0.1:35711 }, ] to fast fail map
2016-01-19 20:04:23,317 INFO [RMCommunicator Allocator]
org.apache.hadoop.mapreduce.v2.app.rm.RMContainerAllocator: Assigned from
earlierFailedMaps
2016-01-19 20:04:23,317 INFO [RMCommunicator Allocator]
org.apache.hadoop.mapreduce.v2.app.rm.RMContainerAllocator: Assigned
container container_1453244277886_0001_01_000004 to
attempt_1453244277886_0001_m_000000_2
2016-01-19 20:04:23,318 INFO [RMCommunicator Allocator]
org.apache.hadoop.mapreduce.v2.app.rm.RMContainerAllocator: Recalculating
schedule, headroom=768
2016-01-19 20:04:23,318 INFO [RMCommunicator Allocator]
org.apache.hadoop.mapreduce.v2.app.rm.RMContainerAllocator: Reduce slow
start threshold not met. completedMapsForReduceSlowstart 1
2016-01-19 20:04:23,318 INFO [RMCommunicator Allocator]
org.apache.hadoop.mapreduce.v2.app.rm.RMContainerAllocator: After
Scheduling: PendingReds:1 ScheduledMaps:0 ScheduledReds:0 AssignedMaps:1
AssignedReds:0 CompletedMaps:0 CompletedReds:0 ContAlloc:3 ContRel:0
HostLocal:1 RackLocal:0
2016-01-19 20:04:23,318 INFO [AsyncDispatcher event handler]
org.apache.hadoop.yarn.util.RackResolver: Resolved localhost to
/default-rack
2016-01-19 20:04:23,320 INFO [AsyncDispatcher event handler]
org.apache.hadoop.mapreduce.v2.app.job.impl.TaskAttemptImpl:
attempt_1453244277886_0001_m_000000_2 TaskAttempt Transitioned from
UNASSIGNED to ASSIGNED
2016-01-19 20:04:23,323 INFO [ContainerLauncher #4]
org.apache.hadoop.mapreduce.v2.app.launcher.ContainerLauncherImpl:
Processing the event EventType: CONTAINER_REMOTE_LAUNCH for container
container_1453244277886_0001_01_000004 taskAttempt
attempt_1453244277886_0001_m_000000_2
2016-01-19 20:04:23,323 INFO [ContainerLauncher #4]
org.apache.hadoop.mapreduce.v2.app.launcher.ContainerLauncherImpl:
Launching attempt_1453244277886_0001_m_000000_2
2016-01-19 20:04:23,334 INFO [ContainerLauncher #4]
org.apache.hadoop.mapreduce.v2.app.launcher.ContainerLauncherImpl: Shuffle
port returned by ContainerManager for attempt_1453244277886_0001_m_000000_2
: 13562
2016-01-19 20:04:23,335 INFO [AsyncDispatcher event handler]
org.apache.hadoop.mapreduce.v2.app.job.impl.TaskAttemptImpl: TaskAttempt:
[attempt_1453244277886_0001_m_000000_2] using containerId:
[container_1453244277886_0001_01_000004 on NM: [localhost:35711]
2016-01-19 20:04:23,335 INFO [AsyncDispatcher event handler]
org.apache.hadoop.mapreduce.v2.app.job.impl.TaskAttemptImpl:
attempt_1453244277886_0001_m_000000_2 TaskAttempt Transitioned from
ASSIGNED to RUNNING
2016-01-19 20:04:23,335 INFO [AsyncDispatcher event handler]
org.apache.hadoop.mapreduce.v2.app.speculate.DefaultSpeculator:
ATTEMPT_START task_1453244277886_0001_m_000000
2016-01-19 20:04:24,326 INFO [RMCommunicator Allocator]
org.apache.hadoop.mapreduce.v2.app.rm.RMContainerRequestor: getResources()
for application_1453244277886_0001: ask=1 release= 0 newContainers=0
finishedContainers=1 resourcelimit=<memory:768, vCores:-1> knownNMs=1
2016-01-19 20:04:24,327 INFO [RMCommunicator Allocator]
org.apache.hadoop.mapreduce.v2.app.rm.RMContainerAllocator: Received
completed container container_1453244277886_0001_01_000004
2016-01-19 20:04:24,327 INFO [RMCommunicator Allocator]
org.apache.hadoop.mapreduce.v2.app.rm.RMContainerAllocator: Recalculating
schedule, headroom=768
2016-01-19 20:04:24,327 INFO [RMCommunicator Allocator]
org.apache.hadoop.mapreduce.v2.app.rm.RMContainerAllocator: Reduce slow
start threshold not met. completedMapsForReduceSlowstart 1
2016-01-19 20:04:24,328 INFO [RMCommunicator Allocator]
org.apache.hadoop.mapreduce.v2.app.rm.RMContainerAllocator: After
Scheduling: PendingReds:1 ScheduledMaps:0 ScheduledReds:0 AssignedMaps:0
AssignedReds:0 CompletedMaps:0 CompletedReds:0 ContAlloc:3 ContRel:0
HostLocal:1 RackLocal:0
2016-01-19 20:04:24,328 INFO [AsyncDispatcher event handler]
org.apache.hadoop.mapreduce.v2.app.job.impl.TaskAttemptImpl:
attempt_1453244277886_0001_m_000000_2 TaskAttempt Transitioned from RUNNING
to FAIL_CONTAINER_CLEANUP
2016-01-19 20:04:24,328 INFO [AsyncDispatcher event handler]
org.apache.hadoop.mapreduce.v2.app.job.impl.TaskAttemptImpl: Diagnostics
report from attempt_1453244277886_0001_m_000000_2: Exception from
container-launch: org.apache.hadoop.util.Shell$ExitCodeException:
org.apache.hadoop.util.Shell$ExitCodeException:
    at org.apache.hadoop.util.Shell.runCommand(Shell.java:505)
    at org.apache.hadoop.util.Shell.run(Shell.java:418)
    at
org.apache.hadoop.util.Shell$ShellCommandExecutor.execute(Shell.java:650)
    at
org.apache.hadoop.yarn.server.nodemanager.DefaultContainerExecutor.launchContainer(DefaultContainerExecutor.java:195)
    at
org.apache.hadoop.yarn.server.nodemanager.containermanager.launcher.ContainerLaunch.call(ContainerLaunch.java:300)
    at
org.apache.hadoop.yarn.server.nodemanager.containermanager.launcher.ContainerLaunch.call(ContainerLaunch.java:81)
    at java.util.concurrent.FutureTask.run(FutureTask.java:262)
    at
java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
    at
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
    at java.lang.Thread.run(Thread.java:745)


Container exited with a non-zero exit code 1

2016-01-19 20:04:24,331 INFO [ContainerLauncher #5]
org.apache.hadoop.mapreduce.v2.app.launcher.ContainerLauncherImpl:
Processing the event EventType: CONTAINER_REMOTE_CLEANUP for container
container_1453244277886_0001_01_000004 taskAttempt
attempt_1453244277886_0001_m_000000_2
2016-01-19 20:04:24,331 INFO [ContainerLauncher #5]
org.apache.hadoop.mapreduce.v2.app.launcher.ContainerLauncherImpl: KILLING
attempt_1453244277886_0001_m_000000_2
2016-01-19 20:04:24,335 INFO [AsyncDispatcher event handler]
org.apache.hadoop.mapreduce.v2.app.job.impl.TaskAttemptImpl:
attempt_1453244277886_0001_m_000000_2 TaskAttempt Transitioned from
FAIL_CONTAINER_CLEANUP to FAIL_TASK_CLEANUP
2016-01-19 20:04:24,336 INFO [CommitterEvent Processor #3]
org.apache.hadoop.mapreduce.v2.app.commit.CommitterEventHandler: Processing
the event EventType: TASK_ABORT
2016-01-19 20:04:24,340 WARN [CommitterEvent Processor #3]
org.apache.hadoop.mapreduce.lib.output.FileOutputCommitter: Could not
delete
hdfs://hdnode01:54310/user/hduser/output/wordcount/_temporary/1/_temporary/attempt_1453244277886_0001_m_000000_2
2016-01-19 20:04:24,341 INFO [AsyncDispatcher event handler]
org.apache.hadoop.mapreduce.v2.app.job.impl.TaskAttemptImpl:
attempt_1453244277886_0001_m_000000_2 TaskAttempt Transitioned from
FAIL_TASK_CLEANUP to FAILED
2016-01-19 20:04:24,341 INFO [AsyncDispatcher event handler]
org.apache.hadoop.yarn.util.RackResolver: Resolved localhost to
/default-rack
2016-01-19 20:04:24,342 INFO [Thread-50]
org.apache.hadoop.mapreduce.v2.app.rm.RMContainerRequestor: 3 failures on
node localhost
2016-01-19 20:04:24,342 INFO [Thread-50]
org.apache.hadoop.mapreduce.v2.app.rm.RMContainerRequestor: Blacklisted
host localhost
2016-01-19 20:04:24,342 INFO [AsyncDispatcher event handler]
org.apache.hadoop.mapreduce.v2.app.job.impl.TaskAttemptImpl:
attempt_1453244277886_0001_m_000000_3 TaskAttempt Transitioned from NEW to
UNASSIGNED
2016-01-19 20:04:24,343 INFO [Thread-50]
org.apache.hadoop.mapreduce.v2.app.rm.RMContainerAllocator: Added
attempt_1453244277886_0001_m_000000_3 to list of failed maps
2016-01-19 20:04:25,328 INFO [RMCommunicator Allocator]
org.apache.hadoop.mapreduce.v2.app.rm.RMContainerAllocator: Before
Scheduling: PendingReds:1 ScheduledMaps:1 ScheduledReds:0 AssignedMaps:0
AssignedReds:0 CompletedMaps:0 CompletedReds:0 ContAlloc:3 ContRel:0
HostLocal:1 RackLocal:0
2016-01-19 20:04:25,336 INFO [RMCommunicator Allocator]
org.apache.hadoop.mapreduce.v2.app.rm.RMContainerRequestor: getResources()
for application_1453244277886_0001: ask=1 release= 0 newContainers=0
finishedContainers=0 resourcelimit=<memory:768, vCores:-1> knownNMs=1
2016-01-19 20:04:25,336 INFO [RMCommunicator Allocator]
org.apache.hadoop.mapreduce.v2.app.rm.RMContainerRequestor: Update the
blacklist for application_1453244277886_0001: blacklistAdditions=1
blacklistRemovals=0
2016-01-19 20:04:25,337 INFO [RMCommunicator Allocator]
org.apache.hadoop.mapreduce.v2.app.rm.RMContainerRequestor: Ignore
blacklisting set to true. Known: 1, Blacklisted: 1, 100%
2016-01-19 20:04:25,337 INFO [RMCommunicator Allocator]
org.apache.hadoop.mapreduce.v2.app.rm.RMContainerAllocator: Recalculating
schedule, headroom=768
2016-01-19 20:04:25,337 INFO [RMCommunicator Allocator]
org.apache.hadoop.mapreduce.v2.app.rm.RMContainerAllocator: Reduce slow
start threshold not met. completedMapsForReduceSlowstart 1
2016-01-19 20:04:26,342 INFO [RMCommunicator Allocator]
org.apache.hadoop.mapreduce.v2.app.rm.RMContainerRequestor: Update the
blacklist for application_1453244277886_0001: blacklistAdditions=0
blacklistRemovals=1
2016-01-19 20:04:26,342 INFO [RMCommunicator Allocator]
org.apache.hadoop.mapreduce.v2.app.rm.RMContainerAllocator: Recalculating
schedule, headroom=768
2016-01-19 20:04:26,342 INFO [RMCommunicator Allocator]
org.apache.hadoop.mapreduce.v2.app.rm.RMContainerAllocator: Reduce slow
start threshold not met. completedMapsForReduceSlowstart 1
2016-01-19 20:04:27,351 INFO [RMCommunicator Allocator]
org.apache.hadoop.mapreduce.v2.app.rm.RMContainerAllocator: Got allocated
containers 1
2016-01-19 20:04:27,352 INFO [RMCommunicator Allocator]
org.apache.hadoop.mapreduce.v2.app.rm.RMContainerAllocator: Assigning
container Container: [ContainerId: container_1453244277886_0001_01_000005,
NodeId: localhost:35711, NodeHttpAddress: localhost:8042, Resource:
<memory:512, vCores:1>, Priority: 5, Token: Token { kind: ContainerToken,
service: 127.0.0.1:35711 }, ] to fast fail map
2016-01-19 20:04:27,352 INFO [RMCommunicator Allocator]
org.apache.hadoop.mapreduce.v2.app.rm.RMContainerAllocator: Assigned from
earlierFailedMaps
2016-01-19 20:04:27,352 INFO [RMCommunicator Allocator]
org.apache.hadoop.mapreduce.v2.app.rm.RMContainerAllocator: Assigned
container container_1453244277886_0001_01_000005 to
attempt_1453244277886_0001_m_000000_3
2016-01-19 20:04:27,353 INFO [RMCommunicator Allocator]
org.apache.hadoop.mapreduce.v2.app.rm.RMContainerAllocator: Recalculating
schedule, headroom=768
2016-01-19 20:04:27,353 INFO [RMCommunicator Allocator]
org.apache.hadoop.mapreduce.v2.app.rm.RMContainerAllocator: Reduce slow
start threshold not met. completedMapsForReduceSlowstart 1
2016-01-19 20:04:27,353 INFO [AsyncDispatcher event handler]
org.apache.hadoop.yarn.util.RackResolver: Resolved localhost to
/default-rack
2016-01-19 20:04:27,353 INFO [RMCommunicator Allocator]
org.apache.hadoop.mapreduce.v2.app.rm.RMContainerAllocator: After
Scheduling: PendingReds:1 ScheduledMaps:0 ScheduledReds:0 AssignedMaps:1
AssignedReds:0 CompletedMaps:0 CompletedReds:0 ContAlloc:4 ContRel:0
HostLocal:1 RackLocal:0
2016-01-19 20:04:27,354 INFO [AsyncDispatcher event handler]
org.apache.hadoop.mapreduce.v2.app.job.impl.TaskAttemptImpl:
attempt_1453244277886_0001_m_000000_3 TaskAttempt Transitioned from
UNASSIGNED to ASSIGNED
2016-01-19 20:04:27,355 INFO [ContainerLauncher #6]
org.apache.hadoop.mapreduce.v2.app.launcher.ContainerLauncherImpl:
Processing the event EventType: CONTAINER_REMOTE_LAUNCH for container
container_1453244277886_0001_01_000005 taskAttempt
attempt_1453244277886_0001_m_000000_3
2016-01-19 20:04:27,355 INFO [ContainerLauncher #6]
org.apache.hadoop.mapreduce.v2.app.launcher.ContainerLauncherImpl:
Launching attempt_1453244277886_0001_m_000000_3
2016-01-19 20:04:27,365 INFO [ContainerLauncher #6]
org.apache.hadoop.mapreduce.v2.app.launcher.ContainerLauncherImpl: Shuffle
port returned by ContainerManager for attempt_1453244277886_0001_m_000000_3
: 13562
2016-01-19 20:04:27,365 INFO [AsyncDispatcher event handler]
org.apache.hadoop.mapreduce.v2.app.job.impl.TaskAttemptImpl: TaskAttempt:
[attempt_1453244277886_0001_m_000000_3] using containerId:
[container_1453244277886_0001_01_000005 on NM: [localhost:35711]
2016-01-19 20:04:27,366 INFO [AsyncDispatcher event handler]
org.apache.hadoop.mapreduce.v2.app.job.impl.TaskAttemptImpl:
attempt_1453244277886_0001_m_000000_3 TaskAttempt Transitioned from
ASSIGNED to RUNNING
2016-01-19 20:04:27,366 INFO [AsyncDispatcher event handler]
org.apache.hadoop.mapreduce.v2.app.speculate.DefaultSpeculator:
ATTEMPT_START task_1453244277886_0001_m_000000
2016-01-19 20:04:28,361 INFO [RMCommunicator Allocator]
org.apache.hadoop.mapreduce.v2.app.rm.RMContainerRequestor: getResources()
for application_1453244277886_0001: ask=1 release= 0 newContainers=0
finishedContainers=1 resourcelimit=<memory:768, vCores:-1> knownNMs=1
2016-01-19 20:04:28,361 INFO [RMCommunicator Allocator]
org.apache.hadoop.mapreduce.v2.app.rm.RMContainerAllocator: Received
completed container container_1453244277886_0001_01_000005
2016-01-19 20:04:28,361 INFO [RMCommunicator Allocator]
org.apache.hadoop.mapreduce.v2.app.rm.RMContainerAllocator: Recalculating
schedule, headroom=768
2016-01-19 20:04:28,361 INFO [RMCommunicator Allocator]
org.apache.hadoop.mapreduce.v2.app.rm.RMContainerAllocator: Reduce slow
start threshold not met. completedMapsForReduceSlowstart 1
2016-01-19 20:04:28,362 INFO [RMCommunicator Allocator]
org.apache.hadoop.mapreduce.v2.app.rm.RMContainerAllocator: After
Scheduling: PendingReds:1 ScheduledMaps:0 ScheduledReds:0 AssignedMaps:0
AssignedReds:0 CompletedMaps:0 CompletedReds:0 ContAlloc:4 ContRel:0
HostLocal:1 RackLocal:0
2016-01-19 20:04:28,362 INFO [AsyncDispatcher event handler]
org.apache.hadoop.mapreduce.v2.app.job.impl.TaskAttemptImpl:
attempt_1453244277886_0001_m_000000_3 TaskAttempt Transitioned from RUNNING
to FAIL_CONTAINER_CLEANUP
2016-01-19 20:04:28,362 INFO [AsyncDispatcher event handler]
org.apache.hadoop.mapreduce.v2.app.job.impl.TaskAttemptImpl: Diagnostics
report from attempt_1453244277886_0001_m_000000_3: Exception from
container-launch: org.apache.hadoop.util.Shell$ExitCodeException:
org.apache.hadoop.util.Shell$ExitCodeException:
    at org.apache.hadoop.util.Shell.runCommand(Shell.java:505)
    at org.apache.hadoop.util.Shell.run(Shell.java:418)
    at
org.apache.hadoop.util.Shell$ShellCommandExecutor.execute(Shell.java:650)
    at
org.apache.hadoop.yarn.server.nodemanager.DefaultContainerExecutor.launchContainer(DefaultContainerExecutor.java:195)
    at
org.apache.hadoop.yarn.server.nodemanager.containermanager.launcher.ContainerLaunch.call(ContainerLaunch.java:300)
    at
org.apache.hadoop.yarn.server.nodemanager.containermanager.launcher.ContainerLaunch.call(ContainerLaunch.java:81)
    at java.util.concurrent.FutureTask.run(FutureTask.java:262)
    at
java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
    at
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
    at java.lang.Thread.run(Thread.java:745)


Container exited with a non-zero exit code 1

2016-01-19 20:04:28,364 INFO [ContainerLauncher #7]
org.apache.hadoop.mapreduce.v2.app.launcher.ContainerLauncherImpl:
Processing the event EventType: CONTAINER_REMOTE_CLEANUP for container
container_1453244277886_0001_01_000005 taskAttempt
attempt_1453244277886_0001_m_000000_3
2016-01-19 20:04:28,365 INFO [ContainerLauncher #7]
org.apache.hadoop.mapreduce.v2.app.launcher.ContainerLauncherImpl: KILLING
attempt_1453244277886_0001_m_000000_3
2016-01-19 20:04:28,373 INFO [AsyncDispatcher event handler]
org.apache.hadoop.mapreduce.v2.app.job.impl.TaskAttemptImpl:
attempt_1453244277886_0001_m_000000_3 TaskAttempt Transitioned from
FAIL_CONTAINER_CLEANUP to FAIL_TASK_CLEANUP
2016-01-19 20:04:28,374 INFO [CommitterEvent Processor #4]
org.apache.hadoop.mapreduce.v2.app.commit.CommitterEventHandler: Processing
the event EventType: TASK_ABORT
2016-01-19 20:04:28,377 WARN [CommitterEvent Processor #4]
org.apache.hadoop.mapreduce.lib.output.FileOutputCommitter: Could not
delete
hdfs://hdnode01:54310/user/hduser/output/wordcount/_temporary/1/_temporary/attempt_1453244277886_0001_m_000000_3
2016-01-19 20:04:28,378 INFO [AsyncDispatcher event handler]
org.apache.hadoop.mapreduce.v2.app.job.impl.TaskAttemptImpl:
attempt_1453244277886_0001_m_000000_3 TaskAttempt Transitioned from
FAIL_TASK_CLEANUP to FAILED
2016-01-19 20:04:28,380 INFO [AsyncDispatcher event handler]
org.apache.hadoop.mapreduce.v2.app.job.impl.TaskImpl:
task_1453244277886_0001_m_000000 Task Transitioned from RUNNING to FAILED
2016-01-19 20:04:28,381 INFO [AsyncDispatcher event handler]
org.apache.hadoop.mapreduce.v2.app.job.impl.JobImpl: Num completed Tasks: 1
2016-01-19 20:04:28,381 INFO [AsyncDispatcher event handler]
org.apache.hadoop.mapreduce.v2.app.job.impl.JobImpl: Job failed as tasks
failed. failedMaps:1 failedReduces:0
2016-01-19 20:04:28,382 INFO [AsyncDispatcher event handler]
org.apache.hadoop.mapreduce.v2.app.job.impl.JobImpl:
job_1453244277886_0001Job Transitioned from RUNNING to FAIL_WAIT
2016-01-19 20:04:28,383 INFO [AsyncDispatcher event handler]
org.apache.hadoop.mapreduce.v2.app.job.impl.TaskImpl:
task_1453244277886_0001_r_000000 Task Transitioned from SCHEDULED to
KILL_WAIT
2016-01-19 20:04:28,383 INFO [AsyncDispatcher event handler]
org.apache.hadoop.mapreduce.v2.app.job.impl.TaskAttemptImpl:
attempt_1453244277886_0001_r_000000_0 TaskAttempt Transitioned from
UNASSIGNED to KILLED
2016-01-19 20:04:28,383 INFO [Thread-50]
org.apache.hadoop.mapreduce.v2.app.rm.RMContainerAllocator: Processing the
event EventType: CONTAINER_DEALLOCATE
2016-01-19 20:04:28,383 ERROR [Thread-50]
org.apache.hadoop.mapreduce.v2.app.rm.RMContainerAllocator: Could not
deallocate container for task attemptId
attempt_1453244277886_0001_r_000000_0
2016-01-19 20:04:28,383 INFO [AsyncDispatcher event handler]
org.apache.hadoop.mapreduce.v2.app.job.impl.TaskImpl:
task_1453244277886_0001_r_000000 Task Transitioned from KILL_WAIT to KILLED
2016-01-19 20:04:28,384 INFO [AsyncDispatcher event handler]
org.apache.hadoop.mapreduce.v2.app.job.impl.JobImpl:
job_1453244277886_0001Job Transitioned from FAIL_WAIT to FAIL_ABORT
2016-01-19 20:04:28,390 INFO [CommitterEvent Processor #0]
org.apache.hadoop.mapreduce.v2.app.commit.CommitterEventHandler: Processing
the event EventType: JOB_ABORT
2016-01-19 20:04:28,432 INFO [AsyncDispatcher event handler]
org.apache.hadoop.mapreduce.v2.app.job.impl.JobImpl:
job_1453244277886_0001Job Transitioned from FAIL_ABORT to FAILED
2016-01-19 20:04:28,433 INFO [Thread-61]
org.apache.hadoop.mapreduce.v2.app.MRAppMaster: We are finishing cleanly so
this is the last retry
2016-01-19 20:04:28,433 INFO [Thread-61]
org.apache.hadoop.mapreduce.v2.app.MRAppMaster: Notify RMCommunicator
isAMLastRetry: true
2016-01-19 20:04:28,433 INFO [Thread-61]
org.apache.hadoop.mapreduce.v2.app.rm.RMContainerAllocator: RMCommunicator
notified that shouldUnregistered is: true
2016-01-19 20:04:28,433 INFO [Thread-61]
org.apache.hadoop.mapreduce.v2.app.MRAppMaster: Notify JHEH isAMLastRetry:
true
2016-01-19 20:04:28,433 INFO [Thread-61]
org.apache.hadoop.mapreduce.jobhistory.JobHistoryEventHandler:
JobHistoryEventHandler notified that forceJobCompletion is true
2016-01-19 20:04:28,434 INFO [Thread-61]
org.apache.hadoop.mapreduce.v2.app.MRAppMaster: Calling stop for all the
services
2016-01-19 20:04:28,435 INFO [Thread-61]
org.apache.hadoop.mapreduce.jobhistory.JobHistoryEventHandler: Stopping
JobHistoryEventHandler. Size of the outstanding queue size is 0
2016-01-19 20:04:29,362 INFO [RMCommunicator Allocator]
org.apache.hadoop.mapreduce.v2.app.rm.RMContainerAllocator: Before
Scheduling: PendingReds:1 ScheduledMaps:0 ScheduledReds:0 AssignedMaps:0
AssignedReds:0 CompletedMaps:1 CompletedReds:0 ContAlloc:4 ContRel:0
HostLocal:1 RackLocal:0
2016-01-19 20:04:29,362 INFO [eventHandlingThread]
org.apache.hadoop.mapreduce.jobhistory.JobHistoryEventHandler: Copying
hdfs://hdnode01:54310/tmp/hadoop-yarn/staging/hduser/.staging/job_1453244277886_0001/job_1453244277886_0001_1.jhist
to
hdfs://hdnode01:54310/tmp/hadoop-yarn/staging/history/done_intermediate/hduser/job_1453244277886_0001-1453244648033-hduser-word+count-1453244668381-0-0-FAILED-default-1453244654166.jhist_tmp
2016-01-19 20:04:29,366 INFO [RMCommunicator Allocator]
org.apache.hadoop.mapreduce.v2.app.rm.RMContainerAllocator: Recalculating
schedule, headroom=768
2016-01-19 20:04:29,366 INFO [RMCommunicator Allocator]
org.apache.hadoop.mapreduce.v2.app.rm.RMContainerAllocator: Reduce slow
start threshold reached. Scheduling reduces.
2016-01-19 20:04:29,366 INFO [RMCommunicator Allocator]
org.apache.hadoop.mapreduce.v2.app.rm.RMContainerAllocator: All maps
assigned. Ramping up all remaining reduces:1
2016-01-19 20:04:29,366 INFO [RMCommunicator Allocator]
org.apache.hadoop.mapreduce.v2.app.rm.RMContainerAllocator: After
Scheduling: PendingReds:0 ScheduledMaps:0 ScheduledReds:1 AssignedMaps:0
AssignedReds:0 CompletedMaps:1 CompletedReds:0 ContAlloc:4 ContRel:0
HostLocal:1 RackLocal:0
2016-01-19 20:04:29,544 INFO [eventHandlingThread]
org.apache.hadoop.mapreduce.jobhistory.JobHistoryEventHandler: Copied to
done location:
hdfs://hdnode01:54310/tmp/hadoop-yarn/staging/history/done_intermediate/hduser/job_1453244277886_0001-1453244648033-hduser-word+count-1453244668381-0-0-FAILED-default-1453244654166.jhist_tmp
2016-01-19 20:04:29,598 INFO [eventHandlingThread]
org.apache.hadoop.mapreduce.jobhistory.JobHistoryEventHandler: Copying
hdfs://hdnode01:54310/tmp/hadoop-yarn/staging/hduser/.staging/job_1453244277886_0001/job_1453244277886_0001_1_conf.xml
to
hdfs://hdnode01:54310/tmp/hadoop-yarn/staging/history/done_intermediate/hduser/job_1453244277886_0001_conf.xml_tmp
2016-01-19 20:04:29,801 INFO [eventHandlingThread]
org.apache.hadoop.mapreduce.jobhistory.JobHistoryEventHandler: Copied to
done location:
hdfs://hdnode01:54310/tmp/hadoop-yarn/staging/history/done_intermediate/hduser/job_1453244277886_0001_conf.xml_tmp
2016-01-19 20:04:29,907 INFO [eventHandlingThread]
org.apache.hadoop.mapreduce.jobhistory.JobHistoryEventHandler: Moved tmp to
done:
hdfs://hdnode01:54310/tmp/hadoop-yarn/staging/history/done_intermediate/hduser/job_1453244277886_0001.summary_tmp
to
hdfs://hdnode01:54310/tmp/hadoop-yarn/staging/history/done_intermediate/hduser/job_1453244277886_0001.summary
2016-01-19 20:04:30,008 INFO [eventHandlingThread]
org.apache.hadoop.mapreduce.jobhistory.JobHistoryEventHandler: Moved tmp to
done:
hdfs://hdnode01:54310/tmp/hadoop-yarn/staging/history/done_intermediate/hduser/job_1453244277886_0001_conf.xml_tmp
to
hdfs://hdnode01:54310/tmp/hadoop-yarn/staging/history/done_intermediate/hduser/job_1453244277886_0001_conf.xml
2016-01-19 20:04:30,070 INFO [eventHandlingThread]
org.apache.hadoop.mapreduce.jobhistory.JobHistoryEventHandler: Moved tmp to
done:
hdfs://hdnode01:54310/tmp/hadoop-yarn/staging/history/done_intermediate/hduser/job_1453244277886_0001-1453244648033-hduser-word+count-1453244668381-0-0-FAILED-default-1453244654166.jhist_tmp
to
hdfs://hdnode01:54310/tmp/hadoop-yarn/staging/history/done_intermediate/hduser/job_1453244277886_0001-1453244648033-hduser-word+count-1453244668381-0-0-FAILED-default-1453244654166.jhist
2016-01-19 20:04:30,071 INFO [Thread-61]
org.apache.hadoop.mapreduce.jobhistory.JobHistoryEventHandler: Stopped
JobHistoryEventHandler. super.stop()
2016-01-19 20:04:30,078 INFO [Thread-61]
org.apache.hadoop.mapreduce.v2.app.rm.RMContainerAllocator: Setting job
diagnostics to Task failed task_1453244277886_0001_m_000000
Job failed as tasks failed. failedMaps:1 failedReduces:0

2016-01-19 20:04:30,080 INFO [Thread-61]
org.apache.hadoop.mapreduce.v2.app.rm.RMContainerAllocator: History url is
http://localhost:19888/jobhistory/job/job_1453244277886_0001
2016-01-19 20:04:30,094 INFO [Thread-61]
org.apache.hadoop.mapreduce.v2.app.rm.RMContainerAllocator: Waiting for
application to be successfully unregistered.
2016-01-19 20:04:31,099 INFO [Thread-61]
org.apache.hadoop.mapreduce.v2.app.rm.RMContainerAllocator: Final Stats:
PendingReds:0 ScheduledMaps:0 ScheduledReds:1 AssignedMaps:0 AssignedReds:0
CompletedMaps:1 CompletedReds:0 ContAlloc:4 ContRel:0 HostLocal:1
RackLocal:0
2016-01-19 20:04:31,104 INFO [Thread-61]
org.apache.hadoop.mapreduce.v2.app.MRAppMaster: Deleting staging directory
hdfs://hdnode01:54310
/tmp/hadoop-yarn/staging/hduser/.staging/job_1453244277886_0001
2016-01-19 20:04:31,133 INFO [Thread-61] org.apache.hadoop.ipc.Server:
Stopping server on 45584
2016-01-19 20:04:31,135 INFO [IPC Server listener on 45584]
org.apache.hadoop.ipc.Server: Stopping IPC Server listener on 45584
2016-01-19 20:04:31,135 INFO [IPC Server Responder]
org.apache.hadoop.ipc.Server: Stopping IPC Server Responder
2016-01-19 20:04:31,136 INFO [TaskHeartbeatHandler PingChecker]
org.apache.hadoop.mapreduce.v2.app.TaskHeartbeatHandler:
TaskHeartbeatHandler thread interrupted


jps results; I believe everything is OK, right? (a quick cross-check sketch follows the list):
21267 DataNode
21609 ResourceManager
21974 JobHistoryServer
21735 NodeManager
24546 Jps
16532 org.eclipse.equinox.launcher_1.3.0.v20140415-2008.jar
21121 NameNode
22098 QuorumPeerMain
21456 SecondaryNameNode
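
Just to double-check that nothing is silently down, here is a quick cross-check
(a rough sketch, not from any Hadoop tool; it assumes jps is on hduser's PATH and
that a single-node YARN setup should run the six daemons listed below):

    # Compare running JVMs (via jps) against the daemons expected on a
    # single-node Hadoop/YARN cluster that also runs a JobHistoryServer.
    expected="NameNode DataNode SecondaryNameNode ResourceManager NodeManager JobHistoryServer"
    running=$(jps | awk '{print $2}')
    for d in $expected; do
        if echo "$running" | grep -qx "$d"; then
            echo "OK      $d"
        else
            echo "MISSING $d"
        fi
    done

All six daemons show up in the jps listing above, so the processes themselves look fine.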

Re: Can't run hadoop examples with YARN Single node cluster

Posted by José Luis Larroque <la...@gmail.com>.
Hi guys, thanks for your answers.

Wordcount logs:

16/01/19 21:58:32 INFO client.RMProxy: Connecting to ResourceManager at
hdnode01/192.168.0.10:8050
SLF4J: Class path contains multiple SLF4J bindings.
SLF4J: Found binding in
[jar:file:/usr/local/hadoop/share/hadoop/common/lib/slf4j-log4j12-1.7.5.jar!/org/slf4j/impl/StaticLoggerBinder.class]
SLF4J: Found binding in
[jar:file:/usr/local/hadoop/share/hadoop/yarn/lib/giraph-examples-1.1.0-for-hadoop-2.4.0-jar-with-dependencies.jar!/org/slf4j/impl/StaticLoggerBinder.class]
SLF4J: Found binding in
[jar:file:/usr/local/hadoop/share/hadoop/mapreduce/giraph-examples-1.1.0-for-hadoop-2.4.0-jar-with-dependencies.jar!/org/slf4j/impl/StaticLoggerBinder.class]
SLF4J: See http://www.slf4j.org/codes.html#multiple_bindings for an
explanation.
SLF4J: Actual binding is of type [org.slf4j.impl.Log4jLoggerFactory]
16/01/19 21:58:32 WARN util.NativeCodeLoader: Unable to load native-hadoop
library for your platform... using builtin-java classes where applicable
hduser@jose-ubuntu:/usr/local/hadoop$ cat
/home/hduser/Desktop/Tesina/casos_de_prueba/resultados/resultado_cluster_modo_yarn_wordcount


Container: container_1453244277886_0001_01_000002 on localhost_35711
======================================================================
LogType: stderr
LogLength: 45
Log Contents:
Error: Could not find or load main class 256

LogType: stdout
LogLength: 0
Log Contents:



Container: container_1453244277886_0001_01_000003 on localhost_35711
======================================================================
LogType: stderr
LogLength: 45
Log Contents:
Error: Could not find or load main class 256

LogType: stdout
LogLength: 0
Log Contents:



Container: container_1453244277886_0001_01_000004 on localhost_35711
======================================================================
LogType: stderr
LogLength: 45
Log Contents:
Error: Could not find or load main class 256

LogType: stdout
LogLength: 0
Log Contents:



Container: container_1453244277886_0001_01_000005 on localhost_35711
======================================================================
LogType: stderr
LogLength: 45
Log Contents:
Error: Could not find or load main class 256

LogType: stdout
LogLength: 0
Log Contents:



Container: container_1453244277886_0001_01_000001 on localhost_35711
======================================================================
LogType: stderr
LogLength: 929
Log Contents:
SLF4J: Class path contains multiple SLF4J bindings.
SLF4J: Found binding in
[jar:file:/usr/local/hadoop/share/hadoop/common/lib/slf4j-log4j12-1.7.5.jar!/org/slf4j/impl/StaticLoggerBinder.class]
SLF4J: Found binding in
[jar:file:/usr/local/hadoop/share/hadoop/yarn/lib/giraph-examples-1.1.0-for-hadoop-2.4.0-jar-with-dependencies.jar!/org/slf4j/impl/StaticLoggerBinder.class]
SLF4J: Found binding in
[jar:file:/usr/local/hadoop/share/hadoop/mapreduce/giraph-examples-1.1.0-for-hadoop-2.4.0-jar-with-dependencies.jar!/org/slf4j/impl/StaticLoggerBinder.class]
SLF4J: See http://www.slf4j.org/codes.html#multiple_bindings for an
explanation.
SLF4J: Actual binding is of type [org.slf4j.impl.Log4jLoggerFactory]
log4j:WARN No appenders could be found for logger
(org.apache.hadoop.ipc.Server).
log4j:WARN Please initialize the log4j system properly.
log4j:WARN See http://logging.apache.org/log4j/1.2/faq.html#noconfig for
more info.

LogType: stdout
LogLength: 0
Log Contents:

LogType: syslog
LogLength: 56780
Log Contents:
2016-01-19 20:04:11,329 INFO [main]
org.apache.hadoop.mapreduce.v2.app.MRAppMaster: Created MRAppMaster for
application appattempt_1453244277886_0001_000001
2016-01-19 20:04:11,657 WARN [main] org.apache.hadoop.conf.Configuration:
job.xml:an attempt to override final parameter:
mapreduce.job.end-notification.max.retry.interval;  Ignoring.
2016-01-19 20:04:11,674 WARN [main] org.apache.hadoop.conf.Configuration:
job.xml:an attempt to override final parameter:
mapreduce.job.end-notification.max.attempts;  Ignoring.
2016-01-19 20:04:11,765 WARN [main]
org.apache.hadoop.util.NativeCodeLoader: Unable to load native-hadoop
library for your platform... using builtin-java classes where applicable
2016-01-19 20:04:11,776 INFO [main]
org.apache.hadoop.mapreduce.v2.app.MRAppMaster: Executing with tokens:
2016-01-19 20:04:11,776 INFO [main]
org.apache.hadoop.mapreduce.v2.app.MRAppMaster: Kind: YARN_AM_RM_TOKEN,
Service: , Ident:
(org.apache.hadoop.yarn.security.AMRMTokenIdentifier@73e8f4b9)
2016-01-19 20:04:11,801 INFO [main]
org.apache.hadoop.mapreduce.v2.app.MRAppMaster: The specific max attempts:
2 for application: 1. Attempt num: 1 is last retry: false
2016-01-19 20:04:11,806 INFO [main]
org.apache.hadoop.mapreduce.v2.app.MRAppMaster: Using mapred
newApiCommitter.
2016-01-19 20:04:11,934 WARN [main] org.apache.hadoop.conf.Configuration:
job.xml:an attempt to override final parameter: dfs.datanode.data.dir;
Ignoring.
2016-01-19 20:04:11,939 WARN [main] org.apache.hadoop.conf.Configuration:
job.xml:an attempt to override final parameter:
mapreduce.job.end-notification.max.retry.interval;  Ignoring.
2016-01-19 20:04:11,948 WARN [main] org.apache.hadoop.conf.Configuration:
job.xml:an attempt to override final parameter: dfs.namenode.name.dir;
Ignoring.
2016-01-19 20:04:11,953 WARN [main] org.apache.hadoop.conf.Configuration:
job.xml:an attempt to override final parameter:
mapreduce.job.end-notification.max.attempts;  Ignoring.
2016-01-19 20:04:12,464 INFO [main]
org.apache.hadoop.mapreduce.v2.app.MRAppMaster: OutputCommitter set in
config null
2016-01-19 20:04:12,526 INFO [main]
org.apache.hadoop.mapreduce.v2.app.MRAppMaster: OutputCommitter is
org.apache.hadoop.mapreduce.lib.output.FileOutputCommitter
2016-01-19 20:04:12,548 INFO [main]
org.apache.hadoop.yarn.event.AsyncDispatcher: Registering class
org.apache.hadoop.mapreduce.jobhistory.EventType for class
org.apache.hadoop.mapreduce.jobhistory.JobHistoryEventHandler
2016-01-19 20:04:12,549 INFO [main]
org.apache.hadoop.yarn.event.AsyncDispatcher: Registering class
org.apache.hadoop.mapreduce.v2.app.job.event.JobEventType for class
org.apache.hadoop.mapreduce.v2.app.MRAppMaster$JobEventDispatcher
2016-01-19 20:04:12,550 INFO [main]
org.apache.hadoop.yarn.event.AsyncDispatcher: Registering class
org.apache.hadoop.mapreduce.v2.app.job.event.TaskEventType for class
org.apache.hadoop.mapreduce.v2.app.MRAppMaster$TaskEventDispatcher
2016-01-19 20:04:12,551 INFO [main]
org.apache.hadoop.yarn.event.AsyncDispatcher: Registering class
org.apache.hadoop.mapreduce.v2.app.job.event.TaskAttemptEventType for class
org.apache.hadoop.mapreduce.v2.app.MRAppMaster$TaskAttemptEventDispatcher
2016-01-19 20:04:12,552 INFO [main]
org.apache.hadoop.yarn.event.AsyncDispatcher: Registering class
org.apache.hadoop.mapreduce.v2.app.commit.CommitterEventType for class
org.apache.hadoop.mapreduce.v2.app.commit.CommitterEventHandler
2016-01-19 20:04:12,557 INFO [main]
org.apache.hadoop.yarn.event.AsyncDispatcher: Registering class
org.apache.hadoop.mapreduce.v2.app.speculate.Speculator$EventType for class
org.apache.hadoop.mapreduce.v2.app.MRAppMaster$SpeculatorEventDispatcher
2016-01-19 20:04:12,558 INFO [main]
org.apache.hadoop.yarn.event.AsyncDispatcher: Registering class
org.apache.hadoop.mapreduce.v2.app.rm.ContainerAllocator$EventType for
class
org.apache.hadoop.mapreduce.v2.app.MRAppMaster$ContainerAllocatorRouter
2016-01-19 20:04:12,559 INFO [main]
org.apache.hadoop.yarn.event.AsyncDispatcher: Registering class
org.apache.hadoop.mapreduce.v2.app.launcher.ContainerLauncher$EventType for
class org.apache.hadoop.mapreduce.v2.app.MRAppMaster$ContainerLauncherRouter
2016-01-19 20:04:12,615 INFO [main]
org.apache.hadoop.mapreduce.jobhistory.JobHistoryEventHandler: Perms after
creating 488, Expected: 504
2016-01-19 20:04:12,615 INFO [main]
org.apache.hadoop.mapreduce.jobhistory.JobHistoryEventHandler: Explicitly
setting permissions to : 504, rwxrwx---
2016-01-19 20:04:12,731 INFO [main]
org.apache.hadoop.yarn.event.AsyncDispatcher: Registering class
org.apache.hadoop.mapreduce.v2.app.job.event.JobFinishEvent$Type for class
org.apache.hadoop.mapreduce.v2.app.MRAppMaster$JobFinishEventHandler
2016-01-19 20:04:12,956 INFO [main]
org.apache.hadoop.metrics2.impl.MetricsConfig: loaded properties from
hadoop-metrics2.properties
2016-01-19 20:04:13,018 INFO [main]
org.apache.hadoop.metrics2.impl.MetricsSystemImpl: Scheduled snapshot
period at 10 second(s).
2016-01-19 20:04:13,018 INFO [main]
org.apache.hadoop.metrics2.impl.MetricsSystemImpl: MRAppMaster metrics
system started
2016-01-19 20:04:13,026 INFO [main]
org.apache.hadoop.mapreduce.v2.app.job.impl.JobImpl: Adding job token for
job_1453244277886_0001 to jobTokenSecretManager
2016-01-19 20:04:13,139 INFO [main]
org.apache.hadoop.mapreduce.v2.app.job.impl.JobImpl: Not uberizing
job_1453244277886_0001 because: not enabled;
2016-01-19 20:04:13,154 INFO [main]
org.apache.hadoop.mapreduce.v2.app.job.impl.JobImpl: Input size for job
job_1453244277886_0001 = 343691. Number of splits = 1
2016-01-19 20:04:13,156 INFO [main]
org.apache.hadoop.mapreduce.v2.app.job.impl.JobImpl: Number of reduces for
job job_1453244277886_0001 = 1
2016-01-19 20:04:13,156 INFO [main]
org.apache.hadoop.mapreduce.v2.app.job.impl.JobImpl:
job_1453244277886_0001Job Transitioned from NEW to INITED
2016-01-19 20:04:13,157 INFO [main]
org.apache.hadoop.mapreduce.v2.app.MRAppMaster: MRAppMaster launching
normal, non-uberized, multi-container job job_1453244277886_0001.
2016-01-19 20:04:13,186 INFO [main] org.apache.hadoop.ipc.CallQueueManager:
Using callQueue class java.util.concurrent.LinkedBlockingQueue
2016-01-19 20:04:13,195 INFO [Socket Reader #1 for port 56461]
org.apache.hadoop.ipc.Server: Starting Socket Reader #1 for port 56461
2016-01-19 20:04:13,237 INFO [main]
org.apache.hadoop.yarn.factories.impl.pb.RpcServerFactoryPBImpl: Adding
protocol org.apache.hadoop.mapreduce.v2.api.MRClientProtocolPB to the server
2016-01-19 20:04:13,238 INFO [IPC Server Responder]
org.apache.hadoop.ipc.Server: IPC Server Responder: starting
2016-01-19 20:04:13,238 INFO [IPC Server listener on 56461]
org.apache.hadoop.ipc.Server: IPC Server listener on 56461: starting
2016-01-19 20:04:13,239 INFO [main]
org.apache.hadoop.mapreduce.v2.app.client.MRClientService: Instantiated
MRClientService at jose-ubuntu/127.0.0.1:56461
2016-01-19 20:04:13,300 INFO [main] org.mortbay.log: Logging to
org.slf4j.impl.Log4jLoggerAdapter(org.mortbay.log) via
org.mortbay.log.Slf4jLog
2016-01-19 20:04:13,304 INFO [main] org.apache.hadoop.http.HttpRequestLog:
Http request log for http.requests.mapreduce is not defined
2016-01-19 20:04:13,315 INFO [main] org.apache.hadoop.http.HttpServer2:
Added global filter 'safety'
(class=org.apache.hadoop.http.HttpServer2$QuotingInputFilter)
2016-01-19 20:04:13,320 INFO [main] org.apache.hadoop.http.HttpServer2:
Added filter AM_PROXY_FILTER
(class=org.apache.hadoop.yarn.server.webproxy.amfilter.AmIpFilter) to
context mapreduce
2016-01-19 20:04:13,321 INFO [main] org.apache.hadoop.http.HttpServer2:
Added filter AM_PROXY_FILTER
(class=org.apache.hadoop.yarn.server.webproxy.amfilter.AmIpFilter) to
context static
2016-01-19 20:04:13,324 INFO [main] org.apache.hadoop.http.HttpServer2:
adding path spec: /mapreduce/*
2016-01-19 20:04:13,324 INFO [main] org.apache.hadoop.http.HttpServer2:
adding path spec: /ws/*
2016-01-19 20:04:13,335 INFO [main] org.apache.hadoop.http.HttpServer2:
Jetty bound to port 44070
2016-01-19 20:04:13,335 INFO [main] org.mortbay.log: jetty-6.1.26
2016-01-19 20:04:13,370 INFO [main] org.mortbay.log: Extract
jar:file:/usr/local/hadoop/share/hadoop/yarn/hadoop-yarn-common-2.4.0.jar!/webapps/mapreduce
to /tmp/Jetty_0_0_0_0_44070_mapreduce____rdpvio/webapp
2016-01-19 20:04:13,647 INFO [main] org.mortbay.log: Started
SelectChannelConnector@0.0.0.0:44070
2016-01-19 20:04:13,647 INFO [main] org.apache.hadoop.yarn.webapp.WebApps:
Web app /mapreduce started at 44070
2016-01-19 20:04:13,956 INFO [main] org.apache.hadoop.yarn.webapp.WebApps:
Registered webapp guice modules
2016-01-19 20:04:13,960 INFO [AsyncDispatcher event handler]
org.apache.hadoop.mapreduce.v2.app.speculate.DefaultSpeculator: JOB_CREATE
job_1453244277886_0001
2016-01-19 20:04:13,961 INFO [main] org.apache.hadoop.ipc.CallQueueManager:
Using callQueue class java.util.concurrent.LinkedBlockingQueue
2016-01-19 20:04:13,961 INFO [Socket Reader #1 for port 45584]
org.apache.hadoop.ipc.Server: Starting Socket Reader #1 for port 45584
2016-01-19 20:04:13,966 INFO [IPC Server Responder]
org.apache.hadoop.ipc.Server: IPC Server Responder: starting
2016-01-19 20:04:13,966 INFO [IPC Server listener on 45584]
org.apache.hadoop.ipc.Server: IPC Server listener on 45584: starting
2016-01-19 20:04:13,987 INFO [main]
org.apache.hadoop.mapreduce.v2.app.rm.RMContainerRequestor:
nodeBlacklistingEnabled:true
2016-01-19 20:04:13,987 INFO [main]
org.apache.hadoop.mapreduce.v2.app.rm.RMContainerRequestor:
maxTaskFailuresPerNode is 3
2016-01-19 20:04:13,988 INFO [main]
org.apache.hadoop.mapreduce.v2.app.rm.RMContainerRequestor:
blacklistDisablePercent is 33
2016-01-19 20:04:14,052 WARN [main] org.apache.hadoop.conf.Configuration:
job.xml:an attempt to override final parameter: dfs.datanode.data.dir;
Ignoring.
2016-01-19 20:04:14,054 WARN [main] org.apache.hadoop.conf.Configuration:
job.xml:an attempt to override final parameter:
mapreduce.job.end-notification.max.retry.interval;  Ignoring.
2016-01-19 20:04:14,057 WARN [main] org.apache.hadoop.conf.Configuration:
job.xml:an attempt to override final parameter: dfs.namenode.name.dir;
Ignoring.
2016-01-19 20:04:14,059 WARN [main] org.apache.hadoop.conf.Configuration:
job.xml:an attempt to override final parameter:
mapreduce.job.end-notification.max.attempts;  Ignoring.
2016-01-19 20:04:14,062 INFO [main] org.apache.hadoop.yarn.client.RMProxy:
Connecting to ResourceManager at hdnode01/192.168.0.10:8030
2016-01-19 20:04:14,158 INFO [main]
org.apache.hadoop.mapreduce.v2.app.rm.RMContainerAllocator:
maxContainerCapability: 2000
2016-01-19 20:04:14,158 INFO [main]
org.apache.hadoop.mapreduce.v2.app.rm.RMContainerAllocator: queue: default
2016-01-19 20:04:14,162 INFO [main]
org.apache.hadoop.mapreduce.v2.app.launcher.ContainerLauncherImpl: Upper
limit on the thread pool size is 500
2016-01-19 20:04:14,164 INFO [main]
org.apache.hadoop.yarn.client.api.impl.ContainerManagementProtocolProxy:
yarn.client.max-nodemanagers-proxies : 500
2016-01-19 20:04:14,172 INFO [AsyncDispatcher event handler]
org.apache.hadoop.mapreduce.v2.app.job.impl.JobImpl:
job_1453244277886_0001Job Transitioned from INITED to SETUP
2016-01-19 20:04:14,174 INFO [CommitterEvent Processor #0]
org.apache.hadoop.mapreduce.v2.app.commit.CommitterEventHandler: Processing
the event EventType: JOB_SETUP
2016-01-19 20:04:14,210 INFO [AsyncDispatcher event handler]
org.apache.hadoop.mapreduce.v2.app.job.impl.JobImpl:
job_1453244277886_0001Job Transitioned from SETUP to RUNNING
2016-01-19 20:04:14,227 INFO [AsyncDispatcher event handler]
org.apache.hadoop.yarn.util.RackResolver: Resolved localhost to
/default-rack
2016-01-19 20:04:14,230 INFO [AsyncDispatcher event handler]
org.apache.hadoop.mapreduce.v2.app.job.impl.TaskImpl:
task_1453244277886_0001_m_000000 Task Transitioned from NEW to SCHEDULED
2016-01-19 20:04:14,230 INFO [AsyncDispatcher event handler]
org.apache.hadoop.mapreduce.v2.app.job.impl.TaskImpl:
task_1453244277886_0001_r_000000 Task Transitioned from NEW to SCHEDULED
2016-01-19 20:04:14,232 INFO [AsyncDispatcher event handler]
org.apache.hadoop.mapreduce.v2.app.job.impl.TaskAttemptImpl:
attempt_1453244277886_0001_m_000000_0 TaskAttempt Transitioned from NEW to
UNASSIGNED
2016-01-19 20:04:14,232 INFO [AsyncDispatcher event handler]
org.apache.hadoop.mapreduce.v2.app.job.impl.TaskAttemptImpl:
attempt_1453244277886_0001_r_000000_0 TaskAttempt Transitioned from NEW to
UNASSIGNED
2016-01-19 20:04:14,233 INFO [Thread-50]
org.apache.hadoop.mapreduce.v2.app.rm.RMContainerAllocator:
mapResourceReqt:512
2016-01-19 20:04:14,245 INFO [Thread-50]
org.apache.hadoop.mapreduce.v2.app.rm.RMContainerAllocator:
reduceResourceReqt:512
2016-01-19 20:04:14,324 INFO [eventHandlingThread]
org.apache.hadoop.mapreduce.jobhistory.JobHistoryEventHandler: Event Writer
setup for JobId: job_1453244277886_0001, File:
hdfs://hdnode01:54310/tmp/hadoop-yarn/staging/hduser/.staging/job_1453244277886_0001/job_1453244277886_0001_1.jhist
2016-01-19 20:04:15,162 INFO [RMCommunicator Allocator]
org.apache.hadoop.mapreduce.v2.app.rm.RMContainerAllocator: Before
Scheduling: PendingReds:1 ScheduledMaps:1 ScheduledReds:0 AssignedMaps:0
AssignedReds:0 CompletedMaps:0 CompletedReds:0 ContAlloc:0 ContRel:0
HostLocal:0 RackLocal:0
2016-01-19 20:04:15,217 INFO [RMCommunicator Allocator]
org.apache.hadoop.mapreduce.v2.app.rm.RMContainerRequestor: getResources()
for application_1453244277886_0001: ask=3 release= 0 newContainers=0
finishedContainers=0 resourcelimit=<memory:1280, vCores:0> knownNMs=1
2016-01-19 20:04:15,217 INFO [RMCommunicator Allocator]
org.apache.hadoop.mapreduce.v2.app.rm.RMContainerAllocator: Recalculating
schedule, headroom=1280
2016-01-19 20:04:15,217 INFO [RMCommunicator Allocator]
org.apache.hadoop.mapreduce.v2.app.rm.RMContainerAllocator: Reduce slow
start threshold not met. completedMapsForReduceSlowstart 1
2016-01-19 20:04:16,240 INFO [RMCommunicator Allocator]
org.apache.hadoop.mapreduce.v2.app.rm.RMContainerAllocator: Got allocated
containers 1
2016-01-19 20:04:16,241 INFO [RMCommunicator Allocator]
org.apache.hadoop.mapreduce.v2.app.rm.RMContainerAllocator: Assigned
container container_1453244277886_0001_01_000002 to
attempt_1453244277886_0001_m_000000_0
2016-01-19 20:04:16,243 INFO [RMCommunicator Allocator]
org.apache.hadoop.mapreduce.v2.app.rm.RMContainerAllocator: Recalculating
schedule, headroom=768
2016-01-19 20:04:16,243 INFO [RMCommunicator Allocator]
org.apache.hadoop.mapreduce.v2.app.rm.RMContainerAllocator: Reduce slow
start threshold not met. completedMapsForReduceSlowstart 1
2016-01-19 20:04:16,243 INFO [RMCommunicator Allocator]
org.apache.hadoop.mapreduce.v2.app.rm.RMContainerAllocator: After
Scheduling: PendingReds:1 ScheduledMaps:0 ScheduledReds:0 AssignedMaps:1
AssignedReds:0 CompletedMaps:0 CompletedReds:0 ContAlloc:1 ContRel:0
HostLocal:1 RackLocal:0
2016-01-19 20:04:16,291 INFO [AsyncDispatcher event handler]
org.apache.hadoop.yarn.util.RackResolver: Resolved localhost to
/default-rack
2016-01-19 20:04:16,316 INFO [AsyncDispatcher event handler]
org.apache.hadoop.mapreduce.v2.app.job.impl.TaskAttemptImpl: The job-jar
file on the remote FS is
hdfs://hdnode01:54310/tmp/hadoop-yarn/staging/hduser/.staging/job_1453244277886_0001/job.jar
2016-01-19 20:04:16,322 INFO [AsyncDispatcher event handler]
org.apache.hadoop.mapreduce.v2.app.job.impl.TaskAttemptImpl: The job-conf
file on the remote FS is
/tmp/hadoop-yarn/staging/hduser/.staging/job_1453244277886_0001/job.xml
2016-01-19 20:04:16,325 INFO [AsyncDispatcher event handler]
org.apache.hadoop.mapreduce.v2.app.job.impl.TaskAttemptImpl: Adding #0
tokens and #1 secret keys for NM use for launching container
2016-01-19 20:04:16,325 INFO [AsyncDispatcher event handler]
org.apache.hadoop.mapreduce.v2.app.job.impl.TaskAttemptImpl: Size of
containertokens_dob is 1
2016-01-19 20:04:16,325 INFO [AsyncDispatcher event handler]
org.apache.hadoop.mapreduce.v2.app.job.impl.TaskAttemptImpl: Putting
shuffle token in serviceData
2016-01-19 20:04:16,350 INFO [AsyncDispatcher event handler]
org.apache.hadoop.mapreduce.v2.app.job.impl.TaskAttemptImpl:
attempt_1453244277886_0001_m_000000_0 TaskAttempt Transitioned from
UNASSIGNED to ASSIGNED
2016-01-19 20:04:16,354 INFO [ContainerLauncher #0]
org.apache.hadoop.mapreduce.v2.app.launcher.ContainerLauncherImpl:
Processing the event EventType: CONTAINER_REMOTE_LAUNCH for container
container_1453244277886_0001_01_000002 taskAttempt
attempt_1453244277886_0001_m_000000_0
2016-01-19 20:04:16,356 INFO [ContainerLauncher #0]
org.apache.hadoop.mapreduce.v2.app.launcher.ContainerLauncherImpl:
Launching attempt_1453244277886_0001_m_000000_0
2016-01-19 20:04:16,357 INFO [ContainerLauncher #0]
org.apache.hadoop.yarn.client.api.impl.ContainerManagementProtocolProxy:
Opening proxy : localhost:35711
2016-01-19 20:04:16,411 INFO [ContainerLauncher #0]
org.apache.hadoop.mapreduce.v2.app.launcher.ContainerLauncherImpl: Shuffle
port returned by ContainerManager for attempt_1453244277886_0001_m_000000_0
: 13562
2016-01-19 20:04:16,413 INFO [AsyncDispatcher event handler]
org.apache.hadoop.mapreduce.v2.app.job.impl.TaskAttemptImpl: TaskAttempt:
[attempt_1453244277886_0001_m_000000_0] using containerId:
[container_1453244277886_0001_01_000002 on NM: [localhost:35711]
2016-01-19 20:04:16,418 INFO [AsyncDispatcher event handler]
org.apache.hadoop.mapreduce.v2.app.job.impl.TaskAttemptImpl:
attempt_1453244277886_0001_m_000000_0 TaskAttempt Transitioned from
ASSIGNED to RUNNING
2016-01-19 20:04:16,419 INFO [AsyncDispatcher event handler]
org.apache.hadoop.mapreduce.v2.app.speculate.DefaultSpeculator:
ATTEMPT_START task_1453244277886_0001_m_000000
2016-01-19 20:04:16,419 INFO [AsyncDispatcher event handler]
org.apache.hadoop.mapreduce.v2.app.job.impl.TaskImpl:
task_1453244277886_0001_m_000000 Task Transitioned from SCHEDULED to RUNNING
2016-01-19 20:04:17,251 INFO [RMCommunicator Allocator]
org.apache.hadoop.mapreduce.v2.app.rm.RMContainerRequestor: getResources()
for application_1453244277886_0001: ask=3 release= 0 newContainers=0
finishedContainers=0 resourcelimit=<memory:768, vCores:-1> knownNMs=1
2016-01-19 20:04:18,269 INFO [RMCommunicator Allocator]
org.apache.hadoop.mapreduce.v2.app.rm.RMContainerAllocator: Received
completed container container_1453244277886_0001_01_000002
2016-01-19 20:04:18,269 INFO [RMCommunicator Allocator]
org.apache.hadoop.mapreduce.v2.app.rm.RMContainerAllocator: Recalculating
schedule, headroom=768
2016-01-19 20:04:18,269 INFO [RMCommunicator Allocator]
org.apache.hadoop.mapreduce.v2.app.rm.RMContainerAllocator: Reduce slow
start threshold not met. completedMapsForReduceSlowstart 1
2016-01-19 20:04:18,270 INFO [RMCommunicator Allocator]
org.apache.hadoop.mapreduce.v2.app.rm.RMContainerAllocator: After
Scheduling: PendingReds:1 ScheduledMaps:0 ScheduledReds:0 AssignedMaps:0
AssignedReds:0 CompletedMaps:0 CompletedReds:0 ContAlloc:1 ContRel:0
HostLocal:1 RackLocal:0
2016-01-19 20:04:18,280 INFO [AsyncDispatcher event handler]
org.apache.hadoop.mapreduce.v2.app.job.impl.TaskAttemptImpl:
attempt_1453244277886_0001_m_000000_0 TaskAttempt Transitioned from RUNNING
to FAIL_CONTAINER_CLEANUP
2016-01-19 20:04:18,280 INFO [AsyncDispatcher event handler]
org.apache.hadoop.mapreduce.v2.app.job.impl.TaskAttemptImpl: Diagnostics
report from attempt_1453244277886_0001_m_000000_0: Exception from
container-launch: org.apache.hadoop.util.Shell$ExitCodeException:
org.apache.hadoop.util.Shell$ExitCodeException:
    at org.apache.hadoop.util.Shell.runCommand(Shell.java:505)
    at org.apache.hadoop.util.Shell.run(Shell.java:418)
    at
org.apache.hadoop.util.Shell$ShellCommandExecutor.execute(Shell.java:650)
    at
org.apache.hadoop.yarn.server.nodemanager.DefaultContainerExecutor.launchContainer(DefaultContainerExecutor.java:195)
    at
org.apache.hadoop.yarn.server.nodemanager.containermanager.launcher.ContainerLaunch.call(ContainerLaunch.java:300)
    at
org.apache.hadoop.yarn.server.nodemanager.containermanager.launcher.ContainerLaunch.call(ContainerLaunch.java:81)
    at java.util.concurrent.FutureTask.run(FutureTask.java:262)
    at
java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
    at
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
    at java.lang.Thread.run(Thread.java:745)


Container exited with a non-zero exit code 1

2016-01-19 20:04:18,281 INFO [ContainerLauncher #1]
org.apache.hadoop.mapreduce.v2.app.launcher.ContainerLauncherImpl:
Processing the event EventType: CONTAINER_REMOTE_CLEANUP for container
container_1453244277886_0001_01_000002 taskAttempt
attempt_1453244277886_0001_m_000000_0
2016-01-19 20:04:18,281 INFO [ContainerLauncher #1]
org.apache.hadoop.mapreduce.v2.app.launcher.ContainerLauncherImpl: KILLING
attempt_1453244277886_0001_m_000000_0
2016-01-19 20:04:18,299 INFO [AsyncDispatcher event handler]
org.apache.hadoop.mapreduce.v2.app.job.impl.TaskAttemptImpl:
attempt_1453244277886_0001_m_000000_0 TaskAttempt Transitioned from
FAIL_CONTAINER_CLEANUP to FAIL_TASK_CLEANUP
2016-01-19 20:04:18,300 INFO [CommitterEvent Processor #1]
org.apache.hadoop.mapreduce.v2.app.commit.CommitterEventHandler: Processing
the event EventType: TASK_ABORT
2016-01-19 20:04:18,313 WARN [CommitterEvent Processor #1]
org.apache.hadoop.mapreduce.lib.output.FileOutputCommitter: Could not
delete
hdfs://hdnode01:54310/user/hduser/output/wordcount/_temporary/1/_temporary/attempt_1453244277886_0001_m_000000_0
2016-01-19 20:04:18,317 INFO [AsyncDispatcher event handler]
org.apache.hadoop.mapreduce.v2.app.job.impl.TaskAttemptImpl:
attempt_1453244277886_0001_m_000000_0 TaskAttempt Transitioned from
FAIL_TASK_CLEANUP to FAILED
2016-01-19 20:04:18,326 INFO [AsyncDispatcher event handler]
org.apache.hadoop.yarn.util.RackResolver: Resolved localhost to
/default-rack
2016-01-19 20:04:18,327 INFO [Thread-50]
org.apache.hadoop.mapreduce.v2.app.rm.RMContainerRequestor: 1 failures on
node localhost
2016-01-19 20:04:18,329 INFO [AsyncDispatcher event handler]
org.apache.hadoop.mapreduce.v2.app.job.impl.TaskAttemptImpl:
attempt_1453244277886_0001_m_000000_1 TaskAttempt Transitioned from NEW to
UNASSIGNED
2016-01-19 20:04:18,329 INFO [Thread-50]
org.apache.hadoop.mapreduce.v2.app.rm.RMContainerAllocator: Added
attempt_1453244277886_0001_m_000000_1 to list of failed maps
2016-01-19 20:04:19,270 INFO [RMCommunicator Allocator]
org.apache.hadoop.mapreduce.v2.app.rm.RMContainerAllocator: Before
Scheduling: PendingReds:1 ScheduledMaps:1 ScheduledReds:0 AssignedMaps:0
AssignedReds:0 CompletedMaps:0 CompletedReds:0 ContAlloc:1 ContRel:0
HostLocal:1 RackLocal:0
2016-01-19 20:04:19,277 INFO [RMCommunicator Allocator]
org.apache.hadoop.mapreduce.v2.app.rm.RMContainerRequestor: getResources()
for application_1453244277886_0001: ask=1 release= 0 newContainers=0
finishedContainers=0 resourcelimit=<memory:768, vCores:-1> knownNMs=1
2016-01-19 20:04:19,277 INFO [RMCommunicator Allocator]
org.apache.hadoop.mapreduce.v2.app.rm.RMContainerAllocator: Recalculating
schedule, headroom=768
2016-01-19 20:04:19,278 INFO [RMCommunicator Allocator]
org.apache.hadoop.mapreduce.v2.app.rm.RMContainerAllocator: Reduce slow
start threshold not met. completedMapsForReduceSlowstart 1
2016-01-19 20:04:20,285 INFO [RMCommunicator Allocator]
org.apache.hadoop.mapreduce.v2.app.rm.RMContainerAllocator: Got allocated
containers 1
2016-01-19 20:04:20,285 INFO [RMCommunicator Allocator]
org.apache.hadoop.mapreduce.v2.app.rm.RMContainerAllocator: Assigning
container Container: [ContainerId: container_1453244277886_0001_01_000003,
NodeId: localhost:35711, NodeHttpAddress: localhost:8042, Resource:
<memory:512, vCores:1>, Priority: 5, Token: Token { kind: ContainerToken,
service: 127.0.0.1:35711 }, ] to fast fail map
2016-01-19 20:04:20,286 INFO [RMCommunicator Allocator]
org.apache.hadoop.mapreduce.v2.app.rm.RMContainerAllocator: Assigned from
earlierFailedMaps
2016-01-19 20:04:20,286 INFO [RMCommunicator Allocator]
org.apache.hadoop.mapreduce.v2.app.rm.RMContainerAllocator: Assigned
container container_1453244277886_0001_01_000003 to
attempt_1453244277886_0001_m_000000_1
2016-01-19 20:04:20,287 INFO [RMCommunicator Allocator]
org.apache.hadoop.mapreduce.v2.app.rm.RMContainerAllocator: Recalculating
schedule, headroom=768
2016-01-19 20:04:20,287 INFO [RMCommunicator Allocator]
org.apache.hadoop.mapreduce.v2.app.rm.RMContainerAllocator: Reduce slow
start threshold not met. completedMapsForReduceSlowstart 1
2016-01-19 20:04:20,287 INFO [RMCommunicator Allocator]
org.apache.hadoop.mapreduce.v2.app.rm.RMContainerAllocator: After
Scheduling: PendingReds:1 ScheduledMaps:0 ScheduledReds:0 AssignedMaps:1
AssignedReds:0 CompletedMaps:0 CompletedReds:0 ContAlloc:2 ContRel:0
HostLocal:1 RackLocal:0
2016-01-19 20:04:20,287 INFO [AsyncDispatcher event handler]
org.apache.hadoop.yarn.util.RackResolver: Resolved localhost to
/default-rack
2016-01-19 20:04:20,289 INFO [AsyncDispatcher event handler]
org.apache.hadoop.mapreduce.v2.app.job.impl.TaskAttemptImpl:
attempt_1453244277886_0001_m_000000_1 TaskAttempt Transitioned from
UNASSIGNED to ASSIGNED
2016-01-19 20:04:20,292 INFO [ContainerLauncher #2]
org.apache.hadoop.mapreduce.v2.app.launcher.ContainerLauncherImpl:
Processing the event EventType: CONTAINER_REMOTE_LAUNCH for container
container_1453244277886_0001_01_000003 taskAttempt
attempt_1453244277886_0001_m_000000_1
2016-01-19 20:04:20,292 INFO [ContainerLauncher #2]
org.apache.hadoop.mapreduce.v2.app.launcher.ContainerLauncherImpl:
Launching attempt_1453244277886_0001_m_000000_1
2016-01-19 20:04:20,301 INFO [ContainerLauncher #2]
org.apache.hadoop.mapreduce.v2.app.launcher.ContainerLauncherImpl: Shuffle
port returned by ContainerManager for attempt_1453244277886_0001_m_000000_1
: 13562
2016-01-19 20:04:20,302 INFO [AsyncDispatcher event handler]
org.apache.hadoop.mapreduce.v2.app.job.impl.TaskAttemptImpl: TaskAttempt:
[attempt_1453244277886_0001_m_000000_1] using containerId:
[container_1453244277886_0001_01_000003 on NM: [localhost:35711]
2016-01-19 20:04:20,303 INFO [AsyncDispatcher event handler]
org.apache.hadoop.mapreduce.v2.app.job.impl.TaskAttemptImpl:
attempt_1453244277886_0001_m_000000_1 TaskAttempt Transitioned from
ASSIGNED to RUNNING
2016-01-19 20:04:20,304 INFO [AsyncDispatcher event handler]
org.apache.hadoop.mapreduce.v2.app.speculate.DefaultSpeculator:
ATTEMPT_START task_1453244277886_0001_m_000000
2016-01-19 20:04:21,295 INFO [RMCommunicator Allocator]
org.apache.hadoop.mapreduce.v2.app.rm.RMContainerRequestor: getResources()
for application_1453244277886_0001: ask=1 release= 0 newContainers=0
finishedContainers=1 resourcelimit=<memory:768, vCores:-1> knownNMs=1
2016-01-19 20:04:21,296 INFO [RMCommunicator Allocator]
org.apache.hadoop.mapreduce.v2.app.rm.RMContainerAllocator: Received
completed container container_1453244277886_0001_01_000003
2016-01-19 20:04:21,296 INFO [RMCommunicator Allocator]
org.apache.hadoop.mapreduce.v2.app.rm.RMContainerAllocator: Recalculating
schedule, headroom=768
2016-01-19 20:04:21,296 INFO [RMCommunicator Allocator]
org.apache.hadoop.mapreduce.v2.app.rm.RMContainerAllocator: Reduce slow
start threshold not met. completedMapsForReduceSlowstart 1
2016-01-19 20:04:21,297 INFO [RMCommunicator Allocator]
org.apache.hadoop.mapreduce.v2.app.rm.RMContainerAllocator: After
Scheduling: PendingReds:1 ScheduledMaps:0 ScheduledReds:0 AssignedMaps:0
AssignedReds:0 CompletedMaps:0 CompletedReds:0 ContAlloc:2 ContRel:0
HostLocal:1 RackLocal:0
2016-01-19 20:04:21,297 INFO [AsyncDispatcher event handler]
org.apache.hadoop.mapreduce.v2.app.job.impl.TaskAttemptImpl:
attempt_1453244277886_0001_m_000000_1 TaskAttempt Transitioned from RUNNING
to FAIL_CONTAINER_CLEANUP
2016-01-19 20:04:21,298 INFO [AsyncDispatcher event handler]
org.apache.hadoop.mapreduce.v2.app.job.impl.TaskAttemptImpl: Diagnostics
report from attempt_1453244277886_0001_m_000000_1: Exception from
container-launch: org.apache.hadoop.util.Shell$ExitCodeException:
org.apache.hadoop.util.Shell$ExitCodeException:
    at org.apache.hadoop.util.Shell.runCommand(Shell.java:505)
    at org.apache.hadoop.util.Shell.run(Shell.java:418)
    at
org.apache.hadoop.util.Shell$ShellCommandExecutor.execute(Shell.java:650)
    at
org.apache.hadoop.yarn.server.nodemanager.DefaultContainerExecutor.launchContainer(DefaultContainerExecutor.java:195)
    at
org.apache.hadoop.yarn.server.nodemanager.containermanager.launcher.ContainerLaunch.call(ContainerLaunch.java:300)
    at
org.apache.hadoop.yarn.server.nodemanager.containermanager.launcher.ContainerLaunch.call(ContainerLaunch.java:81)
    at java.util.concurrent.FutureTask.run(FutureTask.java:262)
    at
java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
    at
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
    at java.lang.Thread.run(Thread.java:745)


Container exited with a non-zero exit code 1

2016-01-19 20:04:21,300 INFO [ContainerLauncher #3]
org.apache.hadoop.mapreduce.v2.app.launcher.ContainerLauncherImpl:
Processing the event EventType: CONTAINER_REMOTE_CLEANUP for container
container_1453244277886_0001_01_000003 taskAttempt
attempt_1453244277886_0001_m_000000_1
2016-01-19 20:04:21,301 INFO [ContainerLauncher #3]
org.apache.hadoop.mapreduce.v2.app.launcher.ContainerLauncherImpl: KILLING
attempt_1453244277886_0001_m_000000_1
2016-01-19 20:04:21,307 INFO [AsyncDispatcher event handler]
org.apache.hadoop.mapreduce.v2.app.job.impl.TaskAttemptImpl:
attempt_1453244277886_0001_m_000000_1 TaskAttempt Transitioned from
FAIL_CONTAINER_CLEANUP to FAIL_TASK_CLEANUP
2016-01-19 20:04:21,308 INFO [CommitterEvent Processor #2]
org.apache.hadoop.mapreduce.v2.app.commit.CommitterEventHandler: Processing
the event EventType: TASK_ABORT
2016-01-19 20:04:21,312 WARN [CommitterEvent Processor #2]
org.apache.hadoop.mapreduce.lib.output.FileOutputCommitter: Could not
delete
hdfs://hdnode01:54310/user/hduser/output/wordcount/_temporary/1/_temporary/attempt_1453244277886_0001_m_000000_1
2016-01-19 20:04:21,312 INFO [AsyncDispatcher event handler]
org.apache.hadoop.mapreduce.v2.app.job.impl.TaskAttemptImpl:
attempt_1453244277886_0001_m_000000_1 TaskAttempt Transitioned from
FAIL_TASK_CLEANUP to FAILED
2016-01-19 20:04:21,313 INFO [AsyncDispatcher event handler]
org.apache.hadoop.yarn.util.RackResolver: Resolved localhost to
/default-rack
2016-01-19 20:04:21,313 INFO [Thread-50]
org.apache.hadoop.mapreduce.v2.app.rm.RMContainerRequestor: 2 failures on
node localhost
2016-01-19 20:04:21,313 INFO [AsyncDispatcher event handler]
org.apache.hadoop.mapreduce.v2.app.job.impl.TaskAttemptImpl:
attempt_1453244277886_0001_m_000000_2 TaskAttempt Transitioned from NEW to
UNASSIGNED
2016-01-19 20:04:21,314 INFO [Thread-50]
org.apache.hadoop.mapreduce.v2.app.rm.RMContainerAllocator: Added
attempt_1453244277886_0001_m_000000_2 to list of failed maps
2016-01-19 20:04:22,297 INFO [RMCommunicator Allocator]
org.apache.hadoop.mapreduce.v2.app.rm.RMContainerAllocator: Before
Scheduling: PendingReds:1 ScheduledMaps:1 ScheduledReds:0 AssignedMaps:0
AssignedReds:0 CompletedMaps:0 CompletedReds:0 ContAlloc:2 ContRel:0
HostLocal:1 RackLocal:0
2016-01-19 20:04:22,304 INFO [RMCommunicator Allocator]
org.apache.hadoop.mapreduce.v2.app.rm.RMContainerRequestor: getResources()
for application_1453244277886_0001: ask=1 release= 0 newContainers=0
finishedContainers=0 resourcelimit=<memory:768, vCores:-1> knownNMs=1
2016-01-19 20:04:22,305 INFO [RMCommunicator Allocator]
org.apache.hadoop.mapreduce.v2.app.rm.RMContainerAllocator: Recalculating
schedule, headroom=768
2016-01-19 20:04:22,305 INFO [RMCommunicator Allocator]
org.apache.hadoop.mapreduce.v2.app.rm.RMContainerAllocator: Reduce slow
start threshold not met. completedMapsForReduceSlowstart 1
2016-01-19 20:04:23,316 INFO [RMCommunicator Allocator]
org.apache.hadoop.mapreduce.v2.app.rm.RMContainerAllocator: Got allocated
containers 1
2016-01-19 20:04:23,316 INFO [RMCommunicator Allocator]
org.apache.hadoop.mapreduce.v2.app.rm.RMContainerAllocator: Assigning
container Container: [ContainerId: container_1453244277886_0001_01_000004,
NodeId: localhost:35711, NodeHttpAddress: localhost:8042, Resource:
<memory:512, vCores:1>, Priority: 5, Token: Token { kind: ContainerToken,
service: 127.0.0.1:35711 }, ] to fast fail map
2016-01-19 20:04:23,317 INFO [RMCommunicator Allocator]
org.apache.hadoop.mapreduce.v2.app.rm.RMContainerAllocator: Assigned from
earlierFailedMaps
2016-01-19 20:04:23,317 INFO [RMCommunicator Allocator]
org.apache.hadoop.mapreduce.v2.app.rm.RMContainerAllocator: Assigned
container container_1453244277886_0001_01_000004 to
attempt_1453244277886_0001_m_000000_2
2016-01-19 20:04:23,318 INFO [RMCommunicator Allocator]
org.apache.hadoop.mapreduce.v2.app.rm.RMContainerAllocator: Recalculating
schedule, headroom=768
2016-01-19 20:04:23,318 INFO [RMCommunicator Allocator]
org.apache.hadoop.mapreduce.v2.app.rm.RMContainerAllocator: Reduce slow
start threshold not met. completedMapsForReduceSlowstart 1
2016-01-19 20:04:23,318 INFO [RMCommunicator Allocator]
org.apache.hadoop.mapreduce.v2.app.rm.RMContainerAllocator: After
Scheduling: PendingReds:1 ScheduledMaps:0 ScheduledReds:0 AssignedMaps:1
AssignedReds:0 CompletedMaps:0 CompletedReds:0 ContAlloc:3 ContRel:0
HostLocal:1 RackLocal:0
2016-01-19 20:04:23,318 INFO [AsyncDispatcher event handler]
org.apache.hadoop.yarn.util.RackResolver: Resolved localhost to
/default-rack
2016-01-19 20:04:23,320 INFO [AsyncDispatcher event handler]
org.apache.hadoop.mapreduce.v2.app.job.impl.TaskAttemptImpl:
attempt_1453244277886_0001_m_000000_2 TaskAttempt Transitioned from
UNASSIGNED to ASSIGNED
2016-01-19 20:04:23,323 INFO [ContainerLauncher #4]
org.apache.hadoop.mapreduce.v2.app.launcher.ContainerLauncherImpl:
Processing the event EventType: CONTAINER_REMOTE_LAUNCH for container
container_1453244277886_0001_01_000004 taskAttempt
attempt_1453244277886_0001_m_000000_2
2016-01-19 20:04:23,323 INFO [ContainerLauncher #4]
org.apache.hadoop.mapreduce.v2.app.launcher.ContainerLauncherImpl:
Launching attempt_1453244277886_0001_m_000000_2
2016-01-19 20:04:23,334 INFO [ContainerLauncher #4]
org.apache.hadoop.mapreduce.v2.app.launcher.ContainerLauncherImpl: Shuffle
port returned by ContainerManager for attempt_1453244277886_0001_m_000000_2
: 13562
2016-01-19 20:04:23,335 INFO [AsyncDispatcher event handler]
org.apache.hadoop.mapreduce.v2.app.job.impl.TaskAttemptImpl: TaskAttempt:
[attempt_1453244277886_0001_m_000000_2] using containerId:
[container_1453244277886_0001_01_000004 on NM: [localhost:35711]
2016-01-19 20:04:23,335 INFO [AsyncDispatcher event handler]
org.apache.hadoop.mapreduce.v2.app.job.impl.TaskAttemptImpl:
attempt_1453244277886_0001_m_000000_2 TaskAttempt Transitioned from
ASSIGNED to RUNNING
2016-01-19 20:04:23,335 INFO [AsyncDispatcher event handler]
org.apache.hadoop.mapreduce.v2.app.speculate.DefaultSpeculator:
ATTEMPT_START task_1453244277886_0001_m_000000
2016-01-19 20:04:24,326 INFO [RMCommunicator Allocator]
org.apache.hadoop.mapreduce.v2.app.rm.RMContainerRequestor: getResources()
for application_1453244277886_0001: ask=1 release= 0 newContainers=0
finishedContainers=1 resourcelimit=<memory:768, vCores:-1> knownNMs=1
2016-01-19 20:04:24,327 INFO [RMCommunicator Allocator]
org.apache.hadoop.mapreduce.v2.app.rm.RMContainerAllocator: Received
completed container container_1453244277886_0001_01_000004
2016-01-19 20:04:24,327 INFO [RMCommunicator Allocator]
org.apache.hadoop.mapreduce.v2.app.rm.RMContainerAllocator: Recalculating
schedule, headroom=768
2016-01-19 20:04:24,327 INFO [RMCommunicator Allocator]
org.apache.hadoop.mapreduce.v2.app.rm.RMContainerAllocator: Reduce slow
start threshold not met. completedMapsForReduceSlowstart 1
2016-01-19 20:04:24,328 INFO [RMCommunicator Allocator]
org.apache.hadoop.mapreduce.v2.app.rm.RMContainerAllocator: After
Scheduling: PendingReds:1 ScheduledMaps:0 ScheduledReds:0 AssignedMaps:0
AssignedReds:0 CompletedMaps:0 CompletedReds:0 ContAlloc:3 ContRel:0
HostLocal:1 RackLocal:0
2016-01-19 20:04:24,328 INFO [AsyncDispatcher event handler]
org.apache.hadoop.mapreduce.v2.app.job.impl.TaskAttemptImpl:
attempt_1453244277886_0001_m_000000_2 TaskAttempt Transitioned from RUNNING
to FAIL_CONTAINER_CLEANUP
2016-01-19 20:04:24,328 INFO [AsyncDispatcher event handler]
org.apache.hadoop.mapreduce.v2.app.job.impl.TaskAttemptImpl: Diagnostics
report from attempt_1453244277886_0001_m_000000_2: Exception from
container-launch: org.apache.hadoop.util.Shell$ExitCodeException:
org.apache.hadoop.util.Shell$ExitCodeException:
    at org.apache.hadoop.util.Shell.runCommand(Shell.java:505)
    at org.apache.hadoop.util.Shell.run(Shell.java:418)
    at
org.apache.hadoop.util.Shell$ShellCommandExecutor.execute(Shell.java:650)
    at
org.apache.hadoop.yarn.server.nodemanager.DefaultContainerExecutor.launchContainer(DefaultContainerExecutor.java:195)
    at
org.apache.hadoop.yarn.server.nodemanager.containermanager.launcher.ContainerLaunch.call(ContainerLaunch.java:300)
    at
org.apache.hadoop.yarn.server.nodemanager.containermanager.launcher.ContainerLaunch.call(ContainerLaunch.java:81)
    at java.util.concurrent.FutureTask.run(FutureTask.java:262)
    at
java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
    at
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
    at java.lang.Thread.run(Thread.java:745)


Container exited with a non-zero exit code 1

2016-01-19 20:04:24,331 INFO [ContainerLauncher #5]
org.apache.hadoop.mapreduce.v2.app.launcher.ContainerLauncherImpl:
Processing the event EventType: CONTAINER_REMOTE_CLEANUP for container
container_1453244277886_0001_01_000004 taskAttempt
attempt_1453244277886_0001_m_000000_2
2016-01-19 20:04:24,331 INFO [ContainerLauncher #5]
org.apache.hadoop.mapreduce.v2.app.launcher.ContainerLauncherImpl: KILLING
attempt_1453244277886_0001_m_000000_2
2016-01-19 20:04:24,335 INFO [AsyncDispatcher event handler]
org.apache.hadoop.mapreduce.v2.app.job.impl.TaskAttemptImpl:
attempt_1453244277886_0001_m_000000_2 TaskAttempt Transitioned from
FAIL_CONTAINER_CLEANUP to FAIL_TASK_CLEANUP
2016-01-19 20:04:24,336 INFO [CommitterEvent Processor #3]
org.apache.hadoop.mapreduce.v2.app.commit.CommitterEventHandler: Processing
the event EventType: TASK_ABORT
2016-01-19 20:04:24,340 WARN [CommitterEvent Processor #3]
org.apache.hadoop.mapreduce.lib.output.FileOutputCommitter: Could not
delete
hdfs://hdnode01:54310/user/hduser/output/wordcount/_temporary/1/_temporary/attempt_1453244277886_0001_m_000000_2
2016-01-19 20:04:24,341 INFO [AsyncDispatcher event handler]
org.apache.hadoop.mapreduce.v2.app.job.impl.TaskAttemptImpl:
attempt_1453244277886_0001_m_000000_2 TaskAttempt Transitioned from
FAIL_TASK_CLEANUP to FAILED
2016-01-19 20:04:24,341 INFO [AsyncDispatcher event handler]
org.apache.hadoop.yarn.util.RackResolver: Resolved localhost to
/default-rack
2016-01-19 20:04:24,342 INFO [Thread-50]
org.apache.hadoop.mapreduce.v2.app.rm.RMContainerRequestor: 3 failures on
node localhost
2016-01-19 20:04:24,342 INFO [Thread-50]
org.apache.hadoop.mapreduce.v2.app.rm.RMContainerRequestor: Blacklisted
host localhost
2016-01-19 20:04:24,342 INFO [AsyncDispatcher event handler]
org.apache.hadoop.mapreduce.v2.app.job.impl.TaskAttemptImpl:
attempt_1453244277886_0001_m_000000_3 TaskAttempt Transitioned from NEW to
UNASSIGNED
2016-01-19 20:04:24,343 INFO [Thread-50]
org.apache.hadoop.mapreduce.v2.app.rm.RMContainerAllocator: Added
attempt_1453244277886_0001_m_000000_3 to list of failed maps
2016-01-19 20:04:25,328 INFO [RMCommunicator Allocator]
org.apache.hadoop.mapreduce.v2.app.rm.RMContainerAllocator: Before
Scheduling: PendingReds:1 ScheduledMaps:1 ScheduledReds:0 AssignedMaps:0
AssignedReds:0 CompletedMaps:0 CompletedReds:0 ContAlloc:3 ContRel:0
HostLocal:1 RackLocal:0
2016-01-19 20:04:25,336 INFO [RMCommunicator Allocator]
org.apache.hadoop.mapreduce.v2.app.rm.RMContainerRequestor: getResources()
for application_1453244277886_0001: ask=1 release= 0 newContainers=0
finishedContainers=0 resourcelimit=<memory:768, vCores:-1> knownNMs=1
2016-01-19 20:04:25,336 INFO [RMCommunicator Allocator]
org.apache.hadoop.mapreduce.v2.app.rm.RMContainerRequestor: Update the
blacklist for application_1453244277886_0001: blacklistAdditions=1
blacklistRemovals=0
2016-01-19 20:04:25,337 INFO [RMCommunicator Allocator]
org.apache.hadoop.mapreduce.v2.app.rm.RMContainerRequestor: Ignore
blacklisting set to true. Known: 1, Blacklisted: 1, 100%
2016-01-19 20:04:25,337 INFO [RMCommunicator Allocator]
org.apache.hadoop.mapreduce.v2.app.rm.RMContainerAllocator: Recalculating
schedule, headroom=768
2016-01-19 20:04:25,337 INFO [RMCommunicator Allocator]
org.apache.hadoop.mapreduce.v2.app.rm.RMContainerAllocator: Reduce slow
start threshold not met. completedMapsForReduceSlowstart 1
2016-01-19 20:04:26,342 INFO [RMCommunicator Allocator]
org.apache.hadoop.mapreduce.v2.app.rm.RMContainerRequestor: Update the
blacklist for application_1453244277886_0001: blacklistAdditions=0
blacklistRemovals=1
2016-01-19 20:04:26,342 INFO [RMCommunicator Allocator]
org.apache.hadoop.mapreduce.v2.app.rm.RMContainerAllocator: Recalculating
schedule, headroom=768
2016-01-19 20:04:26,342 INFO [RMCommunicator Allocator]
org.apache.hadoop.mapreduce.v2.app.rm.RMContainerAllocator: Reduce slow
start threshold not met. completedMapsForReduceSlowstart 1
2016-01-19 20:04:27,351 INFO [RMCommunicator Allocator]
org.apache.hadoop.mapreduce.v2.app.rm.RMContainerAllocator: Got allocated
containers 1
2016-01-19 20:04:27,352 INFO [RMCommunicator Allocator]
org.apache.hadoop.mapreduce.v2.app.rm.RMContainerAllocator: Assigning
container Container: [ContainerId: container_1453244277886_0001_01_000005,
NodeId: localhost:35711, NodeHttpAddress: localhost:8042, Resource:
<memory:512, vCores:1>, Priority: 5, Token: Token { kind: ContainerToken,
service: 127.0.0.1:35711 }, ] to fast fail map
2016-01-19 20:04:27,352 INFO [RMCommunicator Allocator]
org.apache.hadoop.mapreduce.v2.app.rm.RMContainerAllocator: Assigned from
earlierFailedMaps
2016-01-19 20:04:27,352 INFO [RMCommunicator Allocator]
org.apache.hadoop.mapreduce.v2.app.rm.RMContainerAllocator: Assigned
container container_1453244277886_0001_01_000005 to
attempt_1453244277886_0001_m_000000_3
2016-01-19 20:04:27,353 INFO [RMCommunicator Allocator]
org.apache.hadoop.mapreduce.v2.app.rm.RMContainerAllocator: Recalculating
schedule, headroom=768
2016-01-19 20:04:27,353 INFO [RMCommunicator Allocator]
org.apache.hadoop.mapreduce.v2.app.rm.RMContainerAllocator: Reduce slow
start threshold not met. completedMapsForReduceSlowstart 1
2016-01-19 20:04:27,353 INFO [AsyncDispatcher event handler]
org.apache.hadoop.yarn.util.RackResolver: Resolved localhost to
/default-rack
2016-01-19 20:04:27,353 INFO [RMCommunicator Allocator]
org.apache.hadoop.mapreduce.v2.app.rm.RMContainerAllocator: After
Scheduling: PendingReds:1 ScheduledMaps:0 ScheduledReds:0 AssignedMaps:1
AssignedReds:0 CompletedMaps:0 CompletedReds:0 ContAlloc:4 ContRel:0
HostLocal:1 RackLocal:0
2016-01-19 20:04:27,354 INFO [AsyncDispatcher event handler]
org.apache.hadoop.mapreduce.v2.app.job.impl.TaskAttemptImpl:
attempt_1453244277886_0001_m_000000_3 TaskAttempt Transitioned from
UNASSIGNED to ASSIGNED
2016-01-19 20:04:27,355 INFO [ContainerLauncher #6]
org.apache.hadoop.mapreduce.v2.app.launcher.ContainerLauncherImpl:
Processing the event EventType: CONTAINER_REMOTE_LAUNCH for container
container_1453244277886_0001_01_000005 taskAttempt
attempt_1453244277886_0001_m_000000_3
2016-01-19 20:04:27,355 INFO [ContainerLauncher #6]
org.apache.hadoop.mapreduce.v2.app.launcher.ContainerLauncherImpl:
Launching attempt_1453244277886_0001_m_000000_3
2016-01-19 20:04:27,365 INFO [ContainerLauncher #6]
org.apache.hadoop.mapreduce.v2.app.launcher.ContainerLauncherImpl: Shuffle
port returned by ContainerManager for attempt_1453244277886_0001_m_000000_3
: 13562
2016-01-19 20:04:27,365 INFO [AsyncDispatcher event handler]
org.apache.hadoop.mapreduce.v2.app.job.impl.TaskAttemptImpl: TaskAttempt:
[attempt_1453244277886_0001_m_000000_3] using containerId:
[container_1453244277886_0001_01_000005 on NM: [localhost:35711]
2016-01-19 20:04:27,366 INFO [AsyncDispatcher event handler]
org.apache.hadoop.mapreduce.v2.app.job.impl.TaskAttemptImpl:
attempt_1453244277886_0001_m_000000_3 TaskAttempt Transitioned from
ASSIGNED to RUNNING
2016-01-19 20:04:27,366 INFO [AsyncDispatcher event handler]
org.apache.hadoop.mapreduce.v2.app.speculate.DefaultSpeculator:
ATTEMPT_START task_1453244277886_0001_m_000000
2016-01-19 20:04:28,361 INFO [RMCommunicator Allocator]
org.apache.hadoop.mapreduce.v2.app.rm.RMContainerRequestor: getResources()
for application_1453244277886_0001: ask=1 release= 0 newContainers=0
finishedContainers=1 resourcelimit=<memory:768, vCores:-1> knownNMs=1
2016-01-19 20:04:28,361 INFO [RMCommunicator Allocator]
org.apache.hadoop.mapreduce.v2.app.rm.RMContainerAllocator: Received
completed container container_1453244277886_0001_01_000005
2016-01-19 20:04:28,361 INFO [RMCommunicator Allocator]
org.apache.hadoop.mapreduce.v2.app.rm.RMContainerAllocator: Recalculating
schedule, headroom=768
2016-01-19 20:04:28,361 INFO [RMCommunicator Allocator]
org.apache.hadoop.mapreduce.v2.app.rm.RMContainerAllocator: Reduce slow
start threshold not met. completedMapsForReduceSlowstart 1
2016-01-19 20:04:28,362 INFO [RMCommunicator Allocator]
org.apache.hadoop.mapreduce.v2.app.rm.RMContainerAllocator: After
Scheduling: PendingReds:1 ScheduledMaps:0 ScheduledReds:0 AssignedMaps:0
AssignedReds:0 CompletedMaps:0 CompletedReds:0 ContAlloc:4 ContRel:0
HostLocal:1 RackLocal:0
2016-01-19 20:04:28,362 INFO [AsyncDispatcher event handler]
org.apache.hadoop.mapreduce.v2.app.job.impl.TaskAttemptImpl:
attempt_1453244277886_0001_m_000000_3 TaskAttempt Transitioned from RUNNING
to FAIL_CONTAINER_CLEANUP
2016-01-19 20:04:28,362 INFO [AsyncDispatcher event handler]
org.apache.hadoop.mapreduce.v2.app.job.impl.TaskAttemptImpl: Diagnostics
report from attempt_1453244277886_0001_m_000000_3: Exception from
container-launch: org.apache.hadoop.util.Shell$ExitCodeException:
org.apache.hadoop.util.Shell$ExitCodeException:
    at org.apache.hadoop.util.Shell.runCommand(Shell.java:505)
    at org.apache.hadoop.util.Shell.run(Shell.java:418)
    at
org.apache.hadoop.util.Shell$ShellCommandExecutor.execute(Shell.java:650)
    at
org.apache.hadoop.yarn.server.nodemanager.DefaultContainerExecutor.launchContainer(DefaultContainerExecutor.java:195)
    at
org.apache.hadoop.yarn.server.nodemanager.containermanager.launcher.ContainerLaunch.call(ContainerLaunch.java:300)
    at
org.apache.hadoop.yarn.server.nodemanager.containermanager.launcher.ContainerLaunch.call(ContainerLaunch.java:81)
    at java.util.concurrent.FutureTask.run(FutureTask.java:262)
    at
java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
    at
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
    at java.lang.Thread.run(Thread.java:745)


Container exited with a non-zero exit code 1

2016-01-19 20:04:28,364 INFO [ContainerLauncher #7]
org.apache.hadoop.mapreduce.v2.app.launcher.ContainerLauncherImpl:
Processing the event EventType: CONTAINER_REMOTE_CLEANUP for container
container_1453244277886_0001_01_000005 taskAttempt
attempt_1453244277886_0001_m_000000_3
2016-01-19 20:04:28,365 INFO [ContainerLauncher #7]
org.apache.hadoop.mapreduce.v2.app.launcher.ContainerLauncherImpl: KILLING
attempt_1453244277886_0001_m_000000_3
2016-01-19 20:04:28,373 INFO [AsyncDispatcher event handler]
org.apache.hadoop.mapreduce.v2.app.job.impl.TaskAttemptImpl:
attempt_1453244277886_0001_m_000000_3 TaskAttempt Transitioned from
FAIL_CONTAINER_CLEANUP to FAIL_TASK_CLEANUP
2016-01-19 20:04:28,374 INFO [CommitterEvent Processor #4]
org.apache.hadoop.mapreduce.v2.app.commit.CommitterEventHandler: Processing
the event EventType: TASK_ABORT
2016-01-19 20:04:28,377 WARN [CommitterEvent Processor #4]
org.apache.hadoop.mapreduce.lib.output.FileOutputCommitter: Could not
delete
hdfs://hdnode01:54310/user/hduser/output/wordcount/_temporary/1/_temporary/attempt_1453244277886_0001_m_000000_3
2016-01-19 20:04:28,378 INFO [AsyncDispatcher event handler]
org.apache.hadoop.mapreduce.v2.app.job.impl.TaskAttemptImpl:
attempt_1453244277886_0001_m_000000_3 TaskAttempt Transitioned from
FAIL_TASK_CLEANUP to FAILED
2016-01-19 20:04:28,380 INFO [AsyncDispatcher event handler]
org.apache.hadoop.mapreduce.v2.app.job.impl.TaskImpl:
task_1453244277886_0001_m_000000 Task Transitioned from RUNNING to FAILED
2016-01-19 20:04:28,381 INFO [AsyncDispatcher event handler]
org.apache.hadoop.mapreduce.v2.app.job.impl.JobImpl: Num completed Tasks: 1
2016-01-19 20:04:28,381 INFO [AsyncDispatcher event handler]
org.apache.hadoop.mapreduce.v2.app.job.impl.JobImpl: Job failed as tasks
failed. failedMaps:1 failedReduces:0
2016-01-19 20:04:28,382 INFO [AsyncDispatcher event handler]
org.apache.hadoop.mapreduce.v2.app.job.impl.JobImpl:
job_1453244277886_0001Job Transitioned from RUNNING to FAIL_WAIT
2016-01-19 20:04:28,383 INFO [AsyncDispatcher event handler]
org.apache.hadoop.mapreduce.v2.app.job.impl.TaskImpl:
task_1453244277886_0001_r_000000 Task Transitioned from SCHEDULED to
KILL_WAIT
2016-01-19 20:04:28,383 INFO [AsyncDispatcher event handler]
org.apache.hadoop.mapreduce.v2.app.job.impl.TaskAttemptImpl:
attempt_1453244277886_0001_r_000000_0 TaskAttempt Transitioned from
UNASSIGNED to KILLED
2016-01-19 20:04:28,383 INFO [Thread-50]
org.apache.hadoop.mapreduce.v2.app.rm.RMContainerAllocator: Processing the
event EventType: CONTAINER_DEALLOCATE
2016-01-19 20:04:28,383 ERROR [Thread-50]
org.apache.hadoop.mapreduce.v2.app.rm.RMContainerAllocator: Could not
deallocate container for task attemptId
attempt_1453244277886_0001_r_000000_0
2016-01-19 20:04:28,383 INFO [AsyncDispatcher event handler]
org.apache.hadoop.mapreduce.v2.app.job.impl.TaskImpl:
task_1453244277886_0001_r_000000 Task Transitioned from KILL_WAIT to KILLED
2016-01-19 20:04:28,384 INFO [AsyncDispatcher event handler]
org.apache.hadoop.mapreduce.v2.app.job.impl.JobImpl:
job_1453244277886_0001Job Transitioned from FAIL_WAIT to FAIL_ABORT
2016-01-19 20:04:28,390 INFO [CommitterEvent Processor #0]
org.apache.hadoop.mapreduce.v2.app.commit.CommitterEventHandler: Processing
the event EventType: JOB_ABORT
2016-01-19 20:04:28,432 INFO [AsyncDispatcher event handler]
org.apache.hadoop.mapreduce.v2.app.job.impl.JobImpl:
job_1453244277886_0001Job Transitioned from FAIL_ABORT to FAILED
2016-01-19 20:04:28,433 INFO [Thread-61]
org.apache.hadoop.mapreduce.v2.app.MRAppMaster: We are finishing cleanly so
this is the last retry
2016-01-19 20:04:28,433 INFO [Thread-61]
org.apache.hadoop.mapreduce.v2.app.MRAppMaster: Notify RMCommunicator
isAMLastRetry: true
2016-01-19 20:04:28,433 INFO [Thread-61]
org.apache.hadoop.mapreduce.v2.app.rm.RMContainerAllocator: RMCommunicator
notified that shouldUnregistered is: true
2016-01-19 20:04:28,433 INFO [Thread-61]
org.apache.hadoop.mapreduce.v2.app.MRAppMaster: Notify JHEH isAMLastRetry:
true
2016-01-19 20:04:28,433 INFO [Thread-61]
org.apache.hadoop.mapreduce.jobhistory.JobHistoryEventHandler:
JobHistoryEventHandler notified that forceJobCompletion is true
2016-01-19 20:04:28,434 INFO [Thread-61]
org.apache.hadoop.mapreduce.v2.app.MRAppMaster: Calling stop for all the
services
2016-01-19 20:04:28,435 INFO [Thread-61]
org.apache.hadoop.mapreduce.jobhistory.JobHistoryEventHandler: Stopping
JobHistoryEventHandler. Size of the outstanding queue size is 0
2016-01-19 20:04:29,362 INFO [RMCommunicator Allocator]
org.apache.hadoop.mapreduce.v2.app.rm.RMContainerAllocator: Before
Scheduling: PendingReds:1 ScheduledMaps:0 ScheduledReds:0 AssignedMaps:0
AssignedReds:0 CompletedMaps:1 CompletedReds:0 ContAlloc:4 ContRel:0
HostLocal:1 RackLocal:0
2016-01-19 20:04:29,362 INFO [eventHandlingThread]
org.apache.hadoop.mapreduce.jobhistory.JobHistoryEventHandler: Copying
hdfs://hdnode01:54310/tmp/hadoop-yarn/staging/hduser/.staging/job_1453244277886_0001/job_1453244277886_0001_1.jhist
to
hdfs://hdnode01:54310/tmp/hadoop-yarn/staging/history/done_intermediate/hduser/job_1453244277886_0001-1453244648033-hduser-word+count-1453244668381-0-0-FAILED-default-1453244654166.jhist_tmp
2016-01-19 20:04:29,366 INFO [RMCommunicator Allocator]
org.apache.hadoop.mapreduce.v2.app.rm.RMContainerAllocator: Recalculating
schedule, headroom=768
2016-01-19 20:04:29,366 INFO [RMCommunicator Allocator]
org.apache.hadoop.mapreduce.v2.app.rm.RMContainerAllocator: Reduce slow
start threshold reached. Scheduling reduces.
2016-01-19 20:04:29,366 INFO [RMCommunicator Allocator]
org.apache.hadoop.mapreduce.v2.app.rm.RMContainerAllocator: All maps
assigned. Ramping up all remaining reduces:1
2016-01-19 20:04:29,366 INFO [RMCommunicator Allocator]
org.apache.hadoop.mapreduce.v2.app.rm.RMContainerAllocator: After
Scheduling: PendingReds:0 ScheduledMaps:0 ScheduledReds:1 AssignedMaps:0
AssignedReds:0 CompletedMaps:1 CompletedReds:0 ContAlloc:4 ContRel:0
HostLocal:1 RackLocal:0
2016-01-19 20:04:29,544 INFO [eventHandlingThread]
org.apache.hadoop.mapreduce.jobhistory.JobHistoryEventHandler: Copied to
done location:
hdfs://hdnode01:54310/tmp/hadoop-yarn/staging/history/done_intermediate/hduser/job_1453244277886_0001-1453244648033-hduser-word+count-1453244668381-0-0-FAILED-default-1453244654166.jhist_tmp
2016-01-19 20:04:29,598 INFO [eventHandlingThread]
org.apache.hadoop.mapreduce.jobhistory.JobHistoryEventHandler: Copying
hdfs://hdnode01:54310/tmp/hadoop-yarn/staging/hduser/.staging/job_1453244277886_0001/job_1453244277886_0001_1_conf.xml
to
hdfs://hdnode01:54310/tmp/hadoop-yarn/staging/history/done_intermediate/hduser/job_1453244277886_0001_conf.xml_tmp
2016-01-19 20:04:29,801 INFO [eventHandlingThread]
org.apache.hadoop.mapreduce.jobhistory.JobHistoryEventHandler: Copied to
done location:
hdfs://hdnode01:54310/tmp/hadoop-yarn/staging/history/done_intermediate/hduser/job_1453244277886_0001_conf.xml_tmp
2016-01-19 20:04:29,907 INFO [eventHandlingThread]
org.apache.hadoop.mapreduce.jobhistory.JobHistoryEventHandler: Moved tmp to
done:
hdfs://hdnode01:54310/tmp/hadoop-yarn/staging/history/done_intermediate/hduser/job_1453244277886_0001.summary_tmp
to
hdfs://hdnode01:54310/tmp/hadoop-yarn/staging/history/done_intermediate/hduser/job_1453244277886_0001.summary
2016-01-19 20:04:30,008 INFO [eventHandlingThread]
org.apache.hadoop.mapreduce.jobhistory.JobHistoryEventHandler: Moved tmp to
done:
hdfs://hdnode01:54310/tmp/hadoop-yarn/staging/history/done_intermediate/hduser/job_1453244277886_0001_conf.xml_tmp
to
hdfs://hdnode01:54310/tmp/hadoop-yarn/staging/history/done_intermediate/hduser/job_1453244277886_0001_conf.xml
2016-01-19 20:04:30,070 INFO [eventHandlingThread]
org.apache.hadoop.mapreduce.jobhistory.JobHistoryEventHandler: Moved tmp to
done:
hdfs://hdnode01:54310/tmp/hadoop-yarn/staging/history/done_intermediate/hduser/job_1453244277886_0001-1453244648033-hduser-word+count-1453244668381-0-0-FAILED-default-1453244654166.jhist_tmp
to
hdfs://hdnode01:54310/tmp/hadoop-yarn/staging/history/done_intermediate/hduser/job_1453244277886_0001-1453244648033-hduser-word+count-1453244668381-0-0-FAILED-default-1453244654166.jhist
2016-01-19 20:04:30,071 INFO [Thread-61]
org.apache.hadoop.mapreduce.jobhistory.JobHistoryEventHandler: Stopped
JobHistoryEventHandler. super.stop()
2016-01-19 20:04:30,078 INFO [Thread-61]
org.apache.hadoop.mapreduce.v2.app.rm.RMContainerAllocator: Setting job
diagnostics to Task failed task_1453244277886_0001_m_000000
Job failed as tasks failed. failedMaps:1 failedReduces:0

2016-01-19 20:04:30,080 INFO [Thread-61]
org.apache.hadoop.mapreduce.v2.app.rm.RMContainerAllocator: History url is
http://localhost:19888/jobhistory/job/job_1453244277886_0001
2016-01-19 20:04:30,094 INFO [Thread-61]
org.apache.hadoop.mapreduce.v2.app.rm.RMContainerAllocator: Waiting for
application to be successfully unregistered.
2016-01-19 20:04:31,099 INFO [Thread-61]
org.apache.hadoop.mapreduce.v2.app.rm.RMContainerAllocator: Final Stats:
PendingReds:0 ScheduledMaps:0 ScheduledReds:1 AssignedMaps:0 AssignedReds:0
CompletedMaps:1 CompletedReds:0 ContAlloc:4 ContRel:0 HostLocal:1
RackLocal:0
2016-01-19 20:04:31,104 INFO [Thread-61]
org.apache.hadoop.mapreduce.v2.app.MRAppMaster: Deleting staging directory
hdfs://hdnode01:54310
/tmp/hadoop-yarn/staging/hduser/.staging/job_1453244277886_0001
2016-01-19 20:04:31,133 INFO [Thread-61] org.apache.hadoop.ipc.Server:
Stopping server on 45584
2016-01-19 20:04:31,135 INFO [IPC Server listener on 45584]
org.apache.hadoop.ipc.Server: Stopping IPC Server listener on 45584
2016-01-19 20:04:31,135 INFO [IPC Server Responder]
org.apache.hadoop.ipc.Server: Stopping IPC Server Responder
2016-01-19 20:04:31,136 INFO [TaskHeartbeatHandler PingChecker]
org.apache.hadoop.mapreduce.v2.app.TaskHeartbeatHandler:
TaskHeartbeatHandler thread interrupted


jps results (I believe everything is OK, right?):
21267 DataNode
21609 ResourceManager
21974 JobHistoryServer
21735 NodeManager
24546 Jps
16532 org.eclipse.equinox.launcher_1.3.0.v20140415-2008.jar
21121 NameNode
22098 QuorumPeerMain
21456 SecondaryNameNode
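
As a cross-check on that jps list (just a sketch, assuming the standard Hadoop 2.4 CLI under $HADOOP_HOME), the ResourceManager and NameNode can be asked directly whether the NodeManager and DataNode registered:

    # should list exactly one NodeManager in RUNNING state
    $HADOOP_HOME/bin/yarn node -list
    # should report one live DataNode
    $HADOOP_HOME/bin/hdfs dfsadmin -report

If both show one live node, the daemons themselves look healthy.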

Re: Can't run hadoop examples with YARN Single node cluster

Posted by José Luis Larroque <la...@gmail.com>.
Hi guys, thanks for your answers.

Wordcount logs:

16/01/19 21:58:32 INFO client.RMProxy: Connecting to ResourceManager at
hdnode01/192.168.0.10:8050
SLF4J: Class path contains multiple SLF4J bindings.
SLF4J: Found binding in
[jar:file:/usr/local/hadoop/share/hadoop/common/lib/slf4j-log4j12-1.7.5.jar!/org/slf4j/impl/StaticLoggerBinder.class]
SLF4J: Found binding in
[jar:file:/usr/local/hadoop/share/hadoop/yarn/lib/giraph-examples-1.1.0-for-hadoop-2.4.0-jar-with-dependencies.jar!/org/slf4j/impl/StaticLoggerBinder.class]
SLF4J: Found binding in
[jar:file:/usr/local/hadoop/share/hadoop/mapreduce/giraph-examples-1.1.0-for-hadoop-2.4.0-jar-with-dependencies.jar!/org/slf4j/impl/StaticLoggerBinder.class]
SLF4J: See http://www.slf4j.org/codes.html#multiple_bindings for an
explanation.
SLF4J: Actual binding is of type [org.slf4j.impl.Log4jLoggerFactory]
16/01/19 21:58:32 WARN util.NativeCodeLoader: Unable to load native-hadoop
library for your platform... using builtin-java classes where applicable
hduser@jose-ubuntu:/usr/local/hadoop$ nano
/home/hduser/Desktop/Tesina/casos_de_prueba/resultados/resultado_cluster_modo_yarn_wordcount
hduser@jose-ubuntu:/usr/local/hadoop$ cat
/home/hduser/Desktop/Tesina/casos_de_prueba/resultados/resultado_cluster_modo_yarn_wordcount


Container: container_1453244277886_0001_01_000002 on localhost_35711
======================================================================
LogType: stderr
LogLength: 45
Log Contents:
Error: Could not find or load main class 256

LogType: stdout
LogLength: 0
Log Contents:



Container: container_1453244277886_0001_01_000003 on localhost_35711
======================================================================
LogType: stderr
LogLength: 45
Log Contents:
Error: Could not find or load main class 256

LogType: stdout
LogLength: 0
Log Contents:



Container: container_1453244277886_0001_01_000004 on localhost_35711
======================================================================
LogType: stderr
LogLength: 45
Log Contents:
Error: Could not find or load main class 256

LogType: stdout
LogLength: 0
Log Contents:



Container: container_1453244277886_0001_01_000005 on localhost_35711
======================================================================
LogType: stderr
LogLength: 45
Log Contents:
Error: Could not find or load main class 256

LogType: stdout
LogLength: 0
Log Contents:
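
All four map-task containers (000002 through 000005) die with that same one-line stderr. "Could not find or load main class 256" is the JVM reporting that it was handed the literal token 256 where it expected a class name, which is typically what happens when a JVM-options property (mapreduce.map.java.opts, mapred.child.java.opts, or similar) carries a bare memory number instead of a flag like -Xmx256m. That is only a guess from the error text, but it is cheap to check; the sketch below assumes the config lives under $HADOOP_HOME/etc/hadoop:

    # a bare "256" passed to java reproduces the container stderr exactly:
    $ java 256
    Error: Could not find or load main class 256

    # so look for any JVM-options value that is a bare number,
    # e.g. <value>256</value> where <value>-Xmx256m</value> was intended
    grep -B2 -A2 -E 'java\.opts|command-opts' \
        $HADOOP_HOME/etc/hadoop/mapred-site.xml \
        $HADOOP_HOME/etc/hadoop/yarn-site.xml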



Container: container_1453244277886_0001_01_000001 on localhost_35711
======================================================================
LogType: stderr
LogLength: 929
Log Contents:
SLF4J: Class path contains multiple SLF4J bindings.
SLF4J: Found binding in
[jar:file:/usr/local/hadoop/share/hadoop/common/lib/slf4j-log4j12-1.7.5.jar!/org/slf4j/impl/StaticLoggerBinder.class]
SLF4J: Found binding in
[jar:file:/usr/local/hadoop/share/hadoop/yarn/lib/giraph-examples-1.1.0-for-hadoop-2.4.0-jar-with-dependencies.jar!/org/slf4j/impl/StaticLoggerBinder.class]
SLF4J: Found binding in
[jar:file:/usr/local/hadoop/share/hadoop/mapreduce/giraph-examples-1.1.0-for-hadoop-2.4.0-jar-with-dependencies.jar!/org/slf4j/impl/StaticLoggerBinder.class]
SLF4J: See http://www.slf4j.org/codes.html#multiple_bindings for an
explanation.
SLF4J: Actual binding is of type [org.slf4j.impl.Log4jLoggerFactory]
log4j:WARN No appenders could be found for logger
(org.apache.hadoop.ipc.Server).
log4j:WARN Please initialize the log4j system properly.
log4j:WARN See http://logging.apache.org/log4j/1.2/faq.html#noconfig for
more info.
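
Side note on the repeated SLF4J multiple-bindings warning: the extra StaticLoggerBinder comes from the giraph-examples jar still sitting in both share/hadoop/yarn/lib and share/hadoop/mapreduce from the earlier Giraph setup. If those jars are not needed for these MapReduce runs, moving them aside (paths taken from the warning above) silences the duplicate-binding noise; it is probably unrelated to the container failures, but it narrows down the classpath:

    mkdir -p ~/giraph-jars-backup
    mv /usr/local/hadoop/share/hadoop/yarn/lib/giraph-examples-1.1.0-for-hadoop-2.4.0-jar-with-dependencies.jar \
       /usr/local/hadoop/share/hadoop/mapreduce/giraph-examples-1.1.0-for-hadoop-2.4.0-jar-with-dependencies.jar \
       ~/giraph-jars-backup/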

LogType: stdout
LogLength: 0
Log Contents:

LogType: syslog
LogLength: 56780
Log Contents:
2016-01-19 20:04:11,329 INFO [main]
org.apache.hadoop.mapreduce.v2.app.MRAppMaster: Created MRAppMaster for
application appattempt_1453244277886_0001_000001
2016-01-19 20:04:11,657 WARN [main] org.apache.hadoop.conf.Configuration:
job.xml:an attempt to override final parameter:
mapreduce.job.end-notification.max.retry.interval;  Ignoring.
2016-01-19 20:04:11,674 WARN [main] org.apache.hadoop.conf.Configuration:
job.xml:an attempt to override final parameter:
mapreduce.job.end-notification.max.attempts;  Ignoring.
2016-01-19 20:04:11,765 WARN [main]
org.apache.hadoop.util.NativeCodeLoader: Unable to load native-hadoop
library for your platform... using builtin-java classes where applicable
2016-01-19 20:04:11,776 INFO [main]
org.apache.hadoop.mapreduce.v2.app.MRAppMaster: Executing with tokens:
2016-01-19 20:04:11,776 INFO [main]
org.apache.hadoop.mapreduce.v2.app.MRAppMaster: Kind: YARN_AM_RM_TOKEN,
Service: , Ident:
(org.apache.hadoop.yarn.security.AMRMTokenIdentifier@73e8f4b9)
2016-01-19 20:04:11,801 INFO [main]
org.apache.hadoop.mapreduce.v2.app.MRAppMaster: The specific max attempts:
2 for application: 1. Attempt num: 1 is last retry: false
2016-01-19 20:04:11,806 INFO [main]
org.apache.hadoop.mapreduce.v2.app.MRAppMaster: Using mapred
newApiCommitter.
2016-01-19 20:04:11,934 WARN [main] org.apache.hadoop.conf.Configuration:
job.xml:an attempt to override final parameter: dfs.datanode.data.dir;
Ignoring.
2016-01-19 20:04:11,939 WARN [main] org.apache.hadoop.conf.Configuration:
job.xml:an attempt to override final parameter:
mapreduce.job.end-notification.max.retry.interval;  Ignoring.
2016-01-19 20:04:11,948 WARN [main] org.apache.hadoop.conf.Configuration:
job.xml:an attempt to override final parameter: dfs.namenode.name.dir;
Ignoring.
2016-01-19 20:04:11,953 WARN [main] org.apache.hadoop.conf.Configuration:
job.xml:an attempt to override final parameter:
mapreduce.job.end-notification.max.attempts;  Ignoring.
2016-01-19 20:04:12,464 INFO [main]
org.apache.hadoop.mapreduce.v2.app.MRAppMaster: OutputCommitter set in
config null
2016-01-19 20:04:12,526 INFO [main]
org.apache.hadoop.mapreduce.v2.app.MRAppMaster: OutputCommitter is
org.apache.hadoop.mapreduce.lib.output.FileOutputCommitter
2016-01-19 20:04:12,548 INFO [main]
org.apache.hadoop.yarn.event.AsyncDispatcher: Registering class
org.apache.hadoop.mapreduce.jobhistory.EventType for class
org.apache.hadoop.mapreduce.jobhistory.JobHistoryEventHandler
2016-01-19 20:04:12,549 INFO [main]
org.apache.hadoop.yarn.event.AsyncDispatcher: Registering class
org.apache.hadoop.mapreduce.v2.app.job.event.JobEventType for class
org.apache.hadoop.mapreduce.v2.app.MRAppMaster$JobEventDispatcher
2016-01-19 20:04:12,550 INFO [main]
org.apache.hadoop.yarn.event.AsyncDispatcher: Registering class
org.apache.hadoop.mapreduce.v2.app.job.event.TaskEventType for class
org.apache.hadoop.mapreduce.v2.app.MRAppMaster$TaskEventDispatcher
2016-01-19 20:04:12,551 INFO [main]
org.apache.hadoop.yarn.event.AsyncDispatcher: Registering class
org.apache.hadoop.mapreduce.v2.app.job.event.TaskAttemptEventType for class
org.apache.hadoop.mapreduce.v2.app.MRAppMaster$TaskAttemptEventDispatcher
2016-01-19 20:04:12,552 INFO [main]
org.apache.hadoop.yarn.event.AsyncDispatcher: Registering class
org.apache.hadoop.mapreduce.v2.app.commit.CommitterEventType for class
org.apache.hadoop.mapreduce.v2.app.commit.CommitterEventHandler
2016-01-19 20:04:12,557 INFO [main]
org.apache.hadoop.yarn.event.AsyncDispatcher: Registering class
org.apache.hadoop.mapreduce.v2.app.speculate.Speculator$EventType for class
org.apache.hadoop.mapreduce.v2.app.MRAppMaster$SpeculatorEventDispatcher
2016-01-19 20:04:12,558 INFO [main]
org.apache.hadoop.yarn.event.AsyncDispatcher: Registering class
org.apache.hadoop.mapreduce.v2.app.rm.ContainerAllocator$EventType for
class
org.apache.hadoop.mapreduce.v2.app.MRAppMaster$ContainerAllocatorRouter
2016-01-19 20:04:12,559 INFO [main]
org.apache.hadoop.yarn.event.AsyncDispatcher: Registering class
org.apache.hadoop.mapreduce.v2.app.launcher.ContainerLauncher$EventType for
class org.apache.hadoop.mapreduce.v2.app.MRAppMaster$ContainerLauncherRouter
2016-01-19 20:04:12,615 INFO [main]
org.apache.hadoop.mapreduce.jobhistory.JobHistoryEventHandler: Perms after
creating 488, Expected: 504
2016-01-19 20:04:12,615 INFO [main]
org.apache.hadoop.mapreduce.jobhistory.JobHistoryEventHandler: Explicitly
setting permissions to : 504, rwxrwx---
2016-01-19 20:04:12,731 INFO [main]
org.apache.hadoop.yarn.event.AsyncDispatcher: Registering class
org.apache.hadoop.mapreduce.v2.app.job.event.JobFinishEvent$Type for class
org.apache.hadoop.mapreduce.v2.app.MRAppMaster$JobFinishEventHandler
2016-01-19 20:04:12,956 INFO [main]
org.apache.hadoop.metrics2.impl.MetricsConfig: loaded properties from
hadoop-metrics2.properties
2016-01-19 20:04:13,018 INFO [main]
org.apache.hadoop.metrics2.impl.MetricsSystemImpl: Scheduled snapshot
period at 10 second(s).
2016-01-19 20:04:13,018 INFO [main]
org.apache.hadoop.metrics2.impl.MetricsSystemImpl: MRAppMaster metrics
system started
2016-01-19 20:04:13,026 INFO [main]
org.apache.hadoop.mapreduce.v2.app.job.impl.JobImpl: Adding job token for
job_1453244277886_0001 to jobTokenSecretManager
2016-01-19 20:04:13,139 INFO [main]
org.apache.hadoop.mapreduce.v2.app.job.impl.JobImpl: Not uberizing
job_1453244277886_0001 because: not enabled;
2016-01-19 20:04:13,154 INFO [main]
org.apache.hadoop.mapreduce.v2.app.job.impl.JobImpl: Input size for job
job_1453244277886_0001 = 343691. Number of splits = 1
2016-01-19 20:04:13,156 INFO [main]
org.apache.hadoop.mapreduce.v2.app.job.impl.JobImpl: Number of reduces for
job job_1453244277886_0001 = 1
2016-01-19 20:04:13,156 INFO [main]
org.apache.hadoop.mapreduce.v2.app.job.impl.JobImpl:
job_1453244277886_0001Job Transitioned from NEW to INITED
2016-01-19 20:04:13,157 INFO [main]
org.apache.hadoop.mapreduce.v2.app.MRAppMaster: MRAppMaster launching
normal, non-uberized, multi-container job job_1453244277886_0001.
2016-01-19 20:04:13,186 INFO [main] org.apache.hadoop.ipc.CallQueueManager:
Using callQueue class java.util.concurrent.LinkedBlockingQueue
2016-01-19 20:04:13,195 INFO [Socket Reader #1 for port 56461]
org.apache.hadoop.ipc.Server: Starting Socket Reader #1 for port 56461
2016-01-19 20:04:13,237 INFO [main]
org.apache.hadoop.yarn.factories.impl.pb.RpcServerFactoryPBImpl: Adding
protocol org.apache.hadoop.mapreduce.v2.api.MRClientProtocolPB to the server
2016-01-19 20:04:13,238 INFO [IPC Server Responder]
org.apache.hadoop.ipc.Server: IPC Server Responder: starting
2016-01-19 20:04:13,238 INFO [IPC Server listener on 56461]
org.apache.hadoop.ipc.Server: IPC Server listener on 56461: starting
2016-01-19 20:04:13,239 INFO [main]
org.apache.hadoop.mapreduce.v2.app.client.MRClientService: Instantiated
MRClientService at jose-ubuntu/127.0.0.1:56461
2016-01-19 20:04:13,300 INFO [main] org.mortbay.log: Logging to
org.slf4j.impl.Log4jLoggerAdapter(org.mortbay.log) via
org.mortbay.log.Slf4jLog
2016-01-19 20:04:13,304 INFO [main] org.apache.hadoop.http.HttpRequestLog:
Http request log for http.requests.mapreduce is not defined
2016-01-19 20:04:13,315 INFO [main] org.apache.hadoop.http.HttpServer2:
Added global filter 'safety'
(class=org.apache.hadoop.http.HttpServer2$QuotingInputFilter)
2016-01-19 20:04:13,320 INFO [main] org.apache.hadoop.http.HttpServer2:
Added filter AM_PROXY_FILTER
(class=org.apache.hadoop.yarn.server.webproxy.amfilter.AmIpFilter) to
context mapreduce
2016-01-19 20:04:13,321 INFO [main] org.apache.hadoop.http.HttpServer2:
Added filter AM_PROXY_FILTER
(class=org.apache.hadoop.yarn.server.webproxy.amfilter.AmIpFilter) to
context static
2016-01-19 20:04:13,324 INFO [main] org.apache.hadoop.http.HttpServer2:
adding path spec: /mapreduce/*
2016-01-19 20:04:13,324 INFO [main] org.apache.hadoop.http.HttpServer2:
adding path spec: /ws/*
2016-01-19 20:04:13,335 INFO [main] org.apache.hadoop.http.HttpServer2:
Jetty bound to port 44070
2016-01-19 20:04:13,335 INFO [main] org.mortbay.log: jetty-6.1.26
2016-01-19 20:04:13,370 INFO [main] org.mortbay.log: Extract
jar:file:/usr/local/hadoop/share/hadoop/yarn/hadoop-yarn-common-2.4.0.jar!/webapps/mapreduce
to /tmp/Jetty_0_0_0_0_44070_mapreduce____rdpvio/webapp
2016-01-19 20:04:13,647 INFO [main] org.mortbay.log: Started
SelectChannelConnector@0.0.0.0:44070
2016-01-19 20:04:13,647 INFO [main] org.apache.hadoop.yarn.webapp.WebApps:
Web app /mapreduce started at 44070
2016-01-19 20:04:13,956 INFO [main] org.apache.hadoop.yarn.webapp.WebApps:
Registered webapp guice modules
2016-01-19 20:04:13,960 INFO [AsyncDispatcher event handler]
org.apache.hadoop.mapreduce.v2.app.speculate.DefaultSpeculator: JOB_CREATE
job_1453244277886_0001
2016-01-19 20:04:13,961 INFO [main] org.apache.hadoop.ipc.CallQueueManager:
Using callQueue class java.util.concurrent.LinkedBlockingQueue
2016-01-19 20:04:13,961 INFO [Socket Reader #1 for port 45584]
org.apache.hadoop.ipc.Server: Starting Socket Reader #1 for port 45584
2016-01-19 20:04:13,966 INFO [IPC Server Responder]
org.apache.hadoop.ipc.Server: IPC Server Responder: starting
2016-01-19 20:04:13,966 INFO [IPC Server listener on 45584]
org.apache.hadoop.ipc.Server: IPC Server listener on 45584: starting
2016-01-19 20:04:13,987 INFO [main]
org.apache.hadoop.mapreduce.v2.app.rm.RMContainerRequestor:
nodeBlacklistingEnabled:true
2016-01-19 20:04:13,987 INFO [main]
org.apache.hadoop.mapreduce.v2.app.rm.RMContainerRequestor:
maxTaskFailuresPerNode is 3
2016-01-19 20:04:13,988 INFO [main]
org.apache.hadoop.mapreduce.v2.app.rm.RMContainerRequestor:
blacklistDisablePercent is 33
2016-01-19 20:04:14,052 WARN [main] org.apache.hadoop.conf.Configuration:
job.xml:an attempt to override final parameter: dfs.datanode.data.dir;
Ignoring.
2016-01-19 20:04:14,054 WARN [main] org.apache.hadoop.conf.Configuration:
job.xml:an attempt to override final parameter:
mapreduce.job.end-notification.max.retry.interval;  Ignoring.
2016-01-19 20:04:14,057 WARN [main] org.apache.hadoop.conf.Configuration:
job.xml:an attempt to override final parameter: dfs.namenode.name.dir;
Ignoring.
2016-01-19 20:04:14,059 WARN [main] org.apache.hadoop.conf.Configuration:
job.xml:an attempt to override final parameter:
mapreduce.job.end-notification.max.attempts;  Ignoring.
2016-01-19 20:04:14,062 INFO [main] org.apache.hadoop.yarn.client.RMProxy:
Connecting to ResourceManager at hdnode01/192.168.0.10:8030
2016-01-19 20:04:14,158 INFO [main]
org.apache.hadoop.mapreduce.v2.app.rm.RMContainerAllocator:
maxContainerCapability: 2000
2016-01-19 20:04:14,158 INFO [main]
org.apache.hadoop.mapreduce.v2.app.rm.RMContainerAllocator: queue: default
2016-01-19 20:04:14,162 INFO [main]
org.apache.hadoop.mapreduce.v2.app.launcher.ContainerLauncherImpl: Upper
limit on the thread pool size is 500
2016-01-19 20:04:14,164 INFO [main]
org.apache.hadoop.yarn.client.api.impl.ContainerManagementProtocolProxy:
yarn.client.max-nodemanagers-proxies : 500
2016-01-19 20:04:14,172 INFO [AsyncDispatcher event handler]
org.apache.hadoop.mapreduce.v2.app.job.impl.JobImpl:
job_1453244277886_0001Job Transitioned from INITED to SETUP
2016-01-19 20:04:14,174 INFO [CommitterEvent Processor #0]
org.apache.hadoop.mapreduce.v2.app.commit.CommitterEventHandler: Processing
the event EventType: JOB_SETUP
2016-01-19 20:04:14,210 INFO [AsyncDispatcher event handler]
org.apache.hadoop.mapreduce.v2.app.job.impl.JobImpl:
job_1453244277886_0001Job Transitioned from SETUP to RUNNING
2016-01-19 20:04:14,227 INFO [AsyncDispatcher event handler]
org.apache.hadoop.yarn.util.RackResolver: Resolved localhost to
/default-rack
2016-01-19 20:04:14,230 INFO [AsyncDispatcher event handler]
org.apache.hadoop.mapreduce.v2.app.job.impl.TaskImpl:
task_1453244277886_0001_m_000000 Task Transitioned from NEW to SCHEDULED
2016-01-19 20:04:14,230 INFO [AsyncDispatcher event handler]
org.apache.hadoop.mapreduce.v2.app.job.impl.TaskImpl:
task_1453244277886_0001_r_000000 Task Transitioned from NEW to SCHEDULED
2016-01-19 20:04:14,232 INFO [AsyncDispatcher event handler]
org.apache.hadoop.mapreduce.v2.app.job.impl.TaskAttemptImpl:
attempt_1453244277886_0001_m_000000_0 TaskAttempt Transitioned from NEW to
UNASSIGNED
2016-01-19 20:04:14,232 INFO [AsyncDispatcher event handler]
org.apache.hadoop.mapreduce.v2.app.job.impl.TaskAttemptImpl:
attempt_1453244277886_0001_r_000000_0 TaskAttempt Transitioned from NEW to
UNASSIGNED
2016-01-19 20:04:14,233 INFO [Thread-50]
org.apache.hadoop.mapreduce.v2.app.rm.RMContainerAllocator:
mapResourceReqt:512
2016-01-19 20:04:14,245 INFO [Thread-50]
org.apache.hadoop.mapreduce.v2.app.rm.RMContainerAllocator:
reduceResourceReqt:512
2016-01-19 20:04:14,324 INFO [eventHandlingThread]
org.apache.hadoop.mapreduce.jobhistory.JobHistoryEventHandler: Event Writer
setup for JobId: job_1453244277886_0001, File:
hdfs://hdnode01:54310/tmp/hadoop-yarn/staging/hduser/.staging/job_1453244277886_0001/job_1453244277886_0001_1.jhist
2016-01-19 20:04:15,162 INFO [RMCommunicator Allocator]
org.apache.hadoop.mapreduce.v2.app.rm.RMContainerAllocator: Before
Scheduling: PendingReds:1 ScheduledMaps:1 ScheduledReds:0 AssignedMaps:0
AssignedReds:0 CompletedMaps:0 CompletedReds:0 ContAlloc:0 ContRel:0
HostLocal:0 RackLocal:0
2016-01-19 20:04:15,217 INFO [RMCommunicator Allocator]
org.apache.hadoop.mapreduce.v2.app.rm.RMContainerRequestor: getResources()
for application_1453244277886_0001: ask=3 release= 0 newContainers=0
finishedContainers=0 resourcelimit=<memory:1280, vCores:0> knownNMs=1
2016-01-19 20:04:15,217 INFO [RMCommunicator Allocator]
org.apache.hadoop.mapreduce.v2.app.rm.RMContainerAllocator: Recalculating
schedule, headroom=1280
2016-01-19 20:04:15,217 INFO [RMCommunicator Allocator]
org.apache.hadoop.mapreduce.v2.app.rm.RMContainerAllocator: Reduce slow
start threshold not met. completedMapsForReduceSlowstart 1
2016-01-19 20:04:16,240 INFO [RMCommunicator Allocator]
org.apache.hadoop.mapreduce.v2.app.rm.RMContainerAllocator: Got allocated
containers 1
2016-01-19 20:04:16,241 INFO [RMCommunicator Allocator]
org.apache.hadoop.mapreduce.v2.app.rm.RMContainerAllocator: Assigned
container container_1453244277886_0001_01_000002 to
attempt_1453244277886_0001_m_000000_0
2016-01-19 20:04:16,243 INFO [RMCommunicator Allocator]
org.apache.hadoop.mapreduce.v2.app.rm.RMContainerAllocator: Recalculating
schedule, headroom=768
2016-01-19 20:04:16,243 INFO [RMCommunicator Allocator]
org.apache.hadoop.mapreduce.v2.app.rm.RMContainerAllocator: Reduce slow
start threshold not met. completedMapsForReduceSlowstart 1
2016-01-19 20:04:16,243 INFO [RMCommunicator Allocator]
org.apache.hadoop.mapreduce.v2.app.rm.RMContainerAllocator: After
Scheduling: PendingReds:1 ScheduledMaps:0 ScheduledReds:0 AssignedMaps:1
AssignedReds:0 CompletedMaps:0 CompletedReds:0 ContAlloc:1 ContRel:0
HostLocal:1 RackLocal:0
2016-01-19 20:04:16,291 INFO [AsyncDispatcher event handler]
org.apache.hadoop.yarn.util.RackResolver: Resolved localhost to
/default-rack
2016-01-19 20:04:16,316 INFO [AsyncDispatcher event handler]
org.apache.hadoop.mapreduce.v2.app.job.impl.TaskAttemptImpl: The job-jar
file on the remote FS is
hdfs://hdnode01:54310/tmp/hadoop-yarn/staging/hduser/.staging/job_1453244277886_0001/job.jar
2016-01-19 20:04:16,322 INFO [AsyncDispatcher event handler]
org.apache.hadoop.mapreduce.v2.app.job.impl.TaskAttemptImpl: The job-conf
file on the remote FS is
/tmp/hadoop-yarn/staging/hduser/.staging/job_1453244277886_0001/job.xml
2016-01-19 20:04:16,325 INFO [AsyncDispatcher event handler]
org.apache.hadoop.mapreduce.v2.app.job.impl.TaskAttemptImpl: Adding #0
tokens and #1 secret keys for NM use for launching container
2016-01-19 20:04:16,325 INFO [AsyncDispatcher event handler]
org.apache.hadoop.mapreduce.v2.app.job.impl.TaskAttemptImpl: Size of
containertokens_dob is 1
2016-01-19 20:04:16,325 INFO [AsyncDispatcher event handler]
org.apache.hadoop.mapreduce.v2.app.job.impl.TaskAttemptImpl: Putting
shuffle token in serviceData
2016-01-19 20:04:16,350 INFO [AsyncDispatcher event handler]
org.apache.hadoop.mapreduce.v2.app.job.impl.TaskAttemptImpl:
attempt_1453244277886_0001_m_000000_0 TaskAttempt Transitioned from
UNASSIGNED to ASSIGNED
2016-01-19 20:04:16,354 INFO [ContainerLauncher #0]
org.apache.hadoop.mapreduce.v2.app.launcher.ContainerLauncherImpl:
Processing the event EventType: CONTAINER_REMOTE_LAUNCH for container
container_1453244277886_0001_01_000002 taskAttempt
attempt_1453244277886_0001_m_000000_0
2016-01-19 20:04:16,356 INFO [ContainerLauncher #0]
org.apache.hadoop.mapreduce.v2.app.launcher.ContainerLauncherImpl:
Launching attempt_1453244277886_0001_m_000000_0
2016-01-19 20:04:16,357 INFO [ContainerLauncher #0]
org.apache.hadoop.yarn.client.api.impl.ContainerManagementProtocolProxy:
Opening proxy : localhost:35711
2016-01-19 20:04:16,411 INFO [ContainerLauncher #0]
org.apache.hadoop.mapreduce.v2.app.launcher.ContainerLauncherImpl: Shuffle
port returned by ContainerManager for attempt_1453244277886_0001_m_000000_0
: 13562
2016-01-19 20:04:16,413 INFO [AsyncDispatcher event handler]
org.apache.hadoop.mapreduce.v2.app.job.impl.TaskAttemptImpl: TaskAttempt:
[attempt_1453244277886_0001_m_000000_0] using containerId:
[container_1453244277886_0001_01_000002 on NM: [localhost:35711]
2016-01-19 20:04:16,418 INFO [AsyncDispatcher event handler]
org.apache.hadoop.mapreduce.v2.app.job.impl.TaskAttemptImpl:
attempt_1453244277886_0001_m_000000_0 TaskAttempt Transitioned from
ASSIGNED to RUNNING
2016-01-19 20:04:16,419 INFO [AsyncDispatcher event handler]
org.apache.hadoop.mapreduce.v2.app.speculate.DefaultSpeculator:
ATTEMPT_START task_1453244277886_0001_m_000000
2016-01-19 20:04:16,419 INFO [AsyncDispatcher event handler]
org.apache.hadoop.mapreduce.v2.app.job.impl.TaskImpl:
task_1453244277886_0001_m_000000 Task Transitioned from SCHEDULED to RUNNING
2016-01-19 20:04:17,251 INFO [RMCommunicator Allocator]
org.apache.hadoop.mapreduce.v2.app.rm.RMContainerRequestor: getResources()
for application_1453244277886_0001: ask=3 release= 0 newContainers=0
finishedContainers=0 resourcelimit=<memory:768, vCores:-1> knownNMs=1
2016-01-19 20:04:18,269 INFO [RMCommunicator Allocator]
org.apache.hadoop.mapreduce.v2.app.rm.RMContainerAllocator: Received
completed container container_1453244277886_0001_01_000002
2016-01-19 20:04:18,269 INFO [RMCommunicator Allocator]
org.apache.hadoop.mapreduce.v2.app.rm.RMContainerAllocator: Recalculating
schedule, headroom=768
2016-01-19 20:04:18,269 INFO [RMCommunicator Allocator]
org.apache.hadoop.mapreduce.v2.app.rm.RMContainerAllocator: Reduce slow
start threshold not met. completedMapsForReduceSlowstart 1
2016-01-19 20:04:18,270 INFO [RMCommunicator Allocator]
org.apache.hadoop.mapreduce.v2.app.rm.RMContainerAllocator: After
Scheduling: PendingReds:1 ScheduledMaps:0 ScheduledReds:0 AssignedMaps:0
AssignedReds:0 CompletedMaps:0 CompletedReds:0 ContAlloc:1 ContRel:0
HostLocal:1 RackLocal:0
2016-01-19 20:04:18,280 INFO [AsyncDispatcher event handler]
org.apache.hadoop.mapreduce.v2.app.job.impl.TaskAttemptImpl:
attempt_1453244277886_0001_m_000000_0 TaskAttempt Transitioned from RUNNING
to FAIL_CONTAINER_CLEANUP
2016-01-19 20:04:18,280 INFO [AsyncDispatcher event handler]
org.apache.hadoop.mapreduce.v2.app.job.impl.TaskAttemptImpl: Diagnostics
report from attempt_1453244277886_0001_m_000000_0: Exception from
container-launch: org.apache.hadoop.util.Shell$ExitCodeException:
org.apache.hadoop.util.Shell$ExitCodeException:
    at org.apache.hadoop.util.Shell.runCommand(Shell.java:505)
    at org.apache.hadoop.util.Shell.run(Shell.java:418)
    at
org.apache.hadoop.util.Shell$ShellCommandExecutor.execute(Shell.java:650)
    at
org.apache.hadoop.yarn.server.nodemanager.DefaultContainerExecutor.launchContainer(DefaultContainerExecutor.java:195)
    at
org.apache.hadoop.yarn.server.nodemanager.containermanager.launcher.ContainerLaunch.call(ContainerLaunch.java:300)
    at
org.apache.hadoop.yarn.server.nodemanager.containermanager.launcher.ContainerLaunch.call(ContainerLaunch.java:81)
    at java.util.concurrent.FutureTask.run(FutureTask.java:262)
    at
java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
    at
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
    at java.lang.Thread.run(Thread.java:745)


Container exited with a non-zero exit code 1

2016-01-19 20:04:18,281 INFO [ContainerLauncher #1]
org.apache.hadoop.mapreduce.v2.app.launcher.ContainerLauncherImpl:
Processing the event EventType: CONTAINER_REMOTE_CLEANUP for container
container_1453244277886_0001_01_000002 taskAttempt
attempt_1453244277886_0001_m_000000_0
2016-01-19 20:04:18,281 INFO [ContainerLauncher #1]
org.apache.hadoop.mapreduce.v2.app.launcher.ContainerLauncherImpl: KILLING
attempt_1453244277886_0001_m_000000_0
2016-01-19 20:04:18,299 INFO [AsyncDispatcher event handler]
org.apache.hadoop.mapreduce.v2.app.job.impl.TaskAttemptImpl:
attempt_1453244277886_0001_m_000000_0 TaskAttempt Transitioned from
FAIL_CONTAINER_CLEANUP to FAIL_TASK_CLEANUP
2016-01-19 20:04:18,300 INFO [CommitterEvent Processor #1]
org.apache.hadoop.mapreduce.v2.app.commit.CommitterEventHandler: Processing
the event EventType: TASK_ABORT
2016-01-19 20:04:18,313 WARN [CommitterEvent Processor #1]
org.apache.hadoop.mapreduce.lib.output.FileOutputCommitter: Could not
delete
hdfs://hdnode01:54310/user/hduser/output/wordcount/_temporary/1/_temporary/attempt_1453244277886_0001_m_000000_0
2016-01-19 20:04:18,317 INFO [AsyncDispatcher event handler]
org.apache.hadoop.mapreduce.v2.app.job.impl.TaskAttemptImpl:
attempt_1453244277886_0001_m_000000_0 TaskAttempt Transitioned from
FAIL_TASK_CLEANUP to FAILED
2016-01-19 20:04:18,326 INFO [AsyncDispatcher event handler]
org.apache.hadoop.yarn.util.RackResolver: Resolved localhost to
/default-rack
2016-01-19 20:04:18,327 INFO [Thread-50]
org.apache.hadoop.mapreduce.v2.app.rm.RMContainerRequestor: 1 failures on
node localhost
2016-01-19 20:04:18,329 INFO [AsyncDispatcher event handler]
org.apache.hadoop.mapreduce.v2.app.job.impl.TaskAttemptImpl:
attempt_1453244277886_0001_m_000000_1 TaskAttempt Transitioned from NEW to
UNASSIGNED
2016-01-19 20:04:18,329 INFO [Thread-50]
org.apache.hadoop.mapreduce.v2.app.rm.RMContainerAllocator: Added
attempt_1453244277886_0001_m_000000_1 to list of failed maps
2016-01-19 20:04:19,270 INFO [RMCommunicator Allocator]
org.apache.hadoop.mapreduce.v2.app.rm.RMContainerAllocator: Before
Scheduling: PendingReds:1 ScheduledMaps:1 ScheduledReds:0 AssignedMaps:0
AssignedReds:0 CompletedMaps:0 CompletedReds:0 ContAlloc:1 ContRel:0
HostLocal:1 RackLocal:0
2016-01-19 20:04:19,277 INFO [RMCommunicator Allocator]
org.apache.hadoop.mapreduce.v2.app.rm.RMContainerRequestor: getResources()
for application_1453244277886_0001: ask=1 release= 0 newContainers=0
finishedContainers=0 resourcelimit=<memory:768, vCores:-1> knownNMs=1
2016-01-19 20:04:19,277 INFO [RMCommunicator Allocator]
org.apache.hadoop.mapreduce.v2.app.rm.RMContainerAllocator: Recalculating
schedule, headroom=768
2016-01-19 20:04:19,278 INFO [RMCommunicator Allocator]
org.apache.hadoop.mapreduce.v2.app.rm.RMContainerAllocator: Reduce slow
start threshold not met. completedMapsForReduceSlowstart 1
2016-01-19 20:04:20,285 INFO [RMCommunicator Allocator]
org.apache.hadoop.mapreduce.v2.app.rm.RMContainerAllocator: Got allocated
containers 1
2016-01-19 20:04:20,285 INFO [RMCommunicator Allocator]
org.apache.hadoop.mapreduce.v2.app.rm.RMContainerAllocator: Assigning
container Container: [ContainerId: container_1453244277886_0001_01_000003,
NodeId: localhost:35711, NodeHttpAddress: localhost:8042, Resource:
<memory:512, vCores:1>, Priority: 5, Token: Token { kind: ContainerToken,
service: 127.0.0.1:35711 }, ] to fast fail map
2016-01-19 20:04:20,286 INFO [RMCommunicator Allocator]
org.apache.hadoop.mapreduce.v2.app.rm.RMContainerAllocator: Assigned from
earlierFailedMaps
2016-01-19 20:04:20,286 INFO [RMCommunicator Allocator]
org.apache.hadoop.mapreduce.v2.app.rm.RMContainerAllocator: Assigned
container container_1453244277886_0001_01_000003 to
attempt_1453244277886_0001_m_000000_1
2016-01-19 20:04:20,287 INFO [RMCommunicator Allocator]
org.apache.hadoop.mapreduce.v2.app.rm.RMContainerAllocator: Recalculating
schedule, headroom=768
2016-01-19 20:04:20,287 INFO [RMCommunicator Allocator]
org.apache.hadoop.mapreduce.v2.app.rm.RMContainerAllocator: Reduce slow
start threshold not met. completedMapsForReduceSlowstart 1
2016-01-19 20:04:20,287 INFO [RMCommunicator Allocator]
org.apache.hadoop.mapreduce.v2.app.rm.RMContainerAllocator: After
Scheduling: PendingReds:1 ScheduledMaps:0 ScheduledReds:0 AssignedMaps:1
AssignedReds:0 CompletedMaps:0 CompletedReds:0 ContAlloc:2 ContRel:0
HostLocal:1 RackLocal:0
2016-01-19 20:04:20,287 INFO [AsyncDispatcher event handler]
org.apache.hadoop.yarn.util.RackResolver: Resolved localhost to
/default-rack
2016-01-19 20:04:20,289 INFO [AsyncDispatcher event handler]
org.apache.hadoop.mapreduce.v2.app.job.impl.TaskAttemptImpl:
attempt_1453244277886_0001_m_000000_1 TaskAttempt Transitioned from
UNASSIGNED to ASSIGNED
2016-01-19 20:04:20,292 INFO [ContainerLauncher #2]
org.apache.hadoop.mapreduce.v2.app.launcher.ContainerLauncherImpl:
Processing the event EventType: CONTAINER_REMOTE_LAUNCH for container
container_1453244277886_0001_01_000003 taskAttempt
attempt_1453244277886_0001_m_000000_1
2016-01-19 20:04:20,292 INFO [ContainerLauncher #2]
org.apache.hadoop.mapreduce.v2.app.launcher.ContainerLauncherImpl:
Launching attempt_1453244277886_0001_m_000000_1
2016-01-19 20:04:20,301 INFO [ContainerLauncher #2]
org.apache.hadoop.mapreduce.v2.app.launcher.ContainerLauncherImpl: Shuffle
port returned by ContainerManager for attempt_1453244277886_0001_m_000000_1
: 13562
2016-01-19 20:04:20,302 INFO [AsyncDispatcher event handler]
org.apache.hadoop.mapreduce.v2.app.job.impl.TaskAttemptImpl: TaskAttempt:
[attempt_1453244277886_0001_m_000000_1] using containerId:
[container_1453244277886_0001_01_000003 on NM: [localhost:35711]
2016-01-19 20:04:20,303 INFO [AsyncDispatcher event handler]
org.apache.hadoop.mapreduce.v2.app.job.impl.TaskAttemptImpl:
attempt_1453244277886_0001_m_000000_1 TaskAttempt Transitioned from
ASSIGNED to RUNNING
2016-01-19 20:04:20,304 INFO [AsyncDispatcher event handler]
org.apache.hadoop.mapreduce.v2.app.speculate.DefaultSpeculator:
ATTEMPT_START task_1453244277886_0001_m_000000
2016-01-19 20:04:21,295 INFO [RMCommunicator Allocator]
org.apache.hadoop.mapreduce.v2.app.rm.RMContainerRequestor: getResources()
for application_1453244277886_0001: ask=1 release= 0 newContainers=0
finishedContainers=1 resourcelimit=<memory:768, vCores:-1> knownNMs=1
2016-01-19 20:04:21,296 INFO [RMCommunicator Allocator]
org.apache.hadoop.mapreduce.v2.app.rm.RMContainerAllocator: Received
completed container container_1453244277886_0001_01_000003
2016-01-19 20:04:21,296 INFO [RMCommunicator Allocator]
org.apache.hadoop.mapreduce.v2.app.rm.RMContainerAllocator: Recalculating
schedule, headroom=768
2016-01-19 20:04:21,296 INFO [RMCommunicator Allocator]
org.apache.hadoop.mapreduce.v2.app.rm.RMContainerAllocator: Reduce slow
start threshold not met. completedMapsForReduceSlowstart 1
2016-01-19 20:04:21,297 INFO [RMCommunicator Allocator]
org.apache.hadoop.mapreduce.v2.app.rm.RMContainerAllocator: After
Scheduling: PendingReds:1 ScheduledMaps:0 ScheduledReds:0 AssignedMaps:0
AssignedReds:0 CompletedMaps:0 CompletedReds:0 ContAlloc:2 ContRel:0
HostLocal:1 RackLocal:0
2016-01-19 20:04:21,297 INFO [AsyncDispatcher event handler]
org.apache.hadoop.mapreduce.v2.app.job.impl.TaskAttemptImpl:
attempt_1453244277886_0001_m_000000_1 TaskAttempt Transitioned from RUNNING
to FAIL_CONTAINER_CLEANUP
2016-01-19 20:04:21,298 INFO [AsyncDispatcher event handler]
org.apache.hadoop.mapreduce.v2.app.job.impl.TaskAttemptImpl: Diagnostics
report from attempt_1453244277886_0001_m_000000_1: Exception from
container-launch: org.apache.hadoop.util.Shell$ExitCodeException:
org.apache.hadoop.util.Shell$ExitCodeException:
    at org.apache.hadoop.util.Shell.runCommand(Shell.java:505)
    at org.apache.hadoop.util.Shell.run(Shell.java:418)
    at
org.apache.hadoop.util.Shell$ShellCommandExecutor.execute(Shell.java:650)
    at
org.apache.hadoop.yarn.server.nodemanager.DefaultContainerExecutor.launchContainer(DefaultContainerExecutor.java:195)
    at
org.apache.hadoop.yarn.server.nodemanager.containermanager.launcher.ContainerLaunch.call(ContainerLaunch.java:300)
    at
org.apache.hadoop.yarn.server.nodemanager.containermanager.launcher.ContainerLaunch.call(ContainerLaunch.java:81)
    at java.util.concurrent.FutureTask.run(FutureTask.java:262)
    at
java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
    at
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
    at java.lang.Thread.run(Thread.java:745)


Container exited with a non-zero exit code 1

2016-01-19 20:04:21,300 INFO [ContainerLauncher #3]
org.apache.hadoop.mapreduce.v2.app.launcher.ContainerLauncherImpl:
Processing the event EventType: CONTAINER_REMOTE_CLEANUP for container
container_1453244277886_0001_01_000003 taskAttempt
attempt_1453244277886_0001_m_000000_1
2016-01-19 20:04:21,301 INFO [ContainerLauncher #3]
org.apache.hadoop.mapreduce.v2.app.launcher.ContainerLauncherImpl: KILLING
attempt_1453244277886_0001_m_000000_1
2016-01-19 20:04:21,307 INFO [AsyncDispatcher event handler]
org.apache.hadoop.mapreduce.v2.app.job.impl.TaskAttemptImpl:
attempt_1453244277886_0001_m_000000_1 TaskAttempt Transitioned from
FAIL_CONTAINER_CLEANUP to FAIL_TASK_CLEANUP
2016-01-19 20:04:21,308 INFO [CommitterEvent Processor #2]
org.apache.hadoop.mapreduce.v2.app.commit.CommitterEventHandler: Processing
the event EventType: TASK_ABORT
2016-01-19 20:04:21,312 WARN [CommitterEvent Processor #2]
org.apache.hadoop.mapreduce.lib.output.FileOutputCommitter: Could not
delete
hdfs://hdnode01:54310/user/hduser/output/wordcount/_temporary/1/_temporary/attempt_1453244277886_0001_m_000000_1
2016-01-19 20:04:21,312 INFO [AsyncDispatcher event handler]
org.apache.hadoop.mapreduce.v2.app.job.impl.TaskAttemptImpl:
attempt_1453244277886_0001_m_000000_1 TaskAttempt Transitioned from
FAIL_TASK_CLEANUP to FAILED
2016-01-19 20:04:21,313 INFO [AsyncDispatcher event handler]
org.apache.hadoop.yarn.util.RackResolver: Resolved localhost to
/default-rack
2016-01-19 20:04:21,313 INFO [Thread-50]
org.apache.hadoop.mapreduce.v2.app.rm.RMContainerRequestor: 2 failures on
node localhost
2016-01-19 20:04:21,313 INFO [AsyncDispatcher event handler]
org.apache.hadoop.mapreduce.v2.app.job.impl.TaskAttemptImpl:
attempt_1453244277886_0001_m_000000_2 TaskAttempt Transitioned from NEW to
UNASSIGNED
2016-01-19 20:04:21,314 INFO [Thread-50]
org.apache.hadoop.mapreduce.v2.app.rm.RMContainerAllocator: Added
attempt_1453244277886_0001_m_000000_2 to list of failed maps
2016-01-19 20:04:22,297 INFO [RMCommunicator Allocator]
org.apache.hadoop.mapreduce.v2.app.rm.RMContainerAllocator: Before
Scheduling: PendingReds:1 ScheduledMaps:1 ScheduledReds:0 AssignedMaps:0
AssignedReds:0 CompletedMaps:0 CompletedReds:0 ContAlloc:2 ContRel:0
HostLocal:1 RackLocal:0
2016-01-19 20:04:22,304 INFO [RMCommunicator Allocator]
org.apache.hadoop.mapreduce.v2.app.rm.RMContainerRequestor: getResources()
for application_1453244277886_0001: ask=1 release= 0 newContainers=0
finishedContainers=0 resourcelimit=<memory:768, vCores:-1> knownNMs=1
2016-01-19 20:04:22,305 INFO [RMCommunicator Allocator]
org.apache.hadoop.mapreduce.v2.app.rm.RMContainerAllocator: Recalculating
schedule, headroom=768
2016-01-19 20:04:22,305 INFO [RMCommunicator Allocator]
org.apache.hadoop.mapreduce.v2.app.rm.RMContainerAllocator: Reduce slow
start threshold not met. completedMapsForReduceSlowstart 1
2016-01-19 20:04:23,316 INFO [RMCommunicator Allocator]
org.apache.hadoop.mapreduce.v2.app.rm.RMContainerAllocator: Got allocated
containers 1
2016-01-19 20:04:23,316 INFO [RMCommunicator Allocator]
org.apache.hadoop.mapreduce.v2.app.rm.RMContainerAllocator: Assigning
container Container: [ContainerId: container_1453244277886_0001_01_000004,
NodeId: localhost:35711, NodeHttpAddress: localhost:8042, Resource:
<memory:512, vCores:1>, Priority: 5, Token: Token { kind: ContainerToken,
service: 127.0.0.1:35711 }, ] to fast fail map
2016-01-19 20:04:23,317 INFO [RMCommunicator Allocator]
org.apache.hadoop.mapreduce.v2.app.rm.RMContainerAllocator: Assigned from
earlierFailedMaps
2016-01-19 20:04:23,317 INFO [RMCommunicator Allocator]
org.apache.hadoop.mapreduce.v2.app.rm.RMContainerAllocator: Assigned
container container_1453244277886_0001_01_000004 to
attempt_1453244277886_0001_m_000000_2
2016-01-19 20:04:23,318 INFO [RMCommunicator Allocator]
org.apache.hadoop.mapreduce.v2.app.rm.RMContainerAllocator: Recalculating
schedule, headroom=768
2016-01-19 20:04:23,318 INFO [RMCommunicator Allocator]
org.apache.hadoop.mapreduce.v2.app.rm.RMContainerAllocator: Reduce slow
start threshold not met. completedMapsForReduceSlowstart 1
2016-01-19 20:04:23,318 INFO [RMCommunicator Allocator]
org.apache.hadoop.mapreduce.v2.app.rm.RMContainerAllocator: After
Scheduling: PendingReds:1 ScheduledMaps:0 ScheduledReds:0 AssignedMaps:1
AssignedReds:0 CompletedMaps:0 CompletedReds:0 ContAlloc:3 ContRel:0
HostLocal:1 RackLocal:0
2016-01-19 20:04:23,318 INFO [AsyncDispatcher event handler]
org.apache.hadoop.yarn.util.RackResolver: Resolved localhost to
/default-rack
2016-01-19 20:04:23,320 INFO [AsyncDispatcher event handler]
org.apache.hadoop.mapreduce.v2.app.job.impl.TaskAttemptImpl:
attempt_1453244277886_0001_m_000000_2 TaskAttempt Transitioned from
UNASSIGNED to ASSIGNED
2016-01-19 20:04:23,323 INFO [ContainerLauncher #4]
org.apache.hadoop.mapreduce.v2.app.launcher.ContainerLauncherImpl:
Processing the event EventType: CONTAINER_REMOTE_LAUNCH for container
container_1453244277886_0001_01_000004 taskAttempt
attempt_1453244277886_0001_m_000000_2
2016-01-19 20:04:23,323 INFO [ContainerLauncher #4]
org.apache.hadoop.mapreduce.v2.app.launcher.ContainerLauncherImpl:
Launching attempt_1453244277886_0001_m_000000_2
2016-01-19 20:04:23,334 INFO [ContainerLauncher #4]
org.apache.hadoop.mapreduce.v2.app.launcher.ContainerLauncherImpl: Shuffle
port returned by ContainerManager for attempt_1453244277886_0001_m_000000_2
: 13562
2016-01-19 20:04:23,335 INFO [AsyncDispatcher event handler]
org.apache.hadoop.mapreduce.v2.app.job.impl.TaskAttemptImpl: TaskAttempt:
[attempt_1453244277886_0001_m_000000_2] using containerId:
[container_1453244277886_0001_01_000004 on NM: [localhost:35711]
2016-01-19 20:04:23,335 INFO [AsyncDispatcher event handler]
org.apache.hadoop.mapreduce.v2.app.job.impl.TaskAttemptImpl:
attempt_1453244277886_0001_m_000000_2 TaskAttempt Transitioned from
ASSIGNED to RUNNING
2016-01-19 20:04:23,335 INFO [AsyncDispatcher event handler]
org.apache.hadoop.mapreduce.v2.app.speculate.DefaultSpeculator:
ATTEMPT_START task_1453244277886_0001_m_000000
2016-01-19 20:04:24,326 INFO [RMCommunicator Allocator]
org.apache.hadoop.mapreduce.v2.app.rm.RMContainerRequestor: getResources()
for application_1453244277886_0001: ask=1 release= 0 newContainers=0
finishedContainers=1 resourcelimit=<memory:768, vCores:-1> knownNMs=1
2016-01-19 20:04:24,327 INFO [RMCommunicator Allocator]
org.apache.hadoop.mapreduce.v2.app.rm.RMContainerAllocator: Received
completed container container_1453244277886_0001_01_000004
2016-01-19 20:04:24,327 INFO [RMCommunicator Allocator]
org.apache.hadoop.mapreduce.v2.app.rm.RMContainerAllocator: Recalculating
schedule, headroom=768
2016-01-19 20:04:24,327 INFO [RMCommunicator Allocator]
org.apache.hadoop.mapreduce.v2.app.rm.RMContainerAllocator: Reduce slow
start threshold not met. completedMapsForReduceSlowstart 1
2016-01-19 20:04:24,328 INFO [RMCommunicator Allocator]
org.apache.hadoop.mapreduce.v2.app.rm.RMContainerAllocator: After
Scheduling: PendingReds:1 ScheduledMaps:0 ScheduledReds:0 AssignedMaps:0
AssignedReds:0 CompletedMaps:0 CompletedReds:0 ContAlloc:3 ContRel:0
HostLocal:1 RackLocal:0
2016-01-19 20:04:24,328 INFO [AsyncDispatcher event handler]
org.apache.hadoop.mapreduce.v2.app.job.impl.TaskAttemptImpl:
attempt_1453244277886_0001_m_000000_2 TaskAttempt Transitioned from RUNNING
to FAIL_CONTAINER_CLEANUP
2016-01-19 20:04:24,328 INFO [AsyncDispatcher event handler]
org.apache.hadoop.mapreduce.v2.app.job.impl.TaskAttemptImpl: Diagnostics
report from attempt_1453244277886_0001_m_000000_2: Exception from
container-launch: org.apache.hadoop.util.Shell$ExitCodeException:
org.apache.hadoop.util.Shell$ExitCodeException:
    at org.apache.hadoop.util.Shell.runCommand(Shell.java:505)
    at org.apache.hadoop.util.Shell.run(Shell.java:418)
    at
org.apache.hadoop.util.Shell$ShellCommandExecutor.execute(Shell.java:650)
    at
org.apache.hadoop.yarn.server.nodemanager.DefaultContainerExecutor.launchContainer(DefaultContainerExecutor.java:195)
    at
org.apache.hadoop.yarn.server.nodemanager.containermanager.launcher.ContainerLaunch.call(ContainerLaunch.java:300)
    at
org.apache.hadoop.yarn.server.nodemanager.containermanager.launcher.ContainerLaunch.call(ContainerLaunch.java:81)
    at java.util.concurrent.FutureTask.run(FutureTask.java:262)
    at
java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
    at
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
    at java.lang.Thread.run(Thread.java:745)


Container exited with a non-zero exit code 1

2016-01-19 20:04:24,331 INFO [ContainerLauncher #5]
org.apache.hadoop.mapreduce.v2.app.launcher.ContainerLauncherImpl:
Processing the event EventType: CONTAINER_REMOTE_CLEANUP for container
container_1453244277886_0001_01_000004 taskAttempt
attempt_1453244277886_0001_m_000000_2
2016-01-19 20:04:24,331 INFO [ContainerLauncher #5]
org.apache.hadoop.mapreduce.v2.app.launcher.ContainerLauncherImpl: KILLING
attempt_1453244277886_0001_m_000000_2
2016-01-19 20:04:24,335 INFO [AsyncDispatcher event handler]
org.apache.hadoop.mapreduce.v2.app.job.impl.TaskAttemptImpl:
attempt_1453244277886_0001_m_000000_2 TaskAttempt Transitioned from
FAIL_CONTAINER_CLEANUP to FAIL_TASK_CLEANUP
2016-01-19 20:04:24,336 INFO [CommitterEvent Processor #3]
org.apache.hadoop.mapreduce.v2.app.commit.CommitterEventHandler: Processing
the event EventType: TASK_ABORT
2016-01-19 20:04:24,340 WARN [CommitterEvent Processor #3]
org.apache.hadoop.mapreduce.lib.output.FileOutputCommitter: Could not
delete
hdfs://hdnode01:54310/user/hduser/output/wordcount/_temporary/1/_temporary/attempt_1453244277886_0001_m_000000_2
2016-01-19 20:04:24,341 INFO [AsyncDispatcher event handler]
org.apache.hadoop.mapreduce.v2.app.job.impl.TaskAttemptImpl:
attempt_1453244277886_0001_m_000000_2 TaskAttempt Transitioned from
FAIL_TASK_CLEANUP to FAILED
2016-01-19 20:04:24,341 INFO [AsyncDispatcher event handler]
org.apache.hadoop.yarn.util.RackResolver: Resolved localhost to
/default-rack
2016-01-19 20:04:24,342 INFO [Thread-50]
org.apache.hadoop.mapreduce.v2.app.rm.RMContainerRequestor: 3 failures on
node localhost
2016-01-19 20:04:24,342 INFO [Thread-50]
org.apache.hadoop.mapreduce.v2.app.rm.RMContainerRequestor: Blacklisted
host localhost
2016-01-19 20:04:24,342 INFO [AsyncDispatcher event handler]
org.apache.hadoop.mapreduce.v2.app.job.impl.TaskAttemptImpl:
attempt_1453244277886_0001_m_000000_3 TaskAttempt Transitioned from NEW to
UNASSIGNED
2016-01-19 20:04:24,343 INFO [Thread-50]
org.apache.hadoop.mapreduce.v2.app.rm.RMContainerAllocator: Added
attempt_1453244277886_0001_m_000000_3 to list of failed maps
2016-01-19 20:04:25,328 INFO [RMCommunicator Allocator]
org.apache.hadoop.mapreduce.v2.app.rm.RMContainerAllocator: Before
Scheduling: PendingReds:1 ScheduledMaps:1 ScheduledReds:0 AssignedMaps:0
AssignedReds:0 CompletedMaps:0 CompletedReds:0 ContAlloc:3 ContRel:0
HostLocal:1 RackLocal:0
2016-01-19 20:04:25,336 INFO [RMCommunicator Allocator]
org.apache.hadoop.mapreduce.v2.app.rm.RMContainerRequestor: getResources()
for application_1453244277886_0001: ask=1 release= 0 newContainers=0
finishedContainers=0 resourcelimit=<memory:768, vCores:-1> knownNMs=1
2016-01-19 20:04:25,336 INFO [RMCommunicator Allocator]
org.apache.hadoop.mapreduce.v2.app.rm.RMContainerRequestor: Update the
blacklist for application_1453244277886_0001: blacklistAdditions=1
blacklistRemovals=0
2016-01-19 20:04:25,337 INFO [RMCommunicator Allocator]
org.apache.hadoop.mapreduce.v2.app.rm.RMContainerRequestor: Ignore
blacklisting set to true. Known: 1, Blacklisted: 1, 100%
2016-01-19 20:04:25,337 INFO [RMCommunicator Allocator]
org.apache.hadoop.mapreduce.v2.app.rm.RMContainerAllocator: Recalculating
schedule, headroom=768
2016-01-19 20:04:25,337 INFO [RMCommunicator Allocator]
org.apache.hadoop.mapreduce.v2.app.rm.RMContainerAllocator: Reduce slow
start threshold not met. completedMapsForReduceSlowstart 1
2016-01-19 20:04:26,342 INFO [RMCommunicator Allocator]
org.apache.hadoop.mapreduce.v2.app.rm.RMContainerRequestor: Update the
blacklist for application_1453244277886_0001: blacklistAdditions=0
blacklistRemovals=1
2016-01-19 20:04:26,342 INFO [RMCommunicator Allocator]
org.apache.hadoop.mapreduce.v2.app.rm.RMContainerAllocator: Recalculating
schedule, headroom=768
2016-01-19 20:04:26,342 INFO [RMCommunicator Allocator]
org.apache.hadoop.mapreduce.v2.app.rm.RMContainerAllocator: Reduce slow
start threshold not met. completedMapsForReduceSlowstart 1
2016-01-19 20:04:27,351 INFO [RMCommunicator Allocator]
org.apache.hadoop.mapreduce.v2.app.rm.RMContainerAllocator: Got allocated
containers 1
2016-01-19 20:04:27,352 INFO [RMCommunicator Allocator]
org.apache.hadoop.mapreduce.v2.app.rm.RMContainerAllocator: Assigning
container Container: [ContainerId: container_1453244277886_0001_01_000005,
NodeId: localhost:35711, NodeHttpAddress: localhost:8042, Resource:
<memory:512, vCores:1>, Priority: 5, Token: Token { kind: ContainerToken,
service: 127.0.0.1:35711 }, ] to fast fail map
2016-01-19 20:04:27,352 INFO [RMCommunicator Allocator]
org.apache.hadoop.mapreduce.v2.app.rm.RMContainerAllocator: Assigned from
earlierFailedMaps
2016-01-19 20:04:27,352 INFO [RMCommunicator Allocator]
org.apache.hadoop.mapreduce.v2.app.rm.RMContainerAllocator: Assigned
container container_1453244277886_0001_01_000005 to
attempt_1453244277886_0001_m_000000_3
2016-01-19 20:04:27,353 INFO [RMCommunicator Allocator]
org.apache.hadoop.mapreduce.v2.app.rm.RMContainerAllocator: Recalculating
schedule, headroom=768
2016-01-19 20:04:27,353 INFO [RMCommunicator Allocator]
org.apache.hadoop.mapreduce.v2.app.rm.RMContainerAllocator: Reduce slow
start threshold not met. completedMapsForReduceSlowstart 1
2016-01-19 20:04:27,353 INFO [AsyncDispatcher event handler]
org.apache.hadoop.yarn.util.RackResolver: Resolved localhost to
/default-rack
2016-01-19 20:04:27,353 INFO [RMCommunicator Allocator]
org.apache.hadoop.mapreduce.v2.app.rm.RMContainerAllocator: After
Scheduling: PendingReds:1 ScheduledMaps:0 ScheduledReds:0 AssignedMaps:1
AssignedReds:0 CompletedMaps:0 CompletedReds:0 ContAlloc:4 ContRel:0
HostLocal:1 RackLocal:0
2016-01-19 20:04:27,354 INFO [AsyncDispatcher event handler]
org.apache.hadoop.mapreduce.v2.app.job.impl.TaskAttemptImpl:
attempt_1453244277886_0001_m_000000_3 TaskAttempt Transitioned from
UNASSIGNED to ASSIGNED
2016-01-19 20:04:27,355 INFO [ContainerLauncher #6]
org.apache.hadoop.mapreduce.v2.app.launcher.ContainerLauncherImpl:
Processing the event EventType: CONTAINER_REMOTE_LAUNCH for container
container_1453244277886_0001_01_000005 taskAttempt
attempt_1453244277886_0001_m_000000_3
2016-01-19 20:04:27,355 INFO [ContainerLauncher #6]
org.apache.hadoop.mapreduce.v2.app.launcher.ContainerLauncherImpl:
Launching attempt_1453244277886_0001_m_000000_3
2016-01-19 20:04:27,365 INFO [ContainerLauncher #6]
org.apache.hadoop.mapreduce.v2.app.launcher.ContainerLauncherImpl: Shuffle
port returned by ContainerManager for attempt_1453244277886_0001_m_000000_3
: 13562
2016-01-19 20:04:27,365 INFO [AsyncDispatcher event handler]
org.apache.hadoop.mapreduce.v2.app.job.impl.TaskAttemptImpl: TaskAttempt:
[attempt_1453244277886_0001_m_000000_3] using containerId:
[container_1453244277886_0001_01_000005 on NM: [localhost:35711]
2016-01-19 20:04:27,366 INFO [AsyncDispatcher event handler]
org.apache.hadoop.mapreduce.v2.app.job.impl.TaskAttemptImpl:
attempt_1453244277886_0001_m_000000_3 TaskAttempt Transitioned from
ASSIGNED to RUNNING
2016-01-19 20:04:27,366 INFO [AsyncDispatcher event handler]
org.apache.hadoop.mapreduce.v2.app.speculate.DefaultSpeculator:
ATTEMPT_START task_1453244277886_0001_m_000000
2016-01-19 20:04:28,361 INFO [RMCommunicator Allocator]
org.apache.hadoop.mapreduce.v2.app.rm.RMContainerRequestor: getResources()
for application_1453244277886_0001: ask=1 release= 0 newContainers=0
finishedContainers=1 resourcelimit=<memory:768, vCores:-1> knownNMs=1
2016-01-19 20:04:28,361 INFO [RMCommunicator Allocator]
org.apache.hadoop.mapreduce.v2.app.rm.RMContainerAllocator: Received
completed container container_1453244277886_0001_01_000005
2016-01-19 20:04:28,361 INFO [RMCommunicator Allocator]
org.apache.hadoop.mapreduce.v2.app.rm.RMContainerAllocator: Recalculating
schedule, headroom=768
2016-01-19 20:04:28,361 INFO [RMCommunicator Allocator]
org.apache.hadoop.mapreduce.v2.app.rm.RMContainerAllocator: Reduce slow
start threshold not met. completedMapsForReduceSlowstart 1
2016-01-19 20:04:28,362 INFO [RMCommunicator Allocator]
org.apache.hadoop.mapreduce.v2.app.rm.RMContainerAllocator: After
Scheduling: PendingReds:1 ScheduledMaps:0 ScheduledReds:0 AssignedMaps:0
AssignedReds:0 CompletedMaps:0 CompletedReds:0 ContAlloc:4 ContRel:0
HostLocal:1 RackLocal:0
2016-01-19 20:04:28,362 INFO [AsyncDispatcher event handler]
org.apache.hadoop.mapreduce.v2.app.job.impl.TaskAttemptImpl:
attempt_1453244277886_0001_m_000000_3 TaskAttempt Transitioned from RUNNING
to FAIL_CONTAINER_CLEANUP
2016-01-19 20:04:28,362 INFO [AsyncDispatcher event handler]
org.apache.hadoop.mapreduce.v2.app.job.impl.TaskAttemptImpl: Diagnostics
report from attempt_1453244277886_0001_m_000000_3: Exception from
container-launch: org.apache.hadoop.util.Shell$ExitCodeException:
org.apache.hadoop.util.Shell$ExitCodeException:
    at org.apache.hadoop.util.Shell.runCommand(Shell.java:505)
    at org.apache.hadoop.util.Shell.run(Shell.java:418)
    at
org.apache.hadoop.util.Shell$ShellCommandExecutor.execute(Shell.java:650)
    at
org.apache.hadoop.yarn.server.nodemanager.DefaultContainerExecutor.launchContainer(DefaultContainerExecutor.java:195)
    at
org.apache.hadoop.yarn.server.nodemanager.containermanager.launcher.ContainerLaunch.call(ContainerLaunch.java:300)
    at
org.apache.hadoop.yarn.server.nodemanager.containermanager.launcher.ContainerLaunch.call(ContainerLaunch.java:81)
    at java.util.concurrent.FutureTask.run(FutureTask.java:262)
    at
java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
    at
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
    at java.lang.Thread.run(Thread.java:745)


Container exited with a non-zero exit code 1

2016-01-19 20:04:28,364 INFO [ContainerLauncher #7]
org.apache.hadoop.mapreduce.v2.app.launcher.ContainerLauncherImpl:
Processing the event EventType: CONTAINER_REMOTE_CLEANUP for container
container_1453244277886_0001_01_000005 taskAttempt
attempt_1453244277886_0001_m_000000_3
2016-01-19 20:04:28,365 INFO [ContainerLauncher #7]
org.apache.hadoop.mapreduce.v2.app.launcher.ContainerLauncherImpl: KILLING
attempt_1453244277886_0001_m_000000_3
2016-01-19 20:04:28,373 INFO [AsyncDispatcher event handler]
org.apache.hadoop.mapreduce.v2.app.job.impl.TaskAttemptImpl:
attempt_1453244277886_0001_m_000000_3 TaskAttempt Transitioned from
FAIL_CONTAINER_CLEANUP to FAIL_TASK_CLEANUP
2016-01-19 20:04:28,374 INFO [CommitterEvent Processor #4]
org.apache.hadoop.mapreduce.v2.app.commit.CommitterEventHandler: Processing
the event EventType: TASK_ABORT
2016-01-19 20:04:28,377 WARN [CommitterEvent Processor #4]
org.apache.hadoop.mapreduce.lib.output.FileOutputCommitter: Could not
delete
hdfs://hdnode01:54310/user/hduser/output/wordcount/_temporary/1/_temporary/attempt_1453244277886_0001_m_000000_3
2016-01-19 20:04:28,378 INFO [AsyncDispatcher event handler]
org.apache.hadoop.mapreduce.v2.app.job.impl.TaskAttemptImpl:
attempt_1453244277886_0001_m_000000_3 TaskAttempt Transitioned from
FAIL_TASK_CLEANUP to FAILED
2016-01-19 20:04:28,380 INFO [AsyncDispatcher event handler]
org.apache.hadoop.mapreduce.v2.app.job.impl.TaskImpl:
task_1453244277886_0001_m_000000 Task Transitioned from RUNNING to FAILED
2016-01-19 20:04:28,381 INFO [AsyncDispatcher event handler]
org.apache.hadoop.mapreduce.v2.app.job.impl.JobImpl: Num completed Tasks: 1
2016-01-19 20:04:28,381 INFO [AsyncDispatcher event handler]
org.apache.hadoop.mapreduce.v2.app.job.impl.JobImpl: Job failed as tasks
failed. failedMaps:1 failedReduces:0
2016-01-19 20:04:28,382 INFO [AsyncDispatcher event handler]
org.apache.hadoop.mapreduce.v2.app.job.impl.JobImpl:
job_1453244277886_0001Job Transitioned from RUNNING to FAIL_WAIT
2016-01-19 20:04:28,383 INFO [AsyncDispatcher event handler]
org.apache.hadoop.mapreduce.v2.app.job.impl.TaskImpl:
task_1453244277886_0001_r_000000 Task Transitioned from SCHEDULED to
KILL_WAIT
2016-01-19 20:04:28,383 INFO [AsyncDispatcher event handler]
org.apache.hadoop.mapreduce.v2.app.job.impl.TaskAttemptImpl:
attempt_1453244277886_0001_r_000000_0 TaskAttempt Transitioned from
UNASSIGNED to KILLED
2016-01-19 20:04:28,383 INFO [Thread-50]
org.apache.hadoop.mapreduce.v2.app.rm.RMContainerAllocator: Processing the
event EventType: CONTAINER_DEALLOCATE
2016-01-19 20:04:28,383 ERROR [Thread-50]
org.apache.hadoop.mapreduce.v2.app.rm.RMContainerAllocator: Could not
deallocate container for task attemptId
attempt_1453244277886_0001_r_000000_0
2016-01-19 20:04:28,383 INFO [AsyncDispatcher event handler]
org.apache.hadoop.mapreduce.v2.app.job.impl.TaskImpl:
task_1453244277886_0001_r_000000 Task Transitioned from KILL_WAIT to KILLED
2016-01-19 20:04:28,384 INFO [AsyncDispatcher event handler]
org.apache.hadoop.mapreduce.v2.app.job.impl.JobImpl:
job_1453244277886_0001Job Transitioned from FAIL_WAIT to FAIL_ABORT
2016-01-19 20:04:28,390 INFO [CommitterEvent Processor #0]
org.apache.hadoop.mapreduce.v2.app.commit.CommitterEventHandler: Processing
the event EventType: JOB_ABORT
2016-01-19 20:04:28,432 INFO [AsyncDispatcher event handler]
org.apache.hadoop.mapreduce.v2.app.job.impl.JobImpl:
job_1453244277886_0001Job Transitioned from FAIL_ABORT to FAILED
2016-01-19 20:04:28,433 INFO [Thread-61]
org.apache.hadoop.mapreduce.v2.app.MRAppMaster: We are finishing cleanly so
this is the last retry
2016-01-19 20:04:28,433 INFO [Thread-61]
org.apache.hadoop.mapreduce.v2.app.MRAppMaster: Notify RMCommunicator
isAMLastRetry: true
2016-01-19 20:04:28,433 INFO [Thread-61]
org.apache.hadoop.mapreduce.v2.app.rm.RMContainerAllocator: RMCommunicator
notified that shouldUnregistered is: true
2016-01-19 20:04:28,433 INFO [Thread-61]
org.apache.hadoop.mapreduce.v2.app.MRAppMaster: Notify JHEH isAMLastRetry:
true
2016-01-19 20:04:28,433 INFO [Thread-61]
org.apache.hadoop.mapreduce.jobhistory.JobHistoryEventHandler:
JobHistoryEventHandler notified that forceJobCompletion is true
2016-01-19 20:04:28,434 INFO [Thread-61]
org.apache.hadoop.mapreduce.v2.app.MRAppMaster: Calling stop for all the
services
2016-01-19 20:04:28,435 INFO [Thread-61]
org.apache.hadoop.mapreduce.jobhistory.JobHistoryEventHandler: Stopping
JobHistoryEventHandler. Size of the outstanding queue size is 0
2016-01-19 20:04:29,362 INFO [RMCommunicator Allocator]
org.apache.hadoop.mapreduce.v2.app.rm.RMContainerAllocator: Before
Scheduling: PendingReds:1 ScheduledMaps:0 ScheduledReds:0 AssignedMaps:0
AssignedReds:0 CompletedMaps:1 CompletedReds:0 ContAlloc:4 ContRel:0
HostLocal:1 RackLocal:0
2016-01-19 20:04:29,362 INFO [eventHandlingThread]
org.apache.hadoop.mapreduce.jobhistory.JobHistoryEventHandler: Copying
hdfs://hdnode01:54310/tmp/hadoop-yarn/staging/hduser/.staging/job_1453244277886_0001/job_1453244277886_0001_1.jhist
to
hdfs://hdnode01:54310/tmp/hadoop-yarn/staging/history/done_intermediate/hduser/job_1453244277886_0001-1453244648033-hduser-word+count-1453244668381-0-0-FAILED-default-1453244654166.jhist_tmp
2016-01-19 20:04:29,366 INFO [RMCommunicator Allocator]
org.apache.hadoop.mapreduce.v2.app.rm.RMContainerAllocator: Recalculating
schedule, headroom=768
2016-01-19 20:04:29,366 INFO [RMCommunicator Allocator]
org.apache.hadoop.mapreduce.v2.app.rm.RMContainerAllocator: Reduce slow
start threshold reached. Scheduling reduces.
2016-01-19 20:04:29,366 INFO [RMCommunicator Allocator]
org.apache.hadoop.mapreduce.v2.app.rm.RMContainerAllocator: All maps
assigned. Ramping up all remaining reduces:1
2016-01-19 20:04:29,366 INFO [RMCommunicator Allocator]
org.apache.hadoop.mapreduce.v2.app.rm.RMContainerAllocator: After
Scheduling: PendingReds:0 ScheduledMaps:0 ScheduledReds:1 AssignedMaps:0
AssignedReds:0 CompletedMaps:1 CompletedReds:0 ContAlloc:4 ContRel:0
HostLocal:1 RackLocal:0
2016-01-19 20:04:29,544 INFO [eventHandlingThread]
org.apache.hadoop.mapreduce.jobhistory.JobHistoryEventHandler: Copied to
done location:
hdfs://hdnode01:54310/tmp/hadoop-yarn/staging/history/done_intermediate/hduser/job_1453244277886_0001-1453244648033-hduser-word+count-1453244668381-0-0-FAILED-default-1453244654166.jhist_tmp
2016-01-19 20:04:29,598 INFO [eventHandlingThread]
org.apache.hadoop.mapreduce.jobhistory.JobHistoryEventHandler: Copying
hdfs://hdnode01:54310/tmp/hadoop-yarn/staging/hduser/.staging/job_1453244277886_0001/job_1453244277886_0001_1_conf.xml
to
hdfs://hdnode01:54310/tmp/hadoop-yarn/staging/history/done_intermediate/hduser/job_1453244277886_0001_conf.xml_tmp
2016-01-19 20:04:29,801 INFO [eventHandlingThread]
org.apache.hadoop.mapreduce.jobhistory.JobHistoryEventHandler: Copied to
done location:
hdfs://hdnode01:54310/tmp/hadoop-yarn/staging/history/done_intermediate/hduser/job_1453244277886_0001_conf.xml_tmp
2016-01-19 20:04:29,907 INFO [eventHandlingThread]
org.apache.hadoop.mapreduce.jobhistory.JobHistoryEventHandler: Moved tmp to
done:
hdfs://hdnode01:54310/tmp/hadoop-yarn/staging/history/done_intermediate/hduser/job_1453244277886_0001.summary_tmp
to
hdfs://hdnode01:54310/tmp/hadoop-yarn/staging/history/done_intermediate/hduser/job_1453244277886_0001.summary
2016-01-19 20:04:30,008 INFO [eventHandlingThread]
org.apache.hadoop.mapreduce.jobhistory.JobHistoryEventHandler: Moved tmp to
done:
hdfs://hdnode01:54310/tmp/hadoop-yarn/staging/history/done_intermediate/hduser/job_1453244277886_0001_conf.xml_tmp
to
hdfs://hdnode01:54310/tmp/hadoop-yarn/staging/history/done_intermediate/hduser/job_1453244277886_0001_conf.xml
2016-01-19 20:04:30,070 INFO [eventHandlingThread]
org.apache.hadoop.mapreduce.jobhistory.JobHistoryEventHandler: Moved tmp to
done:
hdfs://hdnode01:54310/tmp/hadoop-yarn/staging/history/done_intermediate/hduser/job_1453244277886_0001-1453244648033-hduser-word+count-1453244668381-0-0-FAILED-default-1453244654166.jhist_tmp
to
hdfs://hdnode01:54310/tmp/hadoop-yarn/staging/history/done_intermediate/hduser/job_1453244277886_0001-1453244648033-hduser-word+count-1453244668381-0-0-FAILED-default-1453244654166.jhist
2016-01-19 20:04:30,071 INFO [Thread-61]
org.apache.hadoop.mapreduce.jobhistory.JobHistoryEventHandler: Stopped
JobHistoryEventHandler. super.stop()
2016-01-19 20:04:30,078 INFO [Thread-61]
org.apache.hadoop.mapreduce.v2.app.rm.RMContainerAllocator: Setting job
diagnostics to Task failed task_1453244277886_0001_m_000000
Job failed as tasks failed. failedMaps:1 failedReduces:0

2016-01-19 20:04:30,080 INFO [Thread-61]
org.apache.hadoop.mapreduce.v2.app.rm.RMContainerAllocator: History url is
http://localhost:19888/jobhistory/job/job_1453244277886_0001
2016-01-19 20:04:30,094 INFO [Thread-61]
org.apache.hadoop.mapreduce.v2.app.rm.RMContainerAllocator: Waiting for
application to be successfully unregistered.
2016-01-19 20:04:31,099 INFO [Thread-61]
org.apache.hadoop.mapreduce.v2.app.rm.RMContainerAllocator: Final Stats:
PendingReds:0 ScheduledMaps:0 ScheduledReds:1 AssignedMaps:0 AssignedReds:0
CompletedMaps:1 CompletedReds:0 ContAlloc:4 ContRel:0 HostLocal:1
RackLocal:0
2016-01-19 20:04:31,104 INFO [Thread-61]
org.apache.hadoop.mapreduce.v2.app.MRAppMaster: Deleting staging directory
hdfs://hdnode01:54310
/tmp/hadoop-yarn/staging/hduser/.staging/job_1453244277886_0001
2016-01-19 20:04:31,133 INFO [Thread-61] org.apache.hadoop.ipc.Server:
Stopping server on 45584
2016-01-19 20:04:31,135 INFO [IPC Server listener on 45584]
org.apache.hadoop.ipc.Server: Stopping IPC Server listener on 45584
2016-01-19 20:04:31,135 INFO [IPC Server Responder]
org.apache.hadoop.ipc.Server: Stopping IPC Server Responder
2016-01-19 20:04:31,136 INFO [TaskHeartbeatHandler PingChecker]
org.apache.hadoop.mapreduce.v2.app.TaskHeartbeatHandler:
TaskHeartbeatHandler thread interrupted


Jps results (I believe everything is OK, right?):
21267 DataNode
21609 ResourceManager
21974 JobHistoryServer
21735 NodeManager
24546 Jps
16532 org.eclipse.equinox.launcher_1.3.0.v20140415-2008.jar
21121 NameNode
22098 QuorumPeerMain
21456 SecondaryNameNode
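
For what it's worth, here is a minimal shell check (my own sketch, not something taken from this thread) that verifies the daemons a stock single-node Hadoop 2.x HDFS/YARN setup is expected to run; the process names below are the standard ones reported by jps:

    # List the expected daemons and flag any that jps does not report.
    for daemon in NameNode DataNode SecondaryNameNode ResourceManager NodeManager JobHistoryServer; do
        if jps | grep -q "$daemon"; then echo "OK      $daemon"; else echo "MISSING $daemon"; fi
    done

Going by the jps output above, all of these are running, so the failure most likely happens during container launch (the ExitCodeException from DefaultContainerExecutor.launchContainer) rather than because a daemon is missing.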

Re: Can't run hadoop examples with YARN Single node cluster

Posted by José Luis Larroque <la...@gmail.com>.
Hi guys, thanks for your answers.

Wordcount logs:

16/01/19 21:58:32 INFO client.RMProxy: Connecting to ResourceManager at
hdnode01/192.168.0.10:8050
SLF4J: Class path contains multiple SLF4J bindings.
SLF4J: Found binding in
[jar:file:/usr/local/hadoop/share/hadoop/common/lib/slf4j-log4j12-1.7.5.jar!/org/slf4j/impl/StaticLoggerBinder.class]
SLF4J: Found binding in
[jar:file:/usr/local/hadoop/share/hadoop/yarn/lib/giraph-examples-1.1.0-for-hadoop-2.4.0-jar-with-dependencies.jar!/org/slf4j/impl/StaticLoggerBinder.class]
SLF4J: Found binding in
[jar:file:/usr/local/hadoop/share/hadoop/mapreduce/giraph-examples-1.1.0-for-hadoop-2.4.0-jar-with-dependencies.jar!/org/slf4j/impl/StaticLoggerBinder.class]
SLF4J: See http://www.slf4j.org/codes.html#multiple_bindings for an
explanation.
SLF4J: Actual binding is of type [org.slf4j.impl.Log4jLoggerFactory]
16/01/19 21:58:32 WARN util.NativeCodeLoader: Unable to load native-hadoop
library for your platform... using builtin-java classes where applicable
hduser@jose-ubuntu:/usr/local/hadoop$ nano
/home/hduser/Desktop/Tesina/casos_de_prueba/resultados/resultado_cluster_modo_yarn_wordcount
hduser@jose-ubuntu:/usr/local/hadoop$ cat
/home/hduser/Desktop/Tesina/casos_de_prueba/resultados/resultado_cluster_modo_yarn_wordcount


Container: container_1453244277886_0001_01_000002 on localhost_35711
======================================================================
LogType: stderr
LogLength: 45
Log Contents:
Error: Could not find or load main class 256

LogType: stdout
LogLength: 0
Log Contents:



Container: container_1453244277886_0001_01_000003 on localhost_35711
======================================================================
LogType: stderr
LogLength: 45
Log Contents:
Error: Could not find or load main class 256

LogType: stdout
LogLength: 0
Log Contents:



Container: container_1453244277886_0001_01_000004 on localhost_35711
======================================================================
LogType: stderr
LogLength: 45
Log Contents:
Error: Could not find or load main class 256

LogType: stdout
LogLength: 0
Log Contents:



Container: container_1453244277886_0001_01_000005 on localhost_35711
======================================================================
LogType: stderr
LogLength: 45
Log Contents:
Error: Could not find or load main class 256

LogType: stdout
LogLength: 0
Log Contents:



Container: container_1453244277886_0001_01_000001 on localhost_35711
======================================================================
LogType: stderr
LogLength: 929
Log Contents:
SLF4J: Class path contains multiple SLF4J bindings.
SLF4J: Found binding in
[jar:file:/usr/local/hadoop/share/hadoop/common/lib/slf4j-log4j12-1.7.5.jar!/org/slf4j/impl/StaticLoggerBinder.class]
SLF4J: Found binding in
[jar:file:/usr/local/hadoop/share/hadoop/yarn/lib/giraph-examples-1.1.0-for-hadoop-2.4.0-jar-with-dependencies.jar!/org/slf4j/impl/StaticLoggerBinder.class]
SLF4J: Found binding in
[jar:file:/usr/local/hadoop/share/hadoop/mapreduce/giraph-examples-1.1.0-for-hadoop-2.4.0-jar-with-dependencies.jar!/org/slf4j/impl/StaticLoggerBinder.class]
SLF4J: See http://www.slf4j.org/codes.html#multiple_bindings for an
explanation.
SLF4J: Actual binding is of type [org.slf4j.impl.Log4jLoggerFactory]
log4j:WARN No appenders could be found for logger
(org.apache.hadoop.ipc.Server).
log4j:WARN Please initialize the log4j system properly.
log4j:WARN See http://logging.apache.org/log4j/1.2/faq.html#noconfig for
more info.

LogType: stdout
LogLength: 0
Log Contents:

LogType: syslog
LogLength: 56780
Log Contents:
2016-01-19 20:04:11,329 INFO [main]
org.apache.hadoop.mapreduce.v2.app.MRAppMaster: Created MRAppMaster for
application appattempt_1453244277886_0001_000001
2016-01-19 20:04:11,657 WARN [main] org.apache.hadoop.conf.Configuration:
job.xml:an attempt to override final parameter:
mapreduce.job.end-notification.max.retry.interval;  Ignoring.
2016-01-19 20:04:11,674 WARN [main] org.apache.hadoop.conf.Configuration:
job.xml:an attempt to override final parameter:
mapreduce.job.end-notification.max.attempts;  Ignoring.
2016-01-19 20:04:11,765 WARN [main]
org.apache.hadoop.util.NativeCodeLoader: Unable to load native-hadoop
library for your platform... using builtin-java classes where applicable
2016-01-19 20:04:11,776 INFO [main]
org.apache.hadoop.mapreduce.v2.app.MRAppMaster: Executing with tokens:
2016-01-19 20:04:11,776 INFO [main]
org.apache.hadoop.mapreduce.v2.app.MRAppMaster: Kind: YARN_AM_RM_TOKEN,
Service: , Ident:
(org.apache.hadoop.yarn.security.AMRMTokenIdentifier@73e8f4b9)
2016-01-19 20:04:11,801 INFO [main]
org.apache.hadoop.mapreduce.v2.app.MRAppMaster: The specific max attempts:
2 for application: 1. Attempt num: 1 is last retry: false
2016-01-19 20:04:11,806 INFO [main]
org.apache.hadoop.mapreduce.v2.app.MRAppMaster: Using mapred
newApiCommitter.
2016-01-19 20:04:11,934 WARN [main] org.apache.hadoop.conf.Configuration:
job.xml:an attempt to override final parameter: dfs.datanode.data.dir;
Ignoring.
2016-01-19 20:04:11,939 WARN [main] org.apache.hadoop.conf.Configuration:
job.xml:an attempt to override final parameter:
mapreduce.job.end-notification.max.retry.interval;  Ignoring.
2016-01-19 20:04:11,948 WARN [main] org.apache.hadoop.conf.Configuration:
job.xml:an attempt to override final parameter: dfs.namenode.name.dir;
Ignoring.
2016-01-19 20:04:11,953 WARN [main] org.apache.hadoop.conf.Configuration:
job.xml:an attempt to override final parameter:
mapreduce.job.end-notification.max.attempts;  Ignoring.
2016-01-19 20:04:12,464 INFO [main]
org.apache.hadoop.mapreduce.v2.app.MRAppMaster: OutputCommitter set in
config null
2016-01-19 20:04:12,526 INFO [main]
org.apache.hadoop.mapreduce.v2.app.MRAppMaster: OutputCommitter is
org.apache.hadoop.mapreduce.lib.output.FileOutputCommitter
2016-01-19 20:04:12,548 INFO [main]
org.apache.hadoop.yarn.event.AsyncDispatcher: Registering class
org.apache.hadoop.mapreduce.jobhistory.EventType for class
org.apache.hadoop.mapreduce.jobhistory.JobHistoryEventHandler
2016-01-19 20:04:12,549 INFO [main]
org.apache.hadoop.yarn.event.AsyncDispatcher: Registering class
org.apache.hadoop.mapreduce.v2.app.job.event.JobEventType for class
org.apache.hadoop.mapreduce.v2.app.MRAppMaster$JobEventDispatcher
2016-01-19 20:04:12,550 INFO [main]
org.apache.hadoop.yarn.event.AsyncDispatcher: Registering class
org.apache.hadoop.mapreduce.v2.app.job.event.TaskEventType for class
org.apache.hadoop.mapreduce.v2.app.MRAppMaster$TaskEventDispatcher
2016-01-19 20:04:12,551 INFO [main]
org.apache.hadoop.yarn.event.AsyncDispatcher: Registering class
org.apache.hadoop.mapreduce.v2.app.job.event.TaskAttemptEventType for class
org.apache.hadoop.mapreduce.v2.app.MRAppMaster$TaskAttemptEventDispatcher
2016-01-19 20:04:12,552 INFO [main]
org.apache.hadoop.yarn.event.AsyncDispatcher: Registering class
org.apache.hadoop.mapreduce.v2.app.commit.CommitterEventType for class
org.apache.hadoop.mapreduce.v2.app.commit.CommitterEventHandler
2016-01-19 20:04:12,557 INFO [main]
org.apache.hadoop.yarn.event.AsyncDispatcher: Registering class
org.apache.hadoop.mapreduce.v2.app.speculate.Speculator$EventType for class
org.apache.hadoop.mapreduce.v2.app.MRAppMaster$SpeculatorEventDispatcher
2016-01-19 20:04:12,558 INFO [main]
org.apache.hadoop.yarn.event.AsyncDispatcher: Registering class
org.apache.hadoop.mapreduce.v2.app.rm.ContainerAllocator$EventType for
class
org.apache.hadoop.mapreduce.v2.app.MRAppMaster$ContainerAllocatorRouter
2016-01-19 20:04:12,559 INFO [main]
org.apache.hadoop.yarn.event.AsyncDispatcher: Registering class
org.apache.hadoop.mapreduce.v2.app.launcher.ContainerLauncher$EventType for
class org.apache.hadoop.mapreduce.v2.app.MRAppMaster$ContainerLauncherRouter
2016-01-19 20:04:12,615 INFO [main]
org.apache.hadoop.mapreduce.jobhistory.JobHistoryEventHandler: Perms after
creating 488, Expected: 504
2016-01-19 20:04:12,615 INFO [main]
org.apache.hadoop.mapreduce.jobhistory.JobHistoryEventHandler: Explicitly
setting permissions to : 504, rwxrwx---
2016-01-19 20:04:12,731 INFO [main]
org.apache.hadoop.yarn.event.AsyncDispatcher: Registering class
org.apache.hadoop.mapreduce.v2.app.job.event.JobFinishEvent$Type for class
org.apache.hadoop.mapreduce.v2.app.MRAppMaster$JobFinishEventHandler
2016-01-19 20:04:12,956 INFO [main]
org.apache.hadoop.metrics2.impl.MetricsConfig: loaded properties from
hadoop-metrics2.properties
2016-01-19 20:04:13,018 INFO [main]
org.apache.hadoop.metrics2.impl.MetricsSystemImpl: Scheduled snapshot
period at 10 second(s).
2016-01-19 20:04:13,018 INFO [main]
org.apache.hadoop.metrics2.impl.MetricsSystemImpl: MRAppMaster metrics
system started
2016-01-19 20:04:13,026 INFO [main]
org.apache.hadoop.mapreduce.v2.app.job.impl.JobImpl: Adding job token for
job_1453244277886_0001 to jobTokenSecretManager
2016-01-19 20:04:13,139 INFO [main]
org.apache.hadoop.mapreduce.v2.app.job.impl.JobImpl: Not uberizing
job_1453244277886_0001 because: not enabled;
2016-01-19 20:04:13,154 INFO [main]
org.apache.hadoop.mapreduce.v2.app.job.impl.JobImpl: Input size for job
job_1453244277886_0001 = 343691. Number of splits = 1
2016-01-19 20:04:13,156 INFO [main]
org.apache.hadoop.mapreduce.v2.app.job.impl.JobImpl: Number of reduces for
job job_1453244277886_0001 = 1
2016-01-19 20:04:13,156 INFO [main]
org.apache.hadoop.mapreduce.v2.app.job.impl.JobImpl:
job_1453244277886_0001Job Transitioned from NEW to INITED
2016-01-19 20:04:13,157 INFO [main]
org.apache.hadoop.mapreduce.v2.app.MRAppMaster: MRAppMaster launching
normal, non-uberized, multi-container job job_1453244277886_0001.
2016-01-19 20:04:13,186 INFO [main] org.apache.hadoop.ipc.CallQueueManager:
Using callQueue class java.util.concurrent.LinkedBlockingQueue
2016-01-19 20:04:13,195 INFO [Socket Reader #1 for port 56461]
org.apache.hadoop.ipc.Server: Starting Socket Reader #1 for port 56461
2016-01-19 20:04:13,237 INFO [main]
org.apache.hadoop.yarn.factories.impl.pb.RpcServerFactoryPBImpl: Adding
protocol org.apache.hadoop.mapreduce.v2.api.MRClientProtocolPB to the server
2016-01-19 20:04:13,238 INFO [IPC Server Responder]
org.apache.hadoop.ipc.Server: IPC Server Responder: starting
2016-01-19 20:04:13,238 INFO [IPC Server listener on 56461]
org.apache.hadoop.ipc.Server: IPC Server listener on 56461: starting
2016-01-19 20:04:13,239 INFO [main]
org.apache.hadoop.mapreduce.v2.app.client.MRClientService: Instantiated
MRClientService at jose-ubuntu/127.0.0.1:56461
2016-01-19 20:04:13,300 INFO [main] org.mortbay.log: Logging to
org.slf4j.impl.Log4jLoggerAdapter(org.mortbay.log) via
org.mortbay.log.Slf4jLog
2016-01-19 20:04:13,304 INFO [main] org.apache.hadoop.http.HttpRequestLog:
Http request log for http.requests.mapreduce is not defined
2016-01-19 20:04:13,315 INFO [main] org.apache.hadoop.http.HttpServer2:
Added global filter 'safety'
(class=org.apache.hadoop.http.HttpServer2$QuotingInputFilter)
2016-01-19 20:04:13,320 INFO [main] org.apache.hadoop.http.HttpServer2:
Added filter AM_PROXY_FILTER
(class=org.apache.hadoop.yarn.server.webproxy.amfilter.AmIpFilter) to
context mapreduce
2016-01-19 20:04:13,321 INFO [main] org.apache.hadoop.http.HttpServer2:
Added filter AM_PROXY_FILTER
(class=org.apache.hadoop.yarn.server.webproxy.amfilter.AmIpFilter) to
context static
2016-01-19 20:04:13,324 INFO [main] org.apache.hadoop.http.HttpServer2:
adding path spec: /mapreduce/*
2016-01-19 20:04:13,324 INFO [main] org.apache.hadoop.http.HttpServer2:
adding path spec: /ws/*
2016-01-19 20:04:13,335 INFO [main] org.apache.hadoop.http.HttpServer2:
Jetty bound to port 44070
2016-01-19 20:04:13,335 INFO [main] org.mortbay.log: jetty-6.1.26
2016-01-19 20:04:13,370 INFO [main] org.mortbay.log: Extract
jar:file:/usr/local/hadoop/share/hadoop/yarn/hadoop-yarn-common-2.4.0.jar!/webapps/mapreduce
to /tmp/Jetty_0_0_0_0_44070_mapreduce____rdpvio/webapp
2016-01-19 20:04:13,647 INFO [main] org.mortbay.log: Started
SelectChannelConnector@0.0.0.0:44070
2016-01-19 20:04:13,647 INFO [main] org.apache.hadoop.yarn.webapp.WebApps:
Web app /mapreduce started at 44070
2016-01-19 20:04:13,956 INFO [main] org.apache.hadoop.yarn.webapp.WebApps:
Registered webapp guice modules
2016-01-19 20:04:13,960 INFO [AsyncDispatcher event handler]
org.apache.hadoop.mapreduce.v2.app.speculate.DefaultSpeculator: JOB_CREATE
job_1453244277886_0001
2016-01-19 20:04:13,961 INFO [main] org.apache.hadoop.ipc.CallQueueManager:
Using callQueue class java.util.concurrent.LinkedBlockingQueue
2016-01-19 20:04:13,961 INFO [Socket Reader #1 for port 45584]
org.apache.hadoop.ipc.Server: Starting Socket Reader #1 for port 45584
2016-01-19 20:04:13,966 INFO [IPC Server Responder]
org.apache.hadoop.ipc.Server: IPC Server Responder: starting
2016-01-19 20:04:13,966 INFO [IPC Server listener on 45584]
org.apache.hadoop.ipc.Server: IPC Server listener on 45584: starting
2016-01-19 20:04:13,987 INFO [main]
org.apache.hadoop.mapreduce.v2.app.rm.RMContainerRequestor:
nodeBlacklistingEnabled:true
2016-01-19 20:04:13,987 INFO [main]
org.apache.hadoop.mapreduce.v2.app.rm.RMContainerRequestor:
maxTaskFailuresPerNode is 3
2016-01-19 20:04:13,988 INFO [main]
org.apache.hadoop.mapreduce.v2.app.rm.RMContainerRequestor:
blacklistDisablePercent is 33
2016-01-19 20:04:14,052 WARN [main] org.apache.hadoop.conf.Configuration:
job.xml:an attempt to override final parameter: dfs.datanode.data.dir;
Ignoring.
2016-01-19 20:04:14,054 WARN [main] org.apache.hadoop.conf.Configuration:
job.xml:an attempt to override final parameter:
mapreduce.job.end-notification.max.retry.interval;  Ignoring.
2016-01-19 20:04:14,057 WARN [main] org.apache.hadoop.conf.Configuration:
job.xml:an attempt to override final parameter: dfs.namenode.name.dir;
Ignoring.
2016-01-19 20:04:14,059 WARN [main] org.apache.hadoop.conf.Configuration:
job.xml:an attempt to override final parameter:
mapreduce.job.end-notification.max.attempts;  Ignoring.
2016-01-19 20:04:14,062 INFO [main] org.apache.hadoop.yarn.client.RMProxy:
Connecting to ResourceManager at hdnode01/192.168.0.10:8030
2016-01-19 20:04:14,158 INFO [main]
org.apache.hadoop.mapreduce.v2.app.rm.RMContainerAllocator:
maxContainerCapability: 2000
2016-01-19 20:04:14,158 INFO [main]
org.apache.hadoop.mapreduce.v2.app.rm.RMContainerAllocator: queue: default
2016-01-19 20:04:14,162 INFO [main]
org.apache.hadoop.mapreduce.v2.app.launcher.ContainerLauncherImpl: Upper
limit on the thread pool size is 500
2016-01-19 20:04:14,164 INFO [main]
org.apache.hadoop.yarn.client.api.impl.ContainerManagementProtocolProxy:
yarn.client.max-nodemanagers-proxies : 500
2016-01-19 20:04:14,172 INFO [AsyncDispatcher event handler]
org.apache.hadoop.mapreduce.v2.app.job.impl.JobImpl:
job_1453244277886_0001Job Transitioned from INITED to SETUP
2016-01-19 20:04:14,174 INFO [CommitterEvent Processor #0]
org.apache.hadoop.mapreduce.v2.app.commit.CommitterEventHandler: Processing
the event EventType: JOB_SETUP
2016-01-19 20:04:14,210 INFO [AsyncDispatcher event handler]
org.apache.hadoop.mapreduce.v2.app.job.impl.JobImpl:
job_1453244277886_0001Job Transitioned from SETUP to RUNNING
2016-01-19 20:04:14,227 INFO [AsyncDispatcher event handler]
org.apache.hadoop.yarn.util.RackResolver: Resolved localhost to
/default-rack
2016-01-19 20:04:14,230 INFO [AsyncDispatcher event handler]
org.apache.hadoop.mapreduce.v2.app.job.impl.TaskImpl:
task_1453244277886_0001_m_000000 Task Transitioned from NEW to SCHEDULED
2016-01-19 20:04:14,230 INFO [AsyncDispatcher event handler]
org.apache.hadoop.mapreduce.v2.app.job.impl.TaskImpl:
task_1453244277886_0001_r_000000 Task Transitioned from NEW to SCHEDULED
2016-01-19 20:04:14,232 INFO [AsyncDispatcher event handler]
org.apache.hadoop.mapreduce.v2.app.job.impl.TaskAttemptImpl:
attempt_1453244277886_0001_m_000000_0 TaskAttempt Transitioned from NEW to
UNASSIGNED
2016-01-19 20:04:14,232 INFO [AsyncDispatcher event handler]
org.apache.hadoop.mapreduce.v2.app.job.impl.TaskAttemptImpl:
attempt_1453244277886_0001_r_000000_0 TaskAttempt Transitioned from NEW to
UNASSIGNED
2016-01-19 20:04:14,233 INFO [Thread-50]
org.apache.hadoop.mapreduce.v2.app.rm.RMContainerAllocator:
mapResourceReqt:512
2016-01-19 20:04:14,245 INFO [Thread-50]
org.apache.hadoop.mapreduce.v2.app.rm.RMContainerAllocator:
reduceResourceReqt:512
2016-01-19 20:04:14,324 INFO [eventHandlingThread]
org.apache.hadoop.mapreduce.jobhistory.JobHistoryEventHandler: Event Writer
setup for JobId: job_1453244277886_0001, File:
hdfs://hdnode01:54310/tmp/hadoop-yarn/staging/hduser/.staging/job_1453244277886_0001/job_1453244277886_0001_1.jhist
2016-01-19 20:04:15,162 INFO [RMCommunicator Allocator]
org.apache.hadoop.mapreduce.v2.app.rm.RMContainerAllocator: Before
Scheduling: PendingReds:1 ScheduledMaps:1 ScheduledReds:0 AssignedMaps:0
AssignedReds:0 CompletedMaps:0 CompletedReds:0 ContAlloc:0 ContRel:0
HostLocal:0 RackLocal:0
2016-01-19 20:04:15,217 INFO [RMCommunicator Allocator]
org.apache.hadoop.mapreduce.v2.app.rm.RMContainerRequestor: getResources()
for application_1453244277886_0001: ask=3 release= 0 newContainers=0
finishedContainers=0 resourcelimit=<memory:1280, vCores:0> knownNMs=1
2016-01-19 20:04:15,217 INFO [RMCommunicator Allocator]
org.apache.hadoop.mapreduce.v2.app.rm.RMContainerAllocator: Recalculating
schedule, headroom=1280
2016-01-19 20:04:15,217 INFO [RMCommunicator Allocator]
org.apache.hadoop.mapreduce.v2.app.rm.RMContainerAllocator: Reduce slow
start threshold not met. completedMapsForReduceSlowstart 1
2016-01-19 20:04:16,240 INFO [RMCommunicator Allocator]
org.apache.hadoop.mapreduce.v2.app.rm.RMContainerAllocator: Got allocated
containers 1
2016-01-19 20:04:16,241 INFO [RMCommunicator Allocator]
org.apache.hadoop.mapreduce.v2.app.rm.RMContainerAllocator: Assigned
container container_1453244277886_0001_01_000002 to
attempt_1453244277886_0001_m_000000_0
2016-01-19 20:04:16,243 INFO [RMCommunicator Allocator]
org.apache.hadoop.mapreduce.v2.app.rm.RMContainerAllocator: Recalculating
schedule, headroom=768
2016-01-19 20:04:16,243 INFO [RMCommunicator Allocator]
org.apache.hadoop.mapreduce.v2.app.rm.RMContainerAllocator: Reduce slow
start threshold not met. completedMapsForReduceSlowstart 1
2016-01-19 20:04:16,243 INFO [RMCommunicator Allocator]
org.apache.hadoop.mapreduce.v2.app.rm.RMContainerAllocator: After
Scheduling: PendingReds:1 ScheduledMaps:0 ScheduledReds:0 AssignedMaps:1
AssignedReds:0 CompletedMaps:0 CompletedReds:0 ContAlloc:1 ContRel:0
HostLocal:1 RackLocal:0
2016-01-19 20:04:16,291 INFO [AsyncDispatcher event handler]
org.apache.hadoop.yarn.util.RackResolver: Resolved localhost to
/default-rack
2016-01-19 20:04:16,316 INFO [AsyncDispatcher event handler]
org.apache.hadoop.mapreduce.v2.app.job.impl.TaskAttemptImpl: The job-jar
file on the remote FS is
hdfs://hdnode01:54310/tmp/hadoop-yarn/staging/hduser/.staging/job_1453244277886_0001/job.jar
2016-01-19 20:04:16,322 INFO [AsyncDispatcher event handler]
org.apache.hadoop.mapreduce.v2.app.job.impl.TaskAttemptImpl: The job-conf
file on the remote FS is
/tmp/hadoop-yarn/staging/hduser/.staging/job_1453244277886_0001/job.xml
2016-01-19 20:04:16,325 INFO [AsyncDispatcher event handler]
org.apache.hadoop.mapreduce.v2.app.job.impl.TaskAttemptImpl: Adding #0
tokens and #1 secret keys for NM use for launching container
2016-01-19 20:04:16,325 INFO [AsyncDispatcher event handler]
org.apache.hadoop.mapreduce.v2.app.job.impl.TaskAttemptImpl: Size of
containertokens_dob is 1
2016-01-19 20:04:16,325 INFO [AsyncDispatcher event handler]
org.apache.hadoop.mapreduce.v2.app.job.impl.TaskAttemptImpl: Putting
shuffle token in serviceData
2016-01-19 20:04:16,350 INFO [AsyncDispatcher event handler]
org.apache.hadoop.mapreduce.v2.app.job.impl.TaskAttemptImpl:
attempt_1453244277886_0001_m_000000_0 TaskAttempt Transitioned from
UNASSIGNED to ASSIGNED
2016-01-19 20:04:16,354 INFO [ContainerLauncher #0]
org.apache.hadoop.mapreduce.v2.app.launcher.ContainerLauncherImpl:
Processing the event EventType: CONTAINER_REMOTE_LAUNCH for container
container_1453244277886_0001_01_000002 taskAttempt
attempt_1453244277886_0001_m_000000_0
2016-01-19 20:04:16,356 INFO [ContainerLauncher #0]
org.apache.hadoop.mapreduce.v2.app.launcher.ContainerLauncherImpl:
Launching attempt_1453244277886_0001_m_000000_0
2016-01-19 20:04:16,357 INFO [ContainerLauncher #0]
org.apache.hadoop.yarn.client.api.impl.ContainerManagementProtocolProxy:
Opening proxy : localhost:35711
2016-01-19 20:04:16,411 INFO [ContainerLauncher #0]
org.apache.hadoop.mapreduce.v2.app.launcher.ContainerLauncherImpl: Shuffle
port returned by ContainerManager for attempt_1453244277886_0001_m_000000_0
: 13562
2016-01-19 20:04:16,413 INFO [AsyncDispatcher event handler]
org.apache.hadoop.mapreduce.v2.app.job.impl.TaskAttemptImpl: TaskAttempt:
[attempt_1453244277886_0001_m_000000_0] using containerId:
[container_1453244277886_0001_01_000002 on NM: [localhost:35711]
2016-01-19 20:04:16,418 INFO [AsyncDispatcher event handler]
org.apache.hadoop.mapreduce.v2.app.job.impl.TaskAttemptImpl:
attempt_1453244277886_0001_m_000000_0 TaskAttempt Transitioned from
ASSIGNED to RUNNING
2016-01-19 20:04:16,419 INFO [AsyncDispatcher event handler]
org.apache.hadoop.mapreduce.v2.app.speculate.DefaultSpeculator:
ATTEMPT_START task_1453244277886_0001_m_000000
2016-01-19 20:04:16,419 INFO [AsyncDispatcher event handler]
org.apache.hadoop.mapreduce.v2.app.job.impl.TaskImpl:
task_1453244277886_0001_m_000000 Task Transitioned from SCHEDULED to RUNNING
2016-01-19 20:04:17,251 INFO [RMCommunicator Allocator]
org.apache.hadoop.mapreduce.v2.app.rm.RMContainerRequestor: getResources()
for application_1453244277886_0001: ask=3 release= 0 newContainers=0
finishedContainers=0 resourcelimit=<memory:768, vCores:-1> knownNMs=1
2016-01-19 20:04:18,269 INFO [RMCommunicator Allocator]
org.apache.hadoop.mapreduce.v2.app.rm.RMContainerAllocator: Received
completed container container_1453244277886_0001_01_000002
2016-01-19 20:04:18,269 INFO [RMCommunicator Allocator]
org.apache.hadoop.mapreduce.v2.app.rm.RMContainerAllocator: Recalculating
schedule, headroom=768
2016-01-19 20:04:18,269 INFO [RMCommunicator Allocator]
org.apache.hadoop.mapreduce.v2.app.rm.RMContainerAllocator: Reduce slow
start threshold not met. completedMapsForReduceSlowstart 1
2016-01-19 20:04:18,270 INFO [RMCommunicator Allocator]
org.apache.hadoop.mapreduce.v2.app.rm.RMContainerAllocator: After
Scheduling: PendingReds:1 ScheduledMaps:0 ScheduledReds:0 AssignedMaps:0
AssignedReds:0 CompletedMaps:0 CompletedReds:0 ContAlloc:1 ContRel:0
HostLocal:1 RackLocal:0
2016-01-19 20:04:18,280 INFO [AsyncDispatcher event handler]
org.apache.hadoop.mapreduce.v2.app.job.impl.TaskAttemptImpl:
attempt_1453244277886_0001_m_000000_0 TaskAttempt Transitioned from RUNNING
to FAIL_CONTAINER_CLEANUP
2016-01-19 20:04:18,280 INFO [AsyncDispatcher event handler]
org.apache.hadoop.mapreduce.v2.app.job.impl.TaskAttemptImpl: Diagnostics
report from attempt_1453244277886_0001_m_000000_0: Exception from
container-launch: org.apache.hadoop.util.Shell$ExitCodeException:
org.apache.hadoop.util.Shell$ExitCodeException:
    at org.apache.hadoop.util.Shell.runCommand(Shell.java:505)
    at org.apache.hadoop.util.Shell.run(Shell.java:418)
    at
org.apache.hadoop.util.Shell$ShellCommandExecutor.execute(Shell.java:650)
    at
org.apache.hadoop.yarn.server.nodemanager.DefaultContainerExecutor.launchContainer(DefaultContainerExecutor.java:195)
    at
org.apache.hadoop.yarn.server.nodemanager.containermanager.launcher.ContainerLaunch.call(ContainerLaunch.java:300)
    at
org.apache.hadoop.yarn.server.nodemanager.containermanager.launcher.ContainerLaunch.call(ContainerLaunch.java:81)
    at java.util.concurrent.FutureTask.run(FutureTask.java:262)
    at
java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
    at
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
    at java.lang.Thread.run(Thread.java:745)


Container exited with a non-zero exit code 1

2016-01-19 20:04:18,281 INFO [ContainerLauncher #1]
org.apache.hadoop.mapreduce.v2.app.launcher.ContainerLauncherImpl:
Processing the event EventType: CONTAINER_REMOTE_CLEANUP for container
container_1453244277886_0001_01_000002 taskAttempt
attempt_1453244277886_0001_m_000000_0
2016-01-19 20:04:18,281 INFO [ContainerLauncher #1]
org.apache.hadoop.mapreduce.v2.app.launcher.ContainerLauncherImpl: KILLING
attempt_1453244277886_0001_m_000000_0
2016-01-19 20:04:18,299 INFO [AsyncDispatcher event handler]
org.apache.hadoop.mapreduce.v2.app.job.impl.TaskAttemptImpl:
attempt_1453244277886_0001_m_000000_0 TaskAttempt Transitioned from
FAIL_CONTAINER_CLEANUP to FAIL_TASK_CLEANUP
2016-01-19 20:04:18,300 INFO [CommitterEvent Processor #1]
org.apache.hadoop.mapreduce.v2.app.commit.CommitterEventHandler: Processing
the event EventType: TASK_ABORT
2016-01-19 20:04:18,313 WARN [CommitterEvent Processor #1]
org.apache.hadoop.mapreduce.lib.output.FileOutputCommitter: Could not
delete
hdfs://hdnode01:54310/user/hduser/output/wordcount/_temporary/1/_temporary/attempt_1453244277886_0001_m_000000_0
2016-01-19 20:04:18,317 INFO [AsyncDispatcher event handler]
org.apache.hadoop.mapreduce.v2.app.job.impl.TaskAttemptImpl:
attempt_1453244277886_0001_m_000000_0 TaskAttempt Transitioned from
FAIL_TASK_CLEANUP to FAILED
2016-01-19 20:04:18,326 INFO [AsyncDispatcher event handler]
org.apache.hadoop.yarn.util.RackResolver: Resolved localhost to
/default-rack
2016-01-19 20:04:18,327 INFO [Thread-50]
org.apache.hadoop.mapreduce.v2.app.rm.RMContainerRequestor: 1 failures on
node localhost
2016-01-19 20:04:18,329 INFO [AsyncDispatcher event handler]
org.apache.hadoop.mapreduce.v2.app.job.impl.TaskAttemptImpl:
attempt_1453244277886_0001_m_000000_1 TaskAttempt Transitioned from NEW to
UNASSIGNED
2016-01-19 20:04:18,329 INFO [Thread-50]
org.apache.hadoop.mapreduce.v2.app.rm.RMContainerAllocator: Added
attempt_1453244277886_0001_m_000000_1 to list of failed maps
2016-01-19 20:04:19,270 INFO [RMCommunicator Allocator]
org.apache.hadoop.mapreduce.v2.app.rm.RMContainerAllocator: Before
Scheduling: PendingReds:1 ScheduledMaps:1 ScheduledReds:0 AssignedMaps:0
AssignedReds:0 CompletedMaps:0 CompletedReds:0 ContAlloc:1 ContRel:0
HostLocal:1 RackLocal:0
2016-01-19 20:04:19,277 INFO [RMCommunicator Allocator]
org.apache.hadoop.mapreduce.v2.app.rm.RMContainerRequestor: getResources()
for application_1453244277886_0001: ask=1 release= 0 newContainers=0
finishedContainers=0 resourcelimit=<memory:768, vCores:-1> knownNMs=1
2016-01-19 20:04:19,277 INFO [RMCommunicator Allocator]
org.apache.hadoop.mapreduce.v2.app.rm.RMContainerAllocator: Recalculating
schedule, headroom=768
2016-01-19 20:04:19,278 INFO [RMCommunicator Allocator]
org.apache.hadoop.mapreduce.v2.app.rm.RMContainerAllocator: Reduce slow
start threshold not met. completedMapsForReduceSlowstart 1
2016-01-19 20:04:20,285 INFO [RMCommunicator Allocator]
org.apache.hadoop.mapreduce.v2.app.rm.RMContainerAllocator: Got allocated
containers 1
2016-01-19 20:04:20,285 INFO [RMCommunicator Allocator]
org.apache.hadoop.mapreduce.v2.app.rm.RMContainerAllocator: Assigning
container Container: [ContainerId: container_1453244277886_0001_01_000003,
NodeId: localhost:35711, NodeHttpAddress: localhost:8042, Resource:
<memory:512, vCores:1>, Priority: 5, Token: Token { kind: ContainerToken,
service: 127.0.0.1:35711 }, ] to fast fail map
2016-01-19 20:04:20,286 INFO [RMCommunicator Allocator]
org.apache.hadoop.mapreduce.v2.app.rm.RMContainerAllocator: Assigned from
earlierFailedMaps
2016-01-19 20:04:20,286 INFO [RMCommunicator Allocator]
org.apache.hadoop.mapreduce.v2.app.rm.RMContainerAllocator: Assigned
container container_1453244277886_0001_01_000003 to
attempt_1453244277886_0001_m_000000_1
2016-01-19 20:04:20,287 INFO [RMCommunicator Allocator]
org.apache.hadoop.mapreduce.v2.app.rm.RMContainerAllocator: Recalculating
schedule, headroom=768
2016-01-19 20:04:20,287 INFO [RMCommunicator Allocator]
org.apache.hadoop.mapreduce.v2.app.rm.RMContainerAllocator: Reduce slow
start threshold not met. completedMapsForReduceSlowstart 1
2016-01-19 20:04:20,287 INFO [RMCommunicator Allocator]
org.apache.hadoop.mapreduce.v2.app.rm.RMContainerAllocator: After
Scheduling: PendingReds:1 ScheduledMaps:0 ScheduledReds:0 AssignedMaps:1
AssignedReds:0 CompletedMaps:0 CompletedReds:0 ContAlloc:2 ContRel:0
HostLocal:1 RackLocal:0
2016-01-19 20:04:20,287 INFO [AsyncDispatcher event handler]
org.apache.hadoop.yarn.util.RackResolver: Resolved localhost to
/default-rack
2016-01-19 20:04:20,289 INFO [AsyncDispatcher event handler]
org.apache.hadoop.mapreduce.v2.app.job.impl.TaskAttemptImpl:
attempt_1453244277886_0001_m_000000_1 TaskAttempt Transitioned from
UNASSIGNED to ASSIGNED
2016-01-19 20:04:20,292 INFO [ContainerLauncher #2]
org.apache.hadoop.mapreduce.v2.app.launcher.ContainerLauncherImpl:
Processing the event EventType: CONTAINER_REMOTE_LAUNCH for container
container_1453244277886_0001_01_000003 taskAttempt
attempt_1453244277886_0001_m_000000_1
2016-01-19 20:04:20,292 INFO [ContainerLauncher #2]
org.apache.hadoop.mapreduce.v2.app.launcher.ContainerLauncherImpl:
Launching attempt_1453244277886_0001_m_000000_1
2016-01-19 20:04:20,301 INFO [ContainerLauncher #2]
org.apache.hadoop.mapreduce.v2.app.launcher.ContainerLauncherImpl: Shuffle
port returned by ContainerManager for attempt_1453244277886_0001_m_000000_1
: 13562
2016-01-19 20:04:20,302 INFO [AsyncDispatcher event handler]
org.apache.hadoop.mapreduce.v2.app.job.impl.TaskAttemptImpl: TaskAttempt:
[attempt_1453244277886_0001_m_000000_1] using containerId:
[container_1453244277886_0001_01_000003 on NM: [localhost:35711]
2016-01-19 20:04:20,303 INFO [AsyncDispatcher event handler]
org.apache.hadoop.mapreduce.v2.app.job.impl.TaskAttemptImpl:
attempt_1453244277886_0001_m_000000_1 TaskAttempt Transitioned from
ASSIGNED to RUNNING
2016-01-19 20:04:20,304 INFO [AsyncDispatcher event handler]
org.apache.hadoop.mapreduce.v2.app.speculate.DefaultSpeculator:
ATTEMPT_START task_1453244277886_0001_m_000000
2016-01-19 20:04:21,295 INFO [RMCommunicator Allocator]
org.apache.hadoop.mapreduce.v2.app.rm.RMContainerRequestor: getResources()
for application_1453244277886_0001: ask=1 release= 0 newContainers=0
finishedContainers=1 resourcelimit=<memory:768, vCores:-1> knownNMs=1
2016-01-19 20:04:21,296 INFO [RMCommunicator Allocator]
org.apache.hadoop.mapreduce.v2.app.rm.RMContainerAllocator: Received
completed container container_1453244277886_0001_01_000003
2016-01-19 20:04:21,296 INFO [RMCommunicator Allocator]
org.apache.hadoop.mapreduce.v2.app.rm.RMContainerAllocator: Recalculating
schedule, headroom=768
2016-01-19 20:04:21,296 INFO [RMCommunicator Allocator]
org.apache.hadoop.mapreduce.v2.app.rm.RMContainerAllocator: Reduce slow
start threshold not met. completedMapsForReduceSlowstart 1
2016-01-19 20:04:21,297 INFO [RMCommunicator Allocator]
org.apache.hadoop.mapreduce.v2.app.rm.RMContainerAllocator: After
Scheduling: PendingReds:1 ScheduledMaps:0 ScheduledReds:0 AssignedMaps:0
AssignedReds:0 CompletedMaps:0 CompletedReds:0 ContAlloc:2 ContRel:0
HostLocal:1 RackLocal:0
2016-01-19 20:04:21,297 INFO [AsyncDispatcher event handler]
org.apache.hadoop.mapreduce.v2.app.job.impl.TaskAttemptImpl:
attempt_1453244277886_0001_m_000000_1 TaskAttempt Transitioned from RUNNING
to FAIL_CONTAINER_CLEANUP
2016-01-19 20:04:21,298 INFO [AsyncDispatcher event handler]
org.apache.hadoop.mapreduce.v2.app.job.impl.TaskAttemptImpl: Diagnostics
report from attempt_1453244277886_0001_m_000000_1: Exception from
container-launch: org.apache.hadoop.util.Shell$ExitCodeException:
org.apache.hadoop.util.Shell$ExitCodeException:
    at org.apache.hadoop.util.Shell.runCommand(Shell.java:505)
    at org.apache.hadoop.util.Shell.run(Shell.java:418)
    at
org.apache.hadoop.util.Shell$ShellCommandExecutor.execute(Shell.java:650)
    at
org.apache.hadoop.yarn.server.nodemanager.DefaultContainerExecutor.launchContainer(DefaultContainerExecutor.java:195)
    at
org.apache.hadoop.yarn.server.nodemanager.containermanager.launcher.ContainerLaunch.call(ContainerLaunch.java:300)
    at
org.apache.hadoop.yarn.server.nodemanager.containermanager.launcher.ContainerLaunch.call(ContainerLaunch.java:81)
    at java.util.concurrent.FutureTask.run(FutureTask.java:262)
    at
java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
    at
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
    at java.lang.Thread.run(Thread.java:745)


Container exited with a non-zero exit code 1

2016-01-19 20:04:21,300 INFO [ContainerLauncher #3]
org.apache.hadoop.mapreduce.v2.app.launcher.ContainerLauncherImpl:
Processing the event EventType: CONTAINER_REMOTE_CLEANUP for container
container_1453244277886_0001_01_000003 taskAttempt
attempt_1453244277886_0001_m_000000_1
2016-01-19 20:04:21,301 INFO [ContainerLauncher #3]
org.apache.hadoop.mapreduce.v2.app.launcher.ContainerLauncherImpl: KILLING
attempt_1453244277886_0001_m_000000_1
2016-01-19 20:04:21,307 INFO [AsyncDispatcher event handler]
org.apache.hadoop.mapreduce.v2.app.job.impl.TaskAttemptImpl:
attempt_1453244277886_0001_m_000000_1 TaskAttempt Transitioned from
FAIL_CONTAINER_CLEANUP to FAIL_TASK_CLEANUP
2016-01-19 20:04:21,308 INFO [CommitterEvent Processor #2]
org.apache.hadoop.mapreduce.v2.app.commit.CommitterEventHandler: Processing
the event EventType: TASK_ABORT
2016-01-19 20:04:21,312 WARN [CommitterEvent Processor #2]
org.apache.hadoop.mapreduce.lib.output.FileOutputCommitter: Could not
delete
hdfs://hdnode01:54310/user/hduser/output/wordcount/_temporary/1/_temporary/attempt_1453244277886_0001_m_000000_1
2016-01-19 20:04:21,312 INFO [AsyncDispatcher event handler]
org.apache.hadoop.mapreduce.v2.app.job.impl.TaskAttemptImpl:
attempt_1453244277886_0001_m_000000_1 TaskAttempt Transitioned from
FAIL_TASK_CLEANUP to FAILED
2016-01-19 20:04:21,313 INFO [AsyncDispatcher event handler]
org.apache.hadoop.yarn.util.RackResolver: Resolved localhost to
/default-rack
2016-01-19 20:04:21,313 INFO [Thread-50]
org.apache.hadoop.mapreduce.v2.app.rm.RMContainerRequestor: 2 failures on
node localhost
2016-01-19 20:04:21,313 INFO [AsyncDispatcher event handler]
org.apache.hadoop.mapreduce.v2.app.job.impl.TaskAttemptImpl:
attempt_1453244277886_0001_m_000000_2 TaskAttempt Transitioned from NEW to
UNASSIGNED
2016-01-19 20:04:21,314 INFO [Thread-50]
org.apache.hadoop.mapreduce.v2.app.rm.RMContainerAllocator: Added
attempt_1453244277886_0001_m_000000_2 to list of failed maps
2016-01-19 20:04:22,297 INFO [RMCommunicator Allocator]
org.apache.hadoop.mapreduce.v2.app.rm.RMContainerAllocator: Before
Scheduling: PendingReds:1 ScheduledMaps:1 ScheduledReds:0 AssignedMaps:0
AssignedReds:0 CompletedMaps:0 CompletedReds:0 ContAlloc:2 ContRel:0
HostLocal:1 RackLocal:0
2016-01-19 20:04:22,304 INFO [RMCommunicator Allocator]
org.apache.hadoop.mapreduce.v2.app.rm.RMContainerRequestor: getResources()
for application_1453244277886_0001: ask=1 release= 0 newContainers=0
finishedContainers=0 resourcelimit=<memory:768, vCores:-1> knownNMs=1
2016-01-19 20:04:22,305 INFO [RMCommunicator Allocator]
org.apache.hadoop.mapreduce.v2.app.rm.RMContainerAllocator: Recalculating
schedule, headroom=768
2016-01-19 20:04:22,305 INFO [RMCommunicator Allocator]
org.apache.hadoop.mapreduce.v2.app.rm.RMContainerAllocator: Reduce slow
start threshold not met. completedMapsForReduceSlowstart 1
2016-01-19 20:04:23,316 INFO [RMCommunicator Allocator]
org.apache.hadoop.mapreduce.v2.app.rm.RMContainerAllocator: Got allocated
containers 1
2016-01-19 20:04:23,316 INFO [RMCommunicator Allocator]
org.apache.hadoop.mapreduce.v2.app.rm.RMContainerAllocator: Assigning
container Container: [ContainerId: container_1453244277886_0001_01_000004,
NodeId: localhost:35711, NodeHttpAddress: localhost:8042, Resource:
<memory:512, vCores:1>, Priority: 5, Token: Token { kind: ContainerToken,
service: 127.0.0.1:35711 }, ] to fast fail map
2016-01-19 20:04:23,317 INFO [RMCommunicator Allocator]
org.apache.hadoop.mapreduce.v2.app.rm.RMContainerAllocator: Assigned from
earlierFailedMaps
2016-01-19 20:04:23,317 INFO [RMCommunicator Allocator]
org.apache.hadoop.mapreduce.v2.app.rm.RMContainerAllocator: Assigned
container container_1453244277886_0001_01_000004 to
attempt_1453244277886_0001_m_000000_2
2016-01-19 20:04:23,318 INFO [RMCommunicator Allocator]
org.apache.hadoop.mapreduce.v2.app.rm.RMContainerAllocator: Recalculating
schedule, headroom=768
2016-01-19 20:04:23,318 INFO [RMCommunicator Allocator]
org.apache.hadoop.mapreduce.v2.app.rm.RMContainerAllocator: Reduce slow
start threshold not met. completedMapsForReduceSlowstart 1
2016-01-19 20:04:23,318 INFO [RMCommunicator Allocator]
org.apache.hadoop.mapreduce.v2.app.rm.RMContainerAllocator: After
Scheduling: PendingReds:1 ScheduledMaps:0 ScheduledReds:0 AssignedMaps:1
AssignedReds:0 CompletedMaps:0 CompletedReds:0 ContAlloc:3 ContRel:0
HostLocal:1 RackLocal:0
2016-01-19 20:04:23,318 INFO [AsyncDispatcher event handler]
org.apache.hadoop.yarn.util.RackResolver: Resolved localhost to
/default-rack
2016-01-19 20:04:23,320 INFO [AsyncDispatcher event handler]
org.apache.hadoop.mapreduce.v2.app.job.impl.TaskAttemptImpl:
attempt_1453244277886_0001_m_000000_2 TaskAttempt Transitioned from
UNASSIGNED to ASSIGNED
2016-01-19 20:04:23,323 INFO [ContainerLauncher #4]
org.apache.hadoop.mapreduce.v2.app.launcher.ContainerLauncherImpl:
Processing the event EventType: CONTAINER_REMOTE_LAUNCH for container
container_1453244277886_0001_01_000004 taskAttempt
attempt_1453244277886_0001_m_000000_2
2016-01-19 20:04:23,323 INFO [ContainerLauncher #4]
org.apache.hadoop.mapreduce.v2.app.launcher.ContainerLauncherImpl:
Launching attempt_1453244277886_0001_m_000000_2
2016-01-19 20:04:23,334 INFO [ContainerLauncher #4]
org.apache.hadoop.mapreduce.v2.app.launcher.ContainerLauncherImpl: Shuffle
port returned by ContainerManager for attempt_1453244277886_0001_m_000000_2
: 13562
2016-01-19 20:04:23,335 INFO [AsyncDispatcher event handler]
org.apache.hadoop.mapreduce.v2.app.job.impl.TaskAttemptImpl: TaskAttempt:
[attempt_1453244277886_0001_m_000000_2] using containerId:
[container_1453244277886_0001_01_000004 on NM: [localhost:35711]
2016-01-19 20:04:23,335 INFO [AsyncDispatcher event handler]
org.apache.hadoop.mapreduce.v2.app.job.impl.TaskAttemptImpl:
attempt_1453244277886_0001_m_000000_2 TaskAttempt Transitioned from
ASSIGNED to RUNNING
2016-01-19 20:04:23,335 INFO [AsyncDispatcher event handler]
org.apache.hadoop.mapreduce.v2.app.speculate.DefaultSpeculator:
ATTEMPT_START task_1453244277886_0001_m_000000
2016-01-19 20:04:24,326 INFO [RMCommunicator Allocator]
org.apache.hadoop.mapreduce.v2.app.rm.RMContainerRequestor: getResources()
for application_1453244277886_0001: ask=1 release= 0 newContainers=0
finishedContainers=1 resourcelimit=<memory:768, vCores:-1> knownNMs=1
2016-01-19 20:04:24,327 INFO [RMCommunicator Allocator]
org.apache.hadoop.mapreduce.v2.app.rm.RMContainerAllocator: Received
completed container container_1453244277886_0001_01_000004
2016-01-19 20:04:24,327 INFO [RMCommunicator Allocator]
org.apache.hadoop.mapreduce.v2.app.rm.RMContainerAllocator: Recalculating
schedule, headroom=768
2016-01-19 20:04:24,327 INFO [RMCommunicator Allocator]
org.apache.hadoop.mapreduce.v2.app.rm.RMContainerAllocator: Reduce slow
start threshold not met. completedMapsForReduceSlowstart 1
2016-01-19 20:04:24,328 INFO [RMCommunicator Allocator]
org.apache.hadoop.mapreduce.v2.app.rm.RMContainerAllocator: After
Scheduling: PendingReds:1 ScheduledMaps:0 ScheduledReds:0 AssignedMaps:0
AssignedReds:0 CompletedMaps:0 CompletedReds:0 ContAlloc:3 ContRel:0
HostLocal:1 RackLocal:0
2016-01-19 20:04:24,328 INFO [AsyncDispatcher event handler]
org.apache.hadoop.mapreduce.v2.app.job.impl.TaskAttemptImpl:
attempt_1453244277886_0001_m_000000_2 TaskAttempt Transitioned from RUNNING
to FAIL_CONTAINER_CLEANUP
2016-01-19 20:04:24,328 INFO [AsyncDispatcher event handler]
org.apache.hadoop.mapreduce.v2.app.job.impl.TaskAttemptImpl: Diagnostics
report from attempt_1453244277886_0001_m_000000_2: Exception from
container-launch: org.apache.hadoop.util.Shell$ExitCodeException:
org.apache.hadoop.util.Shell$ExitCodeException:
    at org.apache.hadoop.util.Shell.runCommand(Shell.java:505)
    at org.apache.hadoop.util.Shell.run(Shell.java:418)
    at
org.apache.hadoop.util.Shell$ShellCommandExecutor.execute(Shell.java:650)
    at
org.apache.hadoop.yarn.server.nodemanager.DefaultContainerExecutor.launchContainer(DefaultContainerExecutor.java:195)
    at
org.apache.hadoop.yarn.server.nodemanager.containermanager.launcher.ContainerLaunch.call(ContainerLaunch.java:300)
    at
org.apache.hadoop.yarn.server.nodemanager.containermanager.launcher.ContainerLaunch.call(ContainerLaunch.java:81)
    at java.util.concurrent.FutureTask.run(FutureTask.java:262)
    at
java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
    at
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
    at java.lang.Thread.run(Thread.java:745)


Container exited with a non-zero exit code 1

2016-01-19 20:04:24,331 INFO [ContainerLauncher #5]
org.apache.hadoop.mapreduce.v2.app.launcher.ContainerLauncherImpl:
Processing the event EventType: CONTAINER_REMOTE_CLEANUP for container
container_1453244277886_0001_01_000004 taskAttempt
attempt_1453244277886_0001_m_000000_2
2016-01-19 20:04:24,331 INFO [ContainerLauncher #5]
org.apache.hadoop.mapreduce.v2.app.launcher.ContainerLauncherImpl: KILLING
attempt_1453244277886_0001_m_000000_2
2016-01-19 20:04:24,335 INFO [AsyncDispatcher event handler]
org.apache.hadoop.mapreduce.v2.app.job.impl.TaskAttemptImpl:
attempt_1453244277886_0001_m_000000_2 TaskAttempt Transitioned from
FAIL_CONTAINER_CLEANUP to FAIL_TASK_CLEANUP
2016-01-19 20:04:24,336 INFO [CommitterEvent Processor #3]
org.apache.hadoop.mapreduce.v2.app.commit.CommitterEventHandler: Processing
the event EventType: TASK_ABORT
2016-01-19 20:04:24,340 WARN [CommitterEvent Processor #3]
org.apache.hadoop.mapreduce.lib.output.FileOutputCommitter: Could not
delete
hdfs://hdnode01:54310/user/hduser/output/wordcount/_temporary/1/_temporary/attempt_1453244277886_0001_m_000000_2
2016-01-19 20:04:24,341 INFO [AsyncDispatcher event handler]
org.apache.hadoop.mapreduce.v2.app.job.impl.TaskAttemptImpl:
attempt_1453244277886_0001_m_000000_2 TaskAttempt Transitioned from
FAIL_TASK_CLEANUP to FAILED
2016-01-19 20:04:24,341 INFO [AsyncDispatcher event handler]
org.apache.hadoop.yarn.util.RackResolver: Resolved localhost to
/default-rack
2016-01-19 20:04:24,342 INFO [Thread-50]
org.apache.hadoop.mapreduce.v2.app.rm.RMContainerRequestor: 3 failures on
node localhost
2016-01-19 20:04:24,342 INFO [Thread-50]
org.apache.hadoop.mapreduce.v2.app.rm.RMContainerRequestor: Blacklisted
host localhost
2016-01-19 20:04:24,342 INFO [AsyncDispatcher event handler]
org.apache.hadoop.mapreduce.v2.app.job.impl.TaskAttemptImpl:
attempt_1453244277886_0001_m_000000_3 TaskAttempt Transitioned from NEW to
UNASSIGNED
2016-01-19 20:04:24,343 INFO [Thread-50]
org.apache.hadoop.mapreduce.v2.app.rm.RMContainerAllocator: Added
attempt_1453244277886_0001_m_000000_3 to list of failed maps
2016-01-19 20:04:25,328 INFO [RMCommunicator Allocator]
org.apache.hadoop.mapreduce.v2.app.rm.RMContainerAllocator: Before
Scheduling: PendingReds:1 ScheduledMaps:1 ScheduledReds:0 AssignedMaps:0
AssignedReds:0 CompletedMaps:0 CompletedReds:0 ContAlloc:3 ContRel:0
HostLocal:1 RackLocal:0
2016-01-19 20:04:25,336 INFO [RMCommunicator Allocator]
org.apache.hadoop.mapreduce.v2.app.rm.RMContainerRequestor: getResources()
for application_1453244277886_0001: ask=1 release= 0 newContainers=0
finishedContainers=0 resourcelimit=<memory:768, vCores:-1> knownNMs=1
2016-01-19 20:04:25,336 INFO [RMCommunicator Allocator]
org.apache.hadoop.mapreduce.v2.app.rm.RMContainerRequestor: Update the
blacklist for application_1453244277886_0001: blacklistAdditions=1
blacklistRemovals=0
2016-01-19 20:04:25,337 INFO [RMCommunicator Allocator]
org.apache.hadoop.mapreduce.v2.app.rm.RMContainerRequestor: Ignore
blacklisting set to true. Known: 1, Blacklisted: 1, 100%
2016-01-19 20:04:25,337 INFO [RMCommunicator Allocator]
org.apache.hadoop.mapreduce.v2.app.rm.RMContainerAllocator: Recalculating
schedule, headroom=768
2016-01-19 20:04:25,337 INFO [RMCommunicator Allocator]
org.apache.hadoop.mapreduce.v2.app.rm.RMContainerAllocator: Reduce slow
start threshold not met. completedMapsForReduceSlowstart 1
2016-01-19 20:04:26,342 INFO [RMCommunicator Allocator]
org.apache.hadoop.mapreduce.v2.app.rm.RMContainerRequestor: Update the
blacklist for application_1453244277886_0001: blacklistAdditions=0
blacklistRemovals=1
2016-01-19 20:04:26,342 INFO [RMCommunicator Allocator]
org.apache.hadoop.mapreduce.v2.app.rm.RMContainerAllocator: Recalculating
schedule, headroom=768
2016-01-19 20:04:26,342 INFO [RMCommunicator Allocator]
org.apache.hadoop.mapreduce.v2.app.rm.RMContainerAllocator: Reduce slow
start threshold not met. completedMapsForReduceSlowstart 1
2016-01-19 20:04:27,351 INFO [RMCommunicator Allocator]
org.apache.hadoop.mapreduce.v2.app.rm.RMContainerAllocator: Got allocated
containers 1
2016-01-19 20:04:27,352 INFO [RMCommunicator Allocator]
org.apache.hadoop.mapreduce.v2.app.rm.RMContainerAllocator: Assigning
container Container: [ContainerId: container_1453244277886_0001_01_000005,
NodeId: localhost:35711, NodeHttpAddress: localhost:8042, Resource:
<memory:512, vCores:1>, Priority: 5, Token: Token { kind: ContainerToken,
service: 127.0.0.1:35711 }, ] to fast fail map
2016-01-19 20:04:27,352 INFO [RMCommunicator Allocator]
org.apache.hadoop.mapreduce.v2.app.rm.RMContainerAllocator: Assigned from
earlierFailedMaps
2016-01-19 20:04:27,352 INFO [RMCommunicator Allocator]
org.apache.hadoop.mapreduce.v2.app.rm.RMContainerAllocator: Assigned
container container_1453244277886_0001_01_000005 to
attempt_1453244277886_0001_m_000000_3
2016-01-19 20:04:27,353 INFO [RMCommunicator Allocator]
org.apache.hadoop.mapreduce.v2.app.rm.RMContainerAllocator: Recalculating
schedule, headroom=768
2016-01-19 20:04:27,353 INFO [RMCommunicator Allocator]
org.apache.hadoop.mapreduce.v2.app.rm.RMContainerAllocator: Reduce slow
start threshold not met. completedMapsForReduceSlowstart 1
2016-01-19 20:04:27,353 INFO [AsyncDispatcher event handler]
org.apache.hadoop.yarn.util.RackResolver: Resolved localhost to
/default-rack
2016-01-19 20:04:27,353 INFO [RMCommunicator Allocator]
org.apache.hadoop.mapreduce.v2.app.rm.RMContainerAllocator: After
Scheduling: PendingReds:1 ScheduledMaps:0 ScheduledReds:0 AssignedMaps:1
AssignedReds:0 CompletedMaps:0 CompletedReds:0 ContAlloc:4 ContRel:0
HostLocal:1 RackLocal:0
2016-01-19 20:04:27,354 INFO [AsyncDispatcher event handler]
org.apache.hadoop.mapreduce.v2.app.job.impl.TaskAttemptImpl:
attempt_1453244277886_0001_m_000000_3 TaskAttempt Transitioned from
UNASSIGNED to ASSIGNED
2016-01-19 20:04:27,355 INFO [ContainerLauncher #6]
org.apache.hadoop.mapreduce.v2.app.launcher.ContainerLauncherImpl:
Processing the event EventType: CONTAINER_REMOTE_LAUNCH for container
container_1453244277886_0001_01_000005 taskAttempt
attempt_1453244277886_0001_m_000000_3
2016-01-19 20:04:27,355 INFO [ContainerLauncher #6]
org.apache.hadoop.mapreduce.v2.app.launcher.ContainerLauncherImpl:
Launching attempt_1453244277886_0001_m_000000_3
2016-01-19 20:04:27,365 INFO [ContainerLauncher #6]
org.apache.hadoop.mapreduce.v2.app.launcher.ContainerLauncherImpl: Shuffle
port returned by ContainerManager for attempt_1453244277886_0001_m_000000_3
: 13562
2016-01-19 20:04:27,365 INFO [AsyncDispatcher event handler]
org.apache.hadoop.mapreduce.v2.app.job.impl.TaskAttemptImpl: TaskAttempt:
[attempt_1453244277886_0001_m_000000_3] using containerId:
[container_1453244277886_0001_01_000005 on NM: [localhost:35711]
2016-01-19 20:04:27,366 INFO [AsyncDispatcher event handler]
org.apache.hadoop.mapreduce.v2.app.job.impl.TaskAttemptImpl:
attempt_1453244277886_0001_m_000000_3 TaskAttempt Transitioned from
ASSIGNED to RUNNING
2016-01-19 20:04:27,366 INFO [AsyncDispatcher event handler]
org.apache.hadoop.mapreduce.v2.app.speculate.DefaultSpeculator:
ATTEMPT_START task_1453244277886_0001_m_000000
2016-01-19 20:04:28,361 INFO [RMCommunicator Allocator]
org.apache.hadoop.mapreduce.v2.app.rm.RMContainerRequestor: getResources()
for application_1453244277886_0001: ask=1 release= 0 newContainers=0
finishedContainers=1 resourcelimit=<memory:768, vCores:-1> knownNMs=1
2016-01-19 20:04:28,361 INFO [RMCommunicator Allocator]
org.apache.hadoop.mapreduce.v2.app.rm.RMContainerAllocator: Received
completed container container_1453244277886_0001_01_000005
2016-01-19 20:04:28,361 INFO [RMCommunicator Allocator]
org.apache.hadoop.mapreduce.v2.app.rm.RMContainerAllocator: Recalculating
schedule, headroom=768
2016-01-19 20:04:28,361 INFO [RMCommunicator Allocator]
org.apache.hadoop.mapreduce.v2.app.rm.RMContainerAllocator: Reduce slow
start threshold not met. completedMapsForReduceSlowstart 1
2016-01-19 20:04:28,362 INFO [RMCommunicator Allocator]
org.apache.hadoop.mapreduce.v2.app.rm.RMContainerAllocator: After
Scheduling: PendingReds:1 ScheduledMaps:0 ScheduledReds:0 AssignedMaps:0
AssignedReds:0 CompletedMaps:0 CompletedReds:0 ContAlloc:4 ContRel:0
HostLocal:1 RackLocal:0
2016-01-19 20:04:28,362 INFO [AsyncDispatcher event handler]
org.apache.hadoop.mapreduce.v2.app.job.impl.TaskAttemptImpl:
attempt_1453244277886_0001_m_000000_3 TaskAttempt Transitioned from RUNNING
to FAIL_CONTAINER_CLEANUP
2016-01-19 20:04:28,362 INFO [AsyncDispatcher event handler]
org.apache.hadoop.mapreduce.v2.app.job.impl.TaskAttemptImpl: Diagnostics
report from attempt_1453244277886_0001_m_000000_3: Exception from
container-launch: org.apache.hadoop.util.Shell$ExitCodeException:
org.apache.hadoop.util.Shell$ExitCodeException:
    at org.apache.hadoop.util.Shell.runCommand(Shell.java:505)
    at org.apache.hadoop.util.Shell.run(Shell.java:418)
    at
org.apache.hadoop.util.Shell$ShellCommandExecutor.execute(Shell.java:650)
    at
org.apache.hadoop.yarn.server.nodemanager.DefaultContainerExecutor.launchContainer(DefaultContainerExecutor.java:195)
    at
org.apache.hadoop.yarn.server.nodemanager.containermanager.launcher.ContainerLaunch.call(ContainerLaunch.java:300)
    at
org.apache.hadoop.yarn.server.nodemanager.containermanager.launcher.ContainerLaunch.call(ContainerLaunch.java:81)
    at java.util.concurrent.FutureTask.run(FutureTask.java:262)
    at
java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
    at
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
    at java.lang.Thread.run(Thread.java:745)


Container exited with a non-zero exit code 1

2016-01-19 20:04:28,364 INFO [ContainerLauncher #7]
org.apache.hadoop.mapreduce.v2.app.launcher.ContainerLauncherImpl:
Processing the event EventType: CONTAINER_REMOTE_CLEANUP for container
container_1453244277886_0001_01_000005 taskAttempt
attempt_1453244277886_0001_m_000000_3
2016-01-19 20:04:28,365 INFO [ContainerLauncher #7]
org.apache.hadoop.mapreduce.v2.app.launcher.ContainerLauncherImpl: KILLING
attempt_1453244277886_0001_m_000000_3
2016-01-19 20:04:28,373 INFO [AsyncDispatcher event handler]
org.apache.hadoop.mapreduce.v2.app.job.impl.TaskAttemptImpl:
attempt_1453244277886_0001_m_000000_3 TaskAttempt Transitioned from
FAIL_CONTAINER_CLEANUP to FAIL_TASK_CLEANUP
2016-01-19 20:04:28,374 INFO [CommitterEvent Processor #4]
org.apache.hadoop.mapreduce.v2.app.commit.CommitterEventHandler: Processing
the event EventType: TASK_ABORT
2016-01-19 20:04:28,377 WARN [CommitterEvent Processor #4]
org.apache.hadoop.mapreduce.lib.output.FileOutputCommitter: Could not
delete
hdfs://hdnode01:54310/user/hduser/output/wordcount/_temporary/1/_temporary/attempt_1453244277886_0001_m_000000_3
2016-01-19 20:04:28,378 INFO [AsyncDispatcher event handler]
org.apache.hadoop.mapreduce.v2.app.job.impl.TaskAttemptImpl:
attempt_1453244277886_0001_m_000000_3 TaskAttempt Transitioned from
FAIL_TASK_CLEANUP to FAILED
2016-01-19 20:04:28,380 INFO [AsyncDispatcher event handler]
org.apache.hadoop.mapreduce.v2.app.job.impl.TaskImpl:
task_1453244277886_0001_m_000000 Task Transitioned from RUNNING to FAILED
2016-01-19 20:04:28,381 INFO [AsyncDispatcher event handler]
org.apache.hadoop.mapreduce.v2.app.job.impl.JobImpl: Num completed Tasks: 1
2016-01-19 20:04:28,381 INFO [AsyncDispatcher event handler]
org.apache.hadoop.mapreduce.v2.app.job.impl.JobImpl: Job failed as tasks
failed. failedMaps:1 failedReduces:0
2016-01-19 20:04:28,382 INFO [AsyncDispatcher event handler]
org.apache.hadoop.mapreduce.v2.app.job.impl.JobImpl:
job_1453244277886_0001Job Transitioned from RUNNING to FAIL_WAIT
2016-01-19 20:04:28,383 INFO [AsyncDispatcher event handler]
org.apache.hadoop.mapreduce.v2.app.job.impl.TaskImpl:
task_1453244277886_0001_r_000000 Task Transitioned from SCHEDULED to
KILL_WAIT
2016-01-19 20:04:28,383 INFO [AsyncDispatcher event handler]
org.apache.hadoop.mapreduce.v2.app.job.impl.TaskAttemptImpl:
attempt_1453244277886_0001_r_000000_0 TaskAttempt Transitioned from
UNASSIGNED to KILLED
2016-01-19 20:04:28,383 INFO [Thread-50]
org.apache.hadoop.mapreduce.v2.app.rm.RMContainerAllocator: Processing the
event EventType: CONTAINER_DEALLOCATE
2016-01-19 20:04:28,383 ERROR [Thread-50]
org.apache.hadoop.mapreduce.v2.app.rm.RMContainerAllocator: Could not
deallocate container for task attemptId
attempt_1453244277886_0001_r_000000_0
2016-01-19 20:04:28,383 INFO [AsyncDispatcher event handler]
org.apache.hadoop.mapreduce.v2.app.job.impl.TaskImpl:
task_1453244277886_0001_r_000000 Task Transitioned from KILL_WAIT to KILLED
2016-01-19 20:04:28,384 INFO [AsyncDispatcher event handler]
org.apache.hadoop.mapreduce.v2.app.job.impl.JobImpl:
job_1453244277886_0001Job Transitioned from FAIL_WAIT to FAIL_ABORT
2016-01-19 20:04:28,390 INFO [CommitterEvent Processor #0]
org.apache.hadoop.mapreduce.v2.app.commit.CommitterEventHandler: Processing
the event EventType: JOB_ABORT
2016-01-19 20:04:28,432 INFO [AsyncDispatcher event handler]
org.apache.hadoop.mapreduce.v2.app.job.impl.JobImpl:
job_1453244277886_0001Job Transitioned from FAIL_ABORT to FAILED
2016-01-19 20:04:28,433 INFO [Thread-61]
org.apache.hadoop.mapreduce.v2.app.MRAppMaster: We are finishing cleanly so
this is the last retry
2016-01-19 20:04:28,433 INFO [Thread-61]
org.apache.hadoop.mapreduce.v2.app.MRAppMaster: Notify RMCommunicator
isAMLastRetry: true
2016-01-19 20:04:28,433 INFO [Thread-61]
org.apache.hadoop.mapreduce.v2.app.rm.RMContainerAllocator: RMCommunicator
notified that shouldUnregistered is: true
2016-01-19 20:04:28,433 INFO [Thread-61]
org.apache.hadoop.mapreduce.v2.app.MRAppMaster: Notify JHEH isAMLastRetry:
true
2016-01-19 20:04:28,433 INFO [Thread-61]
org.apache.hadoop.mapreduce.jobhistory.JobHistoryEventHandler:
JobHistoryEventHandler notified that forceJobCompletion is true
2016-01-19 20:04:28,434 INFO [Thread-61]
org.apache.hadoop.mapreduce.v2.app.MRAppMaster: Calling stop for all the
services
2016-01-19 20:04:28,435 INFO [Thread-61]
org.apache.hadoop.mapreduce.jobhistory.JobHistoryEventHandler: Stopping
JobHistoryEventHandler. Size of the outstanding queue size is 0
2016-01-19 20:04:29,362 INFO [RMCommunicator Allocator]
org.apache.hadoop.mapreduce.v2.app.rm.RMContainerAllocator: Before
Scheduling: PendingReds:1 ScheduledMaps:0 ScheduledReds:0 AssignedMaps:0
AssignedReds:0 CompletedMaps:1 CompletedReds:0 ContAlloc:4 ContRel:0
HostLocal:1 RackLocal:0
2016-01-19 20:04:29,362 INFO [eventHandlingThread]
org.apache.hadoop.mapreduce.jobhistory.JobHistoryEventHandler: Copying
hdfs://hdnode01:54310/tmp/hadoop-yarn/staging/hduser/.staging/job_1453244277886_0001/job_1453244277886_0001_1.jhist
to
hdfs://hdnode01:54310/tmp/hadoop-yarn/staging/history/done_intermediate/hduser/job_1453244277886_0001-1453244648033-hduser-word+count-1453244668381-0-0-FAILED-default-1453244654166.jhist_tmp
2016-01-19 20:04:29,366 INFO [RMCommunicator Allocator]
org.apache.hadoop.mapreduce.v2.app.rm.RMContainerAllocator: Recalculating
schedule, headroom=768
2016-01-19 20:04:29,366 INFO [RMCommunicator Allocator]
org.apache.hadoop.mapreduce.v2.app.rm.RMContainerAllocator: Reduce slow
start threshold reached. Scheduling reduces.
2016-01-19 20:04:29,366 INFO [RMCommunicator Allocator]
org.apache.hadoop.mapreduce.v2.app.rm.RMContainerAllocator: All maps
assigned. Ramping up all remaining reduces:1
2016-01-19 20:04:29,366 INFO [RMCommunicator Allocator]
org.apache.hadoop.mapreduce.v2.app.rm.RMContainerAllocator: After
Scheduling: PendingReds:0 ScheduledMaps:0 ScheduledReds:1 AssignedMaps:0
AssignedReds:0 CompletedMaps:1 CompletedReds:0 ContAlloc:4 ContRel:0
HostLocal:1 RackLocal:0
2016-01-19 20:04:29,544 INFO [eventHandlingThread]
org.apache.hadoop.mapreduce.jobhistory.JobHistoryEventHandler: Copied to
done location:
hdfs://hdnode01:54310/tmp/hadoop-yarn/staging/history/done_intermediate/hduser/job_1453244277886_0001-1453244648033-hduser-word+count-1453244668381-0-0-FAILED-default-1453244654166.jhist_tmp
2016-01-19 20:04:29,598 INFO [eventHandlingThread]
org.apache.hadoop.mapreduce.jobhistory.JobHistoryEventHandler: Copying
hdfs://hdnode01:54310/tmp/hadoop-yarn/staging/hduser/.staging/job_1453244277886_0001/job_1453244277886_0001_1_conf.xml
to
hdfs://hdnode01:54310/tmp/hadoop-yarn/staging/history/done_intermediate/hduser/job_1453244277886_0001_conf.xml_tmp
2016-01-19 20:04:29,801 INFO [eventHandlingThread]
org.apache.hadoop.mapreduce.jobhistory.JobHistoryEventHandler: Copied to
done location:
hdfs://hdnode01:54310/tmp/hadoop-yarn/staging/history/done_intermediate/hduser/job_1453244277886_0001_conf.xml_tmp
2016-01-19 20:04:29,907 INFO [eventHandlingThread]
org.apache.hadoop.mapreduce.jobhistory.JobHistoryEventHandler: Moved tmp to
done:
hdfs://hdnode01:54310/tmp/hadoop-yarn/staging/history/done_intermediate/hduser/job_1453244277886_0001.summary_tmp
to
hdfs://hdnode01:54310/tmp/hadoop-yarn/staging/history/done_intermediate/hduser/job_1453244277886_0001.summary
2016-01-19 20:04:30,008 INFO [eventHandlingThread]
org.apache.hadoop.mapreduce.jobhistory.JobHistoryEventHandler: Moved tmp to
done:
hdfs://hdnode01:54310/tmp/hadoop-yarn/staging/history/done_intermediate/hduser/job_1453244277886_0001_conf.xml_tmp
to
hdfs://hdnode01:54310/tmp/hadoop-yarn/staging/history/done_intermediate/hduser/job_1453244277886_0001_conf.xml
2016-01-19 20:04:30,070 INFO [eventHandlingThread]
org.apache.hadoop.mapreduce.jobhistory.JobHistoryEventHandler: Moved tmp to
done:
hdfs://hdnode01:54310/tmp/hadoop-yarn/staging/history/done_intermediate/hduser/job_1453244277886_0001-1453244648033-hduser-word+count-1453244668381-0-0-FAILED-default-1453244654166.jhist_tmp
to
hdfs://hdnode01:54310/tmp/hadoop-yarn/staging/history/done_intermediate/hduser/job_1453244277886_0001-1453244648033-hduser-word+count-1453244668381-0-0-FAILED-default-1453244654166.jhist
2016-01-19 20:04:30,071 INFO [Thread-61]
org.apache.hadoop.mapreduce.jobhistory.JobHistoryEventHandler: Stopped
JobHistoryEventHandler. super.stop()
2016-01-19 20:04:30,078 INFO [Thread-61]
org.apache.hadoop.mapreduce.v2.app.rm.RMContainerAllocator: Setting job
diagnostics to Task failed task_1453244277886_0001_m_000000
Job failed as tasks failed. failedMaps:1 failedReduces:0

2016-01-19 20:04:30,080 INFO [Thread-61]
org.apache.hadoop.mapreduce.v2.app.rm.RMContainerAllocator: History url is
http://localhost:19888/jobhistory/job/job_1453244277886_0001
2016-01-19 20:04:30,094 INFO [Thread-61]
org.apache.hadoop.mapreduce.v2.app.rm.RMContainerAllocator: Waiting for
application to be successfully unregistered.
2016-01-19 20:04:31,099 INFO [Thread-61]
org.apache.hadoop.mapreduce.v2.app.rm.RMContainerAllocator: Final Stats:
PendingReds:0 ScheduledMaps:0 ScheduledReds:1 AssignedMaps:0 AssignedReds:0
CompletedMaps:1 CompletedReds:0 ContAlloc:4 ContRel:0 HostLocal:1
RackLocal:0
2016-01-19 20:04:31,104 INFO [Thread-61]
org.apache.hadoop.mapreduce.v2.app.MRAppMaster: Deleting staging directory
hdfs://hdnode01:54310
/tmp/hadoop-yarn/staging/hduser/.staging/job_1453244277886_0001
2016-01-19 20:04:31,133 INFO [Thread-61] org.apache.hadoop.ipc.Server:
Stopping server on 45584
2016-01-19 20:04:31,135 INFO [IPC Server listener on 45584]
org.apache.hadoop.ipc.Server: Stopping IPC Server listener on 45584
2016-01-19 20:04:31,135 INFO [IPC Server Responder]
org.apache.hadoop.ipc.Server: Stopping IPC Server Responder
2016-01-19 20:04:31,136 INFO [TaskHeartbeatHandler PingChecker]
org.apache.hadoop.mapreduce.v2.app.TaskHeartbeatHandler:
TaskHeartbeatHandler thread interrupted


Jps results (I believe everything is OK here, right?):
21267 DataNode
21609 ResourceManager
21974 JobHistoryServer
21735 NodeManager
24546 Jps
16532 org.eclipse.equinox.launcher_1.3.0.v20140415-2008.jar
21121 NameNode
22098 QuorumPeerMain
21456 SecondaryNameNode
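
One thing I still want to rule out (just a guess on my side, assuming the
configs live under $HADOOP_HOME/etc/hadoop): the "Could not find or load main
class 256" message can appear when a bare number such as 256 ends up in one of
the JVM-options properties instead of a flag like -Xmx256m, because the
container launch command then treats that number as the main class. This is
roughly how I would check those settings:

    # Guess, not verified against this cluster: look for a bare number in the
    # usual JVM-options properties (values should look like "-Xmx256m").
    grep -B1 -A2 -E "java\.opts|command-opts" \
        $HADOOP_HOME/etc/hadoop/mapred-site.xml \
        $HADOOP_HOME/etc/hadoop/yarn-site.xml
    # Properties worth eyeballing (illustrative expected style only):
    #   mapreduce.map.java.opts            -> -Xmx256m
    #   mapreduce.reduce.java.opts         -> -Xmx256m
    #   yarn.app.mapreduce.am.command-opts -> -Xmx256m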

Re: Can't run hadoop examples with YARN Single node cluster

Posted by Gaurav Gupta <ga...@gmail.com>.
Hi,

I think your YARN is not up and running, so you are not able to run the
jobs. Can you please verify it?
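
For example (assuming the default ports of a single-node setup), something
along these lines should show whether the ResourceManager and NodeManager are
actually up and registered:

    jps                      # ResourceManager and NodeManager should be listed
    yarn node -list          # the NodeManager must appear with state RUNNING
    yarn application -list   # applications currently submitted/accepted
    # ResourceManager REST API; "activeNodes" should be at least 1:
    curl -s http://localhost:8088/ws/v1/cluster/metrics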

Thanks


On Sat, Jan 16, 2016 at 3:18 PM, Namikaze Minato <ll...@gmail.com>
wrote:

> Hi again José Luis.
>
> Sorry, I was specifically talking about the
> "org.apache.hadoop.util.Shell$ExitCodeException" error.
> Can you provide the logs for a wordcount please?
> Also, do you have yarn running?
> I have never tweaked the mapreduce.framework.name value, so I might
> not be able to help you further, but these pieces of information might
> help the people who can.
>
> Regards,
> LLoyd
>
> On 17 January 2016 at 00:07, José Luis Larroque <la...@gmail.com>
> wrote:
> > Thanks for your answer Lloyd!
> >
> > I'm not sure about that. Wordcount, of the same jar, gives me the same
> > error, and also my own map reduce job.
> >
> > I believe that the "Error: Could not find or load main class 256" error
> > is happening because it's not finding the mapper, but I'm not sure.
> >
> > Bye!
> > Jose
> >
> >
> > 2016-01-16 19:41 GMT-03:00 Namikaze Minato <ll...@gmail.com>:
> >>
> >> Hello José Luis Larroque.
> >>
> >> Your problem here is only that grep is returning a non-zero exit code
> >> when no occurrences are found.
> >> I know that for spark-streaming, using the option "-jobconf
> >> stream.non.zero.exit.is.failure=false" solves the problem, but I don't
> >> know how hadoop-mapreduce-examples-2.4.0.jar handles this.
> >>
> >> Regards,
> >> LLoyd
> >>
> >> On 16 January 2016 at 19:07, José Luis Larroque <larroquester@gmail.com
> >
> >> wrote:
> >> > Hi there, i'm currently running a single node yarn cluster, hadoop
> >> > 2.4.0,
> >> > and for some reason, i can't execute even a example that comes with
> map
> >> > reduce (grep, wordcount, etc). With this line i execute grep:
> >> >
> >> >     $HADOOP_HOME/bin/yarn jar
> >> >
> >> >
> /usr/local/hadoop/share/hadoop/yarn/lib/hadoop-mapreduce-examples-2.4.0.jar
> >> > grep input output2 'dfs[a-z.]+'
> >> >
> >> > This cluster was previosly running Giraph programs, but rigth now i
> need
> >> > a
> >> > Map Reduce application, so i switched it back to pure yarn.
> >> >
> >> > All failed containers had the same error:
> >> >
> >> >     Container: container_1452447718890_0001_01_000002 on
> localhost_37976
> >> >
> >> > ======================================================================
> >> >     LogType: stderr
> >> >     LogLength: 45
> >> >     Log Contents:
> >> >     Error: Could not find or load main class 256
> >> >
> >> > Main logs:
> >> >
> >> >     SLF4J: Class path contains multiple SLF4J bindings.
> >> >     SLF4J: Found binding in
> >> >
> >> >
> [jar:file:/usr/local/hadoop/share/hadoop/common/lib/slf4j-log4j12-1.7.5.jar!/org/slf4j/impl/StaticLoggerBinder.class]
> >> >     SLF4J: Found binding in
> >> >
> >> >
> [jar:file:/usr/local/hadoop/share/hadoop/yarn/lib/giraph-examples-1.1.0-for-hadoop-2.4.0-jar-with-dependencies.jar!/org/slf4j/impl/StaticLoggerBinder.class]
> >> >     SLF4J: Found binding in
> >> >
> >> >
> [jar:file:/usr/local/hadoop/share/hadoop/mapreduce/giraph-examples-1.1.0-for-hadoop-2.4.0-jar-with-dependencies.jar!/org/slf4j/impl/StaticLoggerBinder.class]
> >> >     SLF4J: See http://www.slf4j.org/codes.html#multiple_bindings for
> an
> >> > explanation.
> >> >     SLF4J: Actual binding is of type
> [org.slf4j.impl.Log4jLoggerFactory]
> >> >     16/01/15 21:53:50 WARN util.NativeCodeLoader: Unable to load
> >> > native-hadoop library for your platform... using builtin-java classes
> >> > where
> >> > applicable
> >> >     16/01/15 21:53:50 INFO client.RMProxy: Connecting to
> ResourceManager
> >> > at
> >> > hdnode01/192.168.0.10:8050
> >> >     16/01/15 21:53:51 WARN mapreduce.JobSubmitter: No job jar file
> set.
> >> > User classes may not be found. See Job or Job#setJar(String).
> >> >     16/01/15 21:53:51 INFO input.FileInputFormat: Total input paths to
> >> > process : 1
> >> >     16/01/15 21:53:52 INFO mapreduce.JobSubmitter: number of splits:1
> >> >     16/01/15 21:53:52 INFO mapreduce.JobSubmitter: Submitting tokens
> for
> >> > job: job_1452905418747_0001
> >> >     16/01/15 21:53:53 INFO mapred.YARNRunner: Job jar is not present.
> >> > Not
> >> > adding any jar to the list of resources.
> >> >     16/01/15 21:53:53 INFO impl.YarnClientImpl: Submitted application
> >> > application_1452905418747_0001
> >> >     16/01/15 21:53:54 INFO mapreduce.Job: The url to track the job:
> >> > http://localhost:8088/proxy/application_1452905418747_0001/
> >> >     16/01/15 21:53:54 INFO mapreduce.Job: Running job:
> >> > job_1452905418747_0001
> >> >     16/01/15 21:54:04 INFO mapreduce.Job: Job job_1452905418747_0001
> >> > running
> >> > in uber mode : false
> >> >     16/01/15 21:54:04 INFO mapreduce.Job:  map 0% reduce 0%
> >> >     16/01/15 21:54:07 INFO mapreduce.Job: Task Id :
> >> > attempt_1452905418747_0001_m_000000_0, Status : FAILED
> >> >     Exception from container-launch:
> >> > org.apache.hadoop.util.Shell$ExitCodeException:
> >> >     org.apache.hadoop.util.Shell$ExitCodeException:
> >> >         at org.apache.hadoop.util.Shell.runCommand(Shell.java:505)
> >> >         at org.apache.hadoop.util.Shell.run(Shell.java:418)
> >> >         at
> >> >
> >> >
> org.apache.hadoop.util.Shell$ShellCommandExecutor.execute(Shell.java:650)
> >> >         at
> >> >
> >> >
> org.apache.hadoop.yarn.server.nodemanager.DefaultContainerExecutor.launchContainer(DefaultContainerExecutor.java:195)
> >> >         at
> >> >
> >> >
> org.apache.hadoop.yarn.server.nodemanager.containermanager.launcher.ContainerLaunch.call(ContainerLaunch.java:300)
> >> >         at
> >> >
> >> >
> org.apache.hadoop.yarn.server.nodemanager.containermanager.launcher.ContainerLaunch.call(ContainerLaunch.java:81)
> >> >         at java.util.concurrent.FutureTask.run(FutureTask.java:262)
> >> >         at
> >> >
> >> >
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
> >> >         at
> >> >
> >> >
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
> >> >         at java.lang.Thread.run(Thread.java:745)
> >> >
> >> >
> >> >     Container exited with a non-zero exit code 1
> >> >
> >> >     16/01/15 21:54:11 INFO mapreduce.Job: Task Id :
> >> > attempt_1452905418747_0001_m_000000_1, Status : FAILED
> >> >     Exception from container-launch:
> >> > org.apache.hadoop.util.Shell$ExitCodeException:
> >> >     org.apache.hadoop.util.Shell$ExitCodeException:
> >> >         at org.apache.hadoop.util.Shell.runCommand(Shell.java:505)
> >> >         at org.apache.hadoop.util.Shell.run(Shell.java:418)
> >> >         at
> >> >
> >> >
> org.apache.hadoop.util.Shell$ShellCommandExecutor.execute(Shell.java:650)
> >> >         at
> >> >
> >> >
> org.apache.hadoop.yarn.server.nodemanager.DefaultContainerExecutor.launchContainer(DefaultContainerExecutor.java:195)
> >> >         at
> >> >
> >> >
> org.apache.hadoop.yarn.server.nodemanager.containermanager.launcher.ContainerLaunch.call(ContainerLaunch.java:300)
> >> >         at
> >> >
> >> >
> org.apache.hadoop.yarn.server.nodemanager.containermanager.launcher.ContainerLaunch.call(ContainerLaunch.java:81)
> >> >         at java.util.concurrent.FutureTask.run(FutureTask.java:262)
> >> >         at
> >> >
> >> >
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
> >> >         at
> >> >
> >> >
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
> >> >         at java.lang.Thread.run(Thread.java:745)
> >> >
> >> >
> >> >     Container exited with a non-zero exit code 1
> >> >
> >> >     16/01/15 21:54:15 INFO mapreduce.Job: Task Id :
> >> > attempt_1452905418747_0001_m_000000_2, Status : FAILED
> >> >     Exception from container-launch:
> >> > org.apache.hadoop.util.Shell$ExitCodeException:
> >> >     org.apache.hadoop.util.Shell$ExitCodeException:
> >> >         at org.apache.hadoop.util.Shell.runCommand(Shell.java:505)
> >> >         at org.apache.hadoop.util.Shell.run(Shell.java:418)
> >> >         at
> >> >
> >> >
> org.apache.hadoop.util.Shell$ShellCommandExecutor.execute(Shell.java:650)
> >> >         at
> >> >
> >> >
> org.apache.hadoop.yarn.server.nodemanager.DefaultContainerExecutor.launchContainer(DefaultContainerExecutor.java:195)
> >> >         at
> >> >
> >> >
> org.apache.hadoop.yarn.server.nodemanager.containermanager.launcher.ContainerLaunch.call(ContainerLaunch.java:300)
> >> >         at
> >> >
> >> >
> org.apache.hadoop.yarn.server.nodemanager.containermanager.launcher.ContainerLaunch.call(ContainerLaunch.java:81)
> >> >         at java.util.concurrent.FutureTask.run(FutureTask.java:262)
> >> >         at
> >> >
> >> >
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
> >> >         at
> >> >
> >> >
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
> >> >         at java.lang.Thread.run(Thread.java:745)
> >> >
> >> >
> >> >     Container exited with a non-zero exit code 1
> >> >
> >> >     16/01/15 21:54:21 INFO mapreduce.Job:  map 100% reduce 100%
> >> >     16/01/15 21:54:21 INFO mapreduce.Job: Job job_1452905418747_0001
> >> > failed
> >> > with state FAILED due to: Task failed task_1452905418747_0001_m_000000
> >> >     Job failed as tasks failed. failedMaps:1 failedReduces:0
> >> >
> >> >     16/01/15 21:54:21 INFO mapreduce.Job: Counters: 12
> >> >         Job Counters
> >> >             Failed map tasks=4
> >> >             Launched map tasks=4
> >> >             Other local map tasks=3
> >> >             Data-local map tasks=1
> >> >             Total time spent by all maps in occupied slots (ms)=15548
> >> >             Total time spent by all reduces in occupied slots (ms)=0
> >> >             Total time spent by all map tasks (ms)=7774
> >> >             Total vcore-seconds taken by all map tasks=7774
> >> >             Total megabyte-seconds taken by all map tasks=3980288
> >> >         Map-Reduce Framework
> >> >             CPU time spent (ms)=0
> >> >             Physical memory (bytes) snapshot=0
> >> >             Virtual memory (bytes) snapshot=0
> >> >     16/01/15 21:54:21 INFO client.RMProxy: Connecting to
> ResourceManager
> >> > at
> >> > hdnode01/192.168.0.10:8050
> >> >     16/01/15 21:54:22 WARN mapreduce.JobSubmitter: No job jar file
> set.
> >> > User classes may not be found. See Job or Job#setJar(String).
> >> >     16/01/15 21:54:22 INFO input.FileInputFormat: Total input paths to
> >> > process : 0
> >> >     16/01/15 21:54:22 INFO mapreduce.JobSubmitter: number of splits:0
> >> >     16/01/15 21:54:22 INFO mapreduce.JobSubmitter: Submitting tokens
> for
> >> > job: job_1452905418747_0002
> >> >     16/01/15 21:54:22 INFO mapred.YARNRunner: Job jar is not present.
> >> > Not
> >> > adding any jar to the list of resources.
> >> >     16/01/15 21:54:22 INFO impl.YarnClientImpl: Submitted application
> >> > application_1452905418747_0002
> >> >     16/01/15 21:54:22 INFO mapreduce.Job: The url to track the job:
> >> > http://localhost:8088/proxy/application_1452905418747_0002/
> >> >     16/01/15 21:54:22 INFO mapreduce.Job: Running job:
> >> > job_1452905418747_0002
> >> >     16/01/15 21:54:32 INFO mapreduce.Job: Job job_1452905418747_0002
> >> > running
> >> > in uber mode : false
> >> >     16/01/15 21:54:32 INFO mapreduce.Job:  map 0% reduce 0%
> >> >     16/01/15 21:54:36 INFO mapreduce.Job: Task Id :
> >> > attempt_1452905418747_0002_r_000000_0, Status : FAILED
> >> >     Exception from container-launch:
> >> > org.apache.hadoop.util.Shell$ExitCodeException:
> >> >     org.apache.hadoop.util.Shell$ExitCodeException:
> >> >         at org.apache.hadoop.util.Shell.runCommand(Shell.java:505)
> >> >         at org.apache.hadoop.util.Shell.run(Shell.java:418)
> >> >         at
> >> >
> >> >
> org.apache.hadoop.util.Shell$ShellCommandExecutor.execute(Shell.java:650)
> >> >         at
> >> >
> >> >
> org.apache.hadoop.yarn.server.nodemanager.DefaultContainerExecutor.launchContainer(DefaultContainerExecutor.java:195)
> >> >         at
> >> >
> >> >
> org.apache.hadoop.yarn.server.nodemanager.containermanager.launcher.ContainerLaunch.call(ContainerLaunch.java:300)
> >> >         at
> >> >
> >> >
> org.apache.hadoop.yarn.server.nodemanager.containermanager.launcher.ContainerLaunch.call(ContainerLaunch.java:81)
> >> >         at java.util.concurrent.FutureTask.run(FutureTask.java:262)
> >> >         at
> >> >
> >> >
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
> >> >         at
> >> >
> >> >
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
> >> >         at java.lang.Thread.run(Thread.java:745)
> >> >
> >> >
> >> >     Container exited with a non-zero exit code 1
> >> >
> >> >     16/01/15 21:54:41 INFO mapreduce.Job: Task Id :
> >> > attempt_1452905418747_0002_r_000000_1, Status : FAILED
> >> >     Exception from container-launch:
> >> > org.apache.hadoop.util.Shell$ExitCodeException:
> >> >     org.apache.hadoop.util.Shell$ExitCodeException:
> >> >         at org.apache.hadoop.util.Shell.runCommand(Shell.java:505)
> >> >         at org.apache.hadoop.util.Shell.run(Shell.java:418)
> >> >         at
> >> >
> >> >
> org.apache.hadoop.util.Shell$ShellCommandExecutor.execute(Shell.java:650)
> >> >         at
> >> >
> >> >
> org.apache.hadoop.yarn.server.nodemanager.DefaultContainerExecutor.launchContainer(DefaultContainerExecutor.java:195)
> >> >         at
> >> >
> >> >
> org.apache.hadoop.yarn.server.nodemanager.containermanager.launcher.ContainerLaunch.call(ContainerLaunch.java:300)
> >> >         at
> >> >
> >> >
> org.apache.hadoop.yarn.server.nodemanager.containermanager.launcher.ContainerLaunch.call(ContainerLaunch.java:81)
> >> >         at java.util.concurrent.FutureTask.run(FutureTask.java:262)
> >> >         at
> >> >
> >> >
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
> >> >         at
> >> >
> >> >
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
> >> >         at java.lang.Thread.run(Thread.java:745)
> >> >
> >> >
> >> >     Container exited with a non-zero exit code 1
> >> >
> >> >     16/01/15 21:54:46 INFO mapreduce.Job: Task Id :
> >> > attempt_1452905418747_0002_r_000000_2, Status : FAILED
> >> >     Exception from container-launch:
> >> > org.apache.hadoop.util.Shell$ExitCodeException:
> >> >     org.apache.hadoop.util.Shell$ExitCodeException:
> >> >         at org.apache.hadoop.util.Shell.runCommand(Shell.java:505)
> >> >         at org.apache.hadoop.util.Shell.run(Shell.java:418)
> >> >         at
> >> >
> >> >
> org.apache.hadoop.util.Shell$ShellCommandExecutor.execute(Shell.java:650)
> >> >         at
> >> >
> >> >
> org.apache.hadoop.yarn.server.nodemanager.DefaultContainerExecutor.launchContainer(DefaultContainerExecutor.java:195)
> >> >         at
> >> >
> >> >
> org.apache.hadoop.yarn.server.nodemanager.containermanager.launcher.ContainerLaunch.call(ContainerLaunch.java:300)
> >> >         at
> >> >
> >> >
> org.apache.hadoop.yarn.server.nodemanager.containermanager.launcher.ContainerLaunch.call(ContainerLaunch.java:81)
> >> >         at java.util.concurrent.FutureTask.run(FutureTask.java:262)
> >> >         at
> >> >
> >> >
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
> >> >         at
> >> >
> >> >
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
> >> >         at java.lang.Thread.run(Thread.java:745)
> >> >
> >> >
> >> >     Container exited with a non-zero exit code 1
> >> >
> >> >     16/01/15 21:54:51 INFO mapreduce.Job:  map 0% reduce 100%
> >> >     16/01/15 21:54:52 INFO mapreduce.Job: Job job_1452905418747_0002
> >> > failed
> >> > with state FAILED due to: Task failed task_1452905418747_0002_r_000000
> >> >     Job failed as tasks failed. failedMaps:0 failedReduces:1
> >> >
> >> >     16/01/15 21:54:52 INFO mapreduce.Job: Counters: 10
> >> >         Job Counters
> >> >             Failed reduce tasks=4
> >> >             Launched reduce tasks=4
> >> >             Total time spent by all maps in occupied slots (ms)=0
> >> >             Total time spent by all reduces in occupied slots
> (ms)=11882
> >> >             Total time spent by all reduce tasks (ms)=5941
> >> >             Total vcore-seconds taken by all reduce tasks=5941
> >> >             Total megabyte-seconds taken by all reduce tasks=3041792
> >> >         Map-Reduce Framework
> >> >             CPU time spent (ms)=0
> >> >             Physical memory (bytes) snapshot=0
> >> >             Virtual memory (bytes) snapshot=0
> >> >
> >> > I switched mapreduce.framework.name from:
> >> >
> >> > <property>
> >> > <name>mapreduce.framework.name</name>
> >> > <value>yarn</value>
> >> > </property>
> >> >
> >> > To:
> >> >
> >> > <property>
> >> > <name>mapreduce.framework.name</name>
> >> > <value>local</value>
> >> > </property>
> >> >
> >> > and grep and other mapreduce jobs are working again.
> >> >
> >> > I don't understand why with "yarn" value in mapreduce.framework.name
> >> > doesn't
> >> > work, and without it (using "local") does.
> >> >
> >> > Any idea how to fix this without switching the value of
> >> > mapreduce.framework.name?
> >> >
> >> >
> >> >
> >> > Bye!
> >> > Jose
> >> >
> >> >
> >> >
> >
> >
>
> ---------------------------------------------------------------------
> To unsubscribe, e-mail: user-unsubscribe@hadoop.apache.org
> For additional commands, e-mail: user-help@hadoop.apache.org
>
>

Re: Can't run hadoop examples with YARN Single node cluster

Posted by Gaurav Gupta <ga...@gmail.com>.
Hi,

I think your yarn is not up and running so you are not able to run the
jobs. Can you please verify it?

Thanks


On Sat, Jan 16, 2016 at 3:18 PM, Namikaze Minato <ll...@gmail.com>
wrote:

> Hi again José Luis.
>
> Sorry, I was specifically talking about the
> "org.apache.hadoop.util.Shell$ExitCodeException" error.
> Can you provide the logs for a wordcount please?
> Also, do you have yarn running?
> I have never tweaked the mapreduce.framework.name value, so I might
> not be able to help you further, but these pieces of information might
> help the people who can.
>
> Regards,
> LLoyd
>
> On 17 January 2016 at 00:07, José Luis Larroque <la...@gmail.com>
> wrote:
> > Thanks for your answer Lloyd!
> >
> > I'm not sure about that. Wordcount, of the same jar, gives me the same
> > error, and also my own map reduce job.
> >
> > I believe that the " Error: Could not find or load main class 256" error
> is
> > happening because it's not finding the mapper, but i'm not sure.
> >
> > Bye!
> > Jose
> >
> >
> > 2016-01-16 19:41 GMT-03:00 Namikaze Minato <ll...@gmail.com>:
> >>
> >> Hello José Luis Larroque.
> >>
> >> Your problem here is only that grep is returning a non-zero exit code
> >> when no occurences are found.
> >> I know that for spark-streaming, using the option "-jobconf
> >> stream.non.zero.exit.is.failure=false" solves the problem, but I don't
> >> know how hadoop-mapreduce-examples-2.4.0.jar handles this.
> >>
> >> Regards,
> >> LLoyd
> >>
> >> On 16 January 2016 at 19:07, José Luis Larroque <larroquester@gmail.com
> >
> >> wrote:
> >> > Hi there, i'm currently running a single node yarn cluster, hadoop
> >> > 2.4.0,
> >> > and for some reason, i can't execute even a example that comes with
> map
> >> > reduce (grep, wordcount, etc). With this line i execute grep:
> >> >
> >> >     $HADOOP_HOME/bin/yarn jar
> >> >
> >> >
> /usr/local/hadoop/share/hadoop/yarn/lib/hadoop-mapreduce-examples-2.4.0.jar
> >> > grep input output2 'dfs[a-z.]+'
> >> >
> >> > This cluster was previosly running Giraph programs, but rigth now i
> need
> >> > a
> >> > Map Reduce application, so i switched it back to pure yarn.
> >> >
> >> > All failed containers had the same error:
> >> >
> >> >     Container: container_1452447718890_0001_01_000002 on
> localhost_37976
> >> >
> >> > ======================================================================
> >> >     LogType: stderr
> >> >     LogLength: 45
> >> >     Log Contents:
> >> >     Error: Could not find or load main class 256
> >> >
> >> > Main logs:
> >> >
> >> >     SLF4J: Class path contains multiple SLF4J bindings.
> >> >     SLF4J: Found binding in
> >> >
> >> >
> [jar:file:/usr/local/hadoop/share/hadoop/common/lib/slf4j-log4j12-1.7.5.jar!/org/slf4j/impl/StaticLoggerBinder.class]
> >> >     SLF4J: Found binding in
> >> >
> >> >
> [jar:file:/usr/local/hadoop/share/hadoop/yarn/lib/giraph-examples-1.1.0-for-hadoop-2.4.0-jar-with-dependencies.jar!/org/slf4j/impl/StaticLoggerBinder.class]
> >> >     SLF4J: Found binding in
> >> >
> >> >
> [jar:file:/usr/local/hadoop/share/hadoop/mapreduce/giraph-examples-1.1.0-for-hadoop-2.4.0-jar-with-dependencies.jar!/org/slf4j/impl/StaticLoggerBinder.class]
> >> >     SLF4J: See http://www.slf4j.org/codes.html#multiple_bindings for
> an
> >> > explanation.
> >> >     SLF4J: Actual binding is of type
> [org.slf4j.impl.Log4jLoggerFactory]
> >> >     16/01/15 21:53:50 WARN util.NativeCodeLoader: Unable to load
> >> > native-hadoop library for your platform... using builtin-java classes
> >> > where
> >> > applicable
> >> >     16/01/15 21:53:50 INFO client.RMProxy: Connecting to
> ResourceManager
> >> > at
> >> > hdnode01/192.168.0.10:8050
> >> >     16/01/15 21:53:51 WARN mapreduce.JobSubmitter: No job jar file
> set.
> >> > User classes may not be found. See Job or Job#setJar(String).
> >> >     16/01/15 21:53:51 INFO input.FileInputFormat: Total input paths to
> >> > process : 1
> >> >     16/01/15 21:53:52 INFO mapreduce.JobSubmitter: number of splits:1
> >> >     16/01/15 21:53:52 INFO mapreduce.JobSubmitter: Submitting tokens
> for
> >> > job: job_1452905418747_0001
> >> >     16/01/15 21:53:53 INFO mapred.YARNRunner: Job jar is not present.
> >> > Not
> >> > adding any jar to the list of resources.
> >> >     16/01/15 21:53:53 INFO impl.YarnClientImpl: Submitted application
> >> > application_1452905418747_0001
> >> >     16/01/15 21:53:54 INFO mapreduce.Job: The url to track the job:
> >> > http://localhost:8088/proxy/application_1452905418747_0001/
> >> >     16/01/15 21:53:54 INFO mapreduce.Job: Running job:
> >> > job_1452905418747_0001
> >> >     16/01/15 21:54:04 INFO mapreduce.Job: Job job_1452905418747_0001
> >> > running
> >> > in uber mode : false
> >> >     16/01/15 21:54:04 INFO mapreduce.Job:  map 0% reduce 0%
> >> >     16/01/15 21:54:07 INFO mapreduce.Job: Task Id :
> >> > attempt_1452905418747_0001_m_000000_0, Status : FAILED
> >> >     Exception from container-launch:
> >> > org.apache.hadoop.util.Shell$ExitCodeException:
> >> >     org.apache.hadoop.util.Shell$ExitCodeException:
> >> >         at org.apache.hadoop.util.Shell.runCommand(Shell.java:505)
> >> >         at org.apache.hadoop.util.Shell.run(Shell.java:418)
> >> >         at
> >> >
> >> >
> org.apache.hadoop.util.Shell$ShellCommandExecutor.execute(Shell.java:650)
> >> >         at
> >> >
> >> >
> org.apache.hadoop.yarn.server.nodemanager.DefaultContainerExecutor.launchContainer(DefaultContainerExecutor.java:195)
> >> >         at
> >> >
> >> >
> org.apache.hadoop.yarn.server.nodemanager.containermanager.launcher.ContainerLaunch.call(ContainerLaunch.java:300)
> >> >         at
> >> >
> >> >
> org.apache.hadoop.yarn.server.nodemanager.containermanager.launcher.ContainerLaunch.call(ContainerLaunch.java:81)
> >> >         at java.util.concurrent.FutureTask.run(FutureTask.java:262)
> >> >         at
> >> >
> >> >
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
> >> >         at
> >> >
> >> >
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
> >> >         at java.lang.Thread.run(Thread.java:745)
> >> >
> >> >
> >> >     Container exited with a non-zero exit code 1
> >> >
> >> >     16/01/15 21:54:11 INFO mapreduce.Job: Task Id :
> >> > attempt_1452905418747_0001_m_000000_1, Status : FAILED
> >> >     Exception from container-launch:
> >> > org.apache.hadoop.util.Shell$ExitCodeException:
> >> >     org.apache.hadoop.util.Shell$ExitCodeException:
> >> >         at org.apache.hadoop.util.Shell.runCommand(Shell.java:505)
> >> >         at org.apache.hadoop.util.Shell.run(Shell.java:418)
> >> >         at
> >> >
> >> >
> org.apache.hadoop.util.Shell$ShellCommandExecutor.execute(Shell.java:650)
> >> >         at
> >> >
> >> >
> org.apache.hadoop.yarn.server.nodemanager.DefaultContainerExecutor.launchContainer(DefaultContainerExecutor.java:195)
> >> >         at
> >> >
> >> >
> org.apache.hadoop.yarn.server.nodemanager.containermanager.launcher.ContainerLaunch.call(ContainerLaunch.java:300)
> >> >         at
> >> >
> >> >
> org.apache.hadoop.yarn.server.nodemanager.containermanager.launcher.ContainerLaunch.call(ContainerLaunch.java:81)
> >> >         at java.util.concurrent.FutureTask.run(FutureTask.java:262)
> >> >         at
> >> >
> >> >
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
> >> >         at
> >> >
> >> >
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
> >> >         at java.lang.Thread.run(Thread.java:745)
> >> >
> >> >
> >> >     Container exited with a non-zero exit code 1
> >> >
> >> >     16/01/15 21:54:15 INFO mapreduce.Job: Task Id :
> >> > attempt_1452905418747_0001_m_000000_2, Status : FAILED
> >> >     Exception from container-launch:
> >> > org.apache.hadoop.util.Shell$ExitCodeException:
> >> >     org.apache.hadoop.util.Shell$ExitCodeException:
> >> >         at org.apache.hadoop.util.Shell.runCommand(Shell.java:505)
> >> >         at org.apache.hadoop.util.Shell.run(Shell.java:418)
> >> >         at
> >> >
> >> >
> org.apache.hadoop.util.Shell$ShellCommandExecutor.execute(Shell.java:650)
> >> >         at
> >> >
> >> >
> org.apache.hadoop.yarn.server.nodemanager.DefaultContainerExecutor.launchContainer(DefaultContainerExecutor.java:195)
> >> >         at
> >> >
> >> >
> org.apache.hadoop.yarn.server.nodemanager.containermanager.launcher.ContainerLaunch.call(ContainerLaunch.java:300)
> >> >         at
> >> >
> >> >
> org.apache.hadoop.yarn.server.nodemanager.containermanager.launcher.ContainerLaunch.call(ContainerLaunch.java:81)
> >> >         at java.util.concurrent.FutureTask.run(FutureTask.java:262)
> >> >         at
> >> >
> >> >
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
> >> >         at
> >> >
> >> >
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
> >> >         at java.lang.Thread.run(Thread.java:745)
> >> >
> >> >
> >> >     Container exited with a non-zero exit code 1
> >> >
> >> >     16/01/15 21:54:21 INFO mapreduce.Job:  map 100% reduce 100%
> >> >     16/01/15 21:54:21 INFO mapreduce.Job: Job job_1452905418747_0001
> >> > failed
> >> > with state FAILED due to: Task failed task_1452905418747_0001_m_000000
> >> >     Job failed as tasks failed. failedMaps:1 failedReduces:0
> >> >
> >> >     16/01/15 21:54:21 INFO mapreduce.Job: Counters: 12
> >> >         Job Counters
> >> >             Failed map tasks=4
> >> >             Launched map tasks=4
> >> >             Other local map tasks=3
> >> >             Data-local map tasks=1
> >> >             Total time spent by all maps in occupied slots (ms)=15548
> >> >             Total time spent by all reduces in occupied slots (ms)=0
> >> >             Total time spent by all map tasks (ms)=7774
> >> >             Total vcore-seconds taken by all map tasks=7774
> >> >             Total megabyte-seconds taken by all map tasks=3980288
> >> >         Map-Reduce Framework
> >> >             CPU time spent (ms)=0
> >> >             Physical memory (bytes) snapshot=0
> >> >             Virtual memory (bytes) snapshot=0
> >> >     16/01/15 21:54:21 INFO client.RMProxy: Connecting to
> ResourceManager
> >> > at
> >> > hdnode01/192.168.0.10:8050
> >> >     16/01/15 21:54:22 WARN mapreduce.JobSubmitter: No job jar file
> set.
> >> > User classes may not be found. See Job or Job#setJar(String).
> >> >     16/01/15 21:54:22 INFO input.FileInputFormat: Total input paths to
> >> > process : 0
> >> >     16/01/15 21:54:22 INFO mapreduce.JobSubmitter: number of splits:0
> >> >     16/01/15 21:54:22 INFO mapreduce.JobSubmitter: Submitting tokens
> for
> >> > job: job_1452905418747_0002
> >> >     16/01/15 21:54:22 INFO mapred.YARNRunner: Job jar is not present.
> >> > Not
> >> > adding any jar to the list of resources.
> >> >     16/01/15 21:54:22 INFO impl.YarnClientImpl: Submitted application
> >> > application_1452905418747_0002
> >> >     16/01/15 21:54:22 INFO mapreduce.Job: The url to track the job:
> >> > http://localhost:8088/proxy/application_1452905418747_0002/
> >> >     16/01/15 21:54:22 INFO mapreduce.Job: Running job:
> >> > job_1452905418747_0002
> >> >     16/01/15 21:54:32 INFO mapreduce.Job: Job job_1452905418747_0002
> >> > running
> >> > in uber mode : false
> >> >     16/01/15 21:54:32 INFO mapreduce.Job:  map 0% reduce 0%
> >> >     16/01/15 21:54:36 INFO mapreduce.Job: Task Id :
> >> > attempt_1452905418747_0002_r_000000_0, Status : FAILED
> >> >     Exception from container-launch:
> >> > org.apache.hadoop.util.Shell$ExitCodeException:
> >> >     org.apache.hadoop.util.Shell$ExitCodeException:
> >> >         at org.apache.hadoop.util.Shell.runCommand(Shell.java:505)
> >> >         at org.apache.hadoop.util.Shell.run(Shell.java:418)
> >> >         at
> >> >
> >> >
> org.apache.hadoop.util.Shell$ShellCommandExecutor.execute(Shell.java:650)
> >> >         at
> >> >
> >> >
> org.apache.hadoop.yarn.server.nodemanager.DefaultContainerExecutor.launchContainer(DefaultContainerExecutor.java:195)
> >> >         at
> >> >
> >> >
> org.apache.hadoop.yarn.server.nodemanager.containermanager.launcher.ContainerLaunch.call(ContainerLaunch.java:300)
> >> >         at
> >> >
> >> >
> org.apache.hadoop.yarn.server.nodemanager.containermanager.launcher.ContainerLaunch.call(ContainerLaunch.java:81)
> >> >         at java.util.concurrent.FutureTask.run(FutureTask.java:262)
> >> >         at
> >> >
> >> >
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
> >> >         at
> >> >
> >> >
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
> >> >         at java.lang.Thread.run(Thread.java:745)
> >> >
> >> >
> >> >     Container exited with a non-zero exit code 1
> >> >
> >> >     16/01/15 21:54:41 INFO mapreduce.Job: Task Id :
> >> > attempt_1452905418747_0002_r_000000_1, Status : FAILED
> >> >     Exception from container-launch:
> >> > org.apache.hadoop.util.Shell$ExitCodeException:
> >> >     org.apache.hadoop.util.Shell$ExitCodeException:
> >> >         at org.apache.hadoop.util.Shell.runCommand(Shell.java:505)
> >> >         at org.apache.hadoop.util.Shell.run(Shell.java:418)
> >> >         at
> >> >
> >> >
> org.apache.hadoop.util.Shell$ShellCommandExecutor.execute(Shell.java:650)
> >> >         at
> >> >
> >> >
> org.apache.hadoop.yarn.server.nodemanager.DefaultContainerExecutor.launchContainer(DefaultContainerExecutor.java:195)
> >> >         at
> >> >
> >> >
> org.apache.hadoop.yarn.server.nodemanager.containermanager.launcher.ContainerLaunch.call(ContainerLaunch.java:300)
> >> >         at
> >> >
> >> >
> org.apache.hadoop.yarn.server.nodemanager.containermanager.launcher.ContainerLaunch.call(ContainerLaunch.java:81)
> >> >         at java.util.concurrent.FutureTask.run(FutureTask.java:262)
> >> >         at
> >> >
> >> >
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
> >> >         at
> >> >
> >> >
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
> >> >         at java.lang.Thread.run(Thread.java:745)
> >> >
> >> >
> >> >     Container exited with a non-zero exit code 1
> >> >
> >> >     16/01/15 21:54:46 INFO mapreduce.Job: Task Id :
> >> > attempt_1452905418747_0002_r_000000_2, Status : FAILED
> >> >     Exception from container-launch:
> >> > org.apache.hadoop.util.Shell$ExitCodeException:
> >> >     org.apache.hadoop.util.Shell$ExitCodeException:
> >> >         at org.apache.hadoop.util.Shell.runCommand(Shell.java:505)
> >> >         at org.apache.hadoop.util.Shell.run(Shell.java:418)
> >> >         at
> >> >
> >> >
> org.apache.hadoop.util.Shell$ShellCommandExecutor.execute(Shell.java:650)
> >> >         at
> >> >
> >> >
> org.apache.hadoop.yarn.server.nodemanager.DefaultContainerExecutor.launchContainer(DefaultContainerExecutor.java:195)
> >> >         at
> >> >
> >> >
> org.apache.hadoop.yarn.server.nodemanager.containermanager.launcher.ContainerLaunch.call(ContainerLaunch.java:300)
> >> >         at
> >> >
> >> >
> org.apache.hadoop.yarn.server.nodemanager.containermanager.launcher.ContainerLaunch.call(ContainerLaunch.java:81)
> >> >         at java.util.concurrent.FutureTask.run(FutureTask.java:262)
> >> >         at
> >> >
> >> >
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
> >> >         at
> >> >
> >> >
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
> >> >         at java.lang.Thread.run(Thread.java:745)
> >> >
> >> >
> >> >     Container exited with a non-zero exit code 1
> >> >
> >> >     16/01/15 21:54:51 INFO mapreduce.Job:  map 0% reduce 100%
> >> >     16/01/15 21:54:52 INFO mapreduce.Job: Job job_1452905418747_0002
> >> > failed
> >> > with state FAILED due to: Task failed task_1452905418747_0002_r_000000
> >> >     Job failed as tasks failed. failedMaps:0 failedReduces:1
> >> >
> >> >     16/01/15 21:54:52 INFO mapreduce.Job: Counters: 10
> >> >         Job Counters
> >> >             Failed reduce tasks=4
> >> >             Launched reduce tasks=4
> >> >             Total time spent by all maps in occupied slots (ms)=0
> >> >             Total time spent by all reduces in occupied slots
> (ms)=11882
> >> >             Total time spent by all reduce tasks (ms)=5941
> >> >             Total vcore-seconds taken by all reduce tasks=5941
> >> >             Total megabyte-seconds taken by all reduce tasks=3041792
> >> >         Map-Reduce Framework
> >> >             CPU time spent (ms)=0
> >> >             Physical memory (bytes) snapshot=0
> >> >             Virtual memory (bytes) snapshot=0
> >> >
> >> > I switched mapreduce.framework.name from:
> >> >
> >> > <property>
> >> > <name>mapreduce.framework.name</name>
> >> > <value>yarn</value>
> >> > </property>
> >> >
> >> > To:
> >> >
> >> > <property>
> >> > <name>mapreduce.framework.name</name>
> >> > <value>local</value>
> >> > </property>
> >> >
> >> > and grep and other mapreduce jobs are working again.
> >> >
> >> > I don't understand why with "yarn" value in mapreduce.framework.name
> >> > doesn't
> >> > work, and without it (using "local") does.
> >> >
> >> > Any idea how to fix this without switching the value of
> >> > mapreduce.framework.name?
> >> >
> >> >
> >> >
> >> > Bye!
> >> > Jose
> >> >
> >> >
> >> >
> >
> >
>
> ---------------------------------------------------------------------
> To unsubscribe, e-mail: user-unsubscribe@hadoop.apache.org
> For additional commands, e-mail: user-help@hadoop.apache.org
>
>

Re: Can't run hadoop examples with YARN Single node cluster

Posted by Gaurav Gupta <ga...@gmail.com>.
Hi,

I think your yarn is not up and running so you are not able to run the
jobs. Can you please verify it?

Thanks


On Sat, Jan 16, 2016 at 3:18 PM, Namikaze Minato <ll...@gmail.com>
wrote:

> Hi again José Luis.
>
> Sorry, I was specifically talking about the
> "org.apache.hadoop.util.Shell$ExitCodeException" error.
> Can you provide the logs for a wordcount please?
> Also, do you have yarn running?
> I have never tweaked the mapreduce.framework.name value, so I might
> not be able to help you further, but these pieces of information might
> help the people who can.
>
> Regards,
> LLoyd
>
> On 17 January 2016 at 00:07, José Luis Larroque <la...@gmail.com>
> wrote:
> > Thanks for your answer Lloyd!
> >
> > I'm not sure about that. Wordcount, of the same jar, gives me the same
> > error, and also my own map reduce job.
> >
> > I believe that the " Error: Could not find or load main class 256" error
> is
> > happening because it's not finding the mapper, but i'm not sure.
> >
> > Bye!
> > Jose
> >
> >
> > 2016-01-16 19:41 GMT-03:00 Namikaze Minato <ll...@gmail.com>:
> >>
> >> Hello José Luis Larroque.
> >>
> >> Your problem here is only that grep is returning a non-zero exit code
> >> when no occurences are found.
> >> I know that for spark-streaming, using the option "-jobconf
> >> stream.non.zero.exit.is.failure=false" solves the problem, but I don't
> >> know how hadoop-mapreduce-examples-2.4.0.jar handles this.
> >>
> >> Regards,
> >> LLoyd
> >>
> >> On 16 January 2016 at 19:07, José Luis Larroque <larroquester@gmail.com
> >
> >> wrote:
> >> > Hi there, i'm currently running a single node yarn cluster, hadoop
> >> > 2.4.0,
> >> > and for some reason, i can't execute even a example that comes with
> map
> >> > reduce (grep, wordcount, etc). With this line i execute grep:
> >> >
> >> >     $HADOOP_HOME/bin/yarn jar
> >> >
> >> >
> /usr/local/hadoop/share/hadoop/yarn/lib/hadoop-mapreduce-examples-2.4.0.jar
> >> > grep input output2 'dfs[a-z.]+'
> >> >
> >> > This cluster was previosly running Giraph programs, but rigth now i
> need
> >> > a
> >> > Map Reduce application, so i switched it back to pure yarn.
> >> >
> >> > All failed containers had the same error:
> >> >
> >> >     Container: container_1452447718890_0001_01_000002 on
> localhost_37976
> >> >
> >> > ======================================================================
> >> >     LogType: stderr
> >> >     LogLength: 45
> >> >     Log Contents:
> >> >     Error: Could not find or load main class 256
> >> >
> >> > Main logs:
> >> >
> >> >     SLF4J: Class path contains multiple SLF4J bindings.
> >> >     SLF4J: Found binding in
> >> >
> >> >
> [jar:file:/usr/local/hadoop/share/hadoop/common/lib/slf4j-log4j12-1.7.5.jar!/org/slf4j/impl/StaticLoggerBinder.class]
> >> >     SLF4J: Found binding in
> >> >
> >> >
> [jar:file:/usr/local/hadoop/share/hadoop/yarn/lib/giraph-examples-1.1.0-for-hadoop-2.4.0-jar-with-dependencies.jar!/org/slf4j/impl/StaticLoggerBinder.class]
> >> >     SLF4J: Found binding in
> >> >
> >> >
> [jar:file:/usr/local/hadoop/share/hadoop/mapreduce/giraph-examples-1.1.0-for-hadoop-2.4.0-jar-with-dependencies.jar!/org/slf4j/impl/StaticLoggerBinder.class]
> >> >     SLF4J: See http://www.slf4j.org/codes.html#multiple_bindings for
> an
> >> > explanation.
> >> >     SLF4J: Actual binding is of type
> [org.slf4j.impl.Log4jLoggerFactory]
> >> >     16/01/15 21:53:50 WARN util.NativeCodeLoader: Unable to load
> >> > native-hadoop library for your platform... using builtin-java classes
> >> > where
> >> > applicable
> >> >     16/01/15 21:53:50 INFO client.RMProxy: Connecting to
> ResourceManager
> >> > at
> >> > hdnode01/192.168.0.10:8050
> >> >     16/01/15 21:53:51 WARN mapreduce.JobSubmitter: No job jar file
> set.
> >> > User classes may not be found. See Job or Job#setJar(String).
> >> >     16/01/15 21:53:51 INFO input.FileInputFormat: Total input paths to
> >> > process : 1
> >> >     16/01/15 21:53:52 INFO mapreduce.JobSubmitter: number of splits:1
> >> >     16/01/15 21:53:52 INFO mapreduce.JobSubmitter: Submitting tokens
> for
> >> > job: job_1452905418747_0001
> >> >     16/01/15 21:53:53 INFO mapred.YARNRunner: Job jar is not present.
> >> > Not
> >> > adding any jar to the list of resources.
> >> >     16/01/15 21:53:53 INFO impl.YarnClientImpl: Submitted application
> >> > application_1452905418747_0001
> >> >     16/01/15 21:53:54 INFO mapreduce.Job: The url to track the job:
> >> > http://localhost:8088/proxy/application_1452905418747_0001/
> >> >     16/01/15 21:53:54 INFO mapreduce.Job: Running job:
> >> > job_1452905418747_0001
> >> >     16/01/15 21:54:04 INFO mapreduce.Job: Job job_1452905418747_0001
> >> > running
> >> > in uber mode : false
> >> >     16/01/15 21:54:04 INFO mapreduce.Job:  map 0% reduce 0%
> >> >     16/01/15 21:54:07 INFO mapreduce.Job: Task Id :
> >> > attempt_1452905418747_0001_m_000000_0, Status : FAILED
> >> >     Exception from container-launch:
> >> > org.apache.hadoop.util.Shell$ExitCodeException:
> >> >     org.apache.hadoop.util.Shell$ExitCodeException:
> >> >         at org.apache.hadoop.util.Shell.runCommand(Shell.java:505)
> >> >         at org.apache.hadoop.util.Shell.run(Shell.java:418)
> >> >         at
> >> >
> >> >
> org.apache.hadoop.util.Shell$ShellCommandExecutor.execute(Shell.java:650)
> >> >         at
> >> >
> >> >
> org.apache.hadoop.yarn.server.nodemanager.DefaultContainerExecutor.launchContainer(DefaultContainerExecutor.java:195)
> >> >         at
> >> >
> >> >
> org.apache.hadoop.yarn.server.nodemanager.containermanager.launcher.ContainerLaunch.call(ContainerLaunch.java:300)
> >> >         at
> >> >
> >> >
> org.apache.hadoop.yarn.server.nodemanager.containermanager.launcher.ContainerLaunch.call(ContainerLaunch.java:81)
> >> >         at java.util.concurrent.FutureTask.run(FutureTask.java:262)
> >> >         at
> >> >
> >> >
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
> >> >         at
> >> >
> >> >
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
> >> >         at java.lang.Thread.run(Thread.java:745)
> >> >
> >> >
> >> >     Container exited with a non-zero exit code 1
> >> >
> >> >     16/01/15 21:54:11 INFO mapreduce.Job: Task Id :
> >> > attempt_1452905418747_0001_m_000000_1, Status : FAILED
> >> >     Exception from container-launch:
> >> > org.apache.hadoop.util.Shell$ExitCodeException:
> >> >     org.apache.hadoop.util.Shell$ExitCodeException:
> >> >         at org.apache.hadoop.util.Shell.runCommand(Shell.java:505)
> >> >         at org.apache.hadoop.util.Shell.run(Shell.java:418)
> >> >         at
> >> >
> >> >
> org.apache.hadoop.util.Shell$ShellCommandExecutor.execute(Shell.java:650)
> >> >         at
> >> >
> >> >
> org.apache.hadoop.yarn.server.nodemanager.DefaultContainerExecutor.launchContainer(DefaultContainerExecutor.java:195)
> >> >         at
> >> >
> >> >
> org.apache.hadoop.yarn.server.nodemanager.containermanager.launcher.ContainerLaunch.call(ContainerLaunch.java:300)
> >> >         at
> >> >
> >> >
> org.apache.hadoop.yarn.server.nodemanager.containermanager.launcher.ContainerLaunch.call(ContainerLaunch.java:81)
> >> >         at java.util.concurrent.FutureTask.run(FutureTask.java:262)
> >> >         at
> >> >
> >> >
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
> >> >         at
> >> >
> >> >
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
> >> >         at java.lang.Thread.run(Thread.java:745)
> >> >
> >> >
> >> >     Container exited with a non-zero exit code 1
> >> >
> >> >     16/01/15 21:54:15 INFO mapreduce.Job: Task Id :
> >> > attempt_1452905418747_0001_m_000000_2, Status : FAILED
> >> >     Exception from container-launch:
> >> > org.apache.hadoop.util.Shell$ExitCodeException:
> >> >     org.apache.hadoop.util.Shell$ExitCodeException:
> >> >         at org.apache.hadoop.util.Shell.runCommand(Shell.java:505)
> >> >         at org.apache.hadoop.util.Shell.run(Shell.java:418)
> >> >         at
> >> >
> >> >
> org.apache.hadoop.util.Shell$ShellCommandExecutor.execute(Shell.java:650)
> >> >         at
> >> >
> >> >
> org.apache.hadoop.yarn.server.nodemanager.DefaultContainerExecutor.launchContainer(DefaultContainerExecutor.java:195)
> >> >         at
> >> >
> >> >
> org.apache.hadoop.yarn.server.nodemanager.containermanager.launcher.ContainerLaunch.call(ContainerLaunch.java:300)
> >> >         at
> >> >
> >> >
> org.apache.hadoop.yarn.server.nodemanager.containermanager.launcher.ContainerLaunch.call(ContainerLaunch.java:81)
> >> >         at java.util.concurrent.FutureTask.run(FutureTask.java:262)
> >> >         at
> >> >
> >> >
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
> >> >         at
> >> >
> >> >
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
> >> >         at java.lang.Thread.run(Thread.java:745)
> >> >
> >> >
> >> >     Container exited with a non-zero exit code 1
> >> >
> >> >     16/01/15 21:54:21 INFO mapreduce.Job:  map 100% reduce 100%
> >> >     16/01/15 21:54:21 INFO mapreduce.Job: Job job_1452905418747_0001 failed with state FAILED due to: Task failed task_1452905418747_0001_m_000000
> >> >     Job failed as tasks failed. failedMaps:1 failedReduces:0
> >> >
> >> >     16/01/15 21:54:21 INFO mapreduce.Job: Counters: 12
> >> >         Job Counters
> >> >             Failed map tasks=4
> >> >             Launched map tasks=4
> >> >             Other local map tasks=3
> >> >             Data-local map tasks=1
> >> >             Total time spent by all maps in occupied slots (ms)=15548
> >> >             Total time spent by all reduces in occupied slots (ms)=0
> >> >             Total time spent by all map tasks (ms)=7774
> >> >             Total vcore-seconds taken by all map tasks=7774
> >> >             Total megabyte-seconds taken by all map tasks=3980288
> >> >         Map-Reduce Framework
> >> >             CPU time spent (ms)=0
> >> >             Physical memory (bytes) snapshot=0
> >> >             Virtual memory (bytes) snapshot=0
> >> >     16/01/15 21:54:21 INFO client.RMProxy: Connecting to ResourceManager at hdnode01/192.168.0.10:8050
> >> >     16/01/15 21:54:22 WARN mapreduce.JobSubmitter: No job jar file set. User classes may not be found. See Job or Job#setJar(String).
> >> >     16/01/15 21:54:22 INFO input.FileInputFormat: Total input paths to process : 0
> >> >     16/01/15 21:54:22 INFO mapreduce.JobSubmitter: number of splits:0
> >> >     16/01/15 21:54:22 INFO mapreduce.JobSubmitter: Submitting tokens for job: job_1452905418747_0002
> >> >     16/01/15 21:54:22 INFO mapred.YARNRunner: Job jar is not present. Not adding any jar to the list of resources.
> >> >     16/01/15 21:54:22 INFO impl.YarnClientImpl: Submitted application application_1452905418747_0002
> >> >     16/01/15 21:54:22 INFO mapreduce.Job: The url to track the job: http://localhost:8088/proxy/application_1452905418747_0002/
> >> >     16/01/15 21:54:22 INFO mapreduce.Job: Running job: job_1452905418747_0002
> >> >     16/01/15 21:54:32 INFO mapreduce.Job: Job job_1452905418747_0002 running in uber mode : false
> >> >     16/01/15 21:54:32 INFO mapreduce.Job:  map 0% reduce 0%
> >> >     16/01/15 21:54:36 INFO mapreduce.Job: Task Id : attempt_1452905418747_0002_r_000000_0, Status : FAILED
> >> >     Exception from container-launch:
> >> > org.apache.hadoop.util.Shell$ExitCodeException:
> >> >     org.apache.hadoop.util.Shell$ExitCodeException:
> >> >         at org.apache.hadoop.util.Shell.runCommand(Shell.java:505)
> >> >         at org.apache.hadoop.util.Shell.run(Shell.java:418)
> >> >         at org.apache.hadoop.util.Shell$ShellCommandExecutor.execute(Shell.java:650)
> >> >         at org.apache.hadoop.yarn.server.nodemanager.DefaultContainerExecutor.launchContainer(DefaultContainerExecutor.java:195)
> >> >         at org.apache.hadoop.yarn.server.nodemanager.containermanager.launcher.ContainerLaunch.call(ContainerLaunch.java:300)
> >> >         at org.apache.hadoop.yarn.server.nodemanager.containermanager.launcher.ContainerLaunch.call(ContainerLaunch.java:81)
> >> >         at java.util.concurrent.FutureTask.run(FutureTask.java:262)
> >> >         at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
> >> >         at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
> >> >         at java.lang.Thread.run(Thread.java:745)
> >> >
> >> >     Container exited with a non-zero exit code 1
> >> >
> >> >     16/01/15 21:54:41 INFO mapreduce.Job: Task Id : attempt_1452905418747_0002_r_000000_1, Status : FAILED
> >> >     Exception from container-launch:
> >> > org.apache.hadoop.util.Shell$ExitCodeException:
> >> >     org.apache.hadoop.util.Shell$ExitCodeException:
> >> >         at org.apache.hadoop.util.Shell.runCommand(Shell.java:505)
> >> >         at org.apache.hadoop.util.Shell.run(Shell.java:418)
> >> >         at org.apache.hadoop.util.Shell$ShellCommandExecutor.execute(Shell.java:650)
> >> >         at org.apache.hadoop.yarn.server.nodemanager.DefaultContainerExecutor.launchContainer(DefaultContainerExecutor.java:195)
> >> >         at org.apache.hadoop.yarn.server.nodemanager.containermanager.launcher.ContainerLaunch.call(ContainerLaunch.java:300)
> >> >         at org.apache.hadoop.yarn.server.nodemanager.containermanager.launcher.ContainerLaunch.call(ContainerLaunch.java:81)
> >> >         at java.util.concurrent.FutureTask.run(FutureTask.java:262)
> >> >         at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
> >> >         at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
> >> >         at java.lang.Thread.run(Thread.java:745)
> >> >
> >> >     Container exited with a non-zero exit code 1
> >> >
> >> >     16/01/15 21:54:46 INFO mapreduce.Job: Task Id : attempt_1452905418747_0002_r_000000_2, Status : FAILED
> >> >     Exception from container-launch:
> >> > org.apache.hadoop.util.Shell$ExitCodeException:
> >> >     org.apache.hadoop.util.Shell$ExitCodeException:
> >> >         at org.apache.hadoop.util.Shell.runCommand(Shell.java:505)
> >> >         at org.apache.hadoop.util.Shell.run(Shell.java:418)
> >> >         at org.apache.hadoop.util.Shell$ShellCommandExecutor.execute(Shell.java:650)
> >> >         at org.apache.hadoop.yarn.server.nodemanager.DefaultContainerExecutor.launchContainer(DefaultContainerExecutor.java:195)
> >> >         at org.apache.hadoop.yarn.server.nodemanager.containermanager.launcher.ContainerLaunch.call(ContainerLaunch.java:300)
> >> >         at org.apache.hadoop.yarn.server.nodemanager.containermanager.launcher.ContainerLaunch.call(ContainerLaunch.java:81)
> >> >         at java.util.concurrent.FutureTask.run(FutureTask.java:262)
> >> >         at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
> >> >         at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
> >> >         at java.lang.Thread.run(Thread.java:745)
> >> >
> >> >     Container exited with a non-zero exit code 1
> >> >
> >> >     16/01/15 21:54:51 INFO mapreduce.Job:  map 0% reduce 100%
> >> >     16/01/15 21:54:52 INFO mapreduce.Job: Job job_1452905418747_0002 failed with state FAILED due to: Task failed task_1452905418747_0002_r_000000
> >> >     Job failed as tasks failed. failedMaps:0 failedReduces:1
> >> >
> >> >     16/01/15 21:54:52 INFO mapreduce.Job: Counters: 10
> >> >         Job Counters
> >> >             Failed reduce tasks=4
> >> >             Launched reduce tasks=4
> >> >             Total time spent by all maps in occupied slots (ms)=0
> >> >             Total time spent by all reduces in occupied slots (ms)=11882
> >> >             Total time spent by all reduce tasks (ms)=5941
> >> >             Total vcore-seconds taken by all reduce tasks=5941
> >> >             Total megabyte-seconds taken by all reduce tasks=3041792
> >> >         Map-Reduce Framework
> >> >             CPU time spent (ms)=0
> >> >             Physical memory (bytes) snapshot=0
> >> >             Virtual memory (bytes) snapshot=0
> >> >
> >> > I switched mapreduce.framework.name from:
> >> >
> >> > <property>
> >> > <name>mapreduce.framework.name</name>
> >> > <value>yarn</value>
> >> > </property>
> >> >
> >> > To:
> >> >
> >> > <property>
> >> > <name>mapreduce.framework.name</name>
> >> > <value>local</value>
> >> > </property>
> >> >
> >> > and grep and other mapreduce jobs are working again.
> >> >
> >> > I don't understand why it doesn't work with the "yarn" value in
> >> > mapreduce.framework.name, but does work with "local".
> >> >
> >> > Any idea how to fix this without switching the value of
> >> > mapreduce.framework.name?
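One guess, based only on the error text and not confirmed by these logs: the bare "256" in
"Could not find or load main class 256" is what java prints when a plain number is passed
where a JVM option was expected, which can happen if one of the per-container opts
properties is set to "256" instead of something like "-Xmx256m". A minimal mapred-site.xml
sketch of how those properties normally look (these are the standard Hadoop 2.x property
names; whether they are actually set this way on this cluster is an assumption):

<property>
<name>mapreduce.map.java.opts</name>
<value>-Xmx256m</value>
</property>
<property>
<name>mapreduce.reduce.java.opts</name>
<value>-Xmx256m</value>
</property>
<property>
<name>yarn.app.mapreduce.am.command-opts</name>
<value>-Xmx256m</value>
</property>

If any of these holds just a number, the container launch command breaks under YARN, while
the local runner never launches separate JVMs with those opts, which would also explain why
mapreduce.framework.name=local works.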
> >> >
> >> >
> >> >
> >> > Bye!
> >> > Jose
> >> >
> >> >
> >> >
> >
> >
>
> ---------------------------------------------------------------------
> To unsubscribe, e-mail: user-unsubscribe@hadoop.apache.org
> For additional commands, e-mail: user-help@hadoop.apache.org
>
>

Re: Can't run hadoop examples with YARN Single node cluster

Posted by Gaurav Gupta <ga...@gmail.com>.
Hi,

I think YARN is not up and running, so you are not able to run the jobs.
Can you please verify it?
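
For example, something along these lines should confirm it on a default
single-node setup (this assumes the standard Hadoop 2.x commands and that
everything runs on the same box):

    jps                                # ResourceManager and NodeManager should both be listed
    $HADOOP_HOME/bin/yarn node -list   # should report one node in RUNNING state

The ResourceManager web UI at http://localhost:8088 (the proxy address shown
in your own job output) is another quick check.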

Thanks



Re: Can't run hadoop examples with YARN Single node cluster

Posted by Namikaze Minato <ll...@gmail.com>.
Hi again José Luis.

Sorry, I was specifically talking about the
"org.apache.hadoop.util.Shell$ExitCodeException" error.
Can you provide the logs for a wordcount please?
Also, do you have yarn running?
I have never tweaked the mapreduce.framework.name value, so I might
not be able to help you further, but these pieces of information might
help the people who can.

Regards,
LLoyd
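
P.S. If it helps, one way to pull the complete container logs for the failed
attempts is the yarn logs command, using the application id from your output
(this assumes log aggregation is enabled; if it is not, the same stderr files
sit under the NodeManager's local log directories):

    $HADOOP_HOME/bin/yarn logs -applicationId application_1452905418747_0001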

On 17 January 2016 at 00:07, José Luis Larroque <la...@gmail.com> wrote:
> Thanks for your answer Lloyd!
>
> I'm not sure about that. Wordcount, of the same jar, gives me the same
> error, and also my own map reduce job.
>
> I believe that the "Error: Could not find or load main class 256" error is
> happening because it's not finding the mapper, but I'm not sure.
>
> Bye!
> Jose
>
>
> 2016-01-16 19:41 GMT-03:00 Namikaze Minato <ll...@gmail.com>:
>>
>> Hello José Luis Larroque.
>>
>> Your problem here is only that grep is returning a non-zero exit code
>> when no occurrences are found.
>> I know that for spark-streaming, using the option "-jobconf
>> stream.non.zero.exit.is.failure=false" solves the problem, but I don't
>> know how hadoop-mapreduce-examples-2.4.0.jar handles this.
>>
>> Regards,
>> LLoyd
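
For what it's worth, a sketch of how such a switch is usually passed to a
Hadoop Streaming job, assuming the stock 2.4.0 streaming jar location and an
arbitrary output directory (not verified against this cluster, and it is
unclear whether the examples jar honours the property at all, as noted above):

    $HADOOP_HOME/bin/hadoop jar $HADOOP_HOME/share/hadoop/tools/lib/hadoop-streaming-2.4.0.jar \
        -D stream.non.zero.exit.is.failure=false \
        -input input -output output_stream \
        -mapper /bin/cat -reducer /usr/bin/wc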

---------------------------------------------------------------------
To unsubscribe, e-mail: user-unsubscribe@hadoop.apache.org
For additional commands, e-mail: user-help@hadoop.apache.org


Re: Can't run hadoop examples with YARN Single node cluster

Posted by Namikaze Minato <ll...@gmail.com>.
Hi again José Luis.

Sorry, I was specifically talking about the
"org.apache.hadoop.util.Shell$ExitCodeException" error.
Can you provide the logs for a wordcount, please?
Also, do you have YARN running?
I have never tweaked the mapreduce.framework.name value, so I might
not be able to help you further, but these pieces of information might
help the people who can.
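
For a single-node 2.4.0 install, something along these lines should collect
that information (the output directory and application id below are
placeholders, and "yarn logs" assumes log aggregation is enabled):

    # confirm the ResourceManager and NodeManager are actually running
    jps
    $HADOOP_HOME/bin/yarn node -list

    # run the bundled wordcount example and keep the client-side output
    $HADOOP_HOME/bin/yarn jar /usr/local/hadoop/share/hadoop/yarn/lib/hadoop-mapreduce-examples-2.4.0.jar wordcount input output-wordcount

    # afterwards, fetch the container logs of the failed application
    $HADOOP_HOME/bin/yarn logs -applicationId application_XXXXXXXXXXXXX_XXXX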

Regards,
LLoyd

On 17 January 2016 at 00:07, José Luis Larroque <la...@gmail.com> wrote:
> Thanks for your answer Lloyd!
>
> I'm not sure about that. Wordcount, of the same jar, gives me the same
> error, and also my own map reduce job.
>
> I believe that the " Error: Could not find or load main class 256" error is
> happening because it's not finding the mapper, but i'm not sure.
>
> Bye!
> Jose
>
>
> 2016-01-16 19:41 GMT-03:00 Namikaze Minato <ll...@gmail.com>:
>>
>> Hello José Luis Larroque.
>>
>> Your problem here is only that grep is returning a non-zero exit code
>> when no occurences are found.
>> I know that for spark-streaming, using the option "-jobconf
>> stream.non.zero.exit.is.failure=false" solves the problem, but I don't
>> know how hadoop-mapreduce-examples-2.4.0.jar handles this.
>>
>> Regards,
>> LLoyd
>>
>> On 16 January 2016 at 19:07, José Luis Larroque <la...@gmail.com>
>> wrote:
>> > Hi there, i'm currently running a single node yarn cluster, hadoop
>> > 2.4.0,
>> > and for some reason, i can't execute even a example that comes with map
>> > reduce (grep, wordcount, etc). With this line i execute grep:
>> >
>> >     $HADOOP_HOME/bin/yarn jar
>> >
>> > /usr/local/hadoop/share/hadoop/yarn/lib/hadoop-mapreduce-examples-2.4.0.jar
>> > grep input output2 'dfs[a-z.]+'
>> >
>> > This cluster was previosly running Giraph programs, but rigth now i need
>> > a
>> > Map Reduce application, so i switched it back to pure yarn.
>> >
>> > All failed containers had the same error:
>> >
>> >     Container: container_1452447718890_0001_01_000002 on localhost_37976
>> >
>> > ======================================================================
>> >     LogType: stderr
>> >     LogLength: 45
>> >     Log Contents:
>> >     Error: Could not find or load main class 256
>> >
>> > Main logs:
>> >
>> >     SLF4J: Class path contains multiple SLF4J bindings.
>> >     SLF4J: Found binding in
>> >
>> > [jar:file:/usr/local/hadoop/share/hadoop/common/lib/slf4j-log4j12-1.7.5.jar!/org/slf4j/impl/StaticLoggerBinder.class]
>> >     SLF4J: Found binding in
>> >
>> > [jar:file:/usr/local/hadoop/share/hadoop/yarn/lib/giraph-examples-1.1.0-for-hadoop-2.4.0-jar-with-dependencies.jar!/org/slf4j/impl/StaticLoggerBinder.class]
>> >     SLF4J: Found binding in
>> >
>> > [jar:file:/usr/local/hadoop/share/hadoop/mapreduce/giraph-examples-1.1.0-for-hadoop-2.4.0-jar-with-dependencies.jar!/org/slf4j/impl/StaticLoggerBinder.class]
>> >     SLF4J: See http://www.slf4j.org/codes.html#multiple_bindings for an
>> > explanation.
>> >     SLF4J: Actual binding is of type [org.slf4j.impl.Log4jLoggerFactory]
>> >     16/01/15 21:53:50 WARN util.NativeCodeLoader: Unable to load
>> > native-hadoop library for your platform... using builtin-java classes
>> > where
>> > applicable
>> >     16/01/15 21:53:50 INFO client.RMProxy: Connecting to ResourceManager
>> > at
>> > hdnode01/192.168.0.10:8050
>> >     16/01/15 21:53:51 WARN mapreduce.JobSubmitter: No job jar file set.
>> > User classes may not be found. See Job or Job#setJar(String).
>> >     16/01/15 21:53:51 INFO input.FileInputFormat: Total input paths to
>> > process : 1
>> >     16/01/15 21:53:52 INFO mapreduce.JobSubmitter: number of splits:1
>> >     16/01/15 21:53:52 INFO mapreduce.JobSubmitter: Submitting tokens for
>> > job: job_1452905418747_0001
>> >     16/01/15 21:53:53 INFO mapred.YARNRunner: Job jar is not present.
>> > Not
>> > adding any jar to the list of resources.
>> >     16/01/15 21:53:53 INFO impl.YarnClientImpl: Submitted application
>> > application_1452905418747_0001
>> >     16/01/15 21:53:54 INFO mapreduce.Job: The url to track the job:
>> > http://localhost:8088/proxy/application_1452905418747_0001/
>> >     16/01/15 21:53:54 INFO mapreduce.Job: Running job:
>> > job_1452905418747_0001
>> >     16/01/15 21:54:04 INFO mapreduce.Job: Job job_1452905418747_0001
>> > running
>> > in uber mode : false
>> >     16/01/15 21:54:04 INFO mapreduce.Job:  map 0% reduce 0%
>> >     16/01/15 21:54:07 INFO mapreduce.Job: Task Id :
>> > attempt_1452905418747_0001_m_000000_0, Status : FAILED
>> >     Exception from container-launch:
>> > org.apache.hadoop.util.Shell$ExitCodeException:
>> >     org.apache.hadoop.util.Shell$ExitCodeException:
>> >         at org.apache.hadoop.util.Shell.runCommand(Shell.java:505)
>> >         at org.apache.hadoop.util.Shell.run(Shell.java:418)
>> >         at
>> >
>> > org.apache.hadoop.util.Shell$ShellCommandExecutor.execute(Shell.java:650)
>> >         at
>> >
>> > org.apache.hadoop.yarn.server.nodemanager.DefaultContainerExecutor.launchContainer(DefaultContainerExecutor.java:195)
>> >         at
>> >
>> > org.apache.hadoop.yarn.server.nodemanager.containermanager.launcher.ContainerLaunch.call(ContainerLaunch.java:300)
>> >         at
>> >
>> > org.apache.hadoop.yarn.server.nodemanager.containermanager.launcher.ContainerLaunch.call(ContainerLaunch.java:81)
>> >         at java.util.concurrent.FutureTask.run(FutureTask.java:262)
>> >         at
>> >
>> > java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
>> >         at
>> >
>> > java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
>> >         at java.lang.Thread.run(Thread.java:745)
>> >
>> >
>> >     Container exited with a non-zero exit code 1
>> >
>> >     16/01/15 21:54:11 INFO mapreduce.Job: Task Id :
>> > attempt_1452905418747_0001_m_000000_1, Status : FAILED
>> >     Exception from container-launch:
>> > org.apache.hadoop.util.Shell$ExitCodeException:
>> >     org.apache.hadoop.util.Shell$ExitCodeException:
>> >         at org.apache.hadoop.util.Shell.runCommand(Shell.java:505)
>> >         at org.apache.hadoop.util.Shell.run(Shell.java:418)
>> >         at
>> >
>> > org.apache.hadoop.util.Shell$ShellCommandExecutor.execute(Shell.java:650)
>> >         at
>> >
>> > org.apache.hadoop.yarn.server.nodemanager.DefaultContainerExecutor.launchContainer(DefaultContainerExecutor.java:195)
>> >         at
>> >
>> > org.apache.hadoop.yarn.server.nodemanager.containermanager.launcher.ContainerLaunch.call(ContainerLaunch.java:300)
>> >         at
>> >
>> > org.apache.hadoop.yarn.server.nodemanager.containermanager.launcher.ContainerLaunch.call(ContainerLaunch.java:81)
>> >         at java.util.concurrent.FutureTask.run(FutureTask.java:262)
>> >         at
>> >
>> > java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
>> >         at
>> >
>> > java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
>> >         at java.lang.Thread.run(Thread.java:745)
>> >
>> >
>> >     Container exited with a non-zero exit code 1
>> >
>> >     16/01/15 21:54:15 INFO mapreduce.Job: Task Id :
>> > attempt_1452905418747_0001_m_000000_2, Status : FAILED
>> >     Exception from container-launch:
>> > org.apache.hadoop.util.Shell$ExitCodeException:
>> >     org.apache.hadoop.util.Shell$ExitCodeException:
>> >         at org.apache.hadoop.util.Shell.runCommand(Shell.java:505)
>> >         at org.apache.hadoop.util.Shell.run(Shell.java:418)
>> >         at
>> >
>> > org.apache.hadoop.util.Shell$ShellCommandExecutor.execute(Shell.java:650)
>> >         at
>> >
>> > org.apache.hadoop.yarn.server.nodemanager.DefaultContainerExecutor.launchContainer(DefaultContainerExecutor.java:195)
>> >         at
>> >
>> > org.apache.hadoop.yarn.server.nodemanager.containermanager.launcher.ContainerLaunch.call(ContainerLaunch.java:300)
>> >         at
>> >
>> > org.apache.hadoop.yarn.server.nodemanager.containermanager.launcher.ContainerLaunch.call(ContainerLaunch.java:81)
>> >         at java.util.concurrent.FutureTask.run(FutureTask.java:262)
>> >         at
>> >
>> > java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
>> >         at
>> >
>> > java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
>> >         at java.lang.Thread.run(Thread.java:745)
>> >
>> >
>> >     Container exited with a non-zero exit code 1
>> >
>> >     16/01/15 21:54:21 INFO mapreduce.Job:  map 100% reduce 100%
>> >     16/01/15 21:54:21 INFO mapreduce.Job: Job job_1452905418747_0001
>> > failed
>> > with state FAILED due to: Task failed task_1452905418747_0001_m_000000
>> >     Job failed as tasks failed. failedMaps:1 failedReduces:0
>> >
>> >     16/01/15 21:54:21 INFO mapreduce.Job: Counters: 12
>> >         Job Counters
>> >             Failed map tasks=4
>> >             Launched map tasks=4
>> >             Other local map tasks=3
>> >             Data-local map tasks=1
>> >             Total time spent by all maps in occupied slots (ms)=15548
>> >             Total time spent by all reduces in occupied slots (ms)=0
>> >             Total time spent by all map tasks (ms)=7774
>> >             Total vcore-seconds taken by all map tasks=7774
>> >             Total megabyte-seconds taken by all map tasks=3980288
>> >         Map-Reduce Framework
>> >             CPU time spent (ms)=0
>> >             Physical memory (bytes) snapshot=0
>> >             Virtual memory (bytes) snapshot=0
>> >     16/01/15 21:54:21 INFO client.RMProxy: Connecting to ResourceManager
>> > at
>> > hdnode01/192.168.0.10:8050
>> >     16/01/15 21:54:22 WARN mapreduce.JobSubmitter: No job jar file set.
>> > User classes may not be found. See Job or Job#setJar(String).
>> >     16/01/15 21:54:22 INFO input.FileInputFormat: Total input paths to
>> > process : 0
>> >     16/01/15 21:54:22 INFO mapreduce.JobSubmitter: number of splits:0
>> >     16/01/15 21:54:22 INFO mapreduce.JobSubmitter: Submitting tokens for
>> > job: job_1452905418747_0002
>> >     16/01/15 21:54:22 INFO mapred.YARNRunner: Job jar is not present.
>> > Not
>> > adding any jar to the list of resources.
>> >     16/01/15 21:54:22 INFO impl.YarnClientImpl: Submitted application
>> > application_1452905418747_0002
>> >     16/01/15 21:54:22 INFO mapreduce.Job: The url to track the job:
>> > http://localhost:8088/proxy/application_1452905418747_0002/
>> >     16/01/15 21:54:22 INFO mapreduce.Job: Running job:
>> > job_1452905418747_0002
>> >     16/01/15 21:54:32 INFO mapreduce.Job: Job job_1452905418747_0002
>> > running
>> > in uber mode : false
>> >     16/01/15 21:54:32 INFO mapreduce.Job:  map 0% reduce 0%
>> >     16/01/15 21:54:36 INFO mapreduce.Job: Task Id :
>> > attempt_1452905418747_0002_r_000000_0, Status : FAILED
>> >     Exception from container-launch:
>> > org.apache.hadoop.util.Shell$ExitCodeException:
>> >     org.apache.hadoop.util.Shell$ExitCodeException:
>> >         at org.apache.hadoop.util.Shell.runCommand(Shell.java:505)
>> >         at org.apache.hadoop.util.Shell.run(Shell.java:418)
>> >         at
>> >
>> > org.apache.hadoop.util.Shell$ShellCommandExecutor.execute(Shell.java:650)
>> >         at
>> >
>> > org.apache.hadoop.yarn.server.nodemanager.DefaultContainerExecutor.launchContainer(DefaultContainerExecutor.java:195)
>> >         at
>> >
>> > org.apache.hadoop.yarn.server.nodemanager.containermanager.launcher.ContainerLaunch.call(ContainerLaunch.java:300)
>> >         at
>> >
>> > org.apache.hadoop.yarn.server.nodemanager.containermanager.launcher.ContainerLaunch.call(ContainerLaunch.java:81)
>> >         at java.util.concurrent.FutureTask.run(FutureTask.java:262)
>> >         at
>> >
>> > java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
>> >         at
>> >
>> > java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
>> >         at java.lang.Thread.run(Thread.java:745)
>> >
>> >
>> >     Container exited with a non-zero exit code 1
>> >
>> >     16/01/15 21:54:41 INFO mapreduce.Job: Task Id :
>> > attempt_1452905418747_0002_r_000000_1, Status : FAILED
>> >     Exception from container-launch:
>> > org.apache.hadoop.util.Shell$ExitCodeException:
>> >     org.apache.hadoop.util.Shell$ExitCodeException:
>> >         at org.apache.hadoop.util.Shell.runCommand(Shell.java:505)
>> >         at org.apache.hadoop.util.Shell.run(Shell.java:418)
>> >         at
>> >
>> > org.apache.hadoop.util.Shell$ShellCommandExecutor.execute(Shell.java:650)
>> >         at
>> >
>> > org.apache.hadoop.yarn.server.nodemanager.DefaultContainerExecutor.launchContainer(DefaultContainerExecutor.java:195)
>> >         at
>> >
>> > org.apache.hadoop.yarn.server.nodemanager.containermanager.launcher.ContainerLaunch.call(ContainerLaunch.java:300)
>> >         at
>> >
>> > org.apache.hadoop.yarn.server.nodemanager.containermanager.launcher.ContainerLaunch.call(ContainerLaunch.java:81)
>> >         at java.util.concurrent.FutureTask.run(FutureTask.java:262)
>> >         at
>> >
>> > java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
>> >         at
>> >
>> > java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
>> >         at java.lang.Thread.run(Thread.java:745)
>> >
>> >
>> >     Container exited with a non-zero exit code 1
>> >
>> >     16/01/15 21:54:46 INFO mapreduce.Job: Task Id :
>> > attempt_1452905418747_0002_r_000000_2, Status : FAILED
>> >     Exception from container-launch:
>> > org.apache.hadoop.util.Shell$ExitCodeException:
>> >     org.apache.hadoop.util.Shell$ExitCodeException:
>> >         at org.apache.hadoop.util.Shell.runCommand(Shell.java:505)
>> >         at org.apache.hadoop.util.Shell.run(Shell.java:418)
>> >         at
>> >
>> > org.apache.hadoop.util.Shell$ShellCommandExecutor.execute(Shell.java:650)
>> >         at
>> >
>> > org.apache.hadoop.yarn.server.nodemanager.DefaultContainerExecutor.launchContainer(DefaultContainerExecutor.java:195)
>> >         at
>> >
>> > org.apache.hadoop.yarn.server.nodemanager.containermanager.launcher.ContainerLaunch.call(ContainerLaunch.java:300)
>> >         at
>> >
>> > org.apache.hadoop.yarn.server.nodemanager.containermanager.launcher.ContainerLaunch.call(ContainerLaunch.java:81)
>> >         at java.util.concurrent.FutureTask.run(FutureTask.java:262)
>> >         at
>> >
>> > java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
>> >         at
>> >
>> > java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
>> >         at java.lang.Thread.run(Thread.java:745)
>> >
>> >
>> >     Container exited with a non-zero exit code 1
>> >
>> >     16/01/15 21:54:51 INFO mapreduce.Job:  map 0% reduce 100%
>> >     16/01/15 21:54:52 INFO mapreduce.Job: Job job_1452905418747_0002
>> > failed
>> > with state FAILED due to: Task failed task_1452905418747_0002_r_000000
>> >     Job failed as tasks failed. failedMaps:0 failedReduces:1
>> >
>> >     16/01/15 21:54:52 INFO mapreduce.Job: Counters: 10
>> >         Job Counters
>> >             Failed reduce tasks=4
>> >             Launched reduce tasks=4
>> >             Total time spent by all maps in occupied slots (ms)=0
>> >             Total time spent by all reduces in occupied slots (ms)=11882
>> >             Total time spent by all reduce tasks (ms)=5941
>> >             Total vcore-seconds taken by all reduce tasks=5941
>> >             Total megabyte-seconds taken by all reduce tasks=3041792
>> >         Map-Reduce Framework
>> >             CPU time spent (ms)=0
>> >             Physical memory (bytes) snapshot=0
>> >             Virtual memory (bytes) snapshot=0
>> >
>> > I switched mapreduce.framework.name from:
>> >
>> > <property>
>> > <name>mapreduce.framework.name</name>
>> > <value>yarn</value>
>> > </property>
>> >
>> > To:
>> >
>> > <property>
>> > <name>mapreduce.framework.name</name>
>> > <value>local</value>
>> > </property>
>> >
>> > and grep and other mapreduce jobs are working again.
>> >
>> > I don't understand why with "yarn" value in mapreduce.framework.name
>> > doesn't
>> > work, and without it (using "local") does.
>> >
>> > Any idea how to fix this without switching the value of
>> > mapreduce.framework.name?
>> >
>> >
>> >
>> > Bye!
>> > Jose
>> >
>> >
>> >
>
>

---------------------------------------------------------------------
To unsubscribe, e-mail: user-unsubscribe@hadoop.apache.org
For additional commands, e-mail: user-help@hadoop.apache.org


Re: Can't run hadoop examples with YARN Single node cluster

Posted by José Luis Larroque <la...@gmail.com>.
Thanks for your answer, Lloyd!

I'm not sure about that. Wordcount, from the same jar, gives me the same
error, and so does my own MapReduce job.

I believe the "Error: Could not find or load main class 256" error is
happening because it's not finding the mapper, but I'm not sure.
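
As a point of reference, the JVM prints exactly that message whenever the
first non-option token on its command line cannot be resolved as a class,
so any bare value such as 256 ending up in the container launch command
would reproduce it. A minimal illustration (hypothetical, just to show
where the message comes from, not taken from these logs):

    # java treats the bare token "256" as a main class name and fails to load it
    java 256
    # prints: Error: Could not find or load main class 256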

Bye!
Jose


2016-01-16 19:41 GMT-03:00 Namikaze Minato <ll...@gmail.com>:

> Hello José Luis Larroque.
>
> Your problem here is only that grep is returning a non-zero exit code
> when no occurences are found.
> I know that for spark-streaming, using the option "-jobconf
> stream.non.zero.exit.is.failure=false" solves the problem, but I don't
> know how hadoop-mapreduce-examples-2.4.0.jar handles this.
>
> Regards,
> LLoyd
>
> On 16 January 2016 at 19:07, José Luis Larroque <la...@gmail.com>
> wrote:
> > Hi there, i'm currently running a single node yarn cluster, hadoop 2.4.0,
> > and for some reason, i can't execute even a example that comes with map
> > reduce (grep, wordcount, etc). With this line i execute grep:
> >
> >     $HADOOP_HOME/bin/yarn jar
> >
> /usr/local/hadoop/share/hadoop/yarn/lib/hadoop-mapreduce-examples-2.4.0.jar
> > grep input output2 'dfs[a-z.]+'
> >
> > This cluster was previosly running Giraph programs, but rigth now i need
> a
> > Map Reduce application, so i switched it back to pure yarn.
> >
> > All failed containers had the same error:
> >
> >     Container: container_1452447718890_0001_01_000002 on localhost_37976
> >
>  ======================================================================
> >     LogType: stderr
> >     LogLength: 45
> >     Log Contents:
> >     Error: Could not find or load main class 256
> >
> > Main logs:
> >
> >     SLF4J: Class path contains multiple SLF4J bindings.
> >     SLF4J: Found binding in
> >
> [jar:file:/usr/local/hadoop/share/hadoop/common/lib/slf4j-log4j12-1.7.5.jar!/org/slf4j/impl/StaticLoggerBinder.class]
> >     SLF4J: Found binding in
> >
> [jar:file:/usr/local/hadoop/share/hadoop/yarn/lib/giraph-examples-1.1.0-for-hadoop-2.4.0-jar-with-dependencies.jar!/org/slf4j/impl/StaticLoggerBinder.class]
> >     SLF4J: Found binding in
> >
> [jar:file:/usr/local/hadoop/share/hadoop/mapreduce/giraph-examples-1.1.0-for-hadoop-2.4.0-jar-with-dependencies.jar!/org/slf4j/impl/StaticLoggerBinder.class]
> >     SLF4J: See http://www.slf4j.org/codes.html#multiple_bindings for an
> > explanation.
> >     SLF4J: Actual binding is of type [org.slf4j.impl.Log4jLoggerFactory]
> >     16/01/15 21:53:50 WARN util.NativeCodeLoader: Unable to load
> > native-hadoop library for your platform... using builtin-java classes
> where
> > applicable
> >     16/01/15 21:53:50 INFO client.RMProxy: Connecting to ResourceManager
> at
> > hdnode01/192.168.0.10:8050
> >     16/01/15 21:53:51 WARN mapreduce.JobSubmitter: No job jar file set.
> > User classes may not be found. See Job or Job#setJar(String).
> >     16/01/15 21:53:51 INFO input.FileInputFormat: Total input paths to
> > process : 1
> >     16/01/15 21:53:52 INFO mapreduce.JobSubmitter: number of splits:1
> >     16/01/15 21:53:52 INFO mapreduce.JobSubmitter: Submitting tokens for
> > job: job_1452905418747_0001
> >     16/01/15 21:53:53 INFO mapred.YARNRunner: Job jar is not present. Not
> > adding any jar to the list of resources.
> >     16/01/15 21:53:53 INFO impl.YarnClientImpl: Submitted application
> > application_1452905418747_0001
> >     16/01/15 21:53:54 INFO mapreduce.Job: The url to track the job:
> > http://localhost:8088/proxy/application_1452905418747_0001/
> >     16/01/15 21:53:54 INFO mapreduce.Job: Running job:
> > job_1452905418747_0001
> >     16/01/15 21:54:04 INFO mapreduce.Job: Job job_1452905418747_0001
> running
> > in uber mode : false
> >     16/01/15 21:54:04 INFO mapreduce.Job:  map 0% reduce 0%
> >     16/01/15 21:54:07 INFO mapreduce.Job: Task Id :
> > attempt_1452905418747_0001_m_000000_0, Status : FAILED
> >     Exception from container-launch:
> > org.apache.hadoop.util.Shell$ExitCodeException:
> >     org.apache.hadoop.util.Shell$ExitCodeException:
> >         at org.apache.hadoop.util.Shell.runCommand(Shell.java:505)
> >         at org.apache.hadoop.util.Shell.run(Shell.java:418)
> >         at
> > org.apache.hadoop.util.Shell$ShellCommandExecutor.execute(Shell.java:650)
> >         at
> >
> org.apache.hadoop.yarn.server.nodemanager.DefaultContainerExecutor.launchContainer(DefaultContainerExecutor.java:195)
> >         at
> >
> org.apache.hadoop.yarn.server.nodemanager.containermanager.launcher.ContainerLaunch.call(ContainerLaunch.java:300)
> >         at
> >
> org.apache.hadoop.yarn.server.nodemanager.containermanager.launcher.ContainerLaunch.call(ContainerLaunch.java:81)
> >         at java.util.concurrent.FutureTask.run(FutureTask.java:262)
> >         at
> >
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
> >         at
> >
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
> >         at java.lang.Thread.run(Thread.java:745)
> >
> >
> >     Container exited with a non-zero exit code 1
> >
> >     16/01/15 21:54:11 INFO mapreduce.Job: Task Id :
> > attempt_1452905418747_0001_m_000000_1, Status : FAILED
> >     Exception from container-launch:
> > org.apache.hadoop.util.Shell$ExitCodeException:
> >     org.apache.hadoop.util.Shell$ExitCodeException:
> >         at org.apache.hadoop.util.Shell.runCommand(Shell.java:505)
> >         at org.apache.hadoop.util.Shell.run(Shell.java:418)
> >         at
> > org.apache.hadoop.util.Shell$ShellCommandExecutor.execute(Shell.java:650)
> >         at
> >
> org.apache.hadoop.yarn.server.nodemanager.DefaultContainerExecutor.launchContainer(DefaultContainerExecutor.java:195)
> >         at
> >
> org.apache.hadoop.yarn.server.nodemanager.containermanager.launcher.ContainerLaunch.call(ContainerLaunch.java:300)
> >         at
> >
> org.apache.hadoop.yarn.server.nodemanager.containermanager.launcher.ContainerLaunch.call(ContainerLaunch.java:81)
> >         at java.util.concurrent.FutureTask.run(FutureTask.java:262)
> >         at
> >
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
> >         at
> >
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
> >         at java.lang.Thread.run(Thread.java:745)
> >
> >
> >     Container exited with a non-zero exit code 1
> >
> >     16/01/15 21:54:15 INFO mapreduce.Job: Task Id :
> > attempt_1452905418747_0001_m_000000_2, Status : FAILED
> >     Exception from container-launch:
> > org.apache.hadoop.util.Shell$ExitCodeException:
> >     org.apache.hadoop.util.Shell$ExitCodeException:
> >         at org.apache.hadoop.util.Shell.runCommand(Shell.java:505)
> >         at org.apache.hadoop.util.Shell.run(Shell.java:418)
> >         at
> > org.apache.hadoop.util.Shell$ShellCommandExecutor.execute(Shell.java:650)
> >         at
> >
> org.apache.hadoop.yarn.server.nodemanager.DefaultContainerExecutor.launchContainer(DefaultContainerExecutor.java:195)
> >         at
> >
> org.apache.hadoop.yarn.server.nodemanager.containermanager.launcher.ContainerLaunch.call(ContainerLaunch.java:300)
> >         at
> >
> org.apache.hadoop.yarn.server.nodemanager.containermanager.launcher.ContainerLaunch.call(ContainerLaunch.java:81)
> >         at java.util.concurrent.FutureTask.run(FutureTask.java:262)
> >         at
> >
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
> >         at
> >
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
> >         at java.lang.Thread.run(Thread.java:745)
> >
> >
> >     Container exited with a non-zero exit code 1
> >
> >     16/01/15 21:54:21 INFO mapreduce.Job:  map 100% reduce 100%
> >     16/01/15 21:54:21 INFO mapreduce.Job: Job job_1452905418747_0001
> failed
> > with state FAILED due to: Task failed task_1452905418747_0001_m_000000
> >     Job failed as tasks failed. failedMaps:1 failedReduces:0
> >
> >     16/01/15 21:54:21 INFO mapreduce.Job: Counters: 12
> >         Job Counters
> >             Failed map tasks=4
> >             Launched map tasks=4
> >             Other local map tasks=3
> >             Data-local map tasks=1
> >             Total time spent by all maps in occupied slots (ms)=15548
> >             Total time spent by all reduces in occupied slots (ms)=0
> >             Total time spent by all map tasks (ms)=7774
> >             Total vcore-seconds taken by all map tasks=7774
> >             Total megabyte-seconds taken by all map tasks=3980288
> >         Map-Reduce Framework
> >             CPU time spent (ms)=0
> >             Physical memory (bytes) snapshot=0
> >             Virtual memory (bytes) snapshot=0
> >     16/01/15 21:54:21 INFO client.RMProxy: Connecting to ResourceManager
> at
> > hdnode01/192.168.0.10:8050
> >     16/01/15 21:54:22 WARN mapreduce.JobSubmitter: No job jar file set.
> > User classes may not be found. See Job or Job#setJar(String).
> >     16/01/15 21:54:22 INFO input.FileInputFormat: Total input paths to
> > process : 0
> >     16/01/15 21:54:22 INFO mapreduce.JobSubmitter: number of splits:0
> >     16/01/15 21:54:22 INFO mapreduce.JobSubmitter: Submitting tokens for
> > job: job_1452905418747_0002
> >     16/01/15 21:54:22 INFO mapred.YARNRunner: Job jar is not present. Not
> > adding any jar to the list of resources.
> >     16/01/15 21:54:22 INFO impl.YarnClientImpl: Submitted application
> > application_1452905418747_0002
> >     16/01/15 21:54:22 INFO mapreduce.Job: The url to track the job:
> > http://localhost:8088/proxy/application_1452905418747_0002/
> >     16/01/15 21:54:22 INFO mapreduce.Job: Running job:
> > job_1452905418747_0002
> >     16/01/15 21:54:32 INFO mapreduce.Job: Job job_1452905418747_0002
> running
> > in uber mode : false
> >     16/01/15 21:54:32 INFO mapreduce.Job:  map 0% reduce 0%
> >     16/01/15 21:54:36 INFO mapreduce.Job: Task Id :
> > attempt_1452905418747_0002_r_000000_0, Status : FAILED
> >     Exception from container-launch:
> > org.apache.hadoop.util.Shell$ExitCodeException:
> >     org.apache.hadoop.util.Shell$ExitCodeException:
> >         at org.apache.hadoop.util.Shell.runCommand(Shell.java:505)
> >         at org.apache.hadoop.util.Shell.run(Shell.java:418)
> >         at
> > org.apache.hadoop.util.Shell$ShellCommandExecutor.execute(Shell.java:650)
> >         at
> >
> org.apache.hadoop.yarn.server.nodemanager.DefaultContainerExecutor.launchContainer(DefaultContainerExecutor.java:195)
> >         at
> >
> org.apache.hadoop.yarn.server.nodemanager.containermanager.launcher.ContainerLaunch.call(ContainerLaunch.java:300)
> >         at
> >
> org.apache.hadoop.yarn.server.nodemanager.containermanager.launcher.ContainerLaunch.call(ContainerLaunch.java:81)
> >         at java.util.concurrent.FutureTask.run(FutureTask.java:262)
> >         at
> >
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
> >         at
> >
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
> >         at java.lang.Thread.run(Thread.java:745)
> >
> >
> >     Container exited with a non-zero exit code 1
> >
> >     16/01/15 21:54:41 INFO mapreduce.Job: Task Id :
> > attempt_1452905418747_0002_r_000000_1, Status : FAILED
> >     Exception from container-launch:
> > org.apache.hadoop.util.Shell$ExitCodeException:
> >     org.apache.hadoop.util.Shell$ExitCodeException:
> >         at org.apache.hadoop.util.Shell.runCommand(Shell.java:505)
> >         at org.apache.hadoop.util.Shell.run(Shell.java:418)
> >         at
> > org.apache.hadoop.util.Shell$ShellCommandExecutor.execute(Shell.java:650)
> >         at
> >
> org.apache.hadoop.yarn.server.nodemanager.DefaultContainerExecutor.launchContainer(DefaultContainerExecutor.java:195)
> >         at
> >
> org.apache.hadoop.yarn.server.nodemanager.containermanager.launcher.ContainerLaunch.call(ContainerLaunch.java:300)
> >         at
> >
> org.apache.hadoop.yarn.server.nodemanager.containermanager.launcher.ContainerLaunch.call(ContainerLaunch.java:81)
> >         at java.util.concurrent.FutureTask.run(FutureTask.java:262)
> >         at
> >
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
> >         at
> >
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
> >         at java.lang.Thread.run(Thread.java:745)
> >
> >
> >     Container exited with a non-zero exit code 1
> >
> >     16/01/15 21:54:46 INFO mapreduce.Job: Task Id :
> > attempt_1452905418747_0002_r_000000_2, Status : FAILED
> >     Exception from container-launch:
> > org.apache.hadoop.util.Shell$ExitCodeException:
> >     org.apache.hadoop.util.Shell$ExitCodeException:
> >         at org.apache.hadoop.util.Shell.runCommand(Shell.java:505)
> >         at org.apache.hadoop.util.Shell.run(Shell.java:418)
> >         at
> > org.apache.hadoop.util.Shell$ShellCommandExecutor.execute(Shell.java:650)
> >         at
> >
> org.apache.hadoop.yarn.server.nodemanager.DefaultContainerExecutor.launchContainer(DefaultContainerExecutor.java:195)
> >         at
> >
> org.apache.hadoop.yarn.server.nodemanager.containermanager.launcher.ContainerLaunch.call(ContainerLaunch.java:300)
> >         at
> >
> org.apache.hadoop.yarn.server.nodemanager.containermanager.launcher.ContainerLaunch.call(ContainerLaunch.java:81)
> >         at java.util.concurrent.FutureTask.run(FutureTask.java:262)
> >         at
> >
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
> >         at
> >
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
> >         at java.lang.Thread.run(Thread.java:745)
> >
> >
> >     Container exited with a non-zero exit code 1
> >
> >     16/01/15 21:54:51 INFO mapreduce.Job:  map 0% reduce 100%
> >     16/01/15 21:54:52 INFO mapreduce.Job: Job job_1452905418747_0002
> failed
> > with state FAILED due to: Task failed task_1452905418747_0002_r_000000
> >     Job failed as tasks failed. failedMaps:0 failedReduces:1
> >
> >     16/01/15 21:54:52 INFO mapreduce.Job: Counters: 10
> >         Job Counters
> >             Failed reduce tasks=4
> >             Launched reduce tasks=4
> >             Total time spent by all maps in occupied slots (ms)=0
> >             Total time spent by all reduces in occupied slots (ms)=11882
> >             Total time spent by all reduce tasks (ms)=5941
> >             Total vcore-seconds taken by all reduce tasks=5941
> >             Total megabyte-seconds taken by all reduce tasks=3041792
> >         Map-Reduce Framework
> >             CPU time spent (ms)=0
> >             Physical memory (bytes) snapshot=0
> >             Virtual memory (bytes) snapshot=0
> >
> > I switched mapreduce.framework.name from:
> >
> > <property>
> > <name>mapreduce.framework.name</name>
> > <value>yarn</value>
> > </property>
> >
> > To:
> >
> > <property>
> > <name>mapreduce.framework.name</name>
> > <value>local</value>
> > </property>
> >
> > and grep and other mapreduce jobs are working again.
> >
> > I don't understand why with "yarn" value in mapreduce.framework.name
> doesn't
> > work, and without it (using "local") does.
> >
> > Any idea how to fix this without switching the value of
> > mapreduce.framework.name?
> >
> >
> >
> > Bye!
> > Jose
> >
> >
> >
>

Re: Can't run hadoop examples with YARN Single node cluster

Posted by José Luis Larroque <la...@gmail.com>.
Thanks for your answer, Lloyd!

I'm not sure about that. Wordcount, from the same jar, gives me the same
error, and so does my own MapReduce job.

I believe the "Error: Could not find or load main class 256" error is
happening because it's not finding the mapper, but I'm not sure.

Bye!
Jose


2016-01-16 19:41 GMT-03:00 Namikaze Minato <ll...@gmail.com>:

> Hello José Luis Larroque.
>
> Your problem here is only that grep is returning a non-zero exit code
> when no occurences are found.
> I know that for spark-streaming, using the option "-jobconf
> stream.non.zero.exit.is.failure=false" solves the problem, but I don't
> know how hadoop-mapreduce-examples-2.4.0.jar handles this.
>
> Regards,
> LLoyd
>
> On 16 January 2016 at 19:07, José Luis Larroque <la...@gmail.com>
> wrote:
> > Hi there, i'm currently running a single node yarn cluster, hadoop 2.4.0,
> > and for some reason, i can't execute even a example that comes with map
> > reduce (grep, wordcount, etc). With this line i execute grep:
> >
> >     $HADOOP_HOME/bin/yarn jar
> >
> /usr/local/hadoop/share/hadoop/yarn/lib/hadoop-mapreduce-examples-2.4.0.jar
> > grep input output2 'dfs[a-z.]+'
> >
> > This cluster was previosly running Giraph programs, but rigth now i need
> a
> > Map Reduce application, so i switched it back to pure yarn.
> >
> > All failed containers had the same error:
> >
> >     Container: container_1452447718890_0001_01_000002 on localhost_37976
> >
>  ======================================================================
> >     LogType: stderr
> >     LogLength: 45
> >     Log Contents:
> >     Error: Could not find or load main class 256
> >
> > Main logs:
> >
> >     SLF4J: Class path contains multiple SLF4J bindings.
> >     SLF4J: Found binding in
> >
> [jar:file:/usr/local/hadoop/share/hadoop/common/lib/slf4j-log4j12-1.7.5.jar!/org/slf4j/impl/StaticLoggerBinder.class]
> >     SLF4J: Found binding in
> >
> [jar:file:/usr/local/hadoop/share/hadoop/yarn/lib/giraph-examples-1.1.0-for-hadoop-2.4.0-jar-with-dependencies.jar!/org/slf4j/impl/StaticLoggerBinder.class]
> >     SLF4J: Found binding in
> >
> [jar:file:/usr/local/hadoop/share/hadoop/mapreduce/giraph-examples-1.1.0-for-hadoop-2.4.0-jar-with-dependencies.jar!/org/slf4j/impl/StaticLoggerBinder.class]
> >     SLF4J: See http://www.slf4j.org/codes.html#multiple_bindings for an
> > explanation.
> >     SLF4J: Actual binding is of type [org.slf4j.impl.Log4jLoggerFactory]
> >     16/01/15 21:53:50 WARN util.NativeCodeLoader: Unable to load
> > native-hadoop library for your platform... using builtin-java classes
> where
> > applicable
> >     16/01/15 21:53:50 INFO client.RMProxy: Connecting to ResourceManager
> at
> > hdnode01/192.168.0.10:8050
> >     16/01/15 21:53:51 WARN mapreduce.JobSubmitter: No job jar file set.
> > User classes may not be found. See Job or Job#setJar(String).
> >     16/01/15 21:53:51 INFO input.FileInputFormat: Total input paths to
> > process : 1
> >     16/01/15 21:53:52 INFO mapreduce.JobSubmitter: number of splits:1
> >     16/01/15 21:53:52 INFO mapreduce.JobSubmitter: Submitting tokens for
> > job: job_1452905418747_0001
> >     16/01/15 21:53:53 INFO mapred.YARNRunner: Job jar is not present. Not
> > adding any jar to the list of resources.
> >     16/01/15 21:53:53 INFO impl.YarnClientImpl: Submitted application
> > application_1452905418747_0001
> >     16/01/15 21:53:54 INFO mapreduce.Job: The url to track the job:
> > http://localhost:8088/proxy/application_1452905418747_0001/
> >     16/01/15 21:53:54 INFO mapreduce.Job: Running job:
> > job_1452905418747_0001
> >     16/01/15 21:54:04 INFO mapreduce.Job: Job job_1452905418747_0001
> running
> > in uber mode : false
> >     16/01/15 21:54:04 INFO mapreduce.Job:  map 0% reduce 0%
> >     16/01/15 21:54:07 INFO mapreduce.Job: Task Id :
> > attempt_1452905418747_0001_m_000000_0, Status : FAILED
> >     Exception from container-launch:
> > org.apache.hadoop.util.Shell$ExitCodeException:
> >     org.apache.hadoop.util.Shell$ExitCodeException:
> >         at org.apache.hadoop.util.Shell.runCommand(Shell.java:505)
> >         at org.apache.hadoop.util.Shell.run(Shell.java:418)
> >         at
> > org.apache.hadoop.util.Shell$ShellCommandExecutor.execute(Shell.java:650)
> >         at
> >
> org.apache.hadoop.yarn.server.nodemanager.DefaultContainerExecutor.launchContainer(DefaultContainerExecutor.java:195)
> >         at
> >
> org.apache.hadoop.yarn.server.nodemanager.containermanager.launcher.ContainerLaunch.call(ContainerLaunch.java:300)
> >         at
> >
> org.apache.hadoop.yarn.server.nodemanager.containermanager.launcher.ContainerLaunch.call(ContainerLaunch.java:81)
> >         at java.util.concurrent.FutureTask.run(FutureTask.java:262)
> >         at
> >
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
> >         at
> >
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
> >         at java.lang.Thread.run(Thread.java:745)
> >
> >
> >     Container exited with a non-zero exit code 1
> >
> >     16/01/15 21:54:11 INFO mapreduce.Job: Task Id :
> > attempt_1452905418747_0001_m_000000_1, Status : FAILED
> >     Exception from container-launch:
> > org.apache.hadoop.util.Shell$ExitCodeException:
> >     org.apache.hadoop.util.Shell$ExitCodeException:
> >         at org.apache.hadoop.util.Shell.runCommand(Shell.java:505)
> >         at org.apache.hadoop.util.Shell.run(Shell.java:418)
> >         at
> > org.apache.hadoop.util.Shell$ShellCommandExecutor.execute(Shell.java:650)
> >         at
> >
> org.apache.hadoop.yarn.server.nodemanager.DefaultContainerExecutor.launchContainer(DefaultContainerExecutor.java:195)
> >         at
> >
> org.apache.hadoop.yarn.server.nodemanager.containermanager.launcher.ContainerLaunch.call(ContainerLaunch.java:300)
> >         at
> >
> org.apache.hadoop.yarn.server.nodemanager.containermanager.launcher.ContainerLaunch.call(ContainerLaunch.java:81)
> >         at java.util.concurrent.FutureTask.run(FutureTask.java:262)
> >         at
> >
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
> >         at
> >
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
> >         at java.lang.Thread.run(Thread.java:745)
> >
> >
> >     Container exited with a non-zero exit code 1
> >
> >     16/01/15 21:54:15 INFO mapreduce.Job: Task Id :
> > attempt_1452905418747_0001_m_000000_2, Status : FAILED
> >     Exception from container-launch:
> > org.apache.hadoop.util.Shell$ExitCodeException:
> >     org.apache.hadoop.util.Shell$ExitCodeException:
> >         at org.apache.hadoop.util.Shell.runCommand(Shell.java:505)
> >         at org.apache.hadoop.util.Shell.run(Shell.java:418)
> >         at
> > org.apache.hadoop.util.Shell$ShellCommandExecutor.execute(Shell.java:650)
> >         at
> >
> org.apache.hadoop.yarn.server.nodemanager.DefaultContainerExecutor.launchContainer(DefaultContainerExecutor.java:195)
> >         at
> >
> org.apache.hadoop.yarn.server.nodemanager.containermanager.launcher.ContainerLaunch.call(ContainerLaunch.java:300)
> >         at
> >
> org.apache.hadoop.yarn.server.nodemanager.containermanager.launcher.ContainerLaunch.call(ContainerLaunch.java:81)
> >         at java.util.concurrent.FutureTask.run(FutureTask.java:262)
> >         at
> >
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
> >         at
> >
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
> >         at java.lang.Thread.run(Thread.java:745)
> >
> >
> >     Container exited with a non-zero exit code 1
> >
> >     16/01/15 21:54:21 INFO mapreduce.Job:  map 100% reduce 100%
> >     16/01/15 21:54:21 INFO mapreduce.Job: Job job_1452905418747_0001 failed
> > with state FAILED due to: Task failed task_1452905418747_0001_m_000000
> >     Job failed as tasks failed. failedMaps:1 failedReduces:0
> >
> >     16/01/15 21:54:21 INFO mapreduce.Job: Counters: 12
> >         Job Counters
> >             Failed map tasks=4
> >             Launched map tasks=4
> >             Other local map tasks=3
> >             Data-local map tasks=1
> >             Total time spent by all maps in occupied slots (ms)=15548
> >             Total time spent by all reduces in occupied slots (ms)=0
> >             Total time spent by all map tasks (ms)=7774
> >             Total vcore-seconds taken by all map tasks=7774
> >             Total megabyte-seconds taken by all map tasks=3980288
> >         Map-Reduce Framework
> >             CPU time spent (ms)=0
> >             Physical memory (bytes) snapshot=0
> >             Virtual memory (bytes) snapshot=0
> >     16/01/15 21:54:21 INFO client.RMProxy: Connecting to ResourceManager at
> > hdnode01/192.168.0.10:8050
> >     16/01/15 21:54:22 WARN mapreduce.JobSubmitter: No job jar file set.
> > User classes may not be found. See Job or Job#setJar(String).
> >     16/01/15 21:54:22 INFO input.FileInputFormat: Total input paths to
> > process : 0
> >     16/01/15 21:54:22 INFO mapreduce.JobSubmitter: number of splits:0
> >     16/01/15 21:54:22 INFO mapreduce.JobSubmitter: Submitting tokens for
> > job: job_1452905418747_0002
> >     16/01/15 21:54:22 INFO mapred.YARNRunner: Job jar is not present. Not
> > adding any jar to the list of resources.
> >     16/01/15 21:54:22 INFO impl.YarnClientImpl: Submitted application
> > application_1452905418747_0002
> >     16/01/15 21:54:22 INFO mapreduce.Job: The url to track the job:
> > http://localhost:8088/proxy/application_1452905418747_0002/
> >     16/01/15 21:54:22 INFO mapreduce.Job: Running job:
> > job_1452905418747_0002
> >     16/01/15 21:54:32 INFO mapreduce.Job: Job job_1452905418747_0002 running
> > in uber mode : false
> >     16/01/15 21:54:32 INFO mapreduce.Job:  map 0% reduce 0%
> >     16/01/15 21:54:36 INFO mapreduce.Job: Task Id :
> > attempt_1452905418747_0002_r_000000_0, Status : FAILED
> >     Exception from container-launch:
> > org.apache.hadoop.util.Shell$ExitCodeException:
> >     org.apache.hadoop.util.Shell$ExitCodeException:
> >         at org.apache.hadoop.util.Shell.runCommand(Shell.java:505)
> >         at org.apache.hadoop.util.Shell.run(Shell.java:418)
> >         at
> > org.apache.hadoop.util.Shell$ShellCommandExecutor.execute(Shell.java:650)
> >         at
> >
> org.apache.hadoop.yarn.server.nodemanager.DefaultContainerExecutor.launchContainer(DefaultContainerExecutor.java:195)
> >         at
> >
> org.apache.hadoop.yarn.server.nodemanager.containermanager.launcher.ContainerLaunch.call(ContainerLaunch.java:300)
> >         at
> >
> org.apache.hadoop.yarn.server.nodemanager.containermanager.launcher.ContainerLaunch.call(ContainerLaunch.java:81)
> >         at java.util.concurrent.FutureTask.run(FutureTask.java:262)
> >         at
> >
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
> >         at
> >
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
> >         at java.lang.Thread.run(Thread.java:745)
> >
> >
> >     Container exited with a non-zero exit code 1
> >
> >     16/01/15 21:54:41 INFO mapreduce.Job: Task Id :
> > attempt_1452905418747_0002_r_000000_1, Status : FAILED
> >     Exception from container-launch:
> > org.apache.hadoop.util.Shell$ExitCodeException:
> >     org.apache.hadoop.util.Shell$ExitCodeException:
> >         at org.apache.hadoop.util.Shell.runCommand(Shell.java:505)
> >         at org.apache.hadoop.util.Shell.run(Shell.java:418)
> >         at
> > org.apache.hadoop.util.Shell$ShellCommandExecutor.execute(Shell.java:650)
> >         at
> >
> org.apache.hadoop.yarn.server.nodemanager.DefaultContainerExecutor.launchContainer(DefaultContainerExecutor.java:195)
> >         at
> >
> org.apache.hadoop.yarn.server.nodemanager.containermanager.launcher.ContainerLaunch.call(ContainerLaunch.java:300)
> >         at
> >
> org.apache.hadoop.yarn.server.nodemanager.containermanager.launcher.ContainerLaunch.call(ContainerLaunch.java:81)
> >         at java.util.concurrent.FutureTask.run(FutureTask.java:262)
> >         at
> >
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
> >         at
> >
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
> >         at java.lang.Thread.run(Thread.java:745)
> >
> >
> >     Container exited with a non-zero exit code 1
> >
> >     16/01/15 21:54:46 INFO mapreduce.Job: Task Id :
> > attempt_1452905418747_0002_r_000000_2, Status : FAILED
> >     Exception from container-launch:
> > org.apache.hadoop.util.Shell$ExitCodeException:
> >     org.apache.hadoop.util.Shell$ExitCodeException:
> >         at org.apache.hadoop.util.Shell.runCommand(Shell.java:505)
> >         at org.apache.hadoop.util.Shell.run(Shell.java:418)
> >         at
> > org.apache.hadoop.util.Shell$ShellCommandExecutor.execute(Shell.java:650)
> >         at
> >
> org.apache.hadoop.yarn.server.nodemanager.DefaultContainerExecutor.launchContainer(DefaultContainerExecutor.java:195)
> >         at
> >
> org.apache.hadoop.yarn.server.nodemanager.containermanager.launcher.ContainerLaunch.call(ContainerLaunch.java:300)
> >         at
> >
> org.apache.hadoop.yarn.server.nodemanager.containermanager.launcher.ContainerLaunch.call(ContainerLaunch.java:81)
> >         at java.util.concurrent.FutureTask.run(FutureTask.java:262)
> >         at
> >
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
> >         at
> >
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
> >         at java.lang.Thread.run(Thread.java:745)
> >
> >
> >     Container exited with a non-zero exit code 1
> >
> >     16/01/15 21:54:51 INFO mapreduce.Job:  map 0% reduce 100%
> >     16/01/15 21:54:52 INFO mapreduce.Job: Job job_1452905418747_0002 failed
> > with state FAILED due to: Task failed task_1452905418747_0002_r_000000
> >     Job failed as tasks failed. failedMaps:0 failedReduces:1
> >
> >     16/01/15 21:54:52 INFO mapreduce.Job: Counters: 10
> >         Job Counters
> >             Failed reduce tasks=4
> >             Launched reduce tasks=4
> >             Total time spent by all maps in occupied slots (ms)=0
> >             Total time spent by all reduces in occupied slots (ms)=11882
> >             Total time spent by all reduce tasks (ms)=5941
> >             Total vcore-seconds taken by all reduce tasks=5941
> >             Total megabyte-seconds taken by all reduce tasks=3041792
> >         Map-Reduce Framework
> >             CPU time spent (ms)=0
> >             Physical memory (bytes) snapshot=0
> >             Virtual memory (bytes) snapshot=0
> >
> > I switched mapreduce.framework.name from:
> >
> > <property>
> > <name>mapreduce.framework.name</name>
> > <value>yarn</value>
> > </property>
> >
> > To:
> >
> > <property>
> > <name>mapreduce.framework.name</name>
> > <value>local</value>
> > </property>
> >
> > and grep and other mapreduce jobs are working again.
> >
> > I don't understand why with "yarn" value in mapreduce.framework.name doesn't
> > work, and without it (using "local") does.
> >
> > Any idea how to fix this without switching the value of
> > mapreduce.framework.name?
> >
> >
> >
> > Bye!
> > Jose
> >
> >
> >
>
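
The property being toggled in the message above normally lives in
mapred-site.xml. The fragment below is only a minimal sketch (the path
$HADOOP_HOME/etc/hadoop/mapred-site.xml is assumed, not confirmed from the
thread): with "yarn" the client submits the job to the ResourceManager and
every task is launched in its own YARN container, which is where the
container-launch failures above occur; with "local" the tasks run inside the
client JVM through the local job runner, so no containers are launched at
all, which matches the observation that the same jobs then succeed.

    <!-- Sketch of $HADOOP_HOME/etc/hadoop/mapred-site.xml (assumed location) -->
    <configuration>
      <!-- Submit MapReduce jobs to YARN; each map/reduce task gets a container -->
      <property>
        <name>mapreduce.framework.name</name>
        <value>yarn</value>
      </property>
      <!-- Using <value>local</value> instead runs tasks in the client JVM and
           bypasses container launch entirely (the workaround described above) -->
    </configuration>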


Re: Can't run hadoop examples with YARN Single node cluster

Posted by Namikaze Minato <ll...@gmail.com>.
Hello José Luis Larroque.

Your problem here is only that grep is returning a non-zero exit code
when no occurrences are found.
I know that for spark-streaming, using the option "-jobconf
stream.non.zero.exit.is.failure=false" solves the problem, but I don't
know how hadoop-mapreduce-examples-2.4.0.jar handles this.

Regards,
LLoyd
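
For readers unfamiliar with the property mentioned above:
stream.non.zero.exit.is.failure is a Hadoop Streaming setting, and the usual
way to pass it is as a generic -D definition rather than the older -jobconf
form. The command below is only a sketch: the streaming jar path, the output
directory and the mapper command are illustrative assumptions, and, as noted,
it is unclear whether the grep program inside
hadoop-mapreduce-examples-2.4.0.jar consults this property at all.

    # Sketch: tell Hadoop Streaming not to fail a task when the mapper
    # (here plain grep) exits non-zero because it found no matches.
    $HADOOP_HOME/bin/hadoop jar \
        $HADOOP_HOME/share/hadoop/tools/lib/hadoop-streaming-2.4.0.jar \
        -D stream.non.zero.exit.is.failure=false \
        -input input -output output_streaming \
        -mapper 'grep dfs' \
        -reducer cat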

On 16 January 2016 at 19:07, José Luis Larroque <la...@gmail.com> wrote:
> Hi there, I'm currently running a single node yarn cluster, hadoop 2.4.0,
> and for some reason, I can't execute even an example that comes with map
> reduce (grep, wordcount, etc). With this line I execute grep:
>
>     $HADOOP_HOME/bin/yarn jar
> /usr/local/hadoop/share/hadoop/yarn/lib/hadoop-mapreduce-examples-2.4.0.jar
> grep input output2 'dfs[a-z.]+'
>
> This cluster was previously running Giraph programs, but right now I need a
> Map Reduce application, so I switched it back to pure yarn.
>
> All failed containers had the same error:
>
>     Container: container_1452447718890_0001_01_000002 on localhost_37976
>     ======================================================================
>     LogType: stderr
>     LogLength: 45
>     Log Contents:
>     Error: Could not find or load main class 256
>
> Main logs:
>
>     SLF4J: Class path contains multiple SLF4J bindings.
>     SLF4J: Found binding in
> [jar:file:/usr/local/hadoop/share/hadoop/common/lib/slf4j-log4j12-1.7.5.jar!/org/slf4j/impl/StaticLoggerBinder.class]
>     SLF4J: Found binding in
> [jar:file:/usr/local/hadoop/share/hadoop/yarn/lib/giraph-examples-1.1.0-for-hadoop-2.4.0-jar-with-dependencies.jar!/org/slf4j/impl/StaticLoggerBinder.class]
>     SLF4J: Found binding in
> [jar:file:/usr/local/hadoop/share/hadoop/mapreduce/giraph-examples-1.1.0-for-hadoop-2.4.0-jar-with-dependencies.jar!/org/slf4j/impl/StaticLoggerBinder.class]
>     SLF4J: See http://www.slf4j.org/codes.html#multiple_bindings for an
> explanation.
>     SLF4J: Actual binding is of type [org.slf4j.impl.Log4jLoggerFactory]
>     16/01/15 21:53:50 WARN util.NativeCodeLoader: Unable to load
> native-hadoop library for your platform... using builtin-java classes where
> applicable
>     16/01/15 21:53:50 INFO client.RMProxy: Connecting to ResourceManager at
> hdnode01/192.168.0.10:8050
>     16/01/15 21:53:51 WARN mapreduce.JobSubmitter: No job jar file set.
> User classes may not be found. See Job or Job#setJar(String).
>     16/01/15 21:53:51 INFO input.FileInputFormat: Total input paths to
> process : 1
>     16/01/15 21:53:52 INFO mapreduce.JobSubmitter: number of splits:1
>     16/01/15 21:53:52 INFO mapreduce.JobSubmitter: Submitting tokens for
> job: job_1452905418747_0001
>     16/01/15 21:53:53 INFO mapred.YARNRunner: Job jar is not present. Not
> adding any jar to the list of resources.
>     16/01/15 21:53:53 INFO impl.YarnClientImpl: Submitted application
> application_1452905418747_0001
>     16/01/15 21:53:54 INFO mapreduce.Job: The url to track the job:
> http://localhost:8088/proxy/application_1452905418747_0001/
>     16/01/15 21:53:54 INFO mapreduce.Job: Running job:
> job_1452905418747_0001
>     16/01/15 21:54:04 INFO mapreduce.Job: Job job_1452905418747_0001 running
> in uber mode : false
>     16/01/15 21:54:04 INFO mapreduce.Job:  map 0% reduce 0%
>     16/01/15 21:54:07 INFO mapreduce.Job: Task Id :
> attempt_1452905418747_0001_m_000000_0, Status : FAILED
>     Exception from container-launch:
> org.apache.hadoop.util.Shell$ExitCodeException:
>     org.apache.hadoop.util.Shell$ExitCodeException:
>         at org.apache.hadoop.util.Shell.runCommand(Shell.java:505)
>         at org.apache.hadoop.util.Shell.run(Shell.java:418)
>         at
> org.apache.hadoop.util.Shell$ShellCommandExecutor.execute(Shell.java:650)
>         at
> org.apache.hadoop.yarn.server.nodemanager.DefaultContainerExecutor.launchContainer(DefaultContainerExecutor.java:195)
>         at
> org.apache.hadoop.yarn.server.nodemanager.containermanager.launcher.ContainerLaunch.call(ContainerLaunch.java:300)
>         at
> org.apache.hadoop.yarn.server.nodemanager.containermanager.launcher.ContainerLaunch.call(ContainerLaunch.java:81)
>         at java.util.concurrent.FutureTask.run(FutureTask.java:262)
>         at
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
>         at
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
>         at java.lang.Thread.run(Thread.java:745)
>
>
>     Container exited with a non-zero exit code 1
>
>     16/01/15 21:54:11 INFO mapreduce.Job: Task Id :
> attempt_1452905418747_0001_m_000000_1, Status : FAILED
>     Exception from container-launch:
> org.apache.hadoop.util.Shell$ExitCodeException:
>     org.apache.hadoop.util.Shell$ExitCodeException:
>         at org.apache.hadoop.util.Shell.runCommand(Shell.java:505)
>         at org.apache.hadoop.util.Shell.run(Shell.java:418)
>         at
> org.apache.hadoop.util.Shell$ShellCommandExecutor.execute(Shell.java:650)
>         at
> org.apache.hadoop.yarn.server.nodemanager.DefaultContainerExecutor.launchContainer(DefaultContainerExecutor.java:195)
>         at
> org.apache.hadoop.yarn.server.nodemanager.containermanager.launcher.ContainerLaunch.call(ContainerLaunch.java:300)
>         at
> org.apache.hadoop.yarn.server.nodemanager.containermanager.launcher.ContainerLaunch.call(ContainerLaunch.java:81)
>         at java.util.concurrent.FutureTask.run(FutureTask.java:262)
>         at
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
>         at
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
>         at java.lang.Thread.run(Thread.java:745)
>
>
>     Container exited with a non-zero exit code 1
>
>     16/01/15 21:54:15 INFO mapreduce.Job: Task Id :
> attempt_1452905418747_0001_m_000000_2, Status : FAILED
>     Exception from container-launch:
> org.apache.hadoop.util.Shell$ExitCodeException:
>     org.apache.hadoop.util.Shell$ExitCodeException:
>         at org.apache.hadoop.util.Shell.runCommand(Shell.java:505)
>         at org.apache.hadoop.util.Shell.run(Shell.java:418)
>         at
> org.apache.hadoop.util.Shell$ShellCommandExecutor.execute(Shell.java:650)
>         at
> org.apache.hadoop.yarn.server.nodemanager.DefaultContainerExecutor.launchContainer(DefaultContainerExecutor.java:195)
>         at
> org.apache.hadoop.yarn.server.nodemanager.containermanager.launcher.ContainerLaunch.call(ContainerLaunch.java:300)
>         at
> org.apache.hadoop.yarn.server.nodemanager.containermanager.launcher.ContainerLaunch.call(ContainerLaunch.java:81)
>         at java.util.concurrent.FutureTask.run(FutureTask.java:262)
>         at
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
>         at
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
>         at java.lang.Thread.run(Thread.java:745)
>
>
>     Container exited with a non-zero exit code 1
>
>     16/01/15 21:54:21 INFO mapreduce.Job:  map 100% reduce 100%
>     16/01/15 21:54:21 INFO mapreduce.Job: Job job_1452905418747_0001 failed
> with state FAILED due to: Task failed task_1452905418747_0001_m_000000
>     Job failed as tasks failed. failedMaps:1 failedReduces:0
>
>     16/01/15 21:54:21 INFO mapreduce.Job: Counters: 12
>         Job Counters
>             Failed map tasks=4
>             Launched map tasks=4
>             Other local map tasks=3
>             Data-local map tasks=1
>             Total time spent by all maps in occupied slots (ms)=15548
>             Total time spent by all reduces in occupied slots (ms)=0
>             Total time spent by all map tasks (ms)=7774
>             Total vcore-seconds taken by all map tasks=7774
>             Total megabyte-seconds taken by all map tasks=3980288
>         Map-Reduce Framework
>             CPU time spent (ms)=0
>             Physical memory (bytes) snapshot=0
>             Virtual memory (bytes) snapshot=0
>     16/01/15 21:54:21 INFO client.RMProxy: Connecting to ResourceManager at
> hdnode01/192.168.0.10:8050
>     16/01/15 21:54:22 WARN mapreduce.JobSubmitter: No job jar file set.
> User classes may not be found. See Job or Job#setJar(String).
>     16/01/15 21:54:22 INFO input.FileInputFormat: Total input paths to
> process : 0
>     16/01/15 21:54:22 INFO mapreduce.JobSubmitter: number of splits:0
>     16/01/15 21:54:22 INFO mapreduce.JobSubmitter: Submitting tokens for
> job: job_1452905418747_0002
>     16/01/15 21:54:22 INFO mapred.YARNRunner: Job jar is not present. Not
> adding any jar to the list of resources.
>     16/01/15 21:54:22 INFO impl.YarnClientImpl: Submitted application
> application_1452905418747_0002
>     16/01/15 21:54:22 INFO mapreduce.Job: The url to track the job:
> http://localhost:8088/proxy/application_1452905418747_0002/
>     16/01/15 21:54:22 INFO mapreduce.Job: Running job:
> job_1452905418747_0002
>     16/01/15 21:54:32 INFO mapreduce.Job: Job job_1452905418747_0002 running
> in uber mode : false
>     16/01/15 21:54:32 INFO mapreduce.Job:  map 0% reduce 0%
>     16/01/15 21:54:36 INFO mapreduce.Job: Task Id :
> attempt_1452905418747_0002_r_000000_0, Status : FAILED
>     Exception from container-launch:
> org.apache.hadoop.util.Shell$ExitCodeException:
>     org.apache.hadoop.util.Shell$ExitCodeException:
>         at org.apache.hadoop.util.Shell.runCommand(Shell.java:505)
>         at org.apache.hadoop.util.Shell.run(Shell.java:418)
>         at
> org.apache.hadoop.util.Shell$ShellCommandExecutor.execute(Shell.java:650)
>         at
> org.apache.hadoop.yarn.server.nodemanager.DefaultContainerExecutor.launchContainer(DefaultContainerExecutor.java:195)
>         at
> org.apache.hadoop.yarn.server.nodemanager.containermanager.launcher.ContainerLaunch.call(ContainerLaunch.java:300)
>         at
> org.apache.hadoop.yarn.server.nodemanager.containermanager.launcher.ContainerLaunch.call(ContainerLaunch.java:81)
>         at java.util.concurrent.FutureTask.run(FutureTask.java:262)
>         at
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
>         at
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
>         at java.lang.Thread.run(Thread.java:745)
>
>
>     Container exited with a non-zero exit code 1
>
>     16/01/15 21:54:41 INFO mapreduce.Job: Task Id :
> attempt_1452905418747_0002_r_000000_1, Status : FAILED
>     Exception from container-launch:
> org.apache.hadoop.util.Shell$ExitCodeException:
>     org.apache.hadoop.util.Shell$ExitCodeException:
>         at org.apache.hadoop.util.Shell.runCommand(Shell.java:505)
>         at org.apache.hadoop.util.Shell.run(Shell.java:418)
>         at
> org.apache.hadoop.util.Shell$ShellCommandExecutor.execute(Shell.java:650)
>         at
> org.apache.hadoop.yarn.server.nodemanager.DefaultContainerExecutor.launchContainer(DefaultContainerExecutor.java:195)
>         at
> org.apache.hadoop.yarn.server.nodemanager.containermanager.launcher.ContainerLaunch.call(ContainerLaunch.java:300)
>         at
> org.apache.hadoop.yarn.server.nodemanager.containermanager.launcher.ContainerLaunch.call(ContainerLaunch.java:81)
>         at java.util.concurrent.FutureTask.run(FutureTask.java:262)
>         at
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
>         at
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
>         at java.lang.Thread.run(Thread.java:745)
>
>
>     Container exited with a non-zero exit code 1
>
>     16/01/15 21:54:46 INFO mapreduce.Job: Task Id :
> attempt_1452905418747_0002_r_000000_2, Status : FAILED
>     Exception from container-launch:
> org.apache.hadoop.util.Shell$ExitCodeException:
>     org.apache.hadoop.util.Shell$ExitCodeException:
>         at org.apache.hadoop.util.Shell.runCommand(Shell.java:505)
>         at org.apache.hadoop.util.Shell.run(Shell.java:418)
>         at
> org.apache.hadoop.util.Shell$ShellCommandExecutor.execute(Shell.java:650)
>         at
> org.apache.hadoop.yarn.server.nodemanager.DefaultContainerExecutor.launchContainer(DefaultContainerExecutor.java:195)
>         at
> org.apache.hadoop.yarn.server.nodemanager.containermanager.launcher.ContainerLaunch.call(ContainerLaunch.java:300)
>         at
> org.apache.hadoop.yarn.server.nodemanager.containermanager.launcher.ContainerLaunch.call(ContainerLaunch.java:81)
>         at java.util.concurrent.FutureTask.run(FutureTask.java:262)
>         at
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
>         at
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
>         at java.lang.Thread.run(Thread.java:745)
>
>
>     Container exited with a non-zero exit code 1
>
>     16/01/15 21:54:51 INFO mapreduce.Job:  map 0% reduce 100%
>     16/01/15 21:54:52 INFO mapreduce.Job: Job job_1452905418747_0002 failed
> with state FAILED due to: Task failed task_1452905418747_0002_r_000000
>     Job failed as tasks failed. failedMaps:0 failedReduces:1
>
>     16/01/15 21:54:52 INFO mapreduce.Job: Counters: 10
>         Job Counters
>             Failed reduce tasks=4
>             Launched reduce tasks=4
>             Total time spent by all maps in occupied slots (ms)=0
>             Total time spent by all reduces in occupied slots (ms)=11882
>             Total time spent by all reduce tasks (ms)=5941
>             Total vcore-seconds taken by all reduce tasks=5941
>             Total megabyte-seconds taken by all reduce tasks=3041792
>         Map-Reduce Framework
>             CPU time spent (ms)=0
>             Physical memory (bytes) snapshot=0
>             Virtual memory (bytes) snapshot=0
>
> I switched mapreduce.framework.name from:
>
> <property>
> <name>mapreduce.framework.name</name>
> <value>yarn</value>
> </property>
>
> To:
>
> <property>
> <name>mapreduce.framework.name</name>
> <value>local</value>
> </property>
>
> and grep and other mapreduce jobs are working again.
>
> I don't understand why with "yarn" value in mapreduce.framework.name doesn't
> work, and without it (using "local") does.
>
> Any idea how to fix this without switching the value of
> mapreduce.framework.name?
>
>
>
> Bye!
> Jose
>
>
>

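
A practical note on the stderr snippets quoted in this thread ("Error: Could
not find or load main class 256"): once an application has finished, the
per-container logs are usually pulled with the yarn logs command. The sketch
below uses the application id from the client output above and assumes log
aggregation (yarn.log-aggregation-enable=true) is switched on; without it,
the same files sit under the NodeManager's local log directory
(yarn.nodemanager.log-dirs) and can also be browsed from the web UI at
http://localhost:8088.

    # Sketch: dump stdout/stderr/syslog for every container of the failed
    # application, including the AM and the failed map/reduce attempts.
    $HADOOP_HOME/bin/yarn logs -applicationId application_1452905418747_0001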


Re: Can't run hadoop examples with YARN Single node cluster

Posted by Namikaze Minato <ll...@gmail.com>.
Hello José Luis Larroque.

Your problem here is only that grep is returning a non-zero exit code
when no occurrences are found.
I know that for spark-streaming, using the option "-jobconf
stream.non.zero.exit.is.failure=false" solves the problem, but I don't
know how hadoop-mapreduce-examples-2.4.0.jar handles this.

Regards,
LLoyd

On 16 January 2016 at 19:07, José Luis Larroque <la...@gmail.com> wrote:
> Hi there, I'm currently running a single node yarn cluster, hadoop 2.4.0,
> and for some reason, I can't execute even an example that comes with map
> reduce (grep, wordcount, etc). With this line I execute grep:
>
>     $HADOOP_HOME/bin/yarn jar
> /usr/local/hadoop/share/hadoop/yarn/lib/hadoop-mapreduce-examples-2.4.0.jar
> grep input output2 'dfs[a-z.]+'
>
> This cluster was previously running Giraph programs, but right now I need a
> Map Reduce application, so I switched it back to pure yarn.
>
> All failed containers had the same error:
>
>     Container: container_1452447718890_0001_01_000002 on localhost_37976
>     ======================================================================
>     LogType: stderr
>     LogLength: 45
>     Log Contents:
>     Error: Could not find or load main class 256
>
> Main logs:
>
>     SLF4J: Class path contains multiple SLF4J bindings.
>     SLF4J: Found binding in
> [jar:file:/usr/local/hadoop/share/hadoop/common/lib/slf4j-log4j12-1.7.5.jar!/org/slf4j/impl/StaticLoggerBinder.class]
>     SLF4J: Found binding in
> [jar:file:/usr/local/hadoop/share/hadoop/yarn/lib/giraph-examples-1.1.0-for-hadoop-2.4.0-jar-with-dependencies.jar!/org/slf4j/impl/StaticLoggerBinder.class]
>     SLF4J: Found binding in
> [jar:file:/usr/local/hadoop/share/hadoop/mapreduce/giraph-examples-1.1.0-for-hadoop-2.4.0-jar-with-dependencies.jar!/org/slf4j/impl/StaticLoggerBinder.class]
>     SLF4J: See http://www.slf4j.org/codes.html#multiple_bindings for an
> explanation.
>     SLF4J: Actual binding is of type [org.slf4j.impl.Log4jLoggerFactory]
>     16/01/15 21:53:50 WARN util.NativeCodeLoader: Unable to load
> native-hadoop library for your platform... using builtin-java classes where
> applicable
>     16/01/15 21:53:50 INFO client.RMProxy: Connecting to ResourceManager at
> hdnode01/192.168.0.10:8050
>     16/01/15 21:53:51 WARN mapreduce.JobSubmitter: No job jar file set.
> User classes may not be found. See Job or Job#setJar(String).
>     16/01/15 21:53:51 INFO input.FileInputFormat: Total input paths to
> process : 1
>     16/01/15 21:53:52 INFO mapreduce.JobSubmitter: number of splits:1
>     16/01/15 21:53:52 INFO mapreduce.JobSubmitter: Submitting tokens for
> job: job_1452905418747_0001
>     16/01/15 21:53:53 INFO mapred.YARNRunner: Job jar is not present. Not
> adding any jar to the list of resources.
>     16/01/15 21:53:53 INFO impl.YarnClientImpl: Submitted application
> application_1452905418747_0001
>     16/01/15 21:53:54 INFO mapreduce.Job: The url to track the job:
> http://localhost:8088/proxy/application_1452905418747_0001/
>     16/01/15 21:53:54 INFO mapreduce.Job: Running job:
> job_1452905418747_0001
>     16/01/15 21:54:04 INFO mapreduce.Job: Job job_1452905418747_0001 running
> in uber mode : false
>     16/01/15 21:54:04 INFO mapreduce.Job:  map 0% reduce 0%
>     16/01/15 21:54:07 INFO mapreduce.Job: Task Id :
> attempt_1452905418747_0001_m_000000_0, Status : FAILED
>     Exception from container-launch:
> org.apache.hadoop.util.Shell$ExitCodeException:
>     org.apache.hadoop.util.Shell$ExitCodeException:
>         at org.apache.hadoop.util.Shell.runCommand(Shell.java:505)
>         at org.apache.hadoop.util.Shell.run(Shell.java:418)
>         at
> org.apache.hadoop.util.Shell$ShellCommandExecutor.execute(Shell.java:650)
>         at
> org.apache.hadoop.yarn.server.nodemanager.DefaultContainerExecutor.launchContainer(DefaultContainerExecutor.java:195)
>         at
> org.apache.hadoop.yarn.server.nodemanager.containermanager.launcher.ContainerLaunch.call(ContainerLaunch.java:300)
>         at
> org.apache.hadoop.yarn.server.nodemanager.containermanager.launcher.ContainerLaunch.call(ContainerLaunch.java:81)
>         at java.util.concurrent.FutureTask.run(FutureTask.java:262)
>         at
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
>         at
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
>         at java.lang.Thread.run(Thread.java:745)
>
>
>     Container exited with a non-zero exit code 1
>
>     16/01/15 21:54:11 INFO mapreduce.Job: Task Id :
> attempt_1452905418747_0001_m_000000_1, Status : FAILED
>     Exception from container-launch:
> org.apache.hadoop.util.Shell$ExitCodeException:
>     org.apache.hadoop.util.Shell$ExitCodeException:
>         at org.apache.hadoop.util.Shell.runCommand(Shell.java:505)
>         at org.apache.hadoop.util.Shell.run(Shell.java:418)
>         at
> org.apache.hadoop.util.Shell$ShellCommandExecutor.execute(Shell.java:650)
>         at
> org.apache.hadoop.yarn.server.nodemanager.DefaultContainerExecutor.launchContainer(DefaultContainerExecutor.java:195)
>         at
> org.apache.hadoop.yarn.server.nodemanager.containermanager.launcher.ContainerLaunch.call(ContainerLaunch.java:300)
>         at
> org.apache.hadoop.yarn.server.nodemanager.containermanager.launcher.ContainerLaunch.call(ContainerLaunch.java:81)
>         at java.util.concurrent.FutureTask.run(FutureTask.java:262)
>         at
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
>         at
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
>         at java.lang.Thread.run(Thread.java:745)
>
>
>     Container exited with a non-zero exit code 1
>
>     16/01/15 21:54:15 INFO mapreduce.Job: Task Id :
> attempt_1452905418747_0001_m_000000_2, Status : FAILED
>     Exception from container-launch:
> org.apache.hadoop.util.Shell$ExitCodeException:
>     org.apache.hadoop.util.Shell$ExitCodeException:
>         at org.apache.hadoop.util.Shell.runCommand(Shell.java:505)
>         at org.apache.hadoop.util.Shell.run(Shell.java:418)
>         at
> org.apache.hadoop.util.Shell$ShellCommandExecutor.execute(Shell.java:650)
>         at
> org.apache.hadoop.yarn.server.nodemanager.DefaultContainerExecutor.launchContainer(DefaultContainerExecutor.java:195)
>         at
> org.apache.hadoop.yarn.server.nodemanager.containermanager.launcher.ContainerLaunch.call(ContainerLaunch.java:300)
>         at
> org.apache.hadoop.yarn.server.nodemanager.containermanager.launcher.ContainerLaunch.call(ContainerLaunch.java:81)
>         at java.util.concurrent.FutureTask.run(FutureTask.java:262)
>         at
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
>         at
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
>         at java.lang.Thread.run(Thread.java:745)
>
>
>     Container exited with a non-zero exit code 1
>
>     16/01/15 21:54:21 INFO mapreduce.Job:  map 100% reduce 100%
>     16/01/15 21:54:21 INFO mapreduce.Job: Job job_1452905418747_0001 failed
> with state FAILED due to: Task failed task_1452905418747_0001_m_000000
>     Job failed as tasks failed. failedMaps:1 failedReduces:0
>
>     16/01/15 21:54:21 INFO mapreduce.Job: Counters: 12
>         Job Counters
>             Failed map tasks=4
>             Launched map tasks=4
>             Other local map tasks=3
>             Data-local map tasks=1
>             Total time spent by all maps in occupied slots (ms)=15548
>             Total time spent by all reduces in occupied slots (ms)=0
>             Total time spent by all map tasks (ms)=7774
>             Total vcore-seconds taken by all map tasks=7774
>             Total megabyte-seconds taken by all map tasks=3980288
>         Map-Reduce Framework
>             CPU time spent (ms)=0
>             Physical memory (bytes) snapshot=0
>             Virtual memory (bytes) snapshot=0
>     16/01/15 21:54:21 INFO client.RMProxy: Connecting to ResourceManager at
> hdnode01/192.168.0.10:8050
>     16/01/15 21:54:22 WARN mapreduce.JobSubmitter: No job jar file set.
> User classes may not be found. See Job or Job#setJar(String).
>     16/01/15 21:54:22 INFO input.FileInputFormat: Total input paths to
> process : 0
>     16/01/15 21:54:22 INFO mapreduce.JobSubmitter: number of splits:0
>     16/01/15 21:54:22 INFO mapreduce.JobSubmitter: Submitting tokens for
> job: job_1452905418747_0002
>     16/01/15 21:54:22 INFO mapred.YARNRunner: Job jar is not present. Not
> adding any jar to the list of resources.
>     16/01/15 21:54:22 INFO impl.YarnClientImpl: Submitted application
> application_1452905418747_0002
>     16/01/15 21:54:22 INFO mapreduce.Job: The url to track the job:
> http://localhost:8088/proxy/application_1452905418747_0002/
>     16/01/15 21:54:22 INFO mapreduce.Job: Running job:
> job_1452905418747_0002
>     16/01/15 21:54:32 INFO mapreduce.Job: Job job_1452905418747_0002 running
> in uber mode : false
>     16/01/15 21:54:32 INFO mapreduce.Job:  map 0% reduce 0%
>     16/01/15 21:54:36 INFO mapreduce.Job: Task Id :
> attempt_1452905418747_0002_r_000000_0, Status : FAILED
>     Exception from container-launch:
> org.apache.hadoop.util.Shell$ExitCodeException:
>     org.apache.hadoop.util.Shell$ExitCodeException:
>         at org.apache.hadoop.util.Shell.runCommand(Shell.java:505)
>         at org.apache.hadoop.util.Shell.run(Shell.java:418)
>         at
> org.apache.hadoop.util.Shell$ShellCommandExecutor.execute(Shell.java:650)
>         at
> org.apache.hadoop.yarn.server.nodemanager.DefaultContainerExecutor.launchContainer(DefaultContainerExecutor.java:195)
>         at
> org.apache.hadoop.yarn.server.nodemanager.containermanager.launcher.ContainerLaunch.call(ContainerLaunch.java:300)
>         at
> org.apache.hadoop.yarn.server.nodemanager.containermanager.launcher.ContainerLaunch.call(ContainerLaunch.java:81)
>         at java.util.concurrent.FutureTask.run(FutureTask.java:262)
>         at
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
>         at
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
>         at java.lang.Thread.run(Thread.java:745)
>
>
>     Container exited with a non-zero exit code 1
>
>     16/01/15 21:54:41 INFO mapreduce.Job: Task Id :
> attempt_1452905418747_0002_r_000000_1, Status : FAILED
>     Exception from container-launch:
> org.apache.hadoop.util.Shell$ExitCodeException:
>     org.apache.hadoop.util.Shell$ExitCodeException:
>         at org.apache.hadoop.util.Shell.runCommand(Shell.java:505)
>         at org.apache.hadoop.util.Shell.run(Shell.java:418)
>         at
> org.apache.hadoop.util.Shell$ShellCommandExecutor.execute(Shell.java:650)
>         at
> org.apache.hadoop.yarn.server.nodemanager.DefaultContainerExecutor.launchContainer(DefaultContainerExecutor.java:195)
>         at
> org.apache.hadoop.yarn.server.nodemanager.containermanager.launcher.ContainerLaunch.call(ContainerLaunch.java:300)
>         at
> org.apache.hadoop.yarn.server.nodemanager.containermanager.launcher.ContainerLaunch.call(ContainerLaunch.java:81)
>         at java.util.concurrent.FutureTask.run(FutureTask.java:262)
>         at
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
>         at
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
>         at java.lang.Thread.run(Thread.java:745)
>
>
>     Container exited with a non-zero exit code 1
>
>     16/01/15 21:54:46 INFO mapreduce.Job: Task Id :
> attempt_1452905418747_0002_r_000000_2, Status : FAILED
>     Exception from container-launch:
> org.apache.hadoop.util.Shell$ExitCodeException:
>     org.apache.hadoop.util.Shell$ExitCodeException:
>         at org.apache.hadoop.util.Shell.runCommand(Shell.java:505)
>         at org.apache.hadoop.util.Shell.run(Shell.java:418)
>         at
> org.apache.hadoop.util.Shell$ShellCommandExecutor.execute(Shell.java:650)
>         at
> org.apache.hadoop.yarn.server.nodemanager.DefaultContainerExecutor.launchContainer(DefaultContainerExecutor.java:195)
>         at
> org.apache.hadoop.yarn.server.nodemanager.containermanager.launcher.ContainerLaunch.call(ContainerLaunch.java:300)
>         at
> org.apache.hadoop.yarn.server.nodemanager.containermanager.launcher.ContainerLaunch.call(ContainerLaunch.java:81)
>         at java.util.concurrent.FutureTask.run(FutureTask.java:262)
>         at
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
>         at
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
>         at java.lang.Thread.run(Thread.java:745)
>
>
>     Container exited with a non-zero exit code 1
>
>     16/01/15 21:54:51 INFO mapreduce.Job:  map 0% reduce 100%
>     16/01/15 21:54:52 INFO mapreduce.Job: Job job_1452905418747_0002 failed
> with state FAILED due to: Task failed task_1452905418747_0002_r_000000
>     Job failed as tasks failed. failedMaps:0 failedReduces:1
>
>     16/01/15 21:54:52 INFO mapreduce.Job: Counters: 10
>         Job Counters
>             Failed reduce tasks=4
>             Launched reduce tasks=4
>             Total time spent by all maps in occupied slots (ms)=0
>             Total time spent by all reduces in occupied slots (ms)=11882
>             Total time spent by all reduce tasks (ms)=5941
>             Total vcore-seconds taken by all reduce tasks=5941
>             Total megabyte-seconds taken by all reduce tasks=3041792
>         Map-Reduce Framework
>             CPU time spent (ms)=0
>             Physical memory (bytes) snapshot=0
>             Virtual memory (bytes) snapshot=0
>
> I switched mapreduce.framework.name from:
>
> <property>
> <name>mapreduce.framework.name</name>
> <value>yarn</value>
> </property>
>
> To:
>
> <property>
> <name>mapreduce.framework.name</name>
> <value>local</value>
> </property>
>
> and grep and other mapreduce jobs are working again.
>
> I don't understand why it doesn't work with the "yarn" value in
> mapreduce.framework.name, but does work with "local".
>
> Any idea how to fix this without switching the value of
> mapreduce.framework.name?
>
>
>
> Bye!
> Jose
>
>
>

---------------------------------------------------------------------
To unsubscribe, e-mail: user-unsubscribe@hadoop.apache.org
For additional commands, e-mail: user-help@hadoop.apache.org

