Posted to mapreduce-user@hadoop.apache.org by "Nance, Keith" <kn...@smartronix.com> on 2015/05/13 22:45:34 UTC

Spark job fails at runtime: Container exited with a non-zero exit code 10

Hi Community,

I'm facing the following issue: a simple SparkPi job submitted to YARN fails with exit code 10.  Below are the logs from the spark-submit client, the NodeManager, and the ResourceManager.  Thank you for your time and consideration.

[testuser@ip-10-10-127-10 spark]$ ./bin/spark-submit --verbose --class org.apache.spark.examples.SparkPi --master yarn-client --num-executors 1 --executor-cores 1 lib/spark-examples*.jar 10
/home/testuser/spark/conf/spark-env.sh: line 54: -Dspark.history.kerberos.principal=spark/ip-10-10-127-10.ec2.internal@MALARD.LOCAL: No such file or directory
Using properties file: /home/testuser/spark/conf/spark-defaults.conf
Adding default property: spark.serializer=org.apache.spark.serializer.KryoSerializer
Adding default property: spark.executor.extraJavaOptions=-XX:+PrintGCDetails -Dkey=value -Dnumbers="one two three"
Adding default property: spark.yarn.access.namenodes=hdfs://10.10.10.10:8020
Adding default property: spark.logConf=true
Adding default property: spark.eventlog.dir=hdfs://10.10.10.10:8020/user/testuser/spark/eventlog
Adding default property: spark.master=yarn-client
Adding default property: spark.authenticate=true
Adding default property: spark.eventlog.enabled=true
Parsed arguments:
  master                  yarn-client
  deployMode              null
  executorMemory          null
  executorCores           1
  totalExecutorCores      null
  propertiesFile          /home/testuser/spark/conf/spark-defaults.conf
  driverMemory            null
  driverCores             null
  driverExtraClassPath    null
  driverExtraLibraryPath  null
  driverExtraJavaOptions  null
  supervise               false
  queue                   null
  numExecutors            1
  files                   null
  pyFiles                 null
  archives                null
  mainClass               org.apache.spark.examples.SparkPi
  primaryResource         file:/home/testuser/spark/lib/spark-examples-1.3.1-hadoop2.6.0.jar
  name                    org.apache.spark.examples.SparkPi
  childArgs               [10]
  jars                    null
  packages                null
  repositories            null
  verbose                 true

Spark properties used, including those specified through
--conf and those from the properties file /home/testuser/spark/conf/spark-defaults.conf:
  spark.yarn.access.namenodes -> hdfs://10.10.10.10:8020
  spark.logConf -> true
  spark.eventlog.dir -> hdfs://10.10.10.10:8020/user/testuser/spark/eventlog
  spark.authenticate -> true
  spark.serializer -> org.apache.spark.serializer.KryoSerializer
  spark.executor.extraJavaOptions -> -XX:+PrintGCDetails -Dkey=value -Dnumbers="one two three"
  spark.master -> yarn-client
  spark.eventlog.enabled -> true


Main class:
org.apache.spark.examples.SparkPi
Arguments:
10
System properties:
spark.yarn.access.namenodes -> hdfs://10.10.10.10:8020
spark.executor.instances -> 1
spark.logConf -> true
spark.eventlog.dir -> hdfs://10.10.10.10:8020/user/testuser/spark/eventlog
spark.authenticate -> true
SPARK_SUBMIT -> true
spark.serializer -> org.apache.spark.serializer.KryoSerializer
spark.executor.extraJavaOptions -> -XX:+PrintGCDetails -Dkey=value -Dnumbers="one two three"
spark.app.name -> org.apache.spark.examples.SparkPi
spark.jars -> file:/home/testuser/spark/lib/spark-examples-1.3.1-hadoop2.6.0.jar
spark.master -> yarn-client
spark.executor.cores -> 1
spark.eventlog.enabled -> true
Classpath elements:
file:/home/testuser/spark/lib/spark-examples-1.3.1-hadoop2.6.0.jar


15/05/13 02:03:58 INFO spark.SparkContext: Running Spark version 1.3.1
15/05/13 02:03:58 INFO spark.SparkContext: Spark configuration:
spark.app.name=Spark Pi
spark.authenticate=true
spark.eventlog.dir=hdfs://10.10.10.10:8020/user/testuser/spark/eventlog
spark.eventlog.enabled=true
spark.executor.cores=1
spark.executor.extraJavaOptions=-XX:+PrintGCDetails -Dkey=value -Dnumbers="one two three"
spark.executor.instances=1
spark.jars=file:/home/testuser/spark/lib/spark-examples-1.3.1-hadoop2.6.0.jar
spark.logConf=true
spark.master=yarn-client
spark.serializer=org.apache.spark.serializer.KryoSerializer
spark.yarn.access.namenodes=hdfs://10.10.10.10:8020
15/05/13 02:03:59 WARN util.NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
15/05/13 02:04:00 INFO spark.SecurityManager: Changing view acls to: testuser
15/05/13 02:04:00 INFO spark.SecurityManager: Changing modify acls to: testuser
15/05/13 02:04:00 INFO spark.SecurityManager: adding secret to credentials in yarn mode
15/05/13 02:04:00 INFO spark.SecurityManager: SecurityManager: authentication enabled; ui acls disabled; users with view permissions: Set(testuser); users with modify permissions: Set(testuser)
15/05/13 02:04:02 INFO slf4j.Slf4jLogger: Slf4jLogger started
15/05/13 02:04:02 INFO Remoting: Starting remoting
15/05/13 02:04:02 INFO Remoting: Remoting started; listening on addresses :[akka.tcp://sparkDriver@ip-10-10-127-10.ec2.internal:48035]
15/05/13 02:04:02 INFO util.Utils: Successfully started service 'sparkDriver' on port 48035.
15/05/13 02:04:02 INFO spark.SparkEnv: Registering MapOutputTracker
15/05/13 02:04:02 INFO spark.SparkEnv: Registering BlockManagerMaster
15/05/13 02:04:03 INFO storage.DiskBlockManager: Created local directory at /tmp/spark-ff55f4f2-184a-4619-bc95-6913ba76ce30/blockmgr-64dc623c-7b7b-4a2d-8591-5f2dd55a3bb3
15/05/13 02:04:03 INFO storage.MemoryStore: MemoryStore started with capacity 267.3 MB
15/05/13 02:04:03 INFO spark.HttpFileServer: HTTP File server directory is /tmp/spark-2912740b-d416-4642-b0c2-8ee17ca602f1/httpd-6c162e1a-61e0-46ae-979c-688339e6e462
15/05/13 02:04:03 INFO spark.HttpServer: Starting HTTP Server
15/05/13 02:04:03 INFO server.Server: jetty-8.y.z-SNAPSHOT
15/05/13 02:04:03 INFO server.AbstractConnector: Started SocketConnector@0.0.0.0:48052
15/05/13 02:04:03 INFO util.Utils: Successfully started service 'HTTP file server' on port 48052.
15/05/13 02:04:03 INFO spark.SparkEnv: Registering OutputCommitCoordinator
15/05/13 02:04:03 INFO server.Server: jetty-8.y.z-SNAPSHOT
15/05/13 02:04:04 INFO server.AbstractConnector: Started SelectChannelConnector@0.0.0.0:4040
15/05/13 02:04:04 INFO util.Utils: Successfully started service 'SparkUI' on port 4040.
15/05/13 02:04:04 INFO ui.SparkUI: Started SparkUI at http://ip-10-10-127-10.ec2.internal:4040
15/05/13 02:04:04 INFO spark.SparkContext: Added JAR file:/home/testuser/spark/lib/spark-examples-1.3.1-hadoop2.6.0.jar at http://10.10.127.10:48052/jars/spark-examples-1.3.1-hadoop2.6.0.jar with timestamp 1431482644936
15/05/13 02:04:05 INFO client.RMProxy: Connecting to ResourceManager at ip-10-10-127-10.ec2.internal/10.10.127.10:8032
15/05/13 02:04:06 INFO yarn.Client: Requesting a new application from cluster with 1 NodeManagers
15/05/13 02:04:06 INFO yarn.Client: Verifying our application has not requested more than the maximum memory capability of the cluster (8192 MB per container)
15/05/13 02:04:06 INFO yarn.Client: Will allocate AM container, with 896 MB memory including 384 MB overhead
15/05/13 02:04:06 INFO yarn.Client: Setting up container launch context for our AM
15/05/13 02:04:06 INFO yarn.Client: Preparing resources for our AM container
15/05/13 02:04:08 INFO hdfs.DFSClient: Created HDFS_DELEGATION_TOKEN token 4 for testuser on 10.10.10.10:8020
15/05/13 02:04:08 INFO yarn.Client: Uploading resource file:/home/testuser/spark/lib/spark-assembly-1.3.1-hadoop2.6.0.jar -> hdfs://10.10.10.10:8020/user/testuser/.sparkStaging/application_1431482563856_0001/spark-assembly-1.3.1-hadoop2.6.0.jar
15/05/13 02:04:13 INFO yarn.Client: Setting up the launch environment for our AM container
15/05/13 02:04:13 INFO spark.SecurityManager: Changing view acls to: testuser
15/05/13 02:04:13 INFO spark.SecurityManager: Changing modify acls to: testuser
15/05/13 02:04:13 INFO spark.SecurityManager: SecurityManager: authentication enabled; ui acls disabled; users with view permissions: Set(testuser); users with modify permissions: Set(testuser)
15/05/13 02:04:13 INFO yarn.Client: Submitting application 1 to ResourceManager
15/05/13 02:04:16 INFO impl.YarnClientImpl: Application submission is not finished, submitted application application_1431482563856_0001 is still in NEW
15/05/13 02:04:17 INFO impl.YarnClientImpl: Submitted application application_1431482563856_0001
15/05/13 02:04:18 INFO yarn.Client: Application report for application_1431482563856_0001 (state: ACCEPTED)
15/05/13 02:04:18 INFO yarn.Client:
         client token: N/A
         diagnostics: N/A
         ApplicationMaster host: N/A
         ApplicationMaster RPC port: -1
         queue: default
         start time: 1431482654058
         final status: UNDEFINED
         tracking URL: https://ip-10-10-127-10.ec2.internal:8090/proxy/application_1431482563856_0001/
         user: testuser
15/05/13 02:04:19 INFO yarn.Client: Application report for application_1431482563856_0001 (state: ACCEPTED)
15/05/13 02:04:20 INFO yarn.Client: Application report for application_1431482563856_0001 (state: ACCEPTED)
15/05/13 02:04:21 INFO yarn.Client: Application report for application_1431482563856_0001 (state: ACCEPTED)
...(TRUNCATED)...
15/05/13 02:09:04 INFO yarn.Client: Application report for application_1431482563856_0001 (state: ACCEPTED)
15/05/13 02:09:05 INFO yarn.Client: Application report for application_1431482563856_0001 (state: ACCEPTED)
15/05/13 02:09:06 INFO yarn.Client: Application report for application_1431482563856_0001 (state: FAILED)
15/05/13 02:09:06 INFO yarn.Client:
         client token: N/A
         diagnostics: Application application_1431482563856_0001 failed 2 times due to AM Container for appattempt_1431482563856_0001_000002 exited with  exitCode: 10
For more detailed output, check application tracking page:https://ip-10-10-127-10.ec2.internal:8090/proxy/application_1431482563856_0001/Then, click on links to logs of each attempt.
Diagnostics: Exception from container-launch.
Container id: container_1431482563856_0001_02_000001
Exit code: 10
Stack trace: ExitCodeException exitCode=10:
        at org.apache.hadoop.util.Shell.runCommand(Shell.java:538)
        at org.apache.hadoop.util.Shell.run(Shell.java:455)
        at org.apache.hadoop.util.Shell$ShellCommandExecutor.execute(Shell.java:715)
        at org.apache.hadoop.yarn.server.nodemanager.LinuxContainerExecutor.launchContainer(LinuxContainerExecutor.java:293)
        at org.apache.hadoop.yarn.server.nodemanager.containermanager.launcher.ContainerLaunch.call(ContainerLaunch.java:302)
        at org.apache.hadoop.yarn.server.nodemanager.containermanager.launcher.ContainerLaunch.call(ContainerLaunch.java:82)
        at java.util.concurrent.FutureTask.run(FutureTask.java:262)
        at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
        at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
        at java.lang.Thread.run(Thread.java:745)

Shell output: main : command provided 1
main : user is testuser
main : requested yarn user is testuser


Container exited with a non-zero exit code 10
Failing this attempt. Failing the application.
         ApplicationMaster host: N/A
         ApplicationMaster RPC port: -1
         queue: default
         start time: 1431482654058
         final status: FAILED
         tracking URL: https://ip-10-10-127-10.ec2.internal:8090/cluster/app/application_1431482563856_0001
         user: testuser
Exception in thread "main" org.apache.spark.SparkException: Yarn application has already ended! It might have been killed or unable to launch application master.
        at org.apache.spark.scheduler.cluster.YarnClientSchedulerBackend.waitForApplication(YarnClientSchedulerBackend.scala:113)
        at org.apache.spark.scheduler.cluster.YarnClientSchedulerBackend.start(YarnClientSchedulerBackend.scala:59)
        at org.apache.spark.scheduler.TaskSchedulerImpl.start(TaskSchedulerImpl.scala:141)
        at org.apache.spark.SparkContext.<init>(SparkContext.scala:381)
        at org.apache.spark.examples.SparkPi$.main(SparkPi.scala:28)
        at org.apache.spark.examples.SparkPi.main(SparkPi.scala)
        at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
        at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
        at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
        at java.lang.reflect.Method.invoke(Method.java:606)
        at org.apache.spark.deploy.SparkSubmit$.org$apache$spark$deploy$SparkSubmit$$runMain(SparkSubmit.scala:569)
        at org.apache.spark.deploy.SparkSubmit$.doRunMain$1(SparkSubmit.scala:166)
        at org.apache.spark.deploy.SparkSubmit$.submit(SparkSubmit.scala:189)
        at org.apache.spark.deploy.SparkSubmit$.main(SparkSubmit.scala:110)
        at org.apache.spark.deploy.SparkSubmit.main(SparkSubmit.scala)
[testuser@ip-10-10-127-10 spark]$
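
A side note on the very first line of that output: bash reporting "-Dspark.history.kerberos.principal=...: No such file or directory" means the shell tried to execute that -D string as a command, which usually points to an options assignment in spark-env.sh that is split across lines without quotes or a trailing backslash. A minimal sketch of the quoting I'd expect around line 54 (the SPARK_HISTORY_OPTS variable name and the keytab path are assumptions; the principal is taken from the error message):

# spark-env.sh: keep the whole option string inside one quoted assignment,
# continuing lines with a backslash so bash never sees a bare -D... token
export SPARK_HISTORY_OPTS="-Dspark.history.kerberos.enabled=true \
  -Dspark.history.kerberos.principal=spark/ip-10-10-127-10.ec2.internal@MALARD.LOCAL \
  -Dspark.history.kerberos.keytab=/path/to/spark.keytab"

This doesn't affect the YARN submission itself, but it's easy to fix and removes noise from the client output.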


###: NODEMANAGER :###
2015-05-13 02:04:17,868 INFO SecurityLogger.org.apache.hadoop.ipc.Server: Auth successful for appattempt_1431482563856_0001_000001 (auth:SIMPLE)
2015-05-13 02:04:18,014 WARN org.apache.hadoop.security.ShellBasedUnixGroupsMapping: got exception trying to get groups for user appattempt_1431482563856_0001_000001: id: appattempt_1431482563856_0001_000001: no such user

2015-05-13 02:04:18,015 WARN org.apache.hadoop.security.UserGroupInformation: No groups available for user appattempt_1431482563856_0001_000001
2015-05-13 02:04:18,015 INFO SecurityLogger.org.apache.hadoop.security.authorize.ServiceAuthorizationManager: Authorization successful for appattempt_1431482563856_0001_000001 (auth:TOKEN) for protocol=interface org.apache.hadoop.yarn.api.ContainerManagementProtocolPB
2015-05-13 02:04:18,192 INFO org.apache.hadoop.yarn.server.nodemanager.containermanager.ContainerManagerImpl: Start request for container_1431482563856_0001_01_000001 by user testuser
2015-05-13 02:04:18,255 INFO org.apache.hadoop.yarn.server.nodemanager.containermanager.ContainerManagerImpl: Creating a new application reference for app application_1431482563856_0001
2015-05-13 02:04:18,271 INFO org.apache.hadoop.yarn.server.nodemanager.containermanager.application.Application: Application application_1431482563856_0001 transitioned from NEW to INITING
2015-05-13 02:04:18,271 INFO org.apache.hadoop.yarn.server.nodemanager.containermanager.application.Application: Adding container_1431482563856_0001_01_000001 to application application_1431482563856_0001
2015-05-13 02:04:18,290 INFO org.apache.hadoop.yarn.server.nodemanager.NMAuditLogger: USER=testuser     IP=10.10.127.10 OPERATION=Start Container Request       TARGET=ContainerManageImpl RESULT=SUCCESS   APPID=application_1431482563856_0001    CONTAINERID=container_1431482563856_0001_01_000001
2015-05-13 02:04:18,308 INFO org.apache.hadoop.yarn.server.nodemanager.containermanager.application.Application: Application application_1431482563856_0001 transitioned from INITING to RUNNING
2015-05-13 02:04:18,312 INFO org.apache.hadoop.yarn.server.nodemanager.containermanager.container.Container: Container container_1431482563856_0001_01_000001 transitioned from NEW to LOCALIZING
2015-05-13 02:04:18,312 INFO org.apache.hadoop.yarn.server.nodemanager.containermanager.AuxServices: Got event CONTAINER_INIT for appId application_1431482563856_0001
2015-05-13 02:04:18,389 INFO org.apache.hadoop.yarn.server.nodemanager.containermanager.localizer.LocalizedResource: Resource hdfs://10.10.10.10:8020/user/testuser/.sparkStaging/application_1431482563856_0001/spark-assembly-1.3.1-hadoop2.6.0.jar transitioned from INIT to DOWNLOADING
2015-05-13 02:04:18,389 INFO org.apache.hadoop.yarn.server.nodemanager.containermanager.localizer.ResourceLocalizationService: Created localizer for container_1431482563856_0001_01_000001
2015-05-13 02:04:18,885 INFO org.apache.hadoop.yarn.server.nodemanager.containermanager.localizer.ResourceLocalizationService: Writing credentials to the nmPrivate file /var/hadoop/tmp/yarn/nm-local-dir/nmPrivate/container_1431482563856_0001_01_000001.tokens. Credentials list:
2015-05-13 02:04:23,698 INFO SecurityLogger.org.apache.hadoop.ipc.Server: Auth successful for testing (auth:SIMPLE)
2015-05-13 02:04:24,028 INFO SecurityLogger.org.apache.hadoop.security.authorize.ServiceAuthorizationManager: Authorization successful for testing (auth:TOKEN) for protocol=interface org.apache.hadoop.yarn.server.nodemanager.api.LocalizationProtocolPB
2015-05-13 02:04:33,484 INFO org.apache.hadoop.yarn.server.nodemanager.containermanager.localizer.LocalizedResource: Resource hdfs://10.10.10.10:8020/user/testuser/.sparkStaging/application_1431482563856_0001/spark-assembly-1.3.1-hadoop2.6.0.jar(->/var/hadoop/tmp/yarn/nm-local-dir/usercache/testuser/filecache/10/spark-assembly-1.3.1-hadoop2.6.0.jar) transitioned from DOWNLOADING to LOCALIZED
2015-05-13 02:04:33,488 INFO org.apache.hadoop.yarn.server.nodemanager.containermanager.container.Container: Container container_1431482563856_0001_01_000001 transitioned from LOCALIZING to LOCALIZED
2015-05-13 02:04:33,605 INFO org.apache.hadoop.yarn.server.nodemanager.containermanager.container.Container: Container container_1431482563856_0001_01_000001 transitioned from LOCALIZED to RUNNING
2015-05-13 02:04:34,557 INFO org.apache.hadoop.yarn.server.nodemanager.containermanager.monitor.ContainersMonitorImpl: Starting resource-monitoring for container_1431482563856_0001_01_000001
2015-05-13 02:04:34,904 INFO org.apache.hadoop.yarn.server.nodemanager.containermanager.monitor.ContainersMonitorImpl: Memory usage of ProcessTree 23815 for container-id container_1431482563856_0001_01_000001: 59.3 MB of 1 GB physical memory used; 1.2 GB of 2.1 GB virtual memory used
2015-05-13 02:04:38,107 INFO org.apache.hadoop.yarn.server.nodemanager.containermanager.monitor.ContainersMonitorImpl: Memory usage of ProcessTree 23815 for container-id container_1431482563856_0001_01_000001: 100.6 MB of 1 GB physical memory used; 1.2 GB of 2.1 GB virtual memory used
2015-05-13 02:04:41,182 INFO org.apache.hadoop.yarn.server.nodemanager.containermanager.monitor.ContainersMonitorImpl: Memory usage of ProcessTree 23815 for container-id container_1431482563856_0001_01_000001: 116.2 MB of 1 GB physical memory used; 1.2 GB of 2.1 GB virtual memory used
...(TRUNCATED: memory usage steady at ~115.7 MB of 1 GB)...
2015-05-13 02:06:42,273 INFO org.apache.hadoop.yarn.server.nodemanager.containermanager.monitor.ContainersMonitorImpl: Memory usage of ProcessTree 23815 for container-id container_1431482563856_0001_01_000001: 117.0 MB of 1 GB physical memory used; 1.2 GB of 2.1 GB virtual memory used
2015-05-13 02:06:45,283 INFO org.apache.hadoop.yarn.server.nodemanager.containermanager.monitor.ContainersMonitorImpl: Memory usage of ProcessTree 23815 for container-id container_1431482563856_0001_01_000001: 117.0 MB of 1 GB physical memory used; 1.2 GB of 2.1 GB virtual memory used
2015-05-13 02:06:48,293 INFO org.apache.hadoop.yarn.server.nodemanager.containermanager.monitor.ContainersMonitorImpl: Memory usage of ProcessTree 23815 for container-id container_1431482563856_0001_01_000001: 117.0 MB of 1 GB physical memory used; 1.2 GB of 2.1 GB virtual memory used
2015-05-13 02:06:49,029 WARN org.apache.hadoop.yarn.server.nodemanager.LinuxContainerExecutor: Exit code from container container_1431482563856_0001_01_000001 is : 10
2015-05-13 02:06:49,030 WARN org.apache.hadoop.yarn.server.nodemanager.LinuxContainerExecutor: Exception from container-launch with container ID: container_1431482563856_0001_01_000001 and exit code: 10
ExitCodeException exitCode=10:
        at org.apache.hadoop.util.Shell.runCommand(Shell.java:538)
        at org.apache.hadoop.util.Shell.run(Shell.java:455)
        at org.apache.hadoop.util.Shell$ShellCommandExecutor.execute(Shell.java:715)
        at org.apache.hadoop.yarn.server.nodemanager.LinuxContainerExecutor.launchContainer(LinuxContainerExecutor.java:293)
        at org.apache.hadoop.yarn.server.nodemanager.containermanager.launcher.ContainerLaunch.call(ContainerLaunch.java:302)
        at org.apache.hadoop.yarn.server.nodemanager.containermanager.launcher.ContainerLaunch.call(ContainerLaunch.java:82)
        at java.util.concurrent.FutureTask.run(FutureTask.java:262)
        at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
        at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
        at java.lang.Thread.run(Thread.java:745)
2015-05-13 02:06:49,032 INFO org.apache.hadoop.yarn.server.nodemanager.ContainerExecutor: Exception from container-launch.
2015-05-13 02:06:49,032 INFO org.apache.hadoop.yarn.server.nodemanager.ContainerExecutor: Container id: container_1431482563856_0001_01_000001
2015-05-13 02:06:49,032 INFO org.apache.hadoop.yarn.server.nodemanager.ContainerExecutor: Exit code: 10
2015-05-13 02:06:49,032 INFO org.apache.hadoop.yarn.server.nodemanager.ContainerExecutor: Stack trace: ExitCodeException exitCode=10:
2015-05-13 02:06:49,032 INFO org.apache.hadoop.yarn.server.nodemanager.ContainerExecutor:       at org.apache.hadoop.util.Shell.runCommand(Shell.java:538)
2015-05-13 02:06:49,032 INFO org.apache.hadoop.yarn.server.nodemanager.ContainerExecutor:       at org.apache.hadoop.util.Shell.run(Shell.java:455)
2015-05-13 02:06:49,032 INFO org.apache.hadoop.yarn.server.nodemanager.ContainerExecutor:       at org.apache.hadoop.util.Shell$ShellCommandExecutor.execute(Shell.java:715)
2015-05-13 02:06:49,032 INFO org.apache.hadoop.yarn.server.nodemanager.ContainerExecutor:       at org.apache.hadoop.yarn.server.nodemanager.LinuxContainerExecutor.launchContainer(LinuxContainerExecutor.java:293)
2015-05-13 02:06:49,033 INFO org.apache.hadoop.yarn.server.nodemanager.ContainerExecutor:       at org.apache.hadoop.yarn.server.nodemanager.containermanager.launcher.ContainerLaunch.call(ContainerLaunch.java:302)
2015-05-13 02:06:49,033 INFO org.apache.hadoop.yarn.server.nodemanager.ContainerExecutor:       at org.apache.hadoop.yarn.server.nodemanager.containermanager.launcher.ContainerLaunch.call(ContainerLaunch.java:82)
2015-05-13 02:06:49,033 INFO org.apache.hadoop.yarn.server.nodemanager.ContainerExecutor:       at java.util.concurrent.FutureTask.run(FutureTask.java:262)
2015-05-13 02:06:49,033 INFO org.apache.hadoop.yarn.server.nodemanager.ContainerExecutor:       at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
2015-05-13 02:06:49,033 INFO org.apache.hadoop.yarn.server.nodemanager.ContainerExecutor:       at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
2015-05-13 02:06:49,033 INFO org.apache.hadoop.yarn.server.nodemanager.ContainerExecutor:       at java.lang.Thread.run(Thread.java:745)
2015-05-13 02:06:49,033 INFO org.apache.hadoop.yarn.server.nodemanager.ContainerExecutor:
2015-05-13 02:06:49,033 INFO org.apache.hadoop.yarn.server.nodemanager.ContainerExecutor: Shell output: main : command provided 1
2015-05-13 02:06:49,033 INFO org.apache.hadoop.yarn.server.nodemanager.ContainerExecutor: main : user is testuser
2015-05-13 02:06:49,033 INFO org.apache.hadoop.yarn.server.nodemanager.ContainerExecutor: main : requested yarn user is testuser
2015-05-13 02:06:49,033 WARN org.apache.hadoop.yarn.server.nodemanager.containermanager.launcher.ContainerLaunch: Container exited with a non-zero exit code 10
2015-05-13 02:06:49,034 INFO org.apache.hadoop.yarn.server.nodemanager.containermanager.container.Container: Container container_1431482563856_0001_01_000001 transitioned from RUNNING to EXITED_WITH_FAILURE
2015-05-13 02:06:49,044 INFO org.apache.hadoop.yarn.server.nodemanager.containermanager.launcher.ContainerLaunch: Cleaning up container container_1431482563856_0001_01_000001
2015-05-13 02:06:49,142 WARN org.apache.hadoop.yarn.server.nodemanager.NMAuditLogger: USER=testuser     OPERATION=Container Finished - Failed   TARGET=ContainerImpl    RESULT=FAILURE  DESCRIPTION=Container failed with state: EXITED_WITH_FAILURE        APPID=application_1431482563856_0001    CONTAINERID=container_1431482563856_0001_01_000001
2015-05-13 02:06:49,144 INFO org.apache.hadoop.yarn.server.nodemanager.containermanager.container.Container: Container container_1431482563856_0001_01_000001 transitioned from EXITED_WITH_FAILURE to DONE
2015-05-13 02:06:49,144 INFO org.apache.hadoop.yarn.server.nodemanager.containermanager.application.Application: Removing container_1431482563856_0001_01_000001 from application application_1431482563856_0001
2015-05-13 02:06:49,144 INFO org.apache.hadoop.yarn.server.nodemanager.LinuxContainerExecutor: Deleting absolute path : /var/hadoop/tmp/yarn/nm-local-dir/usercache/testuser/appcache/application_1431482563856_0001/container_1431482563856_0001_01_000001
2015-05-13 02:06:49,144 INFO org.apache.hadoop.yarn.server.nodemanager.containermanager.AuxServices: Got event CONTAINER_STOP for appId application_1431482563856_0001
2015-05-13 02:06:50,717 INFO org.apache.hadoop.yarn.server.nodemanager.NodeStatusUpdaterImpl: Removed completed containers from NM context: [container_1431482563856_0001_01_000001]
2015-05-13 02:06:50,734 INFO SecurityLogger.org.apache.hadoop.ipc.Server: Auth successful for appattempt_1431482563856_0001_000002 (auth:SIMPLE)
2015-05-13 02:06:50,744 WARN org.apache.hadoop.security.ShellBasedUnixGroupsMapping: got exception trying to get groups for user appattempt_1431482563856_0001_000002: id: appattempt_1431482563856_0001_000002: no such user

2015-05-13 02:06:50,744 WARN org.apache.hadoop.security.UserGroupInformation: No groups available for user appattempt_1431482563856_0001_000002
2015-05-13 02:06:50,744 INFO SecurityLogger.org.apache.hadoop.security.authorize.ServiceAuthorizationManager: Authorization successful for appattempt_1431482563856_0001_000002 (auth:TOKEN) for protocol=interface org.apache.hadoop.yarn.api.ContainerManagementProtocolPB
2015-05-13 02:06:50,746 INFO org.apache.hadoop.yarn.server.nodemanager.containermanager.ContainerManagerImpl: Start request for container_1431482563856_0001_02_000001 by user testuser
2015-05-13 02:06:50,746 INFO org.apache.hadoop.yarn.server.nodemanager.containermanager.application.Application: Adding container_1431482563856_0001_02_000001 to application application_1431482563856_0001
2015-05-13 02:06:50,746 INFO org.apache.hadoop.yarn.server.nodemanager.containermanager.container.Container: Container container_1431482563856_0001_02_000001 transitioned from NEW to LOCALIZING
2015-05-13 02:06:50,746 INFO org.apache.hadoop.yarn.server.nodemanager.containermanager.AuxServices: Got event CONTAINER_INIT for appId application_1431482563856_0001
2015-05-13 02:06:50,747 INFO org.apache.hadoop.yarn.server.nodemanager.containermanager.container.Container: Container container_1431482563856_0001_02_000001 transitioned from LOCALIZING to LOCALIZED
2015-05-13 02:06:50,746 INFO org.apache.hadoop.yarn.server.nodemanager.NMAuditLogger: USER=testuser     IP=10.10.127.10 OPERATION=Start Container Request       TARGET=ContainerManageImpl RESULT=SUCCESS   APPID=application_1431482563856_0001    CONTAINERID=container_1431482563856_0001_02_000001
2015-05-13 02:06:50,824 INFO org.apache.hadoop.yarn.server.nodemanager.containermanager.container.Container: Container container_1431482563856_0001_02_000001 transitioned from LOCALIZED to RUNNING
2015-05-13 02:06:51,293 INFO org.apache.hadoop.yarn.server.nodemanager.containermanager.monitor.ContainersMonitorImpl: Starting resource-monitoring for container_1431482563856_0001_02_000001
2015-05-13 02:06:51,293 INFO org.apache.hadoop.yarn.server.nodemanager.containermanager.monitor.ContainersMonitorImpl: Stopping resource-monitoring for container_1431482563856_0001_01_000001
2015-05-13 02:06:51,362 INFO org.apache.hadoop.yarn.server.nodemanager.containermanager.monitor.ContainersMonitorImpl: Memory usage of ProcessTree 23874 for container-id container_1431482563856_0001_02_000001: 43.5 MB of 1 GB physical memory used; 1.2 GB of 2.1 GB virtual memory used
[yarn@ip-10-10-128-10 hadoop]$
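
Note that the NodeManager only relays the exit status here: the LinuxContainerExecutor saw the AM container (container_1431482563856_0001_01_000001) die with code 10, which Spark's YARN ApplicationMaster uses for an uncaught exception in 1.3. The actual stack trace will be in that container's stderr rather than in any of the logs above. A sketch of how to pull it (the local log path is an assumption; it depends on yarn.nodemanager.log-dirs):

# With log aggregation enabled, fetch all container logs for the failed app:
yarn logs -applicationId application_1431482563856_0001

# Without aggregation, read the AM attempt's stderr on the NodeManager host:
cat <yarn.nodemanager.log-dirs>/application_1431482563856_0001/container_1431482563856_0001_01_000001/stderr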



###: RESOURCE MANAGER :###
2015-05-13 02:03:34,179 INFO SecurityLogger.org.apache.hadoop.ipc.Server: Auth successful for testuser@MALARD.LOCAL (auth:KERBEROS)
2015-05-13 02:03:34,272 INFO SecurityLogger.org.apache.hadoop.security.authorize.ServiceAuthorizationManager: Authorization successful for testuser@MALARD.LOCAL (auth:KERBEROS) for protocol=interface org.apache.hadoop.yarn.api.ApplicationClientProtocolPB
2015-05-13 02:04:06,728 INFO SecurityLogger.org.apache.hadoop.ipc.Server: Auth successful for testuser@MALARD.LOCAL (auth:KERBEROS)
2015-05-13 02:04:06,776 INFO SecurityLogger.org.apache.hadoop.security.authorize.ServiceAuthorizationManager: Authorization successful for testuser@MALARD.LOCAL (auth:KERBEROS) for protocol=interface org.apache.hadoop.yarn.api.ApplicationClientProtocolPB
2015-05-13 02:04:06,865 INFO org.apache.hadoop.yarn.server.resourcemanager.ClientRMService: Allocated new applicationId: 1
2015-05-13 02:04:14,058 WARN org.apache.hadoop.yarn.server.resourcemanager.rmapp.RMAppImpl: The specific max attempts: 0 for application: 1 is invalid, because it is out of the range [1, 2]. Use the global max attempts instead.
2015-05-13 02:04:14,073 INFO org.apache.hadoop.yarn.server.resourcemanager.ClientRMService: Application with id 1 submitted by user testuser
2015-05-13 02:04:14,074 INFO org.apache.hadoop.yarn.server.resourcemanager.RMAuditLogger: USER=testuser IP=10.10.127.10 OPERATION=Submit Application Request    TARGET=ClientRMService      RESULT=SUCCESS  APPID=application_1431482563856_0001
2015-05-13 02:04:14,396 INFO org.apache.hadoop.yarn.server.resourcemanager.security.DelegationTokenRenewer: application_1431482563856_0001 found existing hdfs token Kind: HDFS_DELEGATION_TOKEN, Service: 10.10.10.10:8020, Ident: (HDFS_DELEGATION_TOKEN token 4 for testuser)
2015-05-13 02:04:16,914 INFO org.apache.hadoop.yarn.server.resourcemanager.security.DelegationTokenRenewer: Renewed delegation-token= [Kind: HDFS_DELEGATION_TOKEN, Service: 10.10.10.10:8020, Ident: (HDFS_DELEGATION_TOKEN token 4 for testuser);exp=1431569056547], for application_1431482563856_0001
2015-05-13 02:04:16,914 INFO org.apache.hadoop.yarn.server.resourcemanager.security.DelegationTokenRenewer: Renew Kind: HDFS_DELEGATION_TOKEN, Service: 10.10.10.10:8020, Ident: (HDFS_DELEGATION_TOKEN token 4 for testuser);exp=1431569056547 in 86399633 ms, appId = application_1431482563856_0001
2015-05-13 02:04:16,914 INFO org.apache.hadoop.yarn.server.resourcemanager.rmapp.RMAppImpl: Storing application with id application_1431482563856_0001
2015-05-13 02:04:16,916 INFO org.apache.hadoop.yarn.server.resourcemanager.rmapp.RMAppImpl: application_1431482563856_0001 State change from NEW to NEW_SAVING
2015-05-13 02:04:16,922 INFO org.apache.hadoop.yarn.server.resourcemanager.recovery.RMStateStore: Storing info for app: application_1431482563856_0001
2015-05-13 02:04:16,922 INFO org.apache.hadoop.yarn.server.resourcemanager.rmapp.RMAppImpl: application_1431482563856_0001 State change from NEW_SAVING to SUBMITTED
2015-05-13 02:04:16,924 INFO org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.ParentQueue: Application added - appId: application_1431482563856_0001 user: testuser leaf-queue of parent: root #applications: 1
2015-05-13 02:04:16,924 INFO org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.CapacityScheduler: Accepted application application_1431482563856_0001 from user: testuser, in queue: default
2015-05-13 02:04:16,926 INFO org.apache.hadoop.yarn.server.resourcemanager.rmapp.RMAppImpl: application_1431482563856_0001 State change from SUBMITTED to ACCEPTED
2015-05-13 02:04:16,980 INFO org.apache.hadoop.yarn.server.resourcemanager.ApplicationMasterService: Registering app attempt : appattempt_1431482563856_0001_000001
2015-05-13 02:04:16,981 INFO org.apache.hadoop.yarn.server.resourcemanager.rmapp.attempt.RMAppAttemptImpl: appattempt_1431482563856_0001_000001 State change from NEW to SUBMITTED
2015-05-13 02:04:17,000 INFO org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.LeafQueue: Application application_1431482563856_0001 from user: testuser activated in queue: default
2015-05-13 02:04:17,000 INFO org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.LeafQueue: Application added - appId: application_1431482563856_0001 user: org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.LeafQueue$User@2859fcb7, leaf-queue: default #user-pending-applications: 0 #user-active-applications: 1 #queue-pending-applications: 0 #queue-active-applications: 1
2015-05-13 02:04:17,001 INFO org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.CapacityScheduler: Added Application Attempt appattempt_1431482563856_0001_000001 to scheduler from user testuser in queue default
2015-05-13 02:04:17,002 INFO org.apache.hadoop.yarn.server.resourcemanager.rmapp.attempt.RMAppAttemptImpl: appattempt_1431482563856_0001_000001 State change from SUBMITTED to SCHEDULED
2015-05-13 02:04:17,514 INFO org.apache.hadoop.yarn.server.resourcemanager.rmcontainer.RMContainerImpl: container_1431482563856_0001_01_000001 Container Transitioned from NEW to ALLOCATED
2015-05-13 02:04:17,514 INFO org.apache.hadoop.yarn.server.resourcemanager.RMAuditLogger: USER=testuser OPERATION=AM Allocated Container        TARGET=SchedulerApp     RESULT=SUCCESS      APPID=application_1431482563856_0001    CONTAINERID=container_1431482563856_0001_01_000001
2015-05-13 02:04:17,514 INFO org.apache.hadoop.yarn.server.resourcemanager.scheduler.SchedulerNode: Assigned container container_1431482563856_0001_01_000001 of capacity <memory:1024, vCores:1> on host ip-10-10-128-10.ec2.internal:9032, which has 1 containers, <memory:1024, vCores:1> used and <memory:7168, vCores:7> available after allocation
2015-05-13 02:04:17,515 INFO org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.LeafQueue: assignedContainer application attempt=appattempt_1431482563856_0001_000001 container=Container: [ContainerId: container_1431482563856_0001_01_000001, NodeId: ip-10-10-128-10.ec2.internal:9032, NodeHttpAddress: ip-10-10-128-10.ec2.internal:8090, Resource: <memory:1024, vCores:1>, Priority: 0, Token: null, ] queue=default: capacity=1.0, absoluteCapacity=1.0, usedResources=<memory:0, vCores:0>, usedCapacity=0.0, absoluteUsedCapacity=0.0, numApps=1, numContainers=0 clusterResource=<memory:8192, vCores:8>
2015-05-13 02:04:17,515 INFO org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.ParentQueue: Re-sorting assigned queue: root.default stats: default: capacity=1.0, absoluteCapacity=1.0, usedResources=<memory:1024, vCores:1>, usedCapacity=0.125, absoluteUsedCapacity=0.125, numApps=1, numContainers=1
2015-05-13 02:04:17,515 INFO org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.ParentQueue: assignedContainer queue=root usedCapacity=0.125 absoluteUsedCapacity=0.125 used=<memory:1024, vCores:1> cluster=<memory:8192, vCores:8>
2015-05-13 02:04:17,544 INFO org.apache.hadoop.yarn.server.resourcemanager.security.NMTokenSecretManagerInRM: Sending NMToken for nodeId : ip-10-10-128-10.ec2.internal:9032 for container : container_1431482563856_0001_01_000001
2015-05-13 02:04:17,564 INFO org.apache.hadoop.yarn.server.resourcemanager.rmcontainer.RMContainerImpl: container_1431482563856_0001_01_000001 Container Transitioned from ALLOCATED to ACQUIRED
2015-05-13 02:04:17,565 INFO org.apache.hadoop.yarn.server.resourcemanager.security.NMTokenSecretManagerInRM: Clear node set for appattempt_1431482563856_0001_000001
2015-05-13 02:04:17,567 INFO org.apache.hadoop.yarn.server.resourcemanager.rmapp.attempt.RMAppAttemptImpl: Storing attempt: AppId: application_1431482563856_0001 AttemptId: appattempt_1431482563856_0001_000001 MasterContainer: Container: [ContainerId: container_1431482563856_0001_01_000001, NodeId: ip-10-10-128-10.ec2.internal:9032, NodeHttpAddress: ip-10-10-128-10.ec2.internal:8090, Resource: <memory:1024, vCores:1>, Priority: 0, Token: Token { kind: ContainerToken, service: 10.10.128.10:9032 }, ]
2015-05-13 02:04:17,568 INFO org.apache.hadoop.yarn.server.resourcemanager.rmapp.attempt.RMAppAttemptImpl: appattempt_1431482563856_0001_000001 State change from SCHEDULED to ALLOCATED_SAVING
2015-05-13 02:04:17,588 INFO org.apache.hadoop.yarn.server.resourcemanager.rmapp.attempt.RMAppAttemptImpl: appattempt_1431482563856_0001_000001 State change from ALLOCATED_SAVING to ALLOCATED
2015-05-13 02:04:17,590 INFO org.apache.hadoop.yarn.server.resourcemanager.amlauncher.AMLauncher: Launching masterappattempt_1431482563856_0001_000001
2015-05-13 02:04:17,632 INFO org.apache.hadoop.yarn.server.resourcemanager.amlauncher.AMLauncher: Setting up container Container: [ContainerId: container_1431482563856_0001_01_000001, NodeId: ip-10-10-128-10.ec2.internal:9032, NodeHttpAddress: ip-10-10-128-10.ec2.internal:8090, Resource: <memory:1024, vCores:1>, Priority: 0, Token: Token { kind: ContainerToken, service: 10.10.128.10:9032 }, ] for AM appattempt_1431482563856_0001_000001
2015-05-13 02:04:17,632 INFO org.apache.hadoop.yarn.server.resourcemanager.amlauncher.AMLauncher: Command to launch container container_1431482563856_0001_01_000001 : {{JAVA_HOME}}/bin/java,-server,-Xmx512m,-Djava.io.tmpdir={{PWD}}/tmp,'-Dspark.tachyonStore.folderName=spark-2f93616f-d7d5-4792-832c-71e671bc1bb8','-Dspark.driver.host=ip-10-10-127-10.ec2.internal','-Dspark.driver.port=48035','-Dspark.driver.appUIAddress=http://ip-10-10-127-10.ec2.internal:4040','-Dspark.master=yarn-client','-Dspark.executor.extraJavaOptions=-XX:+PrintGCDetails -Dkey=value -Dnumbers=\"one two three\"','-Dspark.yarn.access.namenodes=hdfs://10.10.10.10:8020','-Dspark.logConf=true','-Dspark.serializer=org.apache.spark.serializer.KryoSerializer','-Dspark.executor.id=<driver>','-Dspark.jars=file:/home/testuser/spark/lib/spark-examples-1.3.1-hadoop2.6.0.jar','-Dspark.executor.instances=1','-Dspark.app.name=Spark Pi','-Dspark.eventlog.dir=hdfs://10.10.10.10:8020/user/testuser/spark/eventlog','-Dspark.executor.cores=1','-Dspark.eventlog.enabled=true','-Dspark.fileserver.uri=http://10.10.127.10:48052','-Dspark.authenticate=true',-Dspark.yarn.app.container.log.dir=<LOG_DIR>,org.apache.spark.deploy.yarn.ExecutorLauncher,--arg,'ip-10-10-127-10.ec2.internal:48035',--executor-memory,1024m,--executor-cores,1,--num-executors ,1,1>,<LOG_DIR>/stdout,2>,<LOG_DIR>/stderr
2015-05-13 02:04:17,632 INFO org.apache.hadoop.yarn.server.resourcemanager.security.AMRMTokenSecretManager: Create AMRMToken for ApplicationAttempt: appattempt_1431482563856_0001_000001
2015-05-13 02:04:17,635 INFO org.apache.hadoop.yarn.server.resourcemanager.security.AMRMTokenSecretManager: Creating password for appattempt_1431482563856_0001_000001
2015-05-13 02:04:18,333 INFO org.apache.hadoop.yarn.server.resourcemanager.amlauncher.AMLauncher: Done launching container Container: [ContainerId: container_1431482563856_0001_01_000001, NodeId: ip-10-10-128-10.ec2.internal:9032, NodeHttpAddress: ip-10-10-128-10.ec2.internal:8090, Resource: <memory:1024, vCores:1>, Priority: 0, Token: Token { kind: ContainerToken, service: 10.10.128.10:9032 }, ] for AM appattempt_1431482563856_0001_000001
2015-05-13 02:04:18,333 INFO org.apache.hadoop.yarn.server.resourcemanager.rmapp.attempt.RMAppAttemptImpl: appattempt_1431482563856_0001_000001 State change from ALLOCATED to LAUNCHED
2015-05-13 02:04:18,538 INFO org.apache.hadoop.yarn.server.resourcemanager.rmcontainer.RMContainerImpl: container_1431482563856_0001_01_000001 Container Transitioned from ACQUIRED to RUNNING
2015-05-13 02:06:49,702 INFO org.apache.hadoop.yarn.server.resourcemanager.rmcontainer.RMContainerImpl: container_1431482563856_0001_01_000001 Container Transitioned from RUNNING to COMPLETED
2015-05-13 02:06:49,702 INFO org.apache.hadoop.yarn.server.resourcemanager.scheduler.common.fica.FiCaSchedulerApp: Completed container: container_1431482563856_0001_01_000001 in state: COMPLETED event:FINISHED
2015-05-13 02:06:49,703 INFO org.apache.hadoop.yarn.server.resourcemanager.RMAuditLogger: USER=testuser OPERATION=AM Released Container TARGET=SchedulerApp     RESULT=SUCCESS      APPID=application_1431482563856_0001    CONTAINERID=container_1431482563856_0001_01_000001
2015-05-13 02:06:49,703 INFO org.apache.hadoop.yarn.server.resourcemanager.scheduler.SchedulerNode: Released container container_1431482563856_0001_01_000001 of capacity <memory:1024, vCores:1> on host ip-10-10-128-10.ec2.internal:9032, which currently has 0 containers, <memory:0, vCores:0> used and <memory:8192, vCores:8> available, release resources=true
2015-05-13 02:06:49,703 INFO org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.LeafQueue: default used=<memory:0, vCores:0> numContainers=0 user=testuser user-resources=<memory:0, vCores:0>
2015-05-13 02:06:49,703 INFO org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.LeafQueue: completedContainer container=Container: [ContainerId: container_1431482563856_0001_01_000001, NodeId: ip-10-10-128-10.ec2.internal:9032, NodeHttpAddress: ip-10-10-128-10.ec2.internal:8090, Resource: <memory:1024, vCores:1>, Priority: 0, Token: Token { kind: ContainerToken, service: 10.10.128.10:9032 }, ] queue=default: capacity=1.0, absoluteCapacity=1.0, usedResources=<memory:0, vCores:0>, usedCapacity=0.0, absoluteUsedCapacity=0.0, numApps=1, numContainers=0 cluster=<memory:8192, vCores:8>
2015-05-13 02:06:49,703 INFO org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.ParentQueue: completedContainer queue=root usedCapacity=0.0 absoluteUsedCapacity=0.0 used=<memory:0, vCores:0> cluster=<memory:8192, vCores:8>
2015-05-13 02:06:49,703 INFO org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.ParentQueue: Re-sorting completed queue: root.default stats: default: capacity=1.0, absoluteCapacity=1.0, usedResources=<memory:0, vCores:0>, usedCapacity=0.0, absoluteUsedCapacity=0.0, numApps=1, numContainers=0
2015-05-13 02:06:49,703 INFO org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.CapacityScheduler: Application attempt appattempt_1431482563856_0001_000001 released container container_1431482563856_0001_01_000001 on node: host: ip-10-10-128-10.ec2.internal:9032 #containers=0 available=8192 used=0 with event: FINISHED
2015-05-13 02:06:49,717 INFO org.apache.hadoop.yarn.server.resourcemanager.rmapp.attempt.RMAppAttemptImpl: Updating application attempt appattempt_1431482563856_0001_000001 with final state: FAILED, and exit status: 10
2015-05-13 02:06:49,718 INFO org.apache.hadoop.yarn.server.resourcemanager.rmapp.attempt.RMAppAttemptImpl: appattempt_1431482563856_0001_000001 State change from LAUNCHED to FINAL_SAVING
2015-05-13 02:06:49,718 INFO org.apache.hadoop.yarn.server.resourcemanager.ApplicationMasterService: Unregistering app attempt : appattempt_1431482563856_0001_000001
2015-05-13 02:06:49,719 INFO org.apache.hadoop.yarn.server.resourcemanager.security.AMRMTokenSecretManager: Application finished, removing password for appattempt_1431482563856_0001_000001
2015-05-13 02:06:49,719 INFO org.apache.hadoop.yarn.server.resourcemanager.rmapp.attempt.RMAppAttemptImpl: appattempt_1431482563856_0001_000001 State change from FINAL_SAVING to FAILED
2015-05-13 02:06:49,719 INFO org.apache.hadoop.yarn.server.resourcemanager.rmapp.RMAppImpl: The number of failed attempts is 1. The max attempts is 2
2015-05-13 02:06:49,720 INFO org.apache.hadoop.yarn.server.resourcemanager.ApplicationMasterService: Registering app attempt : appattempt_1431482563856_0001_000002
2015-05-13 02:06:49,720 INFO org.apache.hadoop.yarn.server.resourcemanager.rmapp.attempt.RMAppAttemptImpl: appattempt_1431482563856_0001_000002 State change from NEW to SUBMITTED
2015-05-13 02:06:49,720 INFO org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.CapacityScheduler: Application Attempt appattempt_1431482563856_0001_000001 is done. finalState=FAILED
2015-05-13 02:06:49,720 INFO org.apache.hadoop.yarn.server.resourcemanager.scheduler.AppSchedulingInfo: Application application_1431482563856_0001 requests cleared
2015-05-13 02:06:49,720 INFO org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.LeafQueue: Application removed - appId: application_1431482563856_0001 user: testuser queue: default #user-pending-applications: 0 #user-active-applications: 0 #queue-pending-applications: 0 #queue-active-applications: 0
2015-05-13 02:06:49,720 INFO org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.LeafQueue: Application application_1431482563856_0001 from user: testuser activated in queue: default
2015-05-13 02:06:49,720 INFO org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.LeafQueue: Application added - appId: application_1431482563856_0001 user: org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.LeafQueue$User@72c32338, leaf-queue: default #user-pending-applications: 0 #user-active-applications: 1 #queue-pending-applications: 0 #queue-active-applications: 1
2015-05-13 02:06:49,720 INFO org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.CapacityScheduler: Added Application Attempt appattempt_1431482563856_0001_000002 to scheduler from user testuser in queue default
2015-05-13 02:06:49,721 INFO org.apache.hadoop.yarn.server.resourcemanager.rmapp.attempt.RMAppAttemptImpl: appattempt_1431482563856_0001_000002 State change from SUBMITTED to SCHEDULED
2015-05-13 02:06:50,705 INFO org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.CapacityScheduler: Null container completed...
2015-05-13 02:06:50,705 INFO org.apache.hadoop.yarn.server.resourcemanager.rmcontainer.RMContainerImpl: container_1431482563856_0001_02_000001 Container Transitioned from NEW to ALLOCATED
2015-05-13 02:06:50,705 INFO org.apache.hadoop.yarn.server.resourcemanager.RMAuditLogger: USER=testuser OPERATION=AM Allocated Container        TARGET=SchedulerApp     RESULT=SUCCESS      APPID=application_1431482563856_0001    CONTAINERID=container_1431482563856_0001_02_000001
2015-05-13 02:06:50,706 INFO org.apache.hadoop.yarn.server.resourcemanager.scheduler.SchedulerNode: Assigned container container_1431482563856_0001_02_000001 of capacity <memory:1024, vCores:1> on host ip-10-10-128-10.ec2.internal:9032, which has 1 containers, <memory:1024, vCores:1> used and <memory:7168, vCores:7> available after allocation
2015-05-13 02:06:50,706 INFO org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.LeafQueue: assignedContainer application attempt=appattempt_1431482563856_0001_000002 container=Container: [ContainerId: container_1431482563856_0001_02_000001, NodeId: ip-10-10-128-10.ec2.internal:9032, NodeHttpAddress: ip-10-10-128-10.ec2.internal:8090, Resource: <memory:1024, vCores:1>, Priority: 0, Token: null, ] queue=default: capacity=1.0, absoluteCapacity=1.0, usedResources=<memory:0, vCores:0>, usedCapacity=0.0, absoluteUsedCapacity=0.0, numApps=1, numContainers=0 clusterResource=<memory:8192, vCores:8>
2015-05-13 02:06:50,706 INFO org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.ParentQueue: Re-sorting assigned queue: root.default stats: default: capacity=1.0, absoluteCapacity=1.0, usedResources=<memory:1024, vCores:1>, usedCapacity=0.125, absoluteUsedCapacity=0.125, numApps=1, numContainers=1
2015-05-13 02:06:50,706 INFO org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.ParentQueue: assignedContainer queue=root usedCapacity=0.125 absoluteUsedCapacity=0.125 used=<memory:1024, vCores:1> cluster=<memory:8192, vCores:8>
2015-05-13 02:06:50,718 INFO org.apache.hadoop.yarn.server.resourcemanager.security.NMTokenSecretManagerInRM: Sending NMToken for nodeId : ip-10-10-128-10.ec2.internal:9032 for container : container_1431482563856_0001_02_000001
2015-05-13 02:06:50,719 INFO org.apache.hadoop.yarn.server.resourcemanager.rmcontainer.RMContainerImpl: container_1431482563856_0001_02_000001 Container Transitioned from ALLOCATED to ACQUIRED
2015-05-13 02:06:50,719 INFO org.apache.hadoop.yarn.server.resourcemanager.security.NMTokenSecretManagerInRM: Clear node set for appattempt_1431482563856_0001_000002
2015-05-13 02:06:50,719 INFO org.apache.hadoop.yarn.server.resourcemanager.rmapp.attempt.RMAppAttemptImpl: Storing attempt: AppId: application_1431482563856_0001 AttemptId: appattempt_1431482563856_0001_000002 MasterContainer: Container: [ContainerId: container_1431482563856_0001_02_000001, NodeId: ip-10-10-128-10.ec2.internal:9032, NodeHttpAddress: ip-10-10-128-10.ec2.internal:8090, Resource: <memory:1024, vCores:1>, Priority: 0, Token: Token { kind: ContainerToken, service: 10.10.128.10:9032 }, ]
2015-05-13 02:06:50,719 INFO org.apache.hadoop.yarn.server.resourcemanager.rmapp.attempt.RMAppAttemptImpl: appattempt_1431482563856_0001_000002 State change from SCHEDULED to ALLOCATED_SAVING
2015-05-13 02:06:50,719 INFO org.apache.hadoop.yarn.server.resourcemanager.rmapp.attempt.RMAppAttemptImpl: appattempt_1431482563856_0001_000002 State change from ALLOCATED_SAVING to ALLOCATED
2015-05-13 02:06:50,720 INFO org.apache.hadoop.yarn.server.resourcemanager.amlauncher.AMLauncher: Launching masterappattempt_1431482563856_0001_000002
2015-05-13 02:06:50,722 INFO org.apache.hadoop.yarn.server.resourcemanager.amlauncher.AMLauncher: Setting up container Container: [ContainerId: container_1431482563856_0001_02_000001, NodeId: ip-10-10-128-10.ec2.internal:9032, NodeHttpAddress: ip-10-10-128-10.ec2.internal:8090, Resource: <memory:1024, vCores:1>, Priority: 0, Token: Token { kind: ContainerToken, service: 10.10.128.10:9032 }, ] for AM appattempt_1431482563856_0001_000002
2015-05-13 02:06:50,722 INFO org.apache.hadoop.yarn.server.resourcemanager.amlauncher.AMLauncher: Command to launch container container_1431482563856_0001_02_000001 : {{JAVA_HOME}}/bin/java,-server,-Xmx512m,-Djava.io.tmpdir={{PWD}}/tmp,'-Dspark.tachyonStore.folderName=spark-2f93616f-d7d5-4792-832c-71e671bc1bb8','-Dspark.driver.host=ip-10-10-127-10.ec2.internal','-Dspark.driver.port=48035','-Dspark.driver.appUIAddress=http://ip-10-10-127-10.ec2.internal:4040','-Dspark.master=yarn-client','-Dspark.executor.extraJavaOptions=-XX:+PrintGCDetails -Dkey=value -Dnumbers=\"one two three\"','-Dspark.yarn.access.namenodes=hdfs://10.10.10.10:8020','-Dspark.logConf=true','-Dspark.serializer=org.apache.spark.serializer.KryoSerializer','-Dspark.executor.id=<driver>','-Dspark.jars=file:/home/testuser/spark/lib/spark-examples-1.3.1-hadoop2.6.0.jar','-Dspark.executor.instances=1','-Dspark.app.name=Spark Pi','-Dspark.eventlog.dir=hdfs://10.10.10.10:8020/user/testuser/spark/eventlog','-Dspark.executor.cores=1','-Dspark.eventlog.enabled=true','-Dspark.fileserver.uri=http://10.10.127.10:48052','-Dspark.authenticate=true',-Dspark.yarn.app.container.log.dir=<LOG_DIR>,org.apache.spark.deploy.yarn.ExecutorLauncher,--arg,'ip-10-10-127-10.ec2.internal:48035',--executor-memory,1024m,--executor-cores,1,--num-executors ,1,1>,<LOG_DIR>/stdout,2>,<LOG_DIR>/stderr
2015-05-13 02:06:50,722 INFO org.apache.hadoop.yarn.server.resourcemanager.security.AMRMTokenSecretManager: Create AMRMToken for ApplicationAttempt: appattempt_1431482563856_0001_000002
2015-05-13 02:06:50,722 INFO org.apache.hadoop.yarn.server.resourcemanager.security.AMRMTokenSecretManager: Creating password for appattempt_1431482563856_0001_000002
2015-05-13 02:06:50,768 INFO org.apache.hadoop.yarn.server.resourcemanager.amlauncher.AMLauncher: Done launching container Container: [ContainerId: container_1431482563856_0001_02_000001, NodeId: ip-10-10-128-10.ec2.internal:9032, NodeHttpAddress: ip-10-10-128-10.ec2.internal:8090, Resource: <memory:1024, vCores:1>, Priority: 0, Token: Token { kind: ContainerToken, service: 10.10.128.10:9032 }, ] for AM appattempt_1431482563856_0001_000002
2015-05-13 02:06:50,768 INFO org.apache.hadoop.yarn.server.resourcemanager.rmapp.attempt.RMAppAttemptImpl: appattempt_1431482563856_0001_000002 State change from ALLOCATED to LAUNCHED
2015-05-13 02:06:51,723 INFO org.apache.hadoop.yarn.server.resourcemanager.rmcontainer.RMContainerImpl: container_1431482563856_0001_02_000001 Container Transitioned from ACQUIRED to RUNNING
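(One thing I'm unsure about in the AM launch command above: spark.executor.extraJavaOptions carries embedded, escaped double quotes (-Dnumbers=\"one two three\"), and quoting like that has been known to trip up YARN's generated container launch script in some versions. This is only a guess, not something the logs prove, but a quick way to rule it out would be re-running with the quoted value dropped, for example:

    ./bin/spark-submit --verbose --class org.apache.spark.examples.SparkPi \
      --master yarn-client --num-executors 1 --executor-cores 1 \
      --conf "spark.executor.extraJavaOptions=-XX:+PrintGCDetails" \
      lib/spark-examples*.jar 10
)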
2015-05-13 02:09:06,449 INFO org.apache.hadoop.yarn.server.resourcemanager.rmcontainer.RMContainerImpl: container_1431482563856_0001_02_000001 Container Transitioned from RUNNING to COMPLETED
2015-05-13 02:09:06,449 INFO org.apache.hadoop.yarn.server.resourcemanager.scheduler.common.fica.FiCaSchedulerApp: Completed container: container_1431482563856_0001_02_000001 in state: COMPLETED event:FINISHED
2015-05-13 02:09:06,449 INFO org.apache.hadoop.yarn.server.resourcemanager.RMAuditLogger: USER=testuser OPERATION=AM Released Container TARGET=SchedulerApp     RESULT=SUCCESS      APPID=application_1431482563856_0001    CONTAINERID=container_1431482563856_0001_02_000001
2015-05-13 02:09:06,450 INFO org.apache.hadoop.yarn.server.resourcemanager.scheduler.SchedulerNode: Released container container_1431482563856_0001_02_000001 of capacity <memory:1024, vCores:1> on host ip-10-10-128-10.ec2.internal:9032, which currently has 0 containers, <memory:0, vCores:0> used and <memory:8192, vCores:8> available, release resources=true
2015-05-13 02:09:06,450 INFO org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.LeafQueue: default used=<memory:0, vCores:0> numContainers=0 user=testuser user-resources=<memory:0, vCores:0>
2015-05-13 02:09:06,450 INFO org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.LeafQueue: completedContainer container=Container: [ContainerId: container_1431482563856_0001_02_000001, NodeId: ip-10-10-128-10.ec2.internal:9032, NodeHttpAddress: ip-10-10-128-10.ec2.internal:8090, Resource: <memory:1024, vCores:1>, Priority: 0, Token: Token { kind: ContainerToken, service: 10.10.128.10:9032 }, ] queue=default: capacity=1.0, absoluteCapacity=1.0, usedResources=<memory:0, vCores:0>, usedCapacity=0.0, absoluteUsedCapacity=0.0, numApps=1, numContainers=0 cluster=<memory:8192, vCores:8>
2015-05-13 02:09:06,450 INFO org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.ParentQueue: completedContainer queue=root usedCapacity=0.0 absoluteUsedCapacity=0.0 used=<memory:0, vCores:0> cluster=<memory:8192, vCores:8>
2015-05-13 02:09:06,450 INFO org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.ParentQueue: Re-sorting completed queue: root.default stats: default: capacity=1.0, absoluteCapacity=1.0, usedResources=<memory:0, vCores:0>, usedCapacity=0.0, absoluteUsedCapacity=0.0, numApps=1, numContainers=0
2015-05-13 02:09:06,450 INFO org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.CapacityScheduler: Application attempt appattempt_1431482563856_0001_000002 released container container_1431482563856_0001_02_000001 on node: host: ip-10-10-128-10.ec2.internal:9032 #containers=0 available=8192 used=0 with event: FINISHED
2015-05-13 02:09:06,450 INFO org.apache.hadoop.yarn.server.resourcemanager.rmapp.attempt.RMAppAttemptImpl: Updating application attempt appattempt_1431482563856_0001_000002 with final state: FAILED, and exit status: 10
2015-05-13 02:09:06,450 INFO org.apache.hadoop.yarn.server.resourcemanager.rmapp.attempt.RMAppAttemptImpl: appattempt_1431482563856_0001_000002 State change from LAUNCHED to FINAL_SAVING
2015-05-13 02:09:06,459 INFO org.apache.hadoop.yarn.server.resourcemanager.ApplicationMasterService: Unregistering app attempt : appattempt_1431482563856_0001_000002
2015-05-13 02:09:06,459 INFO org.apache.hadoop.yarn.server.resourcemanager.security.AMRMTokenSecretManager: Application finished, removing password for appattempt_1431482563856_0001_000002
2015-05-13 02:09:06,460 INFO org.apache.hadoop.yarn.server.resourcemanager.rmapp.attempt.RMAppAttemptImpl: appattempt_1431482563856_0001_000002 State change from FINAL_SAVING to FAILED
2015-05-13 02:09:06,460 INFO org.apache.hadoop.yarn.server.resourcemanager.rmapp.RMAppImpl: The number of failed attempts is 2. The max attempts is 2
2015-05-13 02:09:06,460 INFO org.apache.hadoop.yarn.server.resourcemanager.rmapp.RMAppImpl: Updating application application_1431482563856_0001 with final state: FAILED
2015-05-13 02:09:06,461 INFO org.apache.hadoop.yarn.server.resourcemanager.rmapp.RMAppImpl: application_1431482563856_0001 State change from ACCEPTED to FINAL_SAVING
2015-05-13 02:09:06,461 INFO org.apache.hadoop.yarn.server.resourcemanager.recovery.RMStateStore: Updating info for app: application_1431482563856_0001
2015-05-13 02:09:06,461 INFO org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.CapacityScheduler: Application Attempt appattempt_1431482563856_0001_000002 is done. finalState=FAILED
2015-05-13 02:09:06,461 INFO org.apache.hadoop.yarn.server.resourcemanager.scheduler.AppSchedulingInfo: Application application_1431482563856_0001 requests cleared
2015-05-13 02:09:06,461 INFO org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.LeafQueue: Application removed - appId: application_1431482563856_0001 user: testuser queue: default #user-pending-applications: 0 #user-active-applications: 0 #queue-pending-applications: 0 #queue-active-applications: 0
2015-05-13 02:09:06,462 INFO org.apache.hadoop.yarn.server.resourcemanager.rmapp.RMAppImpl: Application application_1431482563856_0001 failed 2 times due to AM Container for appattempt_1431482563856_0001_000002 exited with  exitCode: 10
For more detailed output, check application tracking page: https://ip-10-10-127-10.ec2.internal:8090/proxy/application_1431482563856_0001/ Then, click on links to logs of each attempt.
Diagnostics: Exception from container-launch.
Container id: container_1431482563856_0001_02_000001
Exit code: 10
Stack trace: ExitCodeException exitCode=10:
        at org.apache.hadoop.util.Shell.runCommand(Shell.java:538)
        at org.apache.hadoop.util.Shell.run(Shell.java:455)
        at org.apache.hadoop.util.Shell$ShellCommandExecutor.execute(Shell.java:715)
        at org.apache.hadoop.yarn.server.nodemanager.LinuxContainerExecutor.launchContainer(LinuxContainerExecutor.java:293)
        at org.apache.hadoop.yarn.server.nodemanager.containermanager.launcher.ContainerLaunch.call(ContainerLaunch.java:302)
        at org.apache.hadoop.yarn.server.nodemanager.containermanager.launcher.ContainerLaunch.call(ContainerLaunch.java:82)
        at java.util.concurrent.FutureTask.run(FutureTask.java:262)
        at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
        at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
        at java.lang.Thread.run(Thread.java:745)

Shell output: main : command provided 1
main : user is testuser
main : requested yarn user is testuser


Container exited with a non-zero exit code 10
Failing this attempt. Failing the application.
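(For what it's worth: if I'm reading the Spark 1.3 source right, exit code 10 is EXIT_UNCAUGHT_EXCEPTION in org.apache.spark.deploy.yarn.ApplicationMaster, i.e. the AM (ExecutorLauncher, in yarn-client mode) died on an uncaught exception, so the actual stack trace should be in the AM container's stderr rather than in this RM log. With log aggregation enabled it can be pulled with:

    yarn logs -applicationId application_1431482563856_0001

Otherwise the container's stdout/stderr files should still be sitting under the NodeManager's container log dir on ip-10-10-128-10.ec2.internal.)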
2015-05-13 02:09:06,463 INFO org.apache.hadoop.yarn.server.resourcemanager.rmapp.RMAppImpl: application_1431482563856_0001 State change from FINAL_SAVING to FAILED
2015-05-13 02:09:06,464 INFO org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.ParentQueue: Application removed - appId: application_1431482563856_0001 user: testuser leaf-queue of parent: root #applications: 0
2015-05-13 02:09:06,464 WARN org.apache.hadoop.yarn.server.resourcemanager.RMAuditLogger: USER=testuser OPERATION=Application Finished - Failed TARGET=RMAppManager     RESULT=FAILURE      DESCRIPTION=App failed with state: FAILED       PERMISSIONS=Application application_1431482563856_0001 failed 2 times due to AM Container for appattempt_1431482563856_0001_000002 exited with  exitCode: 10
For more detailed output, check application tracking page: https://ip-10-10-127-10.ec2.internal:8090/proxy/application_1431482563856_0001/ Then, click on links to logs of each attempt.
Diagnostics: Exception from container-launch.
Container id: container_1431482563856_0001_02_000001
Exit code: 10
Stack trace: ExitCodeException exitCode=10:
        at org.apache.hadoop.util.Shell.runCommand(Shell.java:538)
        at org.apache.hadoop.util.Shell.run(Shell.java:455)
        at org.apache.hadoop.util.Shell$ShellCommandExecutor.execute(Shell.java:715)
        at org.apache.hadoop.yarn.server.nodemanager.LinuxContainerExecutor.launchContainer(LinuxContainerExecutor.java:293)
        at org.apache.hadoop.yarn.server.nodemanager.containermanager.launcher.ContainerLaunch.call(ContainerLaunch.java:302)
        at org.apache.hadoop.yarn.server.nodemanager.containermanager.launcher.ContainerLaunch.call(ContainerLaunch.java:82)
        at java.util.concurrent.FutureTask.run(FutureTask.java:262)
        at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
        at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
        at java.lang.Thread.run(Thread.java:745)

Shell output: main : command provided 1
main : user is testuser
main : requested yarn user is testuser


Container exited with a non-zero exit code 10
Failing this attempt. Failing the application.  APPID=application_1431482563856_0001
2015-05-13 02:09:06,466 INFO org.apache.hadoop.yarn.server.resourcemanager.RMAppManager$ApplicationSummary: appId=application_1431482563856_0001,name=Spark Pi,user=testuser,queue=default,state=FAILED,trackingUrl=https://ip-10-10-127-10.ec2.internal:8090/cluster/app/application_1431482563856_0001,appMasterHost=N/A,startTime=1431482654058,finishTime=1431482946460,finalStatus=FAILED
2015-05-13 02:09:06,492 INFO org.apache.hadoop.hdfs.DFSClient: Cancelling HDFS_DELEGATION_TOKEN token 4 for testuser on 10.10.10.10:8020
2015-05-13 02:09:07,463 INFO org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.CapacityScheduler: Null container completed...
2015-05-13 02:12:43,830 INFO org.apache.hadoop.yarn.server.resourcemanager.scheduler.AbstractYarnScheduler: Release request cache is cleaned up
[yarn@ip-10-10-127-10 hadoop]$
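
(Last thought: spark.authenticate=true shows up in the AM launch command, and in yarn-client mode the AM has to complete a SASL handshake back to the driver on ip-10-10-127-10.ec2.internal before it can register. Purely as a diagnostic, not for real use, re-running with authentication off would show whether that handshake is what's dying:

    ./bin/spark-submit --class org.apache.spark.examples.SparkPi --master yarn-client \
      --num-executors 1 --executor-cores 1 --conf spark.authenticate=false \
      lib/spark-examples*.jar 10

If that succeeds, the problem is somewhere in the shared-secret setup rather than in the job itself.)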

Keith Nance
Sr. Software Engineer
Email: knance@smartronix.com
Cell: 808-343-0071
www.smartronix.com


