Posted to user@hadoop.apache.org by "Nance, Keith" <kn...@smartronix.com> on 2015/05/19 22:35:02 UTC

SparkPi fails with Job aborted due to stage failure: Task serialization failed:

All, I'm unable to find any reference to my issue with Spark. Any ideas? Thanks for any and all help.
Attached are logs from the Spark job (SparkPi) run, the container userlog, the NodeManager, and the ResourceManager.
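One detail worth flagging before the logs: the very first line of the driver output below shows bash complaining about conf/spark-env.sh line 55 ("-Dspark.history.kerberos.principal=...: No such file or directory"), which means that -D flag sits on a line of its own and the shell is trying to execute it as a command instead of reading it as part of a variable assignment. A minimal sketch of how that block is normally written (the kerberos.enabled line and the keytab path are placeholders/assumptions, not taken from the actual config):

# conf/spark-env.sh -- hedged sketch; the keytab path below is a made-up placeholder
export SPARK_HISTORY_OPTS="$SPARK_HISTORY_OPTS -Dspark.history.kerberos.enabled=true"
export SPARK_HISTORY_OPTS="$SPARK_HISTORY_OPTS -Dspark.history.kerberos.principal=spark/ip-10-10-127-10.ec2.internal@MALARD.LOCAL"
export SPARK_HISTORY_OPTS="$SPARK_HISTORY_OPTS -Dspark.history.kerberos.keytab=/etc/security/keytabs/spark.service.keytab"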
###: SPARK JOB RESULTS :###
###########################
[testuser@ip-10-10-127-10 spark]$ ./bin/spark-submit --verbose --class org.apache.spark.examples.SparkPi --master yarn-client --num-executors 1 --executor-cores 1 lib/spark-examples*.jar 10
/home/testuser/spark/conf/spark-env.sh: line 55: -Dspark.history.kerberos.principal=spark/ip-10-10-127-10.ec2.internal@MALARD.LOCAL: No such file or directory
Using properties file: /home/testuser/spark/conf/spark-defaults.conf
Adding default property: spark.broadcast.port=8004
Adding default property: spark.executor.extraJavaOptions=-XX:+PrintGCDetails -Dkey=value -Dnumbers="one two three"
Adding default property: spark.akka.threads=1
Adding default property: spark.ui.port=4040
Adding default property: spark.driver.port=8001
Adding default property: spark.akka.heartbeat.interval=100
Adding default property: spark.yarn.access.namenodes=hdfs://10.10.10.10:8020
Adding default property: spark.executor.port=8002
Adding default property: spark.logConf=true
Adding default property: spark.yarn.am.extraJavaOptions=-Djava.net.preferIPv4Stack=true
Adding default property: spark.worker.ui.port=8081
Adding default property: spark.replClassServer.port=8006
Adding default property: spark.eventlog.dir=hdfs://10.10.10.10:8020/user/testuser/spark/eventlog
Adding default property: spark.driver.extraJavaOptions=-Djava.net.preferIPv4Stack=true
Adding default property: spark.blockManager.port=8007
Adding default property: spark.yarn.am.waitTime=200000
Adding default property: spark.master=yarn-client
Adding default property: spark.yarn.preserve.staging.files=true
Adding default property: spark.fileserver.port=8003
Adding default property: spark.authenticate=true
Adding default property: spark.yarn.am.port=8008
Adding default property: spark.authenticate.secret=fubar
Adding default property: spark.master.ui.port=8080
Adding default property: spark.history.ui.port=18080
Adding default property: spark.eventlog.enabled=true
Parsed arguments:
  master                  yarn-client
  deployMode              null
  executorMemory          null
  executorCores           1
  totalExecutorCores      null
  propertiesFile          /home/testuser/spark/conf/spark-defaults.conf
  driverMemory            null
  driverCores             null
  driverExtraClassPath    null
  driverExtraLibraryPath  null
  driverExtraJavaOptions  -Djava.net.preferIPv4Stack=true
  supervise               false
  queue                   null
  numExecutors            1
  files                   null
  pyFiles                 null
  archives                null
  mainClass               org.apache.spark.examples.SparkPi
  primaryResource         file:/home/testuser/spark/lib/spark-examples-1.3.1-hadoop2.6.0.jar
  name                    org.apache.spark.examples.SparkPi
  childArgs               [10]
  jars                    null
  packages                null
  repositories            null
  verbose                 true

Spark properties used, including those specified through
--conf and those from the properties file /home/testuser/spark/conf/spark-defaults.conf:
  spark.blockManager.port -> 8007
  spark.yarn.access.namenodes -> hdfs://10.10.10.10:8020
  spark.broadcast.port -> 8004
  spark.authenticate.secret -> fubar
  spark.authenticate -> true
  spark.eventlog.dir -> hdfs://10.10.10.10:8020/user/testuser/spark/eventlog
  spark.logConf -> true
  spark.replClassServer.port -> 8006
  spark.history.ui.port -> 18080
  spark.fileserver.port -> 8003
  spark.ui.port -> 4040
  spark.executor.extraJavaOptions -> -XX:+PrintGCDetails -Dkey=value -Dnumbers="one two three"
  spark.driver.extraJavaOptions -> -Djava.net.preferIPv4Stack=true
  spark.yarn.am.waitTime -> 200000
  spark.yarn.am.extraJavaOptions -> -Djava.net.preferIPv4Stack=true
  spark.worker.ui.port -> 8081
  spark.driver.port -> 8001
  spark.master -> yarn-client
  spark.yarn.preserve.staging.files -> true
  spark.yarn.am.port -> 8008
  spark.akka.heartbeat.interval -> 100
  spark.executor.port -> 8002
  spark.master.ui.port -> 8080
  spark.eventlog.enabled -> true
  spark.akka.threads -> 1


Main class:
org.apache.spark.examples.SparkPi
Arguments:
10
System properties:
spark.blockManager.port -> 8007
spark.yarn.access.namenodes -> hdfs://10.10.10.10:8020
spark.broadcast.port -> 8004
spark.authenticate.secret -> fubar
spark.eventlog.dir -> hdfs://10.10.10.10:8020/user/testuser/spark/eventlog
spark.authenticate -> true
spark.executor.instances -> 1
spark.logConf -> true
spark.replClassServer.port -> 8006
spark.history.ui.port -> 18080
spark.fileserver.port -> 8003
SPARK_SUBMIT -> true
spark.ui.port -> 4040
spark.executor.extraJavaOptions -> -XX:+PrintGCDetails -Dkey=value -Dnumbers="one two three"
spark.app.name -> org.apache.spark.examples.SparkPi
spark.driver.extraJavaOptions -> -Djava.net.preferIPv4Stack=true
spark.jars -> file:/home/testuser/spark/lib/spark-examples-1.3.1-hadoop2.6.0.jar
spark.yarn.am.waitTime -> 200000
spark.yarn.am.extraJavaOptions -> -Djava.net.preferIPv4Stack=true
spark.master -> yarn-client
spark.worker.ui.port -> 8081
spark.driver.port -> 8001
spark.yarn.preserve.staging.files -> true
spark.yarn.am.port -> 8008
spark.akka.heartbeat.interval -> 100
spark.executor.port -> 8002
spark.executor.cores -> 1
spark.eventlog.enabled -> true
spark.master.ui.port -> 8080
spark.akka.threads -> 1
Classpath elements:
file:/home/testuser/spark/lib/spark-examples-1.3.1-hadoop2.6.0.jar


15/05/19 19:52:37 INFO spark.SparkContext: Running Spark version 1.3.1
15/05/19 19:52:37 INFO spark.SparkContext: Spark configuration:
spark.akka.heartbeat.interval=100
spark.akka.threads=1
spark.app.name=Spark Pi
spark.authenticate=true
spark.authenticate.secret=fubar
spark.blockManager.port=8007
spark.broadcast.port=8004
spark.driver.extraJavaOptions=-Djava.net.preferIPv4Stack=true
spark.driver.port=8001
spark.eventlog.dir=hdfs://10.10.10.10:8020/user/testuser/spark/eventlog
spark.eventlog.enabled=true
spark.executor.cores=1
spark.executor.extraJavaOptions=-XX:+PrintGCDetails -Dkey=value -Dnumbers="one two three"
spark.executor.instances=1
spark.executor.port=8002
spark.fileserver.port=8003
spark.history.ui.port=18080
spark.jars=file:/home/testuser/spark/lib/spark-examples-1.3.1-hadoop2.6.0.jar
spark.logConf=true
spark.master=yarn-client
spark.master.ui.port=8080
spark.replClassServer.port=8006
spark.ui.port=4040
spark.worker.ui.port=8081
spark.yarn.access.namenodes=hdfs://10.10.10.10:8020
spark.yarn.am.extraJavaOptions=-Djava.net.preferIPv4Stack=true
spark.yarn.am.port=8008
spark.yarn.am.waitTime=200000
spark.yarn.preserve.staging.files=true
15/05/19 19:52:38 WARN util.NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
15/05/19 19:52:39 INFO spark.SecurityManager: Changing view acls to: testuser
15/05/19 19:52:39 INFO spark.SecurityManager: Changing modify acls to: testuser
15/05/19 19:52:39 INFO spark.SecurityManager: adding secret to credentials in yarn mode
15/05/19 19:52:39 INFO spark.SecurityManager: SecurityManager: authentication enabled; ui acls disabled; users with view permissions: Set(testuser); users with modify permissions: Set(testuser)
15/05/19 19:52:40 INFO slf4j.Slf4jLogger: Slf4jLogger started
15/05/19 19:52:41 INFO Remoting: Starting remoting
15/05/19 19:52:41 INFO Remoting: Remoting started; listening on addresses :[akka.tcp://sparkDriver@ip-10-10-127-10.ec2.internal:8001]
15/05/19 19:52:41 INFO util.Utils: Successfully started service 'sparkDriver' on port 8001.
15/05/19 19:52:41 INFO spark.SparkEnv: Registering MapOutputTracker
15/05/19 19:52:41 INFO spark.SparkEnv: Registering BlockManagerMaster
15/05/19 19:52:41 INFO storage.DiskBlockManager: Created local directory at /scratch/spark-17902ac9-b400-4698-97c3-069d804a29e3/blockmgr-d95b3bdf-9c4d-4b48-97a5-4983dd2ab66d
15/05/19 19:52:41 INFO storage.MemoryStore: MemoryStore started with capacity 267.3 MB
15/05/19 19:52:42 INFO spark.HttpFileServer: HTTP File server directory is /scratch/spark-d2fb7948-5ab3-4f22-804c-7485d209bd3e/httpd-91938877-b371-4c0c-ba3d-bb7ee8ec4e09
15/05/19 19:52:42 INFO spark.HttpServer: Starting HTTP Server
15/05/19 19:52:42 INFO server.Server: jetty-8.y.z-SNAPSHOT
15/05/19 19:52:42 INFO server.AbstractConnector: Started SocketConnector@0.0.0.0:8003
15/05/19 19:52:42 INFO util.Utils: Successfully started service 'HTTP file server' on port 8003.
15/05/19 19:52:42 INFO spark.SparkEnv: Registering OutputCommitCoordinator
15/05/19 19:52:42 INFO server.Server: jetty-8.y.z-SNAPSHOT
15/05/19 19:52:42 INFO server.AbstractConnector: Started SelectChannelConnector@0.0.0.0:4040
15/05/19 19:52:42 INFO util.Utils: Successfully started service 'SparkUI' on port 4040.
15/05/19 19:52:42 INFO ui.SparkUI: Started SparkUI at http://ip-10-10-127-10.ec2.internal:4040
15/05/19 19:52:43 INFO spark.SparkContext: Added JAR file:/home/testuser/spark/lib/spark-examples-1.3.1-hadoop2.6.0.jar at http://10.10.127.10:8003/jars/spark-examples-1.3.1-hadoop2.6.0.jar with timestamp 1432065163724
15/05/19 19:52:44 INFO client.RMProxy: Connecting to ResourceManager at ip-10-10-127-10.ec2.internal/10.10.127.10:8032
15/05/19 19:52:45 INFO yarn.Client: Requesting a new application from cluster with 1 NodeManagers
15/05/19 19:52:45 INFO yarn.Client: Verifying our application has not requested more than the maximum memory capability of the cluster (8192 MB per container)
15/05/19 19:52:45 INFO yarn.Client: Will allocate AM container, with 896 MB memory including 384 MB overhead
15/05/19 19:52:45 INFO yarn.Client: Setting up container launch context for our AM
15/05/19 19:52:45 INFO yarn.Client: Preparing resources for our AM container
15/05/19 19:52:46 INFO hdfs.DFSClient: Created HDFS_DELEGATION_TOKEN token 5 for testuser on 10.10.10.10:8020
15/05/19 19:52:46 INFO yarn.Client: Uploading resource file:/home/testuser/spark/lib/spark-assembly-1.3.1-hadoop2.6.0.jar -> hdfs://10.10.10.10:8020/user/testuser/.sparkStaging/application_1432064564266_0003/spark-assembly-1.3.1-hadoop2.6.0.jar
15/05/19 19:52:52 INFO yarn.Client: Setting up the launch environment for our AM container
15/05/19 19:52:52 INFO spark.SecurityManager: Changing view acls to: testuser
15/05/19 19:52:52 INFO spark.SecurityManager: Changing modify acls to: testuser
15/05/19 19:52:52 INFO spark.SecurityManager: SecurityManager: authentication enabled; ui acls disabled; users with view permissions: Set(testuser); users with modify permissions: Set(testuser)
15/05/19 19:52:52 INFO yarn.Client: Submitting application 3 to ResourceManager
15/05/19 19:52:53 INFO impl.YarnClientImpl: Submitted application application_1432064564266_0003
15/05/19 19:52:54 INFO yarn.Client: Application report for application_1432064564266_0003 (state: ACCEPTED)
15/05/19 19:52:54 INFO yarn.Client:
         client token: Token { kind: YARN_CLIENT_TOKEN, service:  }
         diagnostics: N/A
         ApplicationMaster host: N/A
         ApplicationMaster RPC port: -1
         queue: default
         start time: 1432065172758
         final status: UNDEFINED
         tracking URL: https://ip-10-10-127-10.ec2.internal:8090/proxy/application_1432064564266_0003/
         user: testuser
15/05/19 19:52:55 INFO yarn.Client: Application report for application_1432064564266_0003 (state: ACCEPTED)
15/05/19 19:52:56 INFO yarn.Client: Application report for application_1432064564266_0003 (state: ACCEPTED)
15/05/19 19:52:57 INFO yarn.Client: Application report for application_1432064564266_0003 (state: ACCEPTED)
15/05/19 19:52:58 INFO yarn.Client: Application report for application_1432064564266_0003 (state: ACCEPTED)
15/05/19 19:52:59 INFO yarn.Client: Application report for application_1432064564266_0003 (state: ACCEPTED)
15/05/19 19:53:00 INFO yarn.Client: Application report for application_1432064564266_0003 (state: ACCEPTED)
15/05/19 19:53:01 INFO yarn.Client: Application report for application_1432064564266_0003 (state: ACCEPTED)
15/05/19 19:53:02 INFO yarn.Client: Application report for application_1432064564266_0003 (state: ACCEPTED)
15/05/19 19:53:03 INFO yarn.Client: Application report for application_1432064564266_0003 (state: ACCEPTED)
15/05/19 19:53:04 INFO yarn.Client: Application report for application_1432064564266_0003 (state: ACCEPTED)
15/05/19 19:53:05 INFO yarn.Client: Application report for application_1432064564266_0003 (state: ACCEPTED)
15/05/19 19:53:06 INFO yarn.Client: Application report for application_1432064564266_0003 (state: ACCEPTED)
15/05/19 19:53:07 INFO yarn.Client: Application report for application_1432064564266_0003 (state: ACCEPTED)
15/05/19 19:53:08 INFO yarn.Client: Application report for application_1432064564266_0003 (state: ACCEPTED)
15/05/19 19:53:09 INFO yarn.Client: Application report for application_1432064564266_0003 (state: ACCEPTED)
15/05/19 19:53:10 INFO yarn.Client: Application report for application_1432064564266_0003 (state: ACCEPTED)
15/05/19 19:53:11 INFO yarn.Client: Application report for application_1432064564266_0003 (state: ACCEPTED)
15/05/19 19:53:12 INFO yarn.Client: Application report for application_1432064564266_0003 (state: ACCEPTED)
15/05/19 19:53:13 INFO yarn.Client: Application report for application_1432064564266_0003 (state: ACCEPTED)
15/05/19 19:53:14 INFO yarn.Client: Application report for application_1432064564266_0003 (state: ACCEPTED)
15/05/19 19:53:15 INFO yarn.Client: Application report for application_1432064564266_0003 (state: ACCEPTED)
15/05/19 19:53:16 INFO yarn.Client: Application report for application_1432064564266_0003 (state: ACCEPTED)
15/05/19 19:53:17 INFO cluster.YarnClientSchedulerBackend: ApplicationMaster registered as Actor[akka.tcp://sparkYarnAM@ip-10-10-128-10.ec2.internal:56977/user/YarnAM#-1453228800]
15/05/19 19:53:17 INFO cluster.YarnClientSchedulerBackend: Add WebUI Filter. org.apache.hadoop.yarn.server.webproxy.amfilter.AmIpFilter, Map(PROXY_HOSTS -> ip-10-10-127-10.ec2.internal, PROXY_URI_BASES -> https://ip-10-10-127-10.ec2.internal:8090/proxy/application_1432064564266_0003), /proxy/application_1432064564266_0003
15/05/19 19:53:17 INFO ui.JettyUtils: Adding filter: org.apache.hadoop.yarn.server.webproxy.amfilter.AmIpFilter
15/05/19 19:53:17 INFO yarn.Client: Application report for application_1432064564266_0003 (state: RUNNING)
15/05/19 19:53:17 INFO yarn.Client:
         client token: Token { kind: YARN_CLIENT_TOKEN, service:  }
         diagnostics: N/A
         ApplicationMaster host: ip-10-10-128-10.ec2.internal
         ApplicationMaster RPC port: 0
         queue: default
         start time: 1432065172758
         final status: UNDEFINED
         tracking URL: https://ip-10-10-127-10.ec2.internal:8090/proxy/application_1432064564266_0003/
         user: testuser
15/05/19 19:53:17 INFO cluster.YarnClientSchedulerBackend: Application application_1432064564266_0003 has started running.
15/05/19 19:53:17 INFO netty.NettyBlockTransferService: Server created on 8007
15/05/19 19:53:17 INFO storage.BlockManagerMaster: Trying to register BlockManager
15/05/19 19:53:17 INFO storage.BlockManagerMasterActor: Registering block manager ip-10-10-127-10.ec2.internal:8007 with 267.3 MB RAM, BlockManagerId(<driver>, ip-10-10-127-10.ec2.internal, 8007)
15/05/19 19:53:17 INFO storage.BlockManagerMaster: Registered BlockManager
15/05/19 19:53:18 INFO cluster.YarnClientSchedulerBackend: SchedulerBackend is ready for scheduling beginning after waiting maxRegisteredResourcesWaitingTime: 30000(ms)
15/05/19 19:53:18 INFO spark.SparkContext: Starting job: reduce at SparkPi.scala:35
15/05/19 19:53:18 INFO scheduler.DAGScheduler: Got job 0 (reduce at SparkPi.scala:35) with 10 output partitions (allowLocal=false)
15/05/19 19:53:18 INFO scheduler.DAGScheduler: Final stage: Stage 0(reduce at SparkPi.scala:35)
15/05/19 19:53:18 INFO scheduler.DAGScheduler: Parents of final stage: List()
15/05/19 19:53:18 INFO scheduler.DAGScheduler: Missing parents: List()
15/05/19 19:53:18 INFO scheduler.DAGScheduler: Submitting Stage 0 (MapPartitionsRDD[1] at map at SparkPi.scala:31), which has no missing parents
15/05/19 19:53:18 INFO cluster.YarnScheduler: Cancelling stage 0
15/05/19 19:53:18 INFO scheduler.DAGScheduler: Stage 0 (reduce at SparkPi.scala:35) failed in Unknown s
15/05/19 19:53:18 INFO scheduler.DAGScheduler: Job 0 failed: reduce at SparkPi.scala:35, took 0.258029 s
Exception in thread "main" org.apache.spark.SparkException: Job aborted due to stage failure: Task serialization failed: java.lang.reflect.InvocationTargetException
sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:57)
sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
java.lang.reflect.Constructor.newInstance(Constructor.java:526)
org.apache.spark.io.CompressionCodec$.createCodec(CompressionCodec.scala:68)
org.apache.spark.io.CompressionCodec$.createCodec(CompressionCodec.scala:60)
org.apache.spark.broadcast.TorrentBroadcast.org$apache$spark$broadcast$TorrentBroadcast$$setConf(TorrentBroadcast.scala:73)
org.apache.spark.broadcast.TorrentBroadcast.<init>(TorrentBroadcast.scala:79)
org.apache.spark.broadcast.TorrentBroadcastFactory.newBroadcast(TorrentBroadcastFactory.scala:34)
org.apache.spark.broadcast.TorrentBroadcastFactory.newBroadcast(TorrentBroadcastFactory.scala:29)
org.apache.spark.broadcast.BroadcastManager.newBroadcast(BroadcastManager.scala:62)
org.apache.spark.SparkContext.broadcast(SparkContext.scala:1051)
org.apache.spark.scheduler.DAGScheduler.org$apache$spark$scheduler$DAGScheduler$$submitMissingTasks(DAGScheduler.scala:839)
org.apache.spark.scheduler.DAGScheduler.org$apache$spark$scheduler$DAGScheduler$$submitStage(DAGScheduler.scala:778)
org.apache.spark.scheduler.DAGScheduler.handleJobSubmitted(DAGScheduler.scala:762)
org.apache.spark.scheduler.DAGSchedulerEventProcessLoop.onReceive(DAGScheduler.scala:1362)
org.apache.spark.scheduler.DAGSchedulerEventProcessLoop.onReceive(DAGScheduler.scala:1354)
org.apache.spark.util.EventLoop$$anon$1.run(EventLoop.scala:48)

        at org.apache.spark.scheduler.DAGScheduler.org$apache$spark$scheduler$DAGScheduler$$failJobAndIndependentStages(DAGScheduler.scala:1204)
        at org.apache.spark.scheduler.DAGScheduler$$anonfun$abortStage$1.apply(DAGScheduler.scala:1193)
        at org.apache.spark.scheduler.DAGScheduler$$anonfun$abortStage$1.apply(DAGScheduler.scala:1192)
        at scala.collection.mutable.ResizableArray$class.foreach(ResizableArray.scala:59)
        at scala.collection.mutable.ArrayBuffer.foreach(ArrayBuffer.scala:47)
        at org.apache.spark.scheduler.DAGScheduler.abortStage(DAGScheduler.scala:1192)
        at org.apache.spark.scheduler.DAGScheduler.org$apache$spark$scheduler$DAGScheduler$$submitMissingTasks(DAGScheduler.scala:847)
        at org.apache.spark.scheduler.DAGScheduler.org$apache$spark$scheduler$DAGScheduler$$submitStage(DAGScheduler.scala:778)
        at org.apache.spark.scheduler.DAGScheduler.handleJobSubmitted(DAGScheduler.scala:762)
        at org.apache.spark.scheduler.DAGSchedulerEventProcessLoop.onReceive(DAGScheduler.scala:1362)
        at org.apache.spark.scheduler.DAGSchedulerEventProcessLoop.onReceive(DAGScheduler.scala:1354)
        at org.apache.spark.util.EventLoop$$anon$1.run(EventLoop.scala:48)
[testuser@ip-10-10-127-10 spark]$
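For reference, the failing frame in the stack trace above is CompressionCodec$.createCodec, which reflectively instantiates the configured compression codec when TorrentBroadcast broadcasts the task binary, so the InvocationTargetException is wrapping whatever the codec's constructor threw (a failure to load a native library such as snappy is one common cause, though these logs don't confirm it). One way to test that guess, purely as a diagnostic experiment rather than a confirmed fix, is to re-run the same submit with a pure-Java codec forced:

# same command as above, with the compression codec overridden as an experiment
./bin/spark-submit --verbose --class org.apache.spark.examples.SparkPi \
  --master yarn-client --num-executors 1 --executor-cores 1 \
  --conf spark.io.compression.codec=lzf \
  lib/spark-examples*.jar 10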



###: SPARK JOB USERLOG RESULTS :###
###################################
[yarn@ip-10-10-128-10 container_1432064564266_0003_01_000001]$ cat stderr
Picked up _JAVA_OPTIONS: -Djava.net.preferIPv4Stack=true
SLF4J: Class path contains multiple SLF4J bindings.
SLF4J: Found binding in [jar:file:/var/hadoop/tmp/yarn/nm-local-dir/usercache/testuser/filecache/13/spark-assembly-1.3.1-hadoop2.6.0.jar!/org/slf4j/impl/StaticLoggerBinder.class]
SLF4J: Found binding in [jar:file:/var/hadoop/share/hadoop/common/lib/slf4j-log4j12-1.7.5.jar!/org/slf4j/impl/StaticLoggerBinder.class]
SLF4J: See http://www.slf4j.org/codes.html#multiple_bindings for an explanation.
SLF4J: Actual binding is of type [org.slf4j.impl.Log4jLoggerFactory]
15/05/19 19:53:09 INFO yarn.ApplicationMaster: Registered signal handlers for [TERM, HUP, INT]
15/05/19 19:53:11 WARN util.NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
15/05/19 19:53:12 INFO yarn.ApplicationMaster: ApplicationAttemptId: appattempt_1432064564266_0003_000001
15/05/19 19:53:14 INFO spark.SecurityManager: Changing view acls to: testuser
15/05/19 19:53:14 INFO spark.SecurityManager: Changing modify acls to: testuser
15/05/19 19:53:14 INFO spark.SecurityManager: SecurityManager: authentication enabled; ui acls disabled; users with view permissions: Set(testuser); users with modify permissions: Set(testuser)
15/05/19 19:53:15 INFO slf4j.Slf4jLogger: Slf4jLogger started
15/05/19 19:53:15 INFO Remoting: Starting remoting
15/05/19 19:53:16 INFO Remoting: Remoting started; listening on addresses :[akka.tcp://sparkYarnAM@ip-10-10-128-10.ec2.internal:56977]
15/05/19 19:53:16 INFO util.Utils: Successfully started service 'sparkYarnAM' on port 56977.
15/05/19 19:53:16 INFO yarn.ApplicationMaster: Waiting for Spark driver to be reachable.
15/05/19 19:53:16 INFO yarn.ApplicationMaster: Driver now available: ip-10-10-127-10.ec2.internal:8001
15/05/19 19:53:16 INFO yarn.ApplicationMaster: Listen to driver: akka.tcp://sparkDriver@ip-10-10-127-10.ec2.internal:8001/user/YarnScheduler
15/05/19 19:53:16 INFO yarn.ApplicationMaster: Add WebUI Filter. AddWebUIFilter(org.apache.hadoop.yarn.server.webproxy.amfilter.AmIpFilter,Map(PROXY_HOSTS -> ip-10-10-127-10.ec2.internal, PROXY_URI_BASES -> https://ip-10-10-127-10.ec2.internal:8090/proxy/application_1432064564266_0003),/proxy/application_1432064564266_0003)
15/05/19 19:53:16 INFO client.RMProxy: Connecting to ResourceManager at /10.10.127.10:8030
15/05/19 19:53:16 INFO yarn.YarnRMClient: Registering the ApplicationMaster
15/05/19 19:53:17 INFO yarn.YarnAllocator: Will request 1 executor containers, each with 1 cores and 1408 MB memory including 384 MB overhead
15/05/19 19:53:17 INFO yarn.YarnAllocator: Container request (host: Any, capability: <memory:1408, vCores:1>)
15/05/19 19:53:17 INFO yarn.ApplicationMaster: Started progress reporter thread - sleep time : 5000
15/05/19 19:53:18 INFO yarn.ApplicationMaster: Driver terminated or disconnected! Shutting down. Disassociated [akka.tcp://sparkYarnAM@ip-10-10-128-10.ec2.internal:56977] -> [akka.tcp://sparkDriver@ip-10-10-127-10.ec2.internal:8001]
15/05/19 19:53:18 INFO yarn.ApplicationMaster: Final app status: SUCCEEDED, exitCode: 0
15/05/19 19:53:18 WARN remote.ReliableDeliverySupervisor: Association with remote system [akka.tcp://sparkDriver@ip-10-10-127-10.ec2.internal:8001] has failed, address is now gated for [5000] ms. Reason is: [Disassociated].
15/05/19 19:53:18 INFO yarn.ApplicationMaster: Driver terminated or disconnected! Shutting down. Disassociated [akka.tcp://sparkYarnAM@ip-10-10-128-10.ec2.internal:56977] -> [akka.tcp://sparkDriver@ip-10-10-127-10.ec2.internal:8001]
15/05/19 19:53:18 INFO yarn.ApplicationMaster: Unregistering ApplicationMaster with SUCCEEDED
15/05/19 19:53:18 INFO impl.AMRMClientImpl: Waiting for application to be successfully unregistered.
[yarn@ip-10-10-128-10 container_1432064564266_0003_01_000001]$



###: YARN NODEMANAGER LOG RESULTS :###
######################################
2015-05-19 19:52:53,729 INFO SecurityLogger.org.apache.hadoop.ipc.Server: Auth successful for appattempt_1432064564266_0003_000001 (auth:SIMPLE)
2015-05-19 19:52:53,746 WARN org.apache.hadoop.security.ShellBasedUnixGroupsMapping: got exception trying to get groups for user appattempt_1432064564266_0003_000001: id: appattempt_1432064564266_0003_000001: no such user

2015-05-19 19:52:53,746 WARN org.apache.hadoop.security.UserGroupInformation: No groups available for user appattempt_1432064564266_0003_000001
2015-05-19 19:52:53,746 INFO SecurityLogger.org.apache.hadoop.security.authorize.ServiceAuthorizationManager: Authorization successful for appattempt_1432064564266_0003_000001 (auth:TOKEN) for protocol=interface org.apache.hadoop.yarn.api.ContainerManagementProtocolPB
2015-05-19 19:52:53,747 INFO org.apache.hadoop.yarn.server.nodemanager.containermanager.ContainerManagerImpl: Start request for container_1432064564266_0003_01_000001 by user testuser
2015-05-19 19:52:53,747 INFO org.apache.hadoop.yarn.server.nodemanager.containermanager.ContainerManagerImpl: Creating a new application reference for app application_1432064564266_0003
2015-05-19 19:52:53,747 INFO org.apache.hadoop.yarn.server.nodemanager.NMAuditLogger: USER=testuser     IP=10.10.127.10 OPERATION=Start Container Request       TARGET=ContainerManageImpl  RESULT=SUCCESS   APPID=application_1432064564266_0003    CONTAINERID=container_1432064564266_0003_01_000001
2015-05-19 19:52:53,748 INFO org.apache.hadoop.yarn.server.nodemanager.containermanager.application.Application: Application application_1432064564266_0003 transitioned from NEW to INITING
2015-05-19 19:52:53,748 INFO org.apache.hadoop.yarn.server.nodemanager.containermanager.application.Application: Adding container_1432064564266_0003_01_000001 to application application_1432064564266_0003
2015-05-19 19:52:53,748 INFO org.apache.hadoop.yarn.server.nodemanager.containermanager.application.Application: Application application_1432064564266_0003 transitioned from INITING to RUNNING
2015-05-19 19:52:53,748 INFO org.apache.hadoop.yarn.server.nodemanager.containermanager.container.Container: Container container_1432064564266_0003_01_000001 transitioned from NEW to LOCALIZING
2015-05-19 19:52:53,748 INFO org.apache.hadoop.yarn.server.nodemanager.containermanager.AuxServices: Got event CONTAINER_INIT for appId application_1432064564266_0003
2015-05-19 19:52:53,748 INFO org.apache.hadoop.yarn.server.nodemanager.containermanager.localizer.LocalizedResource: Resource hdfs://10.10.10.10:8020/user/testuser/.sparkStaging/application_1432064564266_0003/spark-assembly-1.3.1-hadoop2.6.0.jar transitioned from INIT to DOWNLOADING
2015-05-19 19:52:53,748 INFO org.apache.hadoop.yarn.server.nodemanager.containermanager.localizer.ResourceLocalizationService: Created localizer for container_1432064564266_0003_01_000001
2015-05-19 19:52:53,763 INFO org.apache.hadoop.yarn.server.nodemanager.containermanager.localizer.ResourceLocalizationService: Writing credentials to the nmPrivate file /var/hadoop/tmp/yarn/nm-local-dir/nmPrivate/container_1432064564266_0003_01_000001.tokens. Credentials list:
2015-05-19 19:52:58,614 INFO SecurityLogger.org.apache.hadoop.ipc.Server: Auth successful for testing (auth:SIMPLE)
2015-05-19 19:52:58,685 INFO SecurityLogger.org.apache.hadoop.security.authorize.ServiceAuthorizationManager: Authorization successful for testing (auth:TOKEN) for protocol=interface org.apache.hadoop.yarn.server.nodemanager.api.LocalizationProtocolPB
2015-05-19 19:53:08,133 INFO org.apache.hadoop.yarn.server.nodemanager.containermanager.localizer.LocalizedResource: Resource hdfs://10.10.10.10:8020/user/testuser/.sparkStaging/application_1432064564266_0003/spark-assembly-1.3.1-hadoop2.6.0.jar(->/var/hadoop/tmp/yarn/nm-local-dir/usercache/testuser/filecache/13/spark-assembly-1.3.1-hadoop2.6.0.jar) transitioned from DOWNLOADING to LOCALIZED
2015-05-19 19:53:08,134 INFO org.apache.hadoop.yarn.server.nodemanager.containermanager.container.Container: Container container_1432064564266_0003_01_000001 transitioned from LOCALIZING to LOCALIZED
2015-05-19 19:53:08,217 INFO org.apache.hadoop.yarn.server.nodemanager.containermanager.container.Container: Container container_1432064564266_0003_01_000001 transitioned from LOCALIZED to RUNNING
2015-05-19 19:53:10,044 INFO org.apache.hadoop.yarn.server.nodemanager.containermanager.monitor.ContainersMonitorImpl: Starting resource-monitoring for container_1432064564266_0003_01_000001
2015-05-19 19:53:10,092 INFO org.apache.hadoop.yarn.server.nodemanager.containermanager.monitor.ContainersMonitorImpl: Memory usage of ProcessTree 20620 for container-id container_1432064564266_0003_01_000001: 79.5 MB of 1 GB physical memory used; 1.1 GB of 2.1 GB virtual memory used
2015-05-19 19:53:13,105 INFO org.apache.hadoop.yarn.server.nodemanager.containermanager.monitor.ContainersMonitorImpl: Memory usage of ProcessTree 20620 for container-id container_1432064564266_0003_01_000001: 99.8 MB of 1 GB physical memory used; 1.1 GB of 2.1 GB virtual memory used
2015-05-19 19:53:16,122 INFO org.apache.hadoop.yarn.server.nodemanager.containermanager.monitor.ContainersMonitorImpl: Memory usage of ProcessTree 20620 for container-id container_1432064564266_0003_01_000001: 121.4 MB of 1 GB physical memory used; 1.1 GB of 2.1 GB virtual memory used
2015-05-19 19:53:19,129 INFO org.apache.hadoop.yarn.server.nodemanager.containermanager.monitor.ContainersMonitorImpl: Memory usage of ProcessTree 20620 for container-id container_1432064564266_0003_01_000001: 125.8 MB of 1 GB physical memory used; 1.1 GB of 2.1 GB virtual memory used
2015-05-19 19:53:19,402 INFO org.apache.hadoop.yarn.server.nodemanager.containermanager.launcher.ContainerLaunch: Container container_1432064564266_0003_01_000001 succeeded
2015-05-19 19:53:19,402 INFO org.apache.hadoop.yarn.server.nodemanager.containermanager.container.Container: Container container_1432064564266_0003_01_000001 transitioned from RUNNING to EXITED_WITH_SUCCESS
2015-05-19 19:53:19,402 INFO org.apache.hadoop.yarn.server.nodemanager.containermanager.launcher.ContainerLaunch: Cleaning up container container_1432064564266_0003_01_000001
2015-05-19 19:53:19,457 INFO org.apache.hadoop.yarn.server.nodemanager.NMAuditLogger: USER=testuser     OPERATION=Container Finished - Succeeded        TARGET=ContainerImpl    RESULT=SUCCESS     APPID=application_1432064564266_0003    CONTAINERID=container_1432064564266_0003_01_000001
2015-05-19 19:53:19,457 INFO org.apache.hadoop.yarn.server.nodemanager.containermanager.container.Container: Container container_1432064564266_0003_01_000001 transitioned from EXITED_WITH_SUCCESS to DONE
2015-05-19 19:53:19,457 INFO org.apache.hadoop.yarn.server.nodemanager.containermanager.application.Application: Removing container_1432064564266_0003_01_000001 from application application_1432064564266_0003
2015-05-19 19:53:19,458 INFO org.apache.hadoop.yarn.server.nodemanager.containermanager.AuxServices: Got event CONTAINER_STOP for appId application_1432064564266_0003
2015-05-19 19:53:19,458 INFO org.apache.hadoop.yarn.server.nodemanager.LinuxContainerExecutor: Deleting absolute path : /var/hadoop/tmp/yarn/nm-local-dir/usercache/testuser/appcache/application_1432064564266_0003/container_1432064564266_0003_01_000001
2015-05-19 19:53:20,428 INFO SecurityLogger.org.apache.hadoop.ipc.Server: Auth successful for appattempt_1432064564266_0003_000001 (auth:SIMPLE)
2015-05-19 19:53:20,432 WARN org.apache.hadoop.security.UserGroupInformation: No groups available for user appattempt_1432064564266_0003_000001
2015-05-19 19:53:20,432 INFO SecurityLogger.org.apache.hadoop.security.authorize.ServiceAuthorizationManager: Authorization successful for appattempt_1432064564266_0003_000001 (auth:TOKEN) for protocol=interface org.apache.hadoop.yarn.api.ContainerManagementProtocolPB
2015-05-19 19:53:20,432 INFO org.apache.hadoop.yarn.server.nodemanager.containermanager.ContainerManagerImpl: Stopping container with container Id: container_1432064564266_0003_01_000001
2015-05-19 19:53:20,432 INFO org.apache.hadoop.yarn.server.nodemanager.NMAuditLogger: USER=testuser     IP=10.10.127.10 OPERATION=Stop Container Request        TARGET=ContainerManageImpl  RESULT=SUCCESS   APPID=application_1432064564266_0003    CONTAINERID=container_1432064564266_0003_01_000001
2015-05-19 19:53:20,442 INFO org.apache.hadoop.yarn.server.nodemanager.NodeStatusUpdaterImpl: Removed completed containers from NM context: [container_1432064564266_0003_01_000001]
2015-05-19 19:53:20,442 INFO org.apache.hadoop.yarn.server.nodemanager.containermanager.application.Application: Application application_1432064564266_0003 transitioned from RUNNING to APPLICATION_RESOURCES_CLEANINGUP
2015-05-19 19:53:20,442 INFO org.apache.hadoop.yarn.server.nodemanager.containermanager.AuxServices: Got event APPLICATION_STOP for appId application_1432064564266_0003
2015-05-19 19:53:20,442 INFO org.apache.hadoop.yarn.server.nodemanager.containermanager.application.Application: Application application_1432064564266_0003 transitioned from APPLICATION_RESOURCES_CLEANINGUP to FINISHED
2015-05-19 19:53:20,443 INFO org.apache.hadoop.yarn.server.nodemanager.containermanager.loghandler.NonAggregatingLogHandler: Scheduling Log Deletion for application: application_1432064564266_0003, with delay of 10800 seconds
2015-05-19 19:53:20,443 INFO org.apache.hadoop.yarn.server.nodemanager.LinuxContainerExecutor: Deleting absolute path : /var/hadoop/tmp/yarn/nm-local-dir/usercache/testuser/appcache/application_1432064564266_0003
2015-05-19 19:53:22,130 INFO org.apache.hadoop.yarn.server.nodemanager.containermanager.monitor.ContainersMonitorImpl: Stopping resource-monitoring for container_1432064564266_0003_01_000001
root@ip-10-10-128-10:/var/log/hadoop>


###: YARN RESOURCE MANAGER LOGS :###
####################################
2015-05-19 19:52:45,408 INFO SecurityLogger.org.apache.hadoop.ipc.Server: Auth successful for testuser@MALARD.LOCAL (auth:KERBEROS)
2015-05-19 19:52:45,447 INFO SecurityLogger.org.apache.hadoop.security.authorize.ServiceAuthorizationManager: Authorization successful for testuser@MALARD.LOCAL (auth:KERBEROS) for protocol=interface org.apache.hadoop.yarn.api.ApplicationClientProtocolPB
2015-05-19 19:52:45,494 INFO org.apache.hadoop.yarn.server.resourcemanager.ClientRMService: Allocated new applicationId: 3
2015-05-19 19:52:52,758 WARN org.apache.hadoop.yarn.server.resourcemanager.rmapp.RMAppImpl: The specific max attempts: 0 for application: 3 is invalid, because it is out of the range [1, 2]. Use the global max attempts instead.
2015-05-19 19:52:52,758 INFO org.apache.hadoop.yarn.server.resourcemanager.ClientRMService: Application with id 3 submitted by user testuser
2015-05-19 19:52:52,758 INFO org.apache.hadoop.yarn.server.resourcemanager.RMAuditLogger: USER=testuser IP=10.10.127.10 OPERATION=Submit Application Request    TARGET=ClientRMService  RESULT=SUCCESS     APPID=application_1432064564266_0003
2015-05-19 19:52:52,803 INFO org.apache.hadoop.yarn.server.resourcemanager.security.DelegationTokenRenewer: application_1432064564266_0003 found existing hdfs token Kind: HDFS_DELEGATION_TOKEN, Service: 10.10.10.10:8020, Ident: (HDFS_DELEGATION_TOKEN token 5 for testuser)
2015-05-19 19:52:52,928 INFO org.apache.hadoop.yarn.server.resourcemanager.security.DelegationTokenRenewer: Renewed delegation-token= [Kind: HDFS_DELEGATION_TOKEN, Service: 10.10.10.10:8020, Ident: (HDFS_DELEGATION_TOKEN token 5 for testuser);exp=1432151572908], for application_1432064564266_0003
2015-05-19 19:52:52,928 INFO org.apache.hadoop.yarn.server.resourcemanager.security.DelegationTokenRenewer: Renew Kind: HDFS_DELEGATION_TOKEN, Service: 10.10.10.10:8020, Ident: (HDFS_DELEGATION_TOKEN token 5 for testuser);exp=1432151572908 in 86399980 ms, appId = application_1432064564266_0003
2015-05-19 19:52:52,928 INFO org.apache.hadoop.yarn.server.resourcemanager.rmapp.RMAppImpl: Storing application with id application_1432064564266_0003
2015-05-19 19:52:52,928 INFO org.apache.hadoop.yarn.server.resourcemanager.rmapp.RMAppImpl: application_1432064564266_0003 State change from NEW to NEW_SAVING
2015-05-19 19:52:52,928 INFO org.apache.hadoop.yarn.server.resourcemanager.recovery.RMStateStore: Storing info for app: application_1432064564266_0003
2015-05-19 19:52:52,928 INFO org.apache.hadoop.yarn.server.resourcemanager.rmapp.RMAppImpl: application_1432064564266_0003 State change from NEW_SAVING to SUBMITTED
2015-05-19 19:52:52,929 INFO org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.ParentQueue: Application added - appId: application_1432064564266_0003 user: testuser leaf-queue of parent: root #applications: 1
2015-05-19 19:52:52,929 INFO org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.CapacityScheduler: Accepted application application_1432064564266_0003 from user: testuser, in queue: default
2015-05-19 19:52:52,929 INFO org.apache.hadoop.yarn.server.resourcemanager.rmapp.RMAppImpl: application_1432064564266_0003 State change from SUBMITTED to ACCEPTED
2015-05-19 19:52:52,929 INFO org.apache.hadoop.yarn.server.resourcemanager.ApplicationMasterService: Registering app attempt : appattempt_1432064564266_0003_000001
2015-05-19 19:52:52,929 INFO org.apache.hadoop.yarn.server.resourcemanager.rmapp.attempt.RMAppAttemptImpl: appattempt_1432064564266_0003_000001 State change from NEW to SUBMITTED
2015-05-19 19:52:52,929 INFO org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.LeafQueue: Application application_1432064564266_0003 from user: testuser activated in queue: default
2015-05-19 19:52:52,929 INFO org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.LeafQueue: Application added - appId: application_1432064564266_0003 user: org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.LeafQueue$User@72fa0d32, leaf-queue: default #user-pending-applications: 0 #user-active-applications: 1 #queue-pending-applications: 0 #queue-active-applications: 1
2015-05-19 19:52:52,929 INFO org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.CapacityScheduler: Added Application Attempt appattempt_1432064564266_0003_000001 to scheduler from user testuser in queue default
2015-05-19 19:52:52,930 INFO org.apache.hadoop.yarn.server.resourcemanager.rmapp.attempt.RMAppAttemptImpl: appattempt_1432064564266_0003_000001 State change from SUBMITTED to SCHEDULED
2015-05-19 19:52:53,718 INFO org.apache.hadoop.yarn.server.resourcemanager.rmcontainer.RMContainerImpl: container_1432064564266_0003_01_000001 Container Transitioned from NEW to ALLOCATED
2015-05-19 19:52:53,718 INFO org.apache.hadoop.yarn.server.resourcemanager.RMAuditLogger: USER=testuser OPERATION=AM Allocated Container        TARGET=SchedulerApp     RESULT=SUCCESS  APPID=application_1432064564266_0003       CONTAINERID=container_1432064564266_0003_01_000001
2015-05-19 19:52:53,718 INFO org.apache.hadoop.yarn.server.resourcemanager.scheduler.SchedulerNode: Assigned container container_1432064564266_0003_01_000001 of capacity <memory:1024, vCores:1> on host ip-10-10-128-10.ec2.internal:9032, which has 1 containers, <memory:1024, vCores:1> used and <memory:7168, vCores:7> available after allocation
2015-05-19 19:52:53,718 INFO org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.LeafQueue: assignedContainer application attempt=appattempt_1432064564266_0003_000001 container=Container: [ContainerId: container_1432064564266_0003_01_000001, NodeId: ip-10-10-128-10.ec2.internal:9032, NodeHttpAddress: ip-10-10-128-10.ec2.internal:8090, Resource: <memory:1024, vCores:1>, Priority: 0, Token: null, ] queue=default: capacity=1.0, absoluteCapacity=1.0, usedResources=<memory:0, vCores:0>, usedCapacity=0.0, absoluteUsedCapacity=0.0, numApps=1, numContainers=0 clusterResource=<memory:8192, vCores:8>
2015-05-19 19:52:53,718 INFO org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.ParentQueue: Re-sorting assigned queue: root.default stats: default: capacity=1.0, absoluteCapacity=1.0, usedResources=<memory:1024, vCores:1>, usedCapacity=0.125, absoluteUsedCapacity=0.125, numApps=1, numContainers=1
2015-05-19 19:52:53,718 INFO org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.ParentQueue: assignedContainer queue=root usedCapacity=0.125 absoluteUsedCapacity=0.125 used=<memory:1024, vCores:1> cluster=<memory:8192, vCores:8>
2015-05-19 19:52:53,719 INFO org.apache.hadoop.yarn.server.resourcemanager.security.NMTokenSecretManagerInRM: Sending NMToken for nodeId : ip-10-10-128-10.ec2.internal:9032 for container : container_1432064564266_0003_01_000001
2015-05-19 19:52:53,720 INFO org.apache.hadoop.yarn.server.resourcemanager.rmcontainer.RMContainerImpl: container_1432064564266_0003_01_000001 Container Transitioned from ALLOCATED to ACQUIRED
2015-05-19 19:52:53,720 INFO org.apache.hadoop.yarn.server.resourcemanager.security.NMTokenSecretManagerInRM: Clear node set for appattempt_1432064564266_0003_000001
2015-05-19 19:52:53,721 INFO org.apache.hadoop.yarn.server.resourcemanager.rmapp.attempt.RMAppAttemptImpl: Storing attempt: AppId: application_1432064564266_0003 AttemptId: appattempt_1432064564266_0003_000001 MasterContainer: Container: [ContainerId: container_1432064564266_0003_01_000001, NodeId: ip-10-10-128-10.ec2.internal:9032, NodeHttpAddress: ip-10-10-128-10.ec2.internal:8090, Resource: <memory:1024, vCores:1>, Priority: 0, Token: Token { kind: ContainerToken, service: 10.10.128.10:9032 }, ]
2015-05-19 19:52:53,721 INFO org.apache.hadoop.yarn.server.resourcemanager.rmapp.attempt.RMAppAttemptImpl: appattempt_1432064564266_0003_000001 State change from SCHEDULED to ALLOCATED_SAVING
2015-05-19 19:52:53,721 INFO org.apache.hadoop.yarn.server.resourcemanager.rmapp.attempt.RMAppAttemptImpl: appattempt_1432064564266_0003_000001 State change from ALLOCATED_SAVING to ALLOCATED
2015-05-19 19:52:53,723 INFO org.apache.hadoop.yarn.server.resourcemanager.amlauncher.AMLauncher: Launching masterappattempt_1432064564266_0003_000001
2015-05-19 19:52:53,725 INFO org.apache.hadoop.yarn.server.resourcemanager.amlauncher.AMLauncher: Setting up container Container: [ContainerId: container_1432064564266_0003_01_000001, NodeId: ip-10-10-128-10.ec2.internal:9032, NodeHttpAddress: ip-10-10-128-10.ec2.internal:8090, Resource: <memory:1024, vCores:1>, Priority: 0, Token: Token { kind: ContainerToken, service: 10.10.128.10:9032 }, ] for AM appattempt_1432064564266_0003_000001
2015-05-19 19:52:53,725 INFO org.apache.hadoop.yarn.server.resourcemanager.amlauncher.AMLauncher: Command to launch container container_1432064564266_0003_01_000001 : {{JAVA_HOME}}/bin/java,-server,-Xmx512m,-Djava.io.tmpdir={{PWD}}/tmp,'-Dspark.fileserver.uri=http://10.10.127.10:8003','-Dspark.broadcast.port=8004','-Dspark.executor.port=8002','-Dspark.driver.appUIAddress=http://ip-10-10-127-10.ec2.internal:4040','-Dspark.fileserver.port=8003','-Dspark.ui.port=4040','-Dspark.tachyonStore.folderName=spark-ea0a49a2-1643-4410-892d-690c62cb6857','-Dspark.driver.extraJavaOptions=-Djava.net.preferIPv4Stack=true','-Dspark.jars=file:/home/testuser/spark/lib/spark-examples-1.3.1-hadoop2.6.0.jar','-Dspark.app.name=Spark Pi','-Dspark.akka.threads=1','-Dspark.authenticate.secret=fubar','-Dspark.eventlog.enabled=true','-Dspark.driver.host=ip-10-10-127-10.ec2.internal','-Dspark.blockManager.port=8007','-Dspark.history.ui.port=18080','-Dspark.replClassServer.port=8006','-Dspark.worker.ui.port=8081','-Dspark.master=yarn-client','-Dspark.yarn.preserve.staging.files=true','-Dspark.executor.extraJavaOptions=-XX:+PrintGCDetails -Dkey=value -Dnumbers=\"one two three\"','-Dspark.yarn.access.namenodes=hdfs://10.10.10.10:8020','-Dspark.logConf=true','-Dspark.driver.port=8001','-Dspark.executor.id=<driver>','-Dspark.akka.heartbeat.interval=100','-Dspark.master.ui.port=8080','-Dspark.yarn.am.waitTime=200000','-Dspark.yarn.am.extraJavaOptions=-Djava.net.preferIPv4Stack=true','-Dspark.executor.instances=1','-Dspark.yarn.am.port=8008','-Dspark.eventlog.dir=hdfs://10.10.10.10:8020/user/testuser/spark/eventlog','-Dspark.executor.cores=1','-Dspark.authenticate=true','-Djava.net.preferIPv4Stack=true',-Dspark.yarn.app.container.log.dir=<LOG_DIR>,org.apache.spark.deploy.yarn.ExecutorLauncher,--arg,'ip-10-10-127-10.ec2.internal:8001',--executor-memory,1024m,--executor-cores,1,--num-executors ,1,1>,<LOG_DIR>/stdout,2>,<LOG_DIR>/stderr
2015-05-19 19:52:53,725 INFO org.apache.hadoop.yarn.server.resourcemanager.security.AMRMTokenSecretManager: Create AMRMToken for ApplicationAttempt: appattempt_1432064564266_0003_000001
2015-05-19 19:52:53,725 INFO org.apache.hadoop.yarn.server.resourcemanager.security.AMRMTokenSecretManager: Creating password for appattempt_1432064564266_0003_000001
2015-05-19 19:52:53,750 INFO org.apache.hadoop.yarn.server.resourcemanager.amlauncher.AMLauncher: Done launching container Container: [ContainerId: container_1432064564266_0003_01_000001, NodeId: ip-10-10-128-10.ec2.internal:9032, NodeHttpAddress: ip-10-10-128-10.ec2.internal:8090, Resource: <memory:1024, vCores:1>, Priority: 0, Token: Token { kind: ContainerToken, service: 10.10.128.10:9032 }, ] for AM appattempt_1432064564266_0003_000001
2015-05-19 19:52:53,750 INFO org.apache.hadoop.yarn.server.resourcemanager.rmapp.attempt.RMAppAttemptImpl: appattempt_1432064564266_0003_000001 State change from ALLOCATED to LAUNCHED
2015-05-19 19:52:54,733 INFO org.apache.hadoop.yarn.server.resourcemanager.rmcontainer.RMContainerImpl: container_1432064564266_0003_01_000001 Container Transitioned from ACQUIRED to RUNNING
2015-05-19 19:53:17,318 INFO SecurityLogger.org.apache.hadoop.ipc.Server: Auth successful for appattempt_1432064564266_0003_000001 (auth:SIMPLE)
2015-05-19 19:53:17,376 WARN org.apache.hadoop.security.ShellBasedUnixGroupsMapping: got exception trying to get groups for user appattempt_1432064564266_0003_000001: id: appattempt_1432064564266_0003_000001: no such user

2015-05-19 19:53:17,376 WARN org.apache.hadoop.security.UserGroupInformation: No groups available for user appattempt_1432064564266_0003_000001
2015-05-19 19:53:17,376 INFO SecurityLogger.org.apache.hadoop.security.authorize.ServiceAuthorizationManager: Authorization successful for appattempt_1432064564266_0003_000001 (auth:TOKEN) for protocol=interface org.apache.hadoop.yarn.api.ApplicationMasterProtocolPB
2015-05-19 19:53:17,377 INFO org.apache.hadoop.yarn.server.resourcemanager.ApplicationMasterService: AM registration appattempt_1432064564266_0003_000001
2015-05-19 19:53:17,377 INFO org.apache.hadoop.yarn.server.resourcemanager.RMAuditLogger: USER=testuser IP=10.10.128.10 OPERATION=Register App Master   TARGET=ApplicationMasterService RESULT=SUCCESS     APPID=application_1432064564266_0003    APPATTEMPTID=appattempt_1432064564266_0003_000001
2015-05-19 19:53:17,377 INFO org.apache.hadoop.yarn.server.resourcemanager.ApplicationMasterService: Setting client token master key
2015-05-19 19:53:17,378 INFO org.apache.hadoop.yarn.server.resourcemanager.rmapp.attempt.RMAppAttemptImpl: appattempt_1432064564266_0003_000001 State change from LAUNCHED to RUNNING
2015-05-19 19:53:17,378 INFO org.apache.hadoop.yarn.server.resourcemanager.rmapp.RMAppImpl: application_1432064564266_0003 State change from ACCEPTED to RUNNING
2015-05-19 19:53:18,377 INFO org.apache.hadoop.yarn.server.resourcemanager.rmcontainer.RMContainerImpl: container_1432064564266_0003_01_000002 Container Transitioned from NEW to ALLOCATED
2015-05-19 19:53:18,377 INFO org.apache.hadoop.yarn.server.resourcemanager.RMAuditLogger: USER=testuser OPERATION=AM Allocated Container        TARGET=SchedulerApp     RESULT=SUCCESS  APPID=application_1432064564266_0003       CONTAINERID=container_1432064564266_0003_01_000002
2015-05-19 19:53:18,377 INFO org.apache.hadoop.yarn.server.resourcemanager.scheduler.SchedulerNode: Assigned container container_1432064564266_0003_01_000002 of capacity <memory:2048, vCores:1> on host ip-10-10-128-10.ec2.internal:9032, which has 2 containers, <memory:3072, vCores:2> used and <memory:5120, vCores:6> available after allocation
2015-05-19 19:53:18,377 INFO org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.LeafQueue: assignedContainer application attempt=appattempt_1432064564266_0003_000001 container=Container: [ContainerId: container_1432064564266_0003_01_000002, NodeId: ip-10-10-128-10.ec2.internal:9032, NodeHttpAddress: ip-10-10-128-10.ec2.internal:8090, Resource: <memory:2048, vCores:1>, Priority: 1, Token: null, ] queue=default: capacity=1.0, absoluteCapacity=1.0, usedResources=<memory:1024, vCores:1>, usedCapacity=0.125, absoluteUsedCapacity=0.125, numApps=1, numContainers=1 clusterResource=<memory:8192, vCores:8>
2015-05-19 19:53:18,377 INFO org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.ParentQueue: Re-sorting assigned queue: root.default stats: default: capacity=1.0, absoluteCapacity=1.0, usedResources=<memory:3072, vCores:2>, usedCapacity=0.375, absoluteUsedCapacity=0.375, numApps=1, numContainers=2
2015-05-19 19:53:18,377 INFO org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.ParentQueue: assignedContainer queue=root usedCapacity=0.375 absoluteUsedCapacity=0.375 used=<memory:3072, vCores:2> cluster=<memory:8192, vCores:8>
2015-05-19 19:53:18,967 INFO org.apache.hadoop.yarn.server.resourcemanager.rmapp.attempt.RMAppAttemptImpl: Updating application attempt appattempt_1432064564266_0003_000001 with final state: FINISHING, and exit status: -1000
2015-05-19 19:53:18,967 INFO org.apache.hadoop.yarn.server.resourcemanager.rmapp.attempt.RMAppAttemptImpl: appattempt_1432064564266_0003_000001 State change from RUNNING to FINAL_SAVING
2015-05-19 19:53:18,967 INFO org.apache.hadoop.yarn.server.resourcemanager.rmapp.RMAppImpl: Updating application application_1432064564266_0003 with final state: FINISHING
2015-05-19 19:53:18,967 INFO org.apache.hadoop.yarn.server.resourcemanager.rmapp.RMAppImpl: application_1432064564266_0003 State change from RUNNING to FINAL_SAVING
2015-05-19 19:53:18,967 INFO org.apache.hadoop.yarn.server.resourcemanager.recovery.RMStateStore: Updating info for app: application_1432064564266_0003
2015-05-19 19:53:18,968 INFO org.apache.hadoop.yarn.server.resourcemanager.rmapp.attempt.RMAppAttemptImpl: appattempt_1432064564266_0003_000001 State change from FINAL_SAVING to FINISHING
2015-05-19 19:53:18,968 INFO org.apache.hadoop.yarn.server.resourcemanager.rmapp.RMAppImpl: application_1432064564266_0003 State change from FINAL_SAVING to FINISHING
2015-05-19 19:53:19,076 INFO org.apache.hadoop.yarn.server.resourcemanager.ApplicationMasterService: application_1432064564266_0003 unregistered successfully.
2015-05-19 19:53:20,383 INFO org.apache.hadoop.yarn.server.resourcemanager.rmcontainer.RMContainerImpl: container_1432064564266_0003_01_000001 Container Transitioned from RUNNING to COMPLETED
2015-05-19 19:53:20,384 INFO org.apache.hadoop.yarn.server.resourcemanager.scheduler.common.fica.FiCaSchedulerApp: Completed container: container_1432064564266_0003_01_000001 in state: COMPLETED event:FINISHED
2015-05-19 19:53:20,384 INFO org.apache.hadoop.yarn.server.resourcemanager.RMAuditLogger: USER=testuser OPERATION=AM Released Container TARGET=SchedulerApp     RESULT=SUCCESS  APPID=application_1432064564266_0003       CONTAINERID=container_1432064564266_0003_01_000001
2015-05-19 19:53:20,384 INFO org.apache.hadoop.yarn.server.resourcemanager.scheduler.SchedulerNode: Released container container_1432064564266_0003_01_000001 of capacity <memory:1024, vCores:1> on host ip-10-10-128-10.ec2.internal:9032, which currently has 1 containers, <memory:2048, vCores:1> used and <memory:6144, vCores:7> available, release resources=true
2015-05-19 19:53:20,384 INFO org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.LeafQueue: default used=<memory:2048, vCores:1> numContainers=1 user=testuser user-resources=<memory:2048, vCores:1>
2015-05-19 19:53:20,384 INFO org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.LeafQueue: completedContainer container=Container: [ContainerId: container_1432064564266_0003_01_000001, NodeId: ip-10-10-128-10.ec2.internal:9032, NodeHttpAddress: ip-10-10-128-10.ec2.internal:8090, Resource: <memory:1024, vCores:1>, Priority: 0, Token: Token { kind: ContainerToken, service: 10.10.128.10:9032 }, ] queue=default: capacity=1.0, absoluteCapacity=1.0, usedResources=<memory:2048, vCores:1>, usedCapacity=0.25, absoluteUsedCapacity=0.25, numApps=1, numContainers=1 cluster=<memory:8192, vCores:8>
2015-05-19 19:53:20,384 INFO org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.ParentQueue: completedContainer queue=root usedCapacity=0.25 absoluteUsedCapacity=0.25 used=<memory:2048, vCores:1> cluster=<memory:8192, vCores:8>
2015-05-19 19:53:20,384 INFO org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.ParentQueue: Re-sorting completed queue: root.default stats: default: capacity=1.0, absoluteCapacity=1.0, usedResources=<memory:2048, vCores:1>, usedCapacity=0.25, absoluteUsedCapacity=0.25, numApps=1, numContainers=1
2015-05-19 19:53:20,384 INFO org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.CapacityScheduler: Application attempt appattempt_1432064564266_0003_000001 released container container_1432064564266_0003_01_000001 on node: host: ip-10-10-128-10.ec2.internal:9032 #containers=1 available=6144 used=2048 with event: FINISHED
2015-05-19 19:53:20,384 INFO org.apache.hadoop.yarn.server.resourcemanager.ApplicationMasterService: Unregistering app attempt : appattempt_1432064564266_0003_000001
2015-05-19 19:53:20,385 INFO org.apache.hadoop.yarn.server.resourcemanager.security.AMRMTokenSecretManager: Application finished, removing password for appattempt_1432064564266_0003_000001
2015-05-19 19:53:20,385 INFO org.apache.hadoop.yarn.server.resourcemanager.rmapp.attempt.RMAppAttemptImpl: appattempt_1432064564266_0003_000001 State change from FINISHING to FINISHED
2015-05-19 19:53:20,385 INFO org.apache.hadoop.yarn.server.resourcemanager.rmapp.RMAppImpl: application_1432064564266_0003 State change from FINISHING to FINISHED
2015-05-19 19:53:20,385 INFO org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.CapacityScheduler: Application Attempt appattempt_1432064564266_0003_000001 is done. finalState=FINISHED
2015-05-19 19:53:20,387 INFO org.apache.hadoop.yarn.server.resourcemanager.rmcontainer.RMContainerImpl: container_1432064564266_0003_01_000002 Container Transitioned from ALLOCATED to KILLED
2015-05-19 19:53:20,387 INFO org.apache.hadoop.yarn.server.resourcemanager.scheduler.common.fica.FiCaSchedulerApp: Completed container: container_1432064564266_0003_01_000002 in state: KILLED event:KILL
2015-05-19 19:53:20,387 INFO org.apache.hadoop.yarn.server.resourcemanager.RMAuditLogger: USER=testuser OPERATION=AM Released Container TARGET=SchedulerApp     RESULT=SUCCESS  APPID=application_1432064564266_0003       CONTAINERID=container_1432064564266_0003_01_000002
2015-05-19 19:53:20,387 INFO org.apache.hadoop.yarn.server.resourcemanager.scheduler.SchedulerNode: Released container container_1432064564266_0003_01_000002 of capacity <memory:2048, vCores:1> on host ip-10-10-128-10.ec2.internal:9032, which currently has 0 containers, <memory:0, vCores:0> used and <memory:8192, vCores:8> available, release resources=true
2015-05-19 19:53:20,387 INFO org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.LeafQueue: default used=<memory:0, vCores:0> numContainers=0 user=testuser user-resources=<memory:0, vCores:0>
2015-05-19 19:53:20,387 INFO org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.LeafQueue: completedContainer container=Container: [ContainerId: container_1432064564266_0003_01_000002, NodeId: ip-10-10-128-10.ec2.internal:9032, NodeHttpAddress: ip-10-10-128-10.ec2.internal:8090, Resource: <memory:2048, vCores:1>, Priority: 1, Token: null, ] queue=default: capacity=1.0, absoluteCapacity=1.0, usedResources=<memory:0, vCores:0>, usedCapacity=0.0, absoluteUsedCapacity=0.0, numApps=1, numContainers=0 cluster=<memory:8192, vCores:8>
2015-05-19 19:53:20,387 INFO org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.ParentQueue: completedContainer queue=root usedCapacity=0.0 absoluteUsedCapacity=0.0 used=<memory:0, vCores:0> cluster=<memory:8192, vCores:8>
2015-05-19 19:53:20,387 INFO org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.ParentQueue: Re-sorting completed queue: root.default stats: default: capacity=1.0, absoluteCapacity=1.0, usedResources=<memory:0, vCores:0>, usedCapacity=0.0, absoluteUsedCapacity=0.0, numApps=1, numContainers=0
2015-05-19 19:53:20,387 INFO org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.CapacityScheduler: Application attempt appattempt_1432064564266_0003_000001 released container container_1432064564266_0003_01_000002 on node: host: ip-10-10-128-10.ec2.internal:9032 #containers=0 available=8192 used=0 with event: KILL
2015-05-19 19:53:20,387 INFO org.apache.hadoop.yarn.server.resourcemanager.scheduler.AppSchedulingInfo: Application application_1432064564266_0003 requests cleared
2015-05-19 19:53:20,388 INFO org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.LeafQueue: Application removed - appId: application_1432064564266_0003 user: testuser queue: default #user-pending-applications: 0 #user-active-applications: 0 #queue-pending-applications: 0 #queue-active-applications: 0
2015-05-19 19:53:20,388 INFO org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.ParentQueue: Application removed - appId: application_1432064564266_0003 user: testuser leaf-queue of parent: root #applications: 0
2015-05-19 19:53:20,388 INFO org.apache.hadoop.yarn.server.resourcemanager.RMAuditLogger: USER=testuser OPERATION=Application Finished - Succeeded      TARGET=RMAppManager     RESULT=SUCCESS     APPID=application_1432064564266_0003
2015-05-19 19:53:20,389 INFO org.apache.hadoop.yarn.server.resourcemanager.RMAppManager$ApplicationSummary: appId=application_1432064564266_0003,name=Spark Pi,user=testuser,queue=default,state=FINISHED,trackingUrl=https://ip-10-10-127-10.ec2.internal:8090/proxy/application_1432064564266_0003/A,appMasterHost=ip-10-10-128-10.ec2.internal,startTime=1432065172758,finishTime=1432065198967,finalStatus=SUCCEEDED
2015-05-19 19:53:20,389 INFO org.apache.hadoop.hdfs.DFSClient: Cancelling HDFS_DELEGATION_TOKEN token 5 for testuser on 10.10.10.10:8020
2015-05-19 19:53:20,390 INFO org.apache.hadoop.yarn.server.resourcemanager.amlauncher.AMLauncher: Cleaning master appattempt_1432064564266_0003_000001
2015-05-19 19:53:20,442 INFO org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.CapacityScheduler: Null container completed...
root@ip-10-10-127-10:/var/log/hadoop>

Keith Nance
Sr. Software Engineer
Email: knance@smartronix.com
Cell: 808-343-0071
www.smartronix.com


RE: SparkPi fails with Job aborted due to stage failure: Task serialization failed:

Posted by "Nance, Keith" <kn...@smartronix.com>.
Thank you, Rohith, for the reply; I wasn't exactly sure what to make of those log entries. I had assumed that, from YARN's perspective, the container was indeed set up and torn down cleanly. I'll send this to the Spark user mailing list, since it makes better sense for the issue to live in that forum.
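
In the meantime, one thing I noticed in the output below is that the stack trace goes through org.apache.spark.io.CompressionCodec$.createCodec before the InvocationTargetException, so one experiment I plan to try (just a guess on my part, assuming it is the default snappy codec that fails to construct on this host) is re-running the job with a different compression codec:

./bin/spark-submit --verbose --class org.apache.spark.examples.SparkPi \
  --master yarn-client --num-executors 1 --executor-cores 1 \
  --conf spark.io.compression.codec=lzf \
  lib/spark-examples*.jar 10

If that changes or clears the error, it would at least point at the codec/native-library side rather than anything in YARN.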

Regards,
Keith

From: Rohith Sharma K S [mailto:rohithsharmaks@huawei.com]
Sent: Wednesday, May 20, 2015 2:06 AM
To: user@hadoop.apache.org
Subject: RE: SparkPi fails with Job aborted due to stage failure: Task serialization failed:

Hi,

From the ResourceManager log, it is very clear that the job has succeeded. From YARN's side there is no problem running Spark applications.

2015-05-19 19:53:20,388 INFO org.apache.hadoop.yarn.server.resourcemanager.RMAuditLogger: USER=testuser OPERATION=Application Finished - Succeeded      TARGET=RMAppManager     RESULT=SUCCESS     APPID=application_1432064564266_0003
2015-05-19 19:53:20,389 INFO org.apache.hadoop.yarn.server.resourcemanager.RMAppManager$ApplicationSummary: appId=application_1432064564266_0003,name=Spark Pi,user=testuser,queue=default,state=FINISHED,trackingUrl=https://ip-10-10-127-10.ec2.internal:8090/proxy/application_1432064564266_0003/A,appMasterHost=ip-10-10-128-10.ec2.internal,startTime=1432065172758,finishTime=1432065198967,finalStatus=SUCCEEDED

But I do not have an idea of what Spark is doing internally. Would you mind sending a mail to the Spark user mailing list?
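
As a quick cross-check (assuming the yarn client is available on the cluster), the same final status can be read back from YARN directly:

yarn application -status application_1432064564266_0003

It should report the state FINISHED with final status SUCCEEDED, matching the RMAuditLogger and ApplicationSummary entries above.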


Thanks & Regards
Rohith Sharma K S

From: Nance, Keith [mailto:knance@smartronix.com]
Sent: 20 May 2015 02:05
To: user@hadoop.apache.org
Subject: SparkPi fails with Job aborted due to stage failure: Task serialization failed:

All, unable to find any reference to my issue with spark.  Any ideas?  Thanks for any and all help
Attached are logs from the Spark job (SparkPi) results, Userlog, Nodemanager, and Resourcemanager.
###: SPARK JOB RESULTS :###
###########################
[testuser@ip-10-10-127-10 spark]$ ./bin/spark-submit --verbose --class org.apache.spark.examples.SparkPi --master yarn-client --num-executors 1 --executor-cores 1 lib/spark-examples*.jar 10
/home/testuser/spark/conf/spark-env.sh: line 55: -Dspark.history.kerberos.principal=spark/ip-10-10-127-10.ec2.internal@MALARD.LOCAL: No such file or directory
Using properties file: /home/testuser/spark/conf/spark-defaults.conf
Adding default property: spark.broadcast.port=8004
Adding default property: spark.executor.extraJavaOptions=-XX:+PrintGCDetails -Dkey=value -Dnumbers="one two three"
Adding default property: spark.akka.threads=1
Adding default property: spark.ui.port=4040
Adding default property: spark.driver.port=8001
Adding default property: spark.akka.heartbeat.interval=100
Adding default property: spark.yarn.access.namenodes=hdfs://10.10.10.10:8020
Adding default property: spark.executor.port=8002
Adding default property: spark.logConf=true
Adding default property: spark.yarn.am.extraJavaOptions=-Djava.net.preferIPv4Stack=true
Adding default property: spark.worker.ui.port=8081
Adding default property: spark.replClassServer.port=8006
Adding default property: spark.eventlog.dir=hdfs://10.10.10.10:8020/user/testuser/spark/eventlog
Adding default property: spark.driver.extraJavaOptions=-Djava.net.preferIPv4Stack=true
Adding default property: spark.blockManager.port=8007
Adding default property: spark.yarn.am.waitTime=200000
Adding default property: spark.master=yarn-client
Adding default property: spark.yarn.preserve.staging.files=true
Adding default property: spark.fileserver.port=8003
Adding default property: spark.authenticate=true
Adding default property: spark.yarn.am.port=8008
Adding default property: spark.authenticate.secret=fubar
Adding default property: spark.master.ui.port=8080
Adding default property: spark.history.ui.port=18080
Adding default property: spark.eventlog.enabled=true
Parsed arguments:
  master                  yarn-client
  deployMode              null
  executorMemory          null
  executorCores           1
  totalExecutorCores      null
  propertiesFile          /home/testuser/spark/conf/spark-defaults.conf
  driverMemory            null
  driverCores             null
  driverExtraClassPath    null
  driverExtraLibraryPath  null
 driverExtraJavaOptions  -Djava.net.preferIPv4Stack=true
  supervise               false
  queue                   null
  numExecutors            1
  files                   null
  pyFiles                 null
  archives                null
  mainClass               org.apache.spark.examples.SparkPi
  primaryResource         file:/home/testuser/spark/lib/spark-examples-1.3.1-hadoop2.6.0.jar
  name                    org.apache.spark.examples.SparkPi
  childArgs               [10]
  jars                    null
  packages                null
  repositories            null
  verbose                 true

Spark properties used, including those specified through
--conf and those from the properties file /home/testuser/spark/conf/spark-defaults.conf:
  spark.blockManager.port -> 8007
  spark.yarn.access.namenodes -> hdfs://10.10.10.10:8020
  spark.broadcast.port -> 8004
  spark.authenticate.secret -> fubar
  spark.authenticate -> true
  spark.eventlog.dir -> hdfs://10.10.10.10:8020/user/testuser/spark/eventlog
  spark.logConf -> true
  spark.replClassServer.port -> 8006
  spark.history.ui.port -> 18080
  spark.fileserver.port -> 8003
  spark.ui.port -> 4040
  spark.executor.extraJavaOptions -> -XX:+PrintGCDetails -Dkey=value -Dnumbers="one two three"
  spark.driver.extraJavaOptions -> -Djava.net.preferIPv4Stack=true
  spark.yarn.am.waitTime -> 200000
  spark.yarn.am.extraJavaOptions -> -Djava.net.preferIPv4Stack=true
  spark.worker.ui.port -> 8081
  spark.driver.port -> 8001
  spark.master -> yarn-client
  spark.yarn.preserve.staging.files -> true
  spark.yarn.am.port -> 8008
  spark.akka.heartbeat.interval -> 100
  spark.executor.port -> 8002
  spark.master.ui.port -> 8080
  spark.eventlog.enabled -> true
  spark.akka.threads -> 1


Main class:
org.apache.spark.examples.SparkPi
Arguments:
10
System properties:
spark.blockManager.port -> 8007
spark.yarn.access.namenodes -> hdfs://10.10.10.10:8020
spark.broadcast.port -> 8004
spark.authenticate.secret -> fubar
spark.eventlog.dir -> hdfs://10.10.10.10:8020/user/testuser/spark/eventlog
spark.authenticate -> true
spark.executor.instances -> 1
spark.logConf -> true
spark.replClassServer.port -> 8006
spark.history.ui.port -> 18080
spark.fileserver.port -> 8003
SPARK_SUBMIT -> true
spark.ui.port -> 4040
spark.executor.extraJavaOptions -> -XX:+PrintGCDetails -Dkey=value -Dnumbers="one two three"
spark.app.name -> org.apache.spark.examples.SparkPi
spark.driver.extraJavaOptions -> -Djava.net.preferIPv4Stack=true
spark.jars -> file:/home/testuser/spark/lib/spark-examples-1.3.1-hadoop2.6.0.jar
spark.yarn.am.waitTime -> 200000
spark.yarn.am.extraJavaOptions -> -Djava.net.preferIPv4Stack=true
spark.master -> yarn-client
spark.worker.ui.port -> 8081
spark.driver.port -> 8001
spark.yarn.preserve.staging.files -> true
spark.yarn.am.port -> 8008
spark.akka.heartbeat.interval -> 100
spark.executor.port -> 8002
spark.executor.cores -> 1
spark.eventlog.enabled -> true
spark.master.ui.port -> 8080
spark.akka.threads -> 1
Classpath elements:
file:/home/testuser/spark/lib/spark-examples-1.3.1-hadoop2.6.0.jar


15/05/19 19:52:37 INFO spark.SparkContext: Running Spark version 1.3.1
15/05/19 19:52:37 INFO spark.SparkContext: Spark configuration:
spark.akka.heartbeat.interval=100
spark.akka.threads=1
spark.app.name=Spark Pi
spark.authenticate=true
spark.authenticate.secret=fubar
spark.blockManager.port=8007
spark.broadcast.port=8004
spark.driver.extraJavaOptions=-Djava.net.preferIPv4Stack=true
spark.driver.port=8001
spark.eventlog.dir=hdfs://10.10.10.10:8020/user/testuser/spark/eventlog
spark.eventlog.enabled=true
spark.executor.cores=1
spark.executor.extraJavaOptions=-XX:+PrintGCDetails -Dkey=value -Dnumbers="one two three"
spark.executor.instances=1
spark.executor.port=8002
spark.fileserver.port=8003
spark.history.ui.port=18080
spark.jars=file:/home/testuser/spark/lib/spark-examples-1.3.1-hadoop2.6.0.jar
spark.logConf=true
spark.master=yarn-client
spark.master.ui.port=8080
spark.replClassServer.port=8006
spark.ui.port=4040
spark.worker.ui.port=8081
spark.yarn.access.namenodes=hdfs://10.10.10.10:8020
spark.yarn.am.extraJavaOptions=-Djava.net.preferIPv4Stack=true
spark.yarn.am.port=8008
spark.yarn.am.waitTime=200000
spark.yarn.preserve.staging.files=true
15/05/19 19:52:38 WARN util.NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
15/05/19 19:52:39 INFO spark.SecurityManager: Changing view acls to: testuser
15/05/19 19:52:39 INFO spark.SecurityManager: Changing modify acls to: testuser
15/05/19 19:52:39 INFO spark.SecurityManager: adding secret to credentials in yarn mode
15/05/19 19:52:39 INFO spark.SecurityManager: SecurityManager: authentication enabled; ui acls disabled; users with view permissions: Set(testuser); users with modify permissions: Set(testuser)
15/05/19 19:52:40 INFO slf4j.Slf4jLogger: Slf4jLogger started
15/05/19 19:52:41 INFO Remoting: Starting remoting
15/05/19 19:52:41 INFO Remoting: Remoting started; listening on addresses :[akka.tcp://sparkDriver@ip-10-10-127-10.ec2.internal:8001]
15/05/19 19:52:41 INFO util.Utils: Successfully started service 'sparkDriver' on port 8001.
15/05/19 19:52:41 INFO spark.SparkEnv: Registering MapOutputTracker
15/05/19 19:52:41 INFO spark.SparkEnv: Registering BlockManagerMaster
15/05/19 19:52:41 INFO storage.DiskBlockManager: Created local directory at /scratch/spark-17902ac9-b400-4698-97c3-069d804a29e3/blockmgr-d95b3bdf-9c4d-4b48-97a5-4983dd2ab66d
15/05/19 19:52:41 INFO storage.MemoryStore: MemoryStore started with capacity 267.3 MB
15/05/19 19:52:42 INFO spark.HttpFileServer: HTTP File server directory is /scratch/spark-d2fb7948-5ab3-4f22-804c-7485d209bd3e/httpd-91938877-b371-4c0c-ba3d-bb7ee8ec4e09
15/05/19 19:52:42 INFO spark.HttpServer: Starting HTTP Server
15/05/19 19:52:42 INFO server.Server: jetty-8.y.z-SNAPSHOT
15/05/19 19:52:42 INFO server.AbstractConnector: Started SocketConnector@0.0.0.0:8003
15/05/19 19:52:42 INFO util.Utils: Successfully started service 'HTTP file server' on port 8003.
15/05/19 19:52:42 INFO spark.SparkEnv: Registering OutputCommitCoordinator
15/05/19 19:52:42 INFO server.Server: jetty-8.y.z-SNAPSHOT
15/05/19 19:52:42 INFO server.AbstractConnector: Started SelectChannelConnector@0.0.0.0:4040
15/05/19 19:52:42 INFO util.Utils: Successfully started service 'SparkUI' on port 4040.
15/05/19 19:52:42 INFO ui.SparkUI: Started SparkUI at http://ip-10-10-127-10.ec2.internal:4040
15/05/19 19:52:43 INFO spark.SparkContext: Added JAR file:/home/testuser/spark/lib/spark-examples-1.3.1-hadoop2.6.0.jar at http://10.10.127.10:8003/jars/spark-examples-1.3.1-hadoop2.6.0.jar with timestamp 1432065163724
15/05/19 19:52:44 INFO client.RMProxy: Connecting to ResourceManager at ip-10-10-127-10.ec2.internal/10.10.127.10:8032
15/05/19 19:52:45 INFO yarn.Client: Requesting a new application from cluster with 1 NodeManagers
15/05/19 19:52:45 INFO yarn.Client: Verifying our application has not requested more than the maximum memory capability of the cluster (8192 MB per container)
15/05/19 19:52:45 INFO yarn.Client: Will allocate AM container, with 896 MB memory including 384 MB overhead
15/05/19 19:52:45 INFO yarn.Client: Setting up container launch context for our AM
15/05/19 19:52:45 INFO yarn.Client: Preparing resources for our AM container
15/05/19 19:52:46 INFO hdfs.DFSClient: Created HDFS_DELEGATION_TOKEN token 5 for testuser on 10.10.10.10:8020
15/05/19 19:52:46 INFO yarn.Client: Uploading resource file:/home/testuser/spark/lib/spark-assembly-1.3.1-hadoop2.6.0.jar -> hdfs://10.10.10.10:8020/user/testuser/.sparkStaging/application_1432064564266_0003/spark-assembly-1.3.1-hadoop2.6.0.jar
15/05/19 19:52:52 INFO yarn.Client: Setting up the launch environment for our AM container
15/05/19 19:52:52 INFO spark.SecurityManager: Changing view acls to: testuser
15/05/19 19:52:52 INFO spark.SecurityManager: Changing modify acls to: testuser
15/05/19 19:52:52 INFO spark.SecurityManager: SecurityManager: authentication enabled; ui acls disabled; users with view permissions: Set(testuser); users with modify permissions: Set(testuser)
15/05/19 19:52:52 INFO yarn.Client: Submitting application 3 to ResourceManager
15/05/19 19:52:53 INFO impl.YarnClientImpl: Submitted application application_1432064564266_0003
15/05/19 19:52:54 INFO yarn.Client: Application report for application_1432064564266_0003 (state: ACCEPTED)
15/05/19 19:52:54 INFO yarn.Client:
         client token: Token { kind: YARN_CLIENT_TOKEN, service:  }
         diagnostics: N/A
         ApplicationMaster host: N/A
         ApplicationMaster RPC port: -1
         queue: default
         start time: 1432065172758
         final status: UNDEFINED
         tracking URL: https://ip-10-10-127-10.ec2.internal:8090/proxy/application_1432064564266_0003/
         user: testuser
15/05/19 19:52:55 INFO yarn.Client: Application report for application_1432064564266_0003 (state: ACCEPTED)
15/05/19 19:52:56 INFO yarn.Client: Application report for application_1432064564266_0003 (state: ACCEPTED)
15/05/19 19:52:57 INFO yarn.Client: Application report for application_1432064564266_0003 (state: ACCEPTED)
15/05/19 19:52:58 INFO yarn.Client: Application report for application_1432064564266_0003 (state: ACCEPTED)
15/05/19 19:52:59 INFO yarn.Client: Application report for application_1432064564266_0003 (state: ACCEPTED)
15/05/19 19:53:00 INFO yarn.Client: Application report for application_1432064564266_0003 (state: ACCEPTED)
15/05/19 19:53:01 INFO yarn.Client: Application report for application_1432064564266_0003 (state: ACCEPTED)
15/05/19 19:53:02 INFO yarn.Client: Application report for application_1432064564266_0003 (state: ACCEPTED)
15/05/19 19:53:03 INFO yarn.Client: Application report for application_1432064564266_0003 (state: ACCEPTED)
15/05/19 19:53:04 INFO yarn.Client: Application report for application_1432064564266_0003 (state: ACCEPTED)
15/05/19 19:53:05 INFO yarn.Client: Application report for application_1432064564266_0003 (state: ACCEPTED)
15/05/19 19:53:06 INFO yarn.Client: Application report for application_1432064564266_0003 (state: ACCEPTED)
15/05/19 19:53:07 INFO yarn.Client: Application report for application_1432064564266_0003 (state: ACCEPTED)
15/05/19 19:53:08 INFO yarn.Client: Application report for application_1432064564266_0003 (state: ACCEPTED)
15/05/19 19:53:09 INFO yarn.Client: Application report for application_1432064564266_0003 (state: ACCEPTED)
15/05/19 19:53:10 INFO yarn.Client: Application report for application_1432064564266_0003 (state: ACCEPTED)
15/05/19 19:53:11 INFO yarn.Client: Application report for application_1432064564266_0003 (state: ACCEPTED)
15/05/19 19:53:12 INFO yarn.Client: Application report for application_1432064564266_0003 (state: ACCEPTED)
15/05/19 19:53:13 INFO yarn.Client: Application report for application_1432064564266_0003 (state: ACCEPTED)
15/05/19 19:53:14 INFO yarn.Client: Application report for application_1432064564266_0003 (state: ACCEPTED)
15/05/19 19:53:15 INFO yarn.Client: Application report for application_1432064564266_0003 (state: ACCEPTED)
15/05/19 19:53:16 INFO yarn.Client: Application report for application_1432064564266_0003 (state: ACCEPTED)
15/05/19 19:53:17 INFO cluster.YarnClientSchedulerBackend: ApplicationMaster registered as Actor[akka.tcp://sparkYarnAM@ip-10-10-128-10.ec2.internal:56977/user/YarnAM#-1453228800]
15/05/19 19:53:17 INFO cluster.YarnClientSchedulerBackend: Add WebUI Filter. org.apache.hadoop.yarn.server.webproxy.amfilter.AmIpFilter, Map(PROXY_HOSTS -> ip-10-10-127-10.ec2.internal, PROXY_URI_BASES -> https://ip-10-10-127-10.ec2.internal:8090/proxy/application_1432064564266_0003), /proxy/application_1432064564266_0003
15/05/19 19:53:17 INFO ui.JettyUtils: Adding filter: org.apache.hadoop.yarn.server.webproxy.amfilter.AmIpFilter
15/05/19 19:53:17 INFO yarn.Client: Application report for application_1432064564266_0003 (state: RUNNING)
15/05/19 19:53:17 INFO yarn.Client:
         client token: Token { kind: YARN_CLIENT_TOKEN, service:  }
         diagnostics: N/A
         ApplicationMaster host: ip-10-10-128-10.ec2.internal
         ApplicationMaster RPC port: 0
         queue: default
         start time: 1432065172758
         final status: UNDEFINED
         tracking URL: https://ip-10-10-127-10.ec2.internal:8090/proxy/application_1432064564266_0003/
         user: testuser
15/05/19 19:53:17 INFO cluster.YarnClientSchedulerBackend: Application application_1432064564266_0003 has started running.
15/05/19 19:53:17 INFO netty.NettyBlockTransferService: Server created on 8007
15/05/19 19:53:17 INFO storage.BlockManagerMaster: Trying to register BlockManager
15/05/19 19:53:17 INFO storage.BlockManagerMasterActor: Registering block manager ip-10-10-127-10.ec2.internal:8007 with 267.3 MB RAM, BlockManagerId(<driver>, ip-10-10-127-10.ec2.internal, 8007)
15/05/19 19:53:17 INFO storage.BlockManagerMaster: Registered BlockManager
15/05/19 19:53:18 INFO cluster.YarnClientSchedulerBackend: SchedulerBackend is ready for scheduling beginning after waiting maxRegisteredResourcesWaitingTime: 30000(ms)
15/05/19 19:53:18 INFO spark.SparkContext: Starting job: reduce at SparkPi.scala:35
15/05/19 19:53:18 INFO scheduler.DAGScheduler: Got job 0 (reduce at SparkPi.scala:35) with 10 output partitions (allowLocal=false)
15/05/19 19:53:18 INFO scheduler.DAGScheduler: Final stage: Stage 0(reduce at SparkPi.scala:35)
15/05/19 19:53:18 INFO scheduler.DAGScheduler: Parents of final stage: List()
15/05/19 19:53:18 INFO scheduler.DAGScheduler: Missing parents: List()
15/05/19 19:53:18 INFO scheduler.DAGScheduler: Submitting Stage 0 (MapPartitionsRDD[1] at map at SparkPi.scala:31), which has no missing parents
15/05/19 19:53:18 INFO cluster.YarnScheduler: Cancelling stage 0
15/05/19 19:53:18 INFO scheduler.DAGScheduler: Stage 0 (reduce at SparkPi.scala:35) failed in Unknown s
15/05/19 19:53:18 INFO scheduler.DAGScheduler: Job 0 failed: reduce at SparkPi.scala:35, took 0.258029 s
Exception in thread "main" org.apache.spark.SparkException: Job aborted due to stage failure: Task serialization failed: java.lang.reflect.InvocationTargetException
sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:57)
sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
java.lang.reflect.Constructor.newInstance(Constructor.java:526)
org.apache.spark.io.CompressionCodec$.createCodec(CompressionCodec.scala:68)
org.apache.spark.io.CompressionCodec$.createCodec(CompressionCodec.scala:60)
org.apache.spark.broadcast.TorrentBroadcast.org$apache$spark$broadcast$TorrentBroadcast$$setConf(TorrentBroadcast.scala:73)
org.apache.spark.broadcast.TorrentBroadcast.<init>(TorrentBroadcast.scala:79)
org.apache.spark.broadcast.TorrentBroadcastFactory.newBroadcast(TorrentBroadcastFactory.scala:34)
org.apache.spark.broadcast.TorrentBroadcastFactory.newBroadcast(TorrentBroadcastFactory.scala:29)
org.apache.spark.broadcast.BroadcastManager.newBroadcast(BroadcastManager.scala:62)
org.apache.spark.SparkContext.broadcast(SparkContext.scala:1051)
org.apache.spark.scheduler.DAGScheduler.org$apache$spark$scheduler$DAGScheduler$$submitMissingTasks(DAGScheduler.scala:839)
org.apache.spark.scheduler.DAGScheduler.org$apache$spark$scheduler$DAGScheduler$$submitStage(DAGScheduler.scala:778)
org.apache.spark.scheduler.DAGScheduler.handleJobSubmitted(DAGScheduler.scala:762)
org.apache.spark.scheduler.DAGSchedulerEventProcessLoop.onReceive(DAGScheduler.scala:1362)
org.apache.spark.scheduler.DAGSchedulerEventProcessLoop.onReceive(DAGScheduler.scala:1354)
org.apache.spark.util.EventLoop$$anon$1.run(EventLoop.scala:48)

        at org.apache.spark.scheduler.DAGScheduler.org$apache$spark$scheduler$DAGScheduler$$failJobAndIndependentStages(DAGScheduler.scala:1204)
        at org.apache.spark.scheduler.DAGScheduler$$anonfun$abortStage$1.apply(DAGScheduler.scala:1193)
        at org.apache.spark.scheduler.DAGScheduler$$anonfun$abortStage$1.apply(DAGScheduler.scala:1192)
        at scala.collection.mutable.ResizableArray$class.foreach(ResizableArray.scala:59)
        at scala.collection.mutable.ArrayBuffer.foreach(ArrayBuffer.scala:47)
        at org.apache.spark.scheduler.DAGScheduler.abortStage(DAGScheduler.scala:1192)
        at org.apache.spark.scheduler.DAGScheduler.org$apache$spark$scheduler$DAGScheduler$$submitMissingTasks(DAGScheduler.scala:847)
        at org.apache.spark.scheduler.DAGScheduler.org$apache$spark$scheduler$DAGScheduler$$submitStage(DAGScheduler.scala:778)
        at org.apache.spark.scheduler.DAGScheduler.handleJobSubmitted(DAGScheduler.scala:762)
        at org.apache.spark.scheduler.DAGSchedulerEventProcessLoop.onReceive(DAGScheduler.scala:1362)
        at org.apache.spark.scheduler.DAGSchedulerEventProcessLoop.onReceive(DAGScheduler.scala:1354)
        at org.apache.spark.util.EventLoop$$anon$1.run(EventLoop.scala:48)
[testuser@ip-10-10-127-10 spark]$



###: SPARK JOB USERLOG RESULTS :###
###################################
[yarn@ip-10-10-128-10 container_1432064564266_0003_01_000001]$ cat stderr
Picked up _JAVA_OPTIONS: -Djava.net.preferIPv4Stack=true
SLF4J: Class path contains multiple SLF4J bindings.
SLF4J: Found binding in [jar:file:/var/hadoop/tmp/yarn/nm-local-dir/usercache/testuser/filecache/13/spark-assembly-1.3.1-hadoop2.6.0.jar!/org/slf4j/impl/StaticLoggerBinder.class]
SLF4J: Found binding in [jar:file:/var/hadoop/share/hadoop/common/lib/slf4j-log4j12-1.7.5.jar!/org/slf4j/impl/StaticLoggerBinder.class]
SLF4J: See http://www.slf4j.org/codes.html#multiple_bindings for an explanation.
SLF4J: Actual binding is of type [org.slf4j.impl.Log4jLoggerFactory]
15/05/19 19:53:09 INFO yarn.ApplicationMaster: Registered signal handlers for [TERM, HUP, INT]
15/05/19 19:53:11 WARN util.NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
15/05/19 19:53:12 INFO yarn.ApplicationMaster: ApplicationAttemptId: appattempt_1432064564266_0003_000001
15/05/19 19:53:14 INFO spark.SecurityManager: Changing view acls to: testuser
15/05/19 19:53:14 INFO spark.SecurityManager: Changing modify acls to: testuser
15/05/19 19:53:14 INFO spark.SecurityManager: SecurityManager: authentication enabled; ui acls disabled; users with view permissions: Set(testuser); users with modify permissions: Set(testuser)
15/05/19 19:53:15 INFO slf4j.Slf4jLogger: Slf4jLogger started
15/05/19 19:53:15 INFO Remoting: Starting remoting
15/05/19 19:53:16 INFO Remoting: Remoting started; listening on addresses :[akka.tcp://sparkYarnAM@ip-10-10-128-10.ec2.internal:56977]
15/05/19 19:53:16 INFO util.Utils: Successfully started service 'sparkYarnAM' on port 56977.
15/05/19 19:53:16 INFO yarn.ApplicationMaster: Waiting for Spark driver to be reachable.
15/05/19 19:53:16 INFO yarn.ApplicationMaster: Driver now available: ip-10-10-127-10.ec2.internal:8001
15/05/19 19:53:16 INFO yarn.ApplicationMaster: Listen to driver: akka.tcp://sparkDriver@ip-10-10-127-10.ec2.internal:8001/user/YarnScheduler
15/05/19 19:53:16 INFO yarn.ApplicationMaster: Add WebUI Filter. AddWebUIFilter(org.apache.hadoop.yarn.server.webproxy.amfilter.AmIpFilter,Map(PROXY_HOSTS -> ip-10-10-127-10.ec2.internal, PROXY_URI_BASES -> https://ip-10-10-127-10.ec2.internal:8090/proxy/application_1432064564266_0003),/proxy/application_1432064564266_0003)
15/05/19 19:53:16 INFO client.RMProxy: Connecting to ResourceManager at /10.10.127.10:8030
15/05/19 19:53:16 INFO yarn.YarnRMClient: Registering the ApplicationMaster
15/05/19 19:53:17 INFO yarn.YarnAllocator: Will request 1 executor containers, each with 1 cores and 1408 MB memory including 384 MB overhead
15/05/19 19:53:17 INFO yarn.YarnAllocator: Container request (host: Any, capability: <memory:1408, vCores:1>)
15/05/19 19:53:17 INFO yarn.ApplicationMaster: Started progress reporter thread - sleep time : 5000
15/05/19 19:53:18 INFO yarn.ApplicationMaster: Driver terminated or disconnected! Shutting down. Disassociated [akka.tcp://sparkYarnAM@ip-10-10-128-10.ec2.internal:56977] -> [akka.tcp://sparkDriver@ip-10-10-127-10.ec2.internal:8001]
15/05/19 19:53:18 INFO yarn.ApplicationMaster: Final app status: SUCCEEDED, exitCode: 0
15/05/19 19:53:18 WARN remote.ReliableDeliverySupervisor: Association with remote system [akka.tcp://sparkDriver@ip-10-10-127-10.ec2.internal:8001] has failed, address is now gated for [5000] ms. Reason is: [Disassociated].
15/05/19 19:53:18 INFO yarn.ApplicationMaster: Driver terminated or disconnected! Shutting down. Disassociated [akka.tcp://sparkYarnAM@ip-10-10-128-10.ec2.internal:56977] -> [akka.tcp://sparkDriver@ip-10-10-127-10.ec2.internal:8001]
15/05/19 19:53:18 INFO yarn.ApplicationMaster: Unregistering ApplicationMaster with SUCCEEDED
15/05/19 19:53:18 INFO impl.AMRMClientImpl: Waiting for application to be successfully unregistered.
[yarn@ip-10-10-128-10 container_1432064564266_0003_01_000001]$



###: YARN NODEMANAGER LOG RESULTS :###
######################################
2015-05-19 19:52:53,729 INFO SecurityLogger.org.apache.hadoop.ipc.Server: Auth successful for appattempt_1432064564266_0003_000001 (auth:SIMPLE)
2015-05-19 19:52:53,746 WARN org.apache.hadoop.security.ShellBasedUnixGroupsMapping: got exception trying to get groups for user appattempt_1432064564266_0003_000001: id: appattempt_1432064564266_0003_000001: no such user

2015-05-19 19:52:53,746 WARN org.apache.hadoop.security.UserGroupInformation: No groups available for user appattempt_1432064564266_0003_000001
2015-05-19 19:52:53,746 INFO SecurityLogger.org.apache.hadoop.security.authorize.ServiceAuthorizationManager: Authorization successful for appattempt_1432064564266_0003_000001 (auth:TOKEN) for protocol=interface org.apache.hadoop.yarn.api.ContainerManagementProtocolPB
2015-05-19 19:52:53,747 INFO org.apache.hadoop.yarn.server.nodemanager.containermanager.ContainerManagerImpl: Start request for container_1432064564266_0003_01_000001 by user testuser
2015-05-19 19:52:53,747 INFO org.apache.hadoop.yarn.server.nodemanager.containermanager.ContainerManagerImpl: Creating a new application reference for app application_1432064564266_0003
2015-05-19 19:52:53,747 INFO org.apache.hadoop.yarn.server.nodemanager.NMAuditLogger: USER=testuser     IP=10.10.127.10 OPERATION=Start Container Request       TARGET=ContainerManageImpl RESULT=SUCCESS   APPID=application_1432064564266_0003    CONTAINERID=container_1432064564266_0003_01_000001
2015-05-19 19:52:53,748 INFO org.apache.hadoop.yarn.server.nodemanager.containermanager.application.Application: Application application_1432064564266_0003 transitioned from NEW to INITING
2015-05-19 19:52:53,748 INFO org.apache.hadoop.yarn.server.nodemanager.containermanager.application.Application: Adding container_1432064564266_0003_01_000001 to application application_1432064564266_0003
2015-05-19 19:52:53,748 INFO org.apache.hadoop.yarn.server.nodemanager.containermanager.application.Application: Application application_1432064564266_0003 transitioned from INITING to RUNNING
2015-05-19 19:52:53,748 INFO org.apache.hadoop.yarn.server.nodemanager.containermanager.container.Container: Container container_1432064564266_0003_01_000001 transitioned from NEW to LOCALIZING
2015-05-19 19:52:53,748 INFO org.apache.hadoop.yarn.server.nodemanager.containermanager.AuxServices: Got event CONTAINER_INIT for appId application_1432064564266_0003
2015-05-19 19:52:53,748 INFO org.apache.hadoop.yarn.server.nodemanager.containermanager.localizer.LocalizedResource: Resource hdfs://10.10.10.10:8020/user/testuser/.sparkStaging/application_1432064564266_0003/spark-assembly-1.3.1-hadoop2.6.0.jar transitioned from INIT to DOWNLOADING
2015-05-19 19:52:53,748 INFO org.apache.hadoop.yarn.server.nodemanager.containermanager.localizer.ResourceLocalizationService: Created localizer for container_1432064564266_0003_01_000001
2015-05-19 19:52:53,763 INFO org.apache.hadoop.yarn.server.nodemanager.containermanager.localizer.ResourceLocalizationService: Writing credentials to the nmPrivate file /var/hadoop/tmp/yarn/nm-local-dir/nmPrivate/container_1432064564266_0003_01_000001.tokens. Credentials list:
2015-05-19 19:52:58,614 INFO SecurityLogger.org.apache.hadoop.ipc.Server: Auth successful for testing (auth:SIMPLE)
2015-05-19 19:52:58,685 INFO SecurityLogger.org.apache.hadoop.security.authorize.ServiceAuthorizationManager: Authorization successful for testing (auth:TOKEN) for protocol=interface org.apache.hadoop.yarn.server.nodemanager.api.LocalizationProtocolPB
2015-05-19 19:53:08,133 INFO org.apache.hadoop.yarn.server.nodemanager.containermanager.localizer.LocalizedResource: Resource hdfs://10.10.10.10:8020/user/testuser/.sparkStaging/application_1432064564266_0003/spark-assembly-1.3.1-hadoop2.6.0.jar(->/var/hadoop/tmp/yarn/nm-local-dir/usercache/testuser/filecache/13/spark-assembly-1.3.1-hadoop2.6.0.jar) transitioned from DOWNLOADING to LOCALIZED
2015-05-19 19:53:08,134 INFO org.apache.hadoop.yarn.server.nodemanager.containermanager.container.Container: Container container_1432064564266_0003_01_000001 transitioned from LOCALIZING to LOCALIZED
2015-05-19 19:53:08,217 INFO org.apache.hadoop.yarn.server.nodemanager.containermanager.container.Container: Container container_1432064564266_0003_01_000001 transitioned from LOCALIZED to RUNNING
2015-05-19 19:53:10,044 INFO org.apache.hadoop.yarn.server.nodemanager.containermanager.monitor.ContainersMonitorImpl: Starting resource-monitoring for container_1432064564266_0003_01_000001
2015-05-19 19:53:10,092 INFO org.apache.hadoop.yarn.server.nodemanager.containermanager.monitor.ContainersMonitorImpl: Memory usage of ProcessTree 20620 for container-id container_1432064564266_0003_01_000001: 79.5 MB of 1 GB physical memory used; 1.1 GB of 2.1 GB virtual memory used
2015-05-19 19:53:13,105 INFO org.apache.hadoop.yarn.server.nodemanager.containermanager.monitor.ContainersMonitorImpl: Memory usage of ProcessTree 20620 for container-id container_1432064564266_0003_01_000001: 99.8 MB of 1 GB physical memory used; 1.1 GB of 2.1 GB virtual memory used
2015-05-19 19:53:16,122 INFO org.apache.hadoop.yarn.server.nodemanager.containermanager.monitor.ContainersMonitorImpl: Memory usage of ProcessTree 20620 for container-id container_1432064564266_0003_01_000001: 121.4 MB of 1 GB physical memory used; 1.1 GB of 2.1 GB virtual memory used
2015-05-19 19:53:19,129 INFO org.apache.hadoop.yarn.server.nodemanager.containermanager.monitor.ContainersMonitorImpl: Memory usage of ProcessTree 20620 for container-id container_1432064564266_0003_01_000001: 125.8 MB of 1 GB physical memory used; 1.1 GB of 2.1 GB virtual memory used
2015-05-19 19:53:19,402 INFO org.apache.hadoop.yarn.server.nodemanager.containermanager.launcher.ContainerLaunch: Container container_1432064564266_0003_01_000001 succeeded
2015-05-19 19:53:19,402 INFO org.apache.hadoop.yarn.server.nodemanager.containermanager.container.Container: Container container_1432064564266_0003_01_000001 transitioned from RUNNING to EXITED_WITH_SUCCESS
2015-05-19 19:53:19,402 INFO org.apache.hadoop.yarn.server.nodemanager.containermanager.launcher.ContainerLaunch: Cleaning up container container_1432064564266_0003_01_000001
2015-05-19 19:53:19,457 INFO org.apache.hadoop.yarn.server.nodemanager.NMAuditLogger: USER=testuser     OPERATION=Container Finished - Succeeded        TARGET=ContainerImpl    RESULT=SUCCESS     APPID=application_1432064564266_0003    CONTAINERID=container_1432064564266_0003_01_000001
2015-05-19 19:53:19,457 INFO org.apache.hadoop.yarn.server.nodemanager.containermanager.container.Container: Container container_1432064564266_0003_01_000001 transitioned from EXITED_WITH_SUCCESS to DONE
2015-05-19 19:53:19,457 INFO org.apache.hadoop.yarn.server.nodemanager.containermanager.application.Application: Removing container_1432064564266_0003_01_000001 from application application_1432064564266_0003
2015-05-19 19:53:19,458 INFO org.apache.hadoop.yarn.server.nodemanager.containermanager.AuxServices: Got event CONTAINER_STOP for appId application_1432064564266_0003
2015-05-19 19:53:19,458 INFO org.apache.hadoop.yarn.server.nodemanager.LinuxContainerExecutor: Deleting absolute path : /var/hadoop/tmp/yarn/nm-local-dir/usercache/testuser/appcache/application_1432064564266_0003/container_1432064564266_0003_01_000001
2015-05-19 19:53:20,428 INFO SecurityLogger.org.apache.hadoop.ipc.Server: Auth successful for appattempt_1432064564266_0003_000001 (auth:SIMPLE)
2015-05-19 19:53:20,432 WARN org.apache.hadoop.security.UserGroupInformation: No groups available for user appattempt_1432064564266_0003_000001
2015-05-19 19:53:20,432 INFO SecurityLogger.org.apache.hadoop.security.authorize.ServiceAuthorizationManager: Authorization successful for appattempt_1432064564266_0003_000001 (auth:TOKEN) for protocol=interface org.apache.hadoop.yarn.api.ContainerManagementProtocolPB
2015-05-19 19:53:20,432 INFO org.apache.hadoop.yarn.server.nodemanager.containermanager.ContainerManagerImpl: Stopping container with container Id: container_1432064564266_0003_01_000001
2015-05-19 19:53:20,432 INFO org.apache.hadoop.yarn.server.nodemanager.NMAuditLogger: USER=testuser     OPERATION=Stop Container Request        TARGET=ContainerManageImpl RESULT=SUCCESS   APPID=application_1432064564266_0003    CONTAINERID=container_1432064564266_0003_01_000001
2015-05-19 19:53:20,442 INFO org.apache.hadoop.yarn.server.nodemanager.NodeStatusUpdaterImpl: Removed completed containers from NM context: [container_1432064564266_0003_01_000001]
2015-05-19 19:53:20,442 INFO org.apache.hadoop.yarn.server.nodemanager.containermanager.application.Application: Application application_1432064564266_0003 transitioned from RUNNING to APPLICATION_RESOURCES_CLEANINGUP
2015-05-19 19:53:20,442 INFO org.apache.hadoop.yarn.server.nodemanager.containermanager.AuxServices: Got event APPLICATION_STOP for appId application_1432064564266_0003
2015-05-19 19:53:20,442 INFO org.apache.hadoop.yarn.server.nodemanager.containermanager.application.Application: Application application_1432064564266_0003 transitioned from APPLICATION_RESOURCES_CLEANINGUP to FINISHED
2015-05-19 19:53:20,443 INFO org.apache.hadoop.yarn.server.nodemanager.containermanager.loghandler.NonAggregatingLogHandler: Scheduling Log Deletion for application: application_1432064564266_0003, with delay of 10800 seconds
2015-05-19 19:53:20,443 INFO org.apache.hadoop.yarn.server.nodemanager.LinuxContainerExecutor: Deleting absolute path : /var/hadoop/tmp/yarn/nm-local-dir/usercache/testuser/appcache/application_1432064564266_0003
2015-05-19 19:53:22,130 INFO org.apache.hadoop.yarn.server.nodemanager.containermanager.monitor.ContainersMonitorImpl: Stopping resource-monitoring for container_1432064564266_0003_01_000001
root@ip-10-10-128-10:/var/log/hadoop>


###: YARN RESOURCE MANAGER LOGS :###
####################################
2015-05-19 19:52:45,408 INFO SecurityLogger.org.apache.hadoop.ipc.Server: Auth successful for testuser@MALARD.LOCAL (auth:KERBEROS)
2015-05-19 19:52:45,447 INFO SecurityLogger.org.apache.hadoop.security.authorize.ServiceAuthorizationManager: Authorization successful for testuser@MALARD.LOCAL (auth:KERBEROS) for protocol=interface org.apache.hadoop.yarn.api.ApplicationClientProtocolPB
2015-05-19 19:52:45,494 INFO org.apache.hadoop.yarn.server.resourcemanager.ClientRMService: Allocated new applicationId: 3
2015-05-19 19:52:52,758 WARN org.apache.hadoop.yarn.server.resourcemanager.rmapp.RMAppImpl: The specific max attempts: 0 for application: 3 is invalid, because it is out of the range [1, 2]. Use the global max attempts instead.
2015-05-19 19:52:52,758 INFO org.apache.hadoop.yarn.server.resourcemanager.ClientRMService: Application with id 3 submitted by user testuser
2015-05-19 19:52:52,758 INFO org.apache.hadoop.yarn.server.resourcemanager.RMAuditLogger: USER=testuser IP=10.10.127.10 OPERATION=Submit Application Request    TARGET=ClientRMService  RESULT=SUCCESS     APPID=application_1432064564266_0003
2015-05-19 19:52:52,803 INFO org.apache.hadoop.yarn.server.resourcemanager.security.DelegationTokenRenewer: application_1432064564266_0003 found existing hdfs token Kind: HDFS_DELEGATION_TOKEN, Service: 10.10.10.10:8020, Ident: (HDFS_DELEGATION_TOKEN token 5 for testuser)
2015-05-19 19:52:52,928 INFO org.apache.hadoop.yarn.server.resourcemanager.security.DelegationTokenRenewer: Renewed delegation-token= [Kind: HDFS_DELEGATION_TOKEN, Service: 10.10.10.10:8020, Ident: (HDFS_DELEGATION_TOKEN token 5 for testuser);exp=1432151572908], for application_1432064564266_0003
2015-05-19 19:52:52,928 INFO org.apache.hadoop.yarn.server.resourcemanager.security.DelegationTokenRenewer: Renew Kind: HDFS_DELEGATION_TOKEN, Service: 10.10.10.10:8020, Ident: (HDFS_DELEGATION_TOKEN token 5 for testuser);exp=1432151572908 in 86399980 ms, appId = application_1432064564266_0003
2015-05-19 19:52:52,928 INFO org.apache.hadoop.yarn.server.resourcemanager.rmapp.RMAppImpl: Storing application with id application_1432064564266_0003
2015-05-19 19:52:52,928 INFO org.apache.hadoop.yarn.server.resourcemanager.rmapp.RMAppImpl: application_1432064564266_0003 State change from NEW to NEW_SAVING
2015-05-19 19:52:52,928 INFO org.apache.hadoop.yarn.server.resourcemanager.recovery.RMStateStore: Storing info for app: application_1432064564266_0003
2015-05-19 19:52:52,928 INFO org.apache.hadoop.yarn.server.resourcemanager.rmapp.RMAppImpl: application_1432064564266_0003 State change from NEW_SAVING to SUBMITTED
2015-05-19 19:52:52,929 INFO org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.ParentQueue: Application added - appId: application_1432064564266_0003 user: testuser leaf-queue of parent: root #applications: 1
2015-05-19 19:52:52,929 INFO org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.CapacityScheduler: Accepted application application_1432064564266_0003 from user: testuser, in queue: default
2015-05-19 19:52:52,929 INFO org.apache.hadoop.yarn.server.resourcemanager.rmapp.RMAppImpl: application_1432064564266_0003 State change from SUBMITTED to ACCEPTED
2015-05-19 19:52:52,929 INFO org.apache.hadoop.yarn.server.resourcemanager.ApplicationMasterService: Registering app attempt : appattempt_1432064564266_0003_000001
2015-05-19 19:52:52,929 INFO org.apache.hadoop.yarn.server.resourcemanager.rmapp.attempt.RMAppAttemptImpl: appattempt_1432064564266_0003_000001 State change from NEW to SUBMITTED
2015-05-19 19:52:52,929 INFO org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.LeafQueue: Application application_1432064564266_0003 from user: testuser activated in queue: default
2015-05-19 19:52:52,929 INFO org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.LeafQueue: Application added - appId: application_1432064564266_0003 user: org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.LeafQueue$User@72fa0d32, leaf-queue: default #user-pending-applications: 0 #user-active-applications: 1 #queue-pending-applications: 0 #queue-active-applications: 1
2015-05-19 19:52:52,929 INFO org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.CapacityScheduler: Added Application Attempt appattempt_1432064564266_0003_000001 to scheduler from user testuser in queue default
2015-05-19 19:52:52,930 INFO org.apache.hadoop.yarn.server.resourcemanager.rmapp.attempt.RMAppAttemptImpl: appattempt_1432064564266_0003_000001 State change from SUBMITTED to SCHEDULED
2015-05-19 19:52:53,718 INFO org.apache.hadoop.yarn.server.resourcemanager.rmcontainer.RMContainerImpl: container_1432064564266_0003_01_000001 Container Transitioned from NEW to ALLOCATED
2015-05-19 19:52:53,718 INFO org.apache.hadoop.yarn.server.resourcemanager.RMAuditLogger: USER=testuser OPERATION=AM Allocated Container        TARGET=SchedulerApp     RESULT=SUCCESS  APPID=application_1432064564266_0003       CONTAINERID=container_1432064564266_0003_01_000001
2015-05-19 19:52:53,718 INFO org.apache.hadoop.yarn.server.resourcemanager.scheduler.SchedulerNode: Assigned container container_1432064564266_0003_01_000001 of capacity <memory:1024, vCores:1> on host ip-10-10-128-10.ec2.internal:9032, which has 1 containers, <memory:1024, vCores:1> used and <memory:7168, vCores:7> available after allocation
2015-05-19 19:52:53,718 INFO org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.LeafQueue: assignedContainer application attempt=appattempt_1432064564266_0003_000001 container=Container: [ContainerId: container_1432064564266_0003_01_000001, NodeId: ip-10-10-128-10.ec2.internal:9032, NodeHttpAddress: ip-10-10-128-10.ec2.internal:8090, Resource: <memory:1024, vCores:1>, Priority: 0, Token: null, ] queue=default: capacity=1.0, absoluteCapacity=1.0, usedResources=<memory:0, vCores:0>, usedCapacity=0.0, absoluteUsedCapacity=0.0, numApps=1, numContainers=0 clusterResource=<memory:8192, vCores:8>
2015-05-19 19:52:53,718 INFO org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.ParentQueue: Re-sorting assigned queue: root.default stats: default: capacity=1.0, absoluteCapacity=1.0, usedResources=<memory:1024, vCores:1>, usedCapacity=0.125, absoluteUsedCapacity=0.125, numApps=1, numContainers=1
2015-05-19 19:52:53,718 INFO org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.ParentQueue: assignedContainer queue=root usedCapacity=0.125 absoluteUsedCapacity=0.125 used=<memory:1024, vCores:1> cluster=<memory:8192, vCores:8>
2015-05-19 19:52:53,719 INFO org.apache.hadoop.yarn.server.resourcemanager.security.NMTokenSecretManagerInRM: Sending NMToken for nodeId : ip-10-10-128-10.ec2.internal:9032 for container : container_1432064564266_0003_01_000001
2015-05-19 19:52:53,720 INFO org.apache.hadoop.yarn.server.resourcemanager.rmcontainer.RMContainerImpl: container_1432064564266_0003_01_000001 Container Transitioned from ALLOCATED to ACQUIRED
2015-05-19 19:52:53,720 INFO org.apache.hadoop.yarn.server.resourcemanager.security.NMTokenSecretManagerInRM: Clear node set for appattempt_1432064564266_0003_000001
2015-05-19 19:52:53,721 INFO org.apache.hadoop.yarn.server.resourcemanager.rmapp.attempt.RMAppAttemptImpl: Storing attempt: AppId: application_1432064564266_0003 AttemptId: appattempt_1432064564266_0003_000001 MasterContainer: Container: [ContainerId: container_1432064564266_0003_01_000001, NodeId: ip-10-10-128-10.ec2.internal:9032, NodeHttpAddress: ip-10-10-128-10.ec2.internal:8090, Resource: <memory:1024, vCores:1>, Priority: 0, Token: Token { kind: ContainerToken, service: 10.10.128.10:9032 }, ]
2015-05-19 19:52:53,721 INFO org.apache.hadoop.yarn.server.resourcemanager.rmapp.attempt.RMAppAttemptImpl: appattempt_1432064564266_0003_000001 State change from SCHEDULED to ALLOCATED_SAVING
2015-05-19 19:52:53,721 INFO org.apache.hadoop.yarn.server.resourcemanager.rmapp.attempt.RMAppAttemptImpl: appattempt_1432064564266_0003_000001 State change from ALLOCATED_SAVING to ALLOCATED
2015-05-19 19:52:53,723 INFO org.apache.hadoop.yarn.server.resourcemanager.amlauncher.AMLauncher: Launching masterappattempt_1432064564266_0003_000001
2015-05-19 19:52:53,725 INFO org.apache.hadoop.yarn.server.resourcemanager.amlauncher.AMLauncher: Setting up container Container: [ContainerId: container_1432064564266_0003_01_000001, NodeId: ip-10-10-128-10.ec2.internal:9032, NodeHttpAddress: ip-10-10-128-10.ec2.internal:8090, Resource: <memory:1024, vCores:1>, Priority: 0, Token: Token { kind: ContainerToken, service: 10.10.128.10:9032 }, ] for AM appattempt_1432064564266_0003_000001
2015-05-19 19:52:53,725 INFO org.apache.hadoop.yarn.server.resourcemanager.amlauncher.AMLauncher: Command to launch container container_1432064564266_0003_01_000001 : {{JAVA_HOME}}/bin/java,-server,-Xmx512m,-Djava.io.tmpdir={{PWD}}/tmp,'-Dspark.fileserver.uri=http://10.10.127.10:8003','-Dspark.broadcast.port=8004','-Dspark.executor.port=8002','-Dspark.driver.appUIAddress=http://ip-10-10-127-10.ec2.internal:4040','-Dspark.fileserver.port=8003','-Dspark.ui.port=4040','-Dspark.tachyonStore.folderName=spark-ea0a49a2-1643-4410-892d-690c62cb6857','-Dspark.driver.extraJavaOptions=-Djava.net.preferIPv4Stack=true','-Dspark.jars=file:/home/testuser/spark/lib/spark-examples-1.3.1-hadoop2.6.0.jar','-Dspark.app.name=Spark Pi','-Dspark.akka.threads=1','-Dspark.authenticate.secret=fubar','-Dspark.eventlog.enabled=true','-Dspark.driver.host=ip-10-10-127-10.ec2.internal','-Dspark.blockManager.port=8007','-Dspark.history.ui.port=18080','-Dspark.replClassServer.port=8006','-Dspark.worker.ui.port=8081','-Dspark.master=yarn-client','-Dspark.yarn.preserve.staging.files=true','-Dspark.executor.extraJavaOptions=-XX:+PrintGCDetails -Dkey=value -Dnumbers=\"one two three\"','-Dspark.yarn.access.namenodes=hdfs://10.10.10.10:8020','-Dspark.logConf=true','-Dspark.driver.port=8001','-Dspark.executor.id=<driver>','-Dspark.akka.heartbeat.interval=100','-Dspark.master.ui.port=8080','-Dspark.yarn.am.waitTime=200000','-Dspark.yarn.am.extraJavaOptions=-Djava.net.preferIPv4Stack=true','-Dspark.executor.instances=1','-Dspark.yarn.am.port=8008','-Dspark.eventlog.dir=hdfs://10.10.10.10:8020/user/testuser/spark/eventlog','-Dspark.executor.cores=1','-Dspark.authenticate=true','-Djava.net.preferIPv4Stack=true',-Dspark.yarn.app.container.log.dir=<LOG_DIR>,org.apache.spark.deploy.yarn.ExecutorLauncher,--arg,'ip-10-10-127-10.ec2.internal:8001',--executor-memory,1024m,--executor-cores,1,--num-executors ,1,1>,<LOG_DIR>/stdout,2>,<LOG_DIR>/stderr
2015-05-19 19:52:53,725 INFO org.apache.hadoop.yarn.server.resourcemanager.security.AMRMTokenSecretManager: Create AMRMToken for ApplicationAttempt: appattempt_1432064564266_0003_000001
2015-05-19 19:52:53,725 INFO org.apache.hadoop.yarn.server.resourcemanager.security.AMRMTokenSecretManager: Creating password for appattempt_1432064564266_0003_000001
2015-05-19 19:52:53,750 INFO org.apache.hadoop.yarn.server.resourcemanager.amlauncher.AMLauncher: Done launching container Container: [ContainerId: container_1432064564266_0003_01_000001, NodeId: ip-10-10-128-10.ec2.internal:9032, NodeHttpAddress: ip-10-10-128-10.ec2.internal:8090, Resource: <memory:1024, vCores:1>, Priority: 0, Token: Token { kind: ContainerToken, service: 10.10.128.10:9032 }, ] for AM appattempt_1432064564266_0003_000001
2015-05-19 19:52:53,750 INFO org.apache.hadoop.yarn.server.resourcemanager.rmapp.attempt.RMAppAttemptImpl: appattempt_1432064564266_0003_000001 State change from ALLOCATED to LAUNCHED
2015-05-19 19:52:54,733 INFO org.apache.hadoop.yarn.server.resourcemanager.rmcontainer.RMContainerImpl: container_1432064564266_0003_01_000001 Container Transitioned from ACQUIRED to RUNNING
2015-05-19 19:53:17,318 INFO SecurityLogger.org.apache.hadoop.ipc.Server: Auth successful for appattempt_1432064564266_0003_000001 (auth:SIMPLE)
2015-05-19 19:53:17,376 WARN org.apache.hadoop.security.ShellBasedUnixGroupsMapping: got exception trying to get groups for user appattempt_1432064564266_0003_000001: id: appattempt_1432064564266_0003_000001: no such user

2015-05-19 19:53:17,376 WARN org.apache.hadoop.security.UserGroupInformation: No groups available for user appattempt_1432064564266_0003_000001
2015-05-19 19:53:17,376 INFO SecurityLogger.org.apache.hadoop.security.authorize.ServiceAuthorizationManager: Authorization successful for appattempt_1432064564266_0003_000001 (auth:TOKEN) for protocol=interface org.apache.hadoop.yarn.api.ApplicationMasterProtocolPB
2015-05-19 19:53:17,377 INFO org.apache.hadoop.yarn.server.resourcemanager.ApplicationMasterService: AM registration appattempt_1432064564266_0003_000001
2015-05-19 19:53:17,377 INFO org.apache.hadoop.yarn.server.resourcemanager.RMAuditLogger: USER=testuser IP=10.10.128.10 OPERATION=Register App Master   TARGET=ApplicationMasterService RESULT=SUCCESS     APPID=application_1432064564266_0003    APPATTEMPTID=appattempt_1432064564266_0003_000001
2015-05-19 19:53:17,377 INFO org.apache.hadoop.yarn.server.resourcemanager.ApplicationMasterService: Setting client token master key
2015-05-19 19:53:17,378 INFO org.apache.hadoop.yarn.server.resourcemanager.rmapp.attempt.RMAppAttemptImpl: appattempt_1432064564266_0003_000001 State change from LAUNCHED to RUNNING
2015-05-19 19:53:17,378 INFO org.apache.hadoop.yarn.server.resourcemanager.rmapp.RMAppImpl: application_1432064564266_0003 State change from ACCEPTED to RUNNING
2015-05-19 19:53:18,377 INFO org.apache.hadoop.yarn.server.resourcemanager.rmcontainer.RMContainerImpl: container_1432064564266_0003_01_000002 Container Transitioned from NEW to ALLOCATED
2015-05-19 19:53:18,377 INFO org.apache.hadoop.yarn.server.resourcemanager.RMAuditLogger: USER=testuser OPERATION=AM Allocated Container        TARGET=SchedulerApp     RESULT=SUCCESS  APPID=application_1432064564266_0003       CONTAINERID=container_1432064564266_0003_01_000002
2015-05-19 19:53:18,377 INFO org.apache.hadoop.yarn.server.resourcemanager.scheduler.SchedulerNode: Assigned container container_1432064564266_0003_01_000002 of capacity <memory:2048, vCores:1> on host ip-10-10-128-10.ec2.internal:9032, which has 2 containers, <memory:3072, vCores:2> used and <memory:5120, vCores:6> available after allocation
2015-05-19 19:53:18,377 INFO org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.LeafQueue: assignedContainer application attempt=appattempt_1432064564266_0003_000001 container=Container: [ContainerId: container_1432064564266_0003_01_000002, NodeId: ip-10-10-128-10.ec2.internal:9032, NodeHttpAddress: ip-10-10-128-10.ec2.internal:8090, Resource: <memory:2048, vCores:1>, Priority: 1, Token: null, ] queue=default: capacity=1.0, absoluteCapacity=1.0, usedResources=<memory:1024, vCores:1>, usedCapacity=0.125, absoluteUsedCapacity=0.125, numApps=1, numContainers=1 clusterResource=<memory:8192, vCores:8>
2015-05-19 19:53:18,377 INFO org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.ParentQueue: Re-sorting assigned queue: root.default stats: default: capacity=1.0, absoluteCapacity=1.0, usedResources=<memory:3072, vCores:2>, usedCapacity=0.375, absoluteUsedCapacity=0.375, numApps=1, numContainers=2
2015-05-19 19:53:18,377 INFO org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.ParentQueue: assignedContainer queue=root usedCapacity=0.375 absoluteUsedCapacity=0.375 used=<memory:3072, vCores:2> cluster=<memory:8192, vCores:8>
2015-05-19 19:53:18,967 INFO org.apache.hadoop.yarn.server.resourcemanager.rmapp.attempt.RMAppAttemptImpl: Updating application attempt appattempt_1432064564266_0003_000001 with final state: FINISHING, and exit status: -1000
2015-05-19 19:53:18,967 INFO org.apache.hadoop.yarn.server.resourcemanager.rmapp.attempt.RMAppAttemptImpl: appattempt_1432064564266_0003_000001 State change from RUNNING to FINAL_SAVING
2015-05-19 19:53:18,967 INFO org.apache.hadoop.yarn.server.resourcemanager.rmapp.RMAppImpl: Updating application application_1432064564266_0003 with final state: FINISHING
2015-05-19 19:53:18,967 INFO org.apache.hadoop.yarn.server.resourcemanager.rmapp.RMAppImpl: application_1432064564266_0003 State change from RUNNING to FINAL_SAVING
2015-05-19 19:53:18,967 INFO org.apache.hadoop.yarn.server.resourcemanager.recovery.RMStateStore: Updating info for app: application_1432064564266_0003
2015-05-19 19:53:18,968 INFO org.apache.hadoop.yarn.server.resourcemanager.rmapp.attempt.RMAppAttemptImpl: appattempt_1432064564266_0003_000001 State change from FINAL_SAVING to FINISHING
2015-05-19 19:53:18,968 INFO org.apache.hadoop.yarn.server.resourcemanager.rmapp.RMAppImpl: application_1432064564266_0003 State change from FINAL_SAVING to FINISHING
2015-05-19 19:53:19,076 INFO org.apache.hadoop.yarn.server.resourcemanager.ApplicationMasterService: application_1432064564266_0003 unregistered successfully.
2015-05-19 19:53:20,383 INFO org.apache.hadoop.yarn.server.resourcemanager.rmcontainer.RMContainerImpl: container_1432064564266_0003_01_000001 Container Transitioned from RUNNING to COMPLETED
2015-05-19 19:53:20,384 INFO org.apache.hadoop.yarn.server.resourcemanager.scheduler.common.fica.FiCaSchedulerApp: Completed container: container_1432064564266_0003_01_000001 in state: COMPLETED event:FINISHED
2015-05-19 19:53:20,384 INFO org.apache.hadoop.yarn.server.resourcemanager.RMAuditLogger: USER=testuser OPERATION=AM Released Container TARGET=SchedulerApp     RESULT=SUCCESS  APPID=application_1432064564266_0003       CONTAINERID=container_1432064564266_0003_01_000001
2015-05-19 19:53:20,384 INFO org.apache.hadoop.yarn.server.resourcemanager.scheduler.SchedulerNode: Released container container_1432064564266_0003_01_000001 of capacity <memory:1024, vCores:1> on host ip-10-10-128-10.ec2.internal:9032, which currently has 1 containers, <memory:2048, vCores:1> used and <memory:6144, vCores:7> available, release resources=true
2015-05-19 19:53:20,384 INFO org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.LeafQueue: default used=<memory:2048, vCores:1> numContainers=1 user=testuser user-resources=<memory:2048, vCores:1>
2015-05-19 19:53:20,384 INFO org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.LeafQueue: completedContainer container=Container: [ContainerId: container_1432064564266_0003_01_000001, NodeId: ip-10-10-128-10.ec2.internal:9032, NodeHttpAddress: ip-10-10-128-10.ec2.internal:8090, Resource: <memory:1024, vCores:1>, Priority: 0, Token: Token { kind: ContainerToken, service: 10.10.128.10:9032 }, ] queue=default: capacity=1.0, absoluteCapacity=1.0, usedResources=<memory:2048, vCores:1>, usedCapacity=0.25, absoluteUsedCapacity=0.25, numApps=1, numContainers=1 cluster=<memory:8192, vCores:8>
2015-05-19 19:53:20,384 INFO org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.ParentQueue: completedContainer queue=root usedCapacity=0.25 absoluteUsedCapacity=0.25 used=<memory:2048, vCores:1> cluster=<memory:8192, vCores:8>
2015-05-19 19:53:20,384 INFO org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.ParentQueue: Re-sorting completed queue: root.default stats: default: capacity=1.0, absoluteCapacity=1.0, usedResources=<memory:2048, vCores:1>, usedCapacity=0.25, absoluteUsedCapacity=0.25, numApps=1, numContainers=1
2015-05-19 19:53:20,384 INFO org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.CapacityScheduler: Application attempt appattempt_1432064564266_0003_000001 released container container_1432064564266_0003_01_000001 on node: host: ip-10-10-128-10.ec2.internal:9032 #containers=1 available=6144 used=2048 with event: FINISHED
2015-05-19 19:53:20,384 INFO org.apache.hadoop.yarn.server.resourcemanager.ApplicationMasterService: Unregistering app attempt : appattempt_1432064564266_0003_000001
2015-05-19 19:53:20,385 INFO org.apache.hadoop.yarn.server.resourcemanager.security.AMRMTokenSecretManager: Application finished, removing password for appattempt_1432064564266_0003_000001
2015-05-19 19:53:20,385 INFO org.apache.hadoop.yarn.server.resourcemanager.rmapp.attempt.RMAppAttemptImpl: appattempt_1432064564266_0003_000001 State change from FINISHING to FINISHED
2015-05-19 19:53:20,385 INFO org.apache.hadoop.yarn.server.resourcemanager.rmapp.RMAppImpl: application_1432064564266_0003 State change from FINISHING to FINISHED
2015-05-19 19:53:20,385 INFO org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.CapacityScheduler: Application Attempt appattempt_1432064564266_0003_000001 is done. finalState=FINISHED
2015-05-19 19:53:20,387 INFO org.apache.hadoop.yarn.server.resourcemanager.rmcontainer.RMContainerImpl: container_1432064564266_0003_01_000002 Container Transitioned from ALLOCATED to KILLED
2015-05-19 19:53:20,387 INFO org.apache.hadoop.yarn.server.resourcemanager.scheduler.common.fica.FiCaSchedulerApp: Completed container: container_1432064564266_0003_01_000002 in state: KILLED event:KILL
2015-05-19 19:53:20,387 INFO org.apache.hadoop.yarn.server.resourcemanager.RMAuditLogger: USER=testuser OPERATION=AM Released Container TARGET=SchedulerApp     RESULT=SUCCESS  APPID=application_1432064564266_0003       CONTAINERID=container_1432064564266_0003_01_000002
2015-05-19 19:53:20,387 INFO org.apache.hadoop.yarn.server.resourcemanager.scheduler.SchedulerNode: Released container container_1432064564266_0003_01_000002 of capacity <memory:2048, vCores:1> on host ip-10-10-128-10.ec2.internal:9032, which currently has 0 containers, <memory:0, vCores:0> used and <memory:8192, vCores:8> available, release resources=true
2015-05-19 19:53:20,387 INFO org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.LeafQueue: default used=<memory:0, vCores:0> numContainers=0 user=testuser user-resources=<memory:0, vCores:0>
2015-05-19 19:53:20,387 INFO org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.LeafQueue: completedContainer container=Container: [ContainerId: container_1432064564266_0003_01_000002, NodeId: ip-10-10-128-10.ec2.internal:9032, NodeHttpAddress: ip-10-10-128-10.ec2.internal:8090, Resource: <memory:2048, vCores:1>, Priority: 1, Token: null, ] queue=default: capacity=1.0, absoluteCapacity=1.0, usedResources=<memory:0, vCores:0>, usedCapacity=0.0, absoluteUsedCapacity=0.0, numApps=1, numContainers=0 cluster=<memory:8192, vCores:8>
2015-05-19 19:53:20,387 INFO org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.ParentQueue: completedContainer queue=root usedCapacity=0.0 absoluteUsedCapacity=0.0 used=<memory:0, vCores:0> cluster=<memory:8192, vCores:8>
2015-05-19 19:53:20,387 INFO org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.ParentQueue: Re-sorting completed queue: root.default stats: default: capacity=1.0, absoluteCapacity=1.0, usedResources=<memory:0, vCores:0>, usedCapacity=0.0, absoluteUsedCapacity=0.0, numApps=1, numContainers=0
2015-05-19 19:53:20,387 INFO org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.CapacityScheduler: Application attempt appattempt_1432064564266_0003_000001 released container container_1432064564266_0003_01_000002 on node: host: ip-10-10-128-10.ec2.internal:9032 #containers=0 available=8192 used=0 with event: KILL
2015-05-19 19:53:20,387 INFO org.apache.hadoop.yarn.server.resourcemanager.scheduler.AppSchedulingInfo: Application application_1432064564266_0003 requests cleared
2015-05-19 19:53:20,388 INFO org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.LeafQueue: Application removed - appId: application_1432064564266_0003 user: testuser queue: default #user-pending-applications: 0 #user-active-applications: 0 #queue-pending-applications: 0 #queue-active-applications: 0
2015-05-19 19:53:20,388 INFO org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.ParentQueue: Application removed - appId: application_1432064564266_0003 user: testuser leaf-queue of parent: root #applications: 0
2015-05-19 19:53:20,388 INFO org.apache.hadoop.yarn.server.resourcemanager.RMAuditLogger: USER=testuser OPERATION=Application Finished - Succeeded      TARGET=RMAppManager     RESULT=SUCCESS     APPID=application_1432064564266_0003
2015-05-19 19:53:20,389 INFO org.apache.hadoop.yarn.server.resourcemanager.RMAppManager$ApplicationSummary: appId=application_1432064564266_0003,name=Spark Pi,user=testuser,queue=default,state=FINISHED,trackingUrl=https://ip-10-10-127-10.ec2.internal:8090/proxy/application_1432064564266_0003/A,appMasterHost=ip-10-10-128-10.ec2.internal,startTime=1432065172758,finishTime=1432065198967,finalStatus=SUCCEEDED
2015-05-19 19:53:20,389 INFO org.apache.hadoop.hdfs.DFSClient: Cancelling HDFS_DELEGATION_TOKEN token 5 for testuser on 10.10.10.10:8020
2015-05-19 19:53:20,390 INFO org.apache.hadoop.yarn.server.resourcemanager.amlauncher.AMLauncher: Cleaning master appattempt_1432064564266_0003_000001
2015-05-19 19:53:20,442 INFO org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.CapacityScheduler: Null container completed...
root@ip-10-10-127-10:/var/log/hadoop>

Keith Nance
Sr. Software Engineer
Email: knance@smartronix.com
Cell: 808-343-0071
www.smartronix.com


RE: SparkPi fails with Job aborted due to stage failure: Task serialization failed:

Posted by "Nance, Keith" <kn...@smartronix.com>.
Thank you, Rohith, for the reply. I wasn't exactly sure what to make of those log entries; I had assumed that, from YARN's perspective, the container was set up and torn down completely. I'll send this to the Spark user mailing list, since the question makes better sense in that forum.
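
As a side note for anyone retracing this, the container side can also be cross-checked with the stock YARN CLI. The commands below are only a sketch: the application id is the one from the attached logs, and because this cluster uses the NonAggregatingLogHandler rather than log aggregation, the second command is not expected to return anything here.

yarn application -status application_1432064564266_0003    # YARN's final report for the application
yarn logs -applicationId application_1432064564266_0003    # only populated when log aggregation is enabled
# With aggregation off, the AM's stderr has to be read on the NodeManager host itself, under the
# directories configured by yarn.nodemanager.log-dirs (see the userlog section quoted below).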

Regards,
Keith

From: Rohith Sharma K S [mailto:rohithsharmaks@huawei.com]
Sent: Wednesday, May 20, 2015 2:06 AM
To: user@hadoop.apache.org
Subject: RE: SparkPi fails with Job aborted due to stage failure: Task serialization failed:

Hi,

From the ResourceManager log, it is very clear that the job has succeeded. As far as YARN is concerned, there is no problem running Spark applications.

2015-05-19 19:53:20,388 INFO org.apache.hadoop.yarn.server.resourcemanager.RMAuditLogger: USER=testuser OPERATION=Application Finished - Succeeded      TARGET=RMAppManager     RESULT=SUCCESS     APPID=application_1432064564266_0003
2015-05-19 19:53:20,389 INFO org.apache.hadoop.yarn.server.resourcemanager.RMAppManager$ApplicationSummary: appId=application_1432064564266_0003,name=Spark Pi,user=testuser,queue=default,state=FINISHED,trackingUrl=https://ip-10-10-127-10.ec2.internal:8090/proxy/application_1432064564266_0003/A,appMasterHost=ip-10-10-128-10.ec2.internal,startTime=1432065172758,finishTime=1432065198967,finalStatus=SUCCEEDED

But I do not have any idea what Spark is doing internally. Would you mind sending a mail to the Spark user mailing list?
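
As a quick cross-check, the same final status can be read back with the standard YARN CLI. The command below is only illustrative; the application id comes from your logs, and the printed report should show a Final-State of SUCCEEDED, matching the RMAuditLogger line above.

yarn application -status application_1432064564266_0003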


Thanks & Regards
Rohith Sharma K S

From: Nance, Keith [mailto:knance@smartronix.com]
Sent: 20 May 2015 02:05
To: user@hadoop.apache.org
Subject: SparkPi fails with Job aborted due to stage failure: Task serialization failed:

All, unable to find any reference to my issue with spark.  Any ideas?  Thanks for any and all help
Attached are logs from the Spark job (SparkPi) results, Userlog, Nodemanager, and Resourcemanager.
###: SPARK JOB RESULTS :###
###########################
[testuser@ip-10-10-127-10 spark]$ ./bin/spark-submit --verbose --class org.apache.spark.examples.SparkPi --master yarn-client --num-executors 1 --executor-cores 1 lib/spark-examples*.jar 10
/home/testuser/spark/conf/spark-env.sh: line 55: -Dspark.history.kerberos.principal=spark/ip-10-10-127-10.ec2.internal@MALARD.LOCAL: No such file or directory
Using properties file: /home/testuser/spark/conf/spark-defaults.conf
Adding default property: spark.broadcast.port=8004
Adding default property: spark.executor.extraJavaOptions=-XX:+PrintGCDetails -Dkey=value -Dnumbers="one two three"
Adding default property: spark.akka.threads=1
Adding default property: spark.ui.port=4040
Adding default property: spark.driver.port=8001
Adding default property: spark.akka.heartbeat.interval=100
Adding default property: spark.yarn.access.namenodes=hdfs://10.10.10.10:8020
Adding default property: spark.executor.port=8002
Adding default property: spark.logConf=true
Adding default property: spark.yarn.am.extraJavaOptions=-Djava.net.preferIPv4Stack=true
Adding default property: spark.worker.ui.port=8081
Adding default property: spark.replClassServer.port=8006
Adding default property: spark.eventlog.dir=hdfs://10.10.10.10:8020/user/testuser/spark/eventlog
Adding default property: spark.driver.extraJavaOptions=-Djava.net.preferIPv4Stack=true
Adding default property: spark.blockManager.port=8007
Adding default property: spark.yarn.am.waitTime=200000
Adding default property: spark.master=yarn-client
Adding default property: spark.yarn.preserve.staging.files=true
Adding default property: spark.fileserver.port=8003
Adding default property: spark.authenticate=true
Adding default property: spark.yarn.am.port=8008
Adding default property: spark.authenticate.secret=fubar
Adding default property: spark.master.ui.port=8080
Adding default property: spark.history.ui.port=18080
Adding default property: spark.eventlog.enabled=true
Parsed arguments:
  master                  yarn-client
  deployMode              null
  executorMemory          null
  executorCores           1
  totalExecutorCores      null
  propertiesFile          /home/testuser/spark/conf/spark-defaults.conf
  driverMemory            null
  driverCores             null
  driverExtraClassPath    null
  driverExtraLibraryPath  null
  driverExtraJavaOptions  -Djava.net.preferIPv4Stack=true
  supervise               false
  queue                   null
  numExecutors            1
  files                   null
  pyFiles                 null
  archives                null
  mainClass               org.apache.spark.examples.SparkPi
  primaryResource         file:/home/testuser/spark/lib/spark-examples-1.3.1-hadoop2.6.0.jar
  name                    org.apache.spark.examples.SparkPi
  childArgs               [10]
  jars                    null
  packages                null
  repositories            null
  verbose                 true

Spark properties used, including those specified through
--conf and those from the properties file /home/testuser/spark/conf/spark-defaults.conf:
  spark.blockManager.port -> 8007
  spark.yarn.access.namenodes -> hdfs://10.10.10.10:8020
  spark.broadcast.port -> 8004
  spark.authenticate.secret -> fubar
  spark.authenticate -> true
  spark.eventlog.dir -> hdfs://10.10.10.10:8020/user/testuser/spark/eventlog
  spark.logConf -> true
  spark.replClassServer.port -> 8006
  spark.history.ui.port -> 18080
  spark.fileserver.port -> 8003
  spark.ui.port -> 4040
  spark.executor.extraJavaOptions -> -XX:+PrintGCDetails -Dkey=value -Dnumbers="one two three"
  spark.driver.extraJavaOptions -> -Djava.net.preferIPv4Stack=true
  spark.yarn.am.waitTime -> 200000
  spark.yarn.am.extraJavaOptions -> -Djava.net.preferIPv4Stack=true
  spark.worker.ui.port -> 8081
  spark.driver.port -> 8001
  spark.master -> yarn-client
  spark.yarn.preserve.staging.files -> true
  spark.yarn.am.port -> 8008
  spark.akka.heartbeat.interval -> 100
  spark.executor.port -> 8002
  spark.master.ui.port -> 8080
  spark.eventlog.enabled -> true
  spark.akka.threads -> 1


Main class:
org.apache.spark.examples.SparkPi
Arguments:
10
System properties:
spark.blockManager.port -> 8007
spark.yarn.access.namenodes -> hdfs://10.10.10.10:8020
spark.broadcast.port -> 8004
spark.authenticate.secret -> fubar
spark.eventlog.dir -> hdfs://10.10.10.10:8020/user/testuser/spark/eventlog
spark.authenticate -> true
spark.executor.instances -> 1
spark.logConf -> true
spark.replClassServer.port -> 8006
spark.history.ui.port -> 18080
spark.fileserver.port -> 8003
SPARK_SUBMIT -> true
spark.ui.port -> 4040
spark.executor.extraJavaOptions -> -XX:+PrintGCDetails -Dkey=value -Dnumbers="one two three"
spark.app.name -> org.apache.spark.examples.SparkPi
spark.driver.extraJavaOptions -> -Djava.net.preferIPv4Stack=true
spark.jars -> file:/home/testuser/spark/lib/spark-examples-1.3.1-hadoop2.6.0.jar
spark.yarn.am.waitTime -> 200000
spark.yarn.am.extraJavaOptions -> -Djava.net.preferIPv4Stack=true
spark.master -> yarn-client
spark.worker.ui.port -> 8081
spark.driver.port -> 8001
spark.yarn.preserve.staging.files -> true
spark.yarn.am.port -> 8008
spark.akka.heartbeat.interval -> 100
spark.executor.port -> 8002
spark.executor.cores -> 1
spark.eventlog.enabled -> true
spark.master.ui.port -> 8080
spark.akka.threads -> 1
Classpath elements:
file:/home/testuser/spark/lib/spark-examples-1.3.1-hadoop2.6.0.jar


15/05/19 19:52:37 INFO spark.SparkContext: Running Spark version 1.3.1
15/05/19 19:52:37 INFO spark.SparkContext: Spark configuration:
spark.akka.heartbeat.interval=100
spark.akka.threads=1
spark.app.name=Spark Pi
spark.authenticate=true
spark.authenticate.secret=fubar
spark.blockManager.port=8007
spark.broadcast.port=8004
spark.driver.extraJavaOptions=-Djava.net.preferIPv4Stack=true
spark.driver.port=8001
spark.eventlog.dir=hdfs://10.10.10.10:8020/user/testuser/spark/eventlog
spark.eventlog.enabled=true
spark.executor.cores=1
spark.executor.extraJavaOptions=-XX:+PrintGCDetails -Dkey=value -Dnumbers="one two three"
spark.executor.instances=1
spark.executor.port=8002
spark.fileserver.port=8003
spark.history.ui.port=18080
spark.jars=file:/home/testuser/spark/lib/spark-examples-1.3.1-hadoop2.6.0.jar
spark.logConf=true
spark.master=yarn-client
spark.master.ui.port=8080
spark.replClassServer.port=8006
spark.ui.port=4040
spark.worker.ui.port=8081
spark.yarn.access.namenodes=hdfs://10.10.10.10:8020
spark.yarn.am.extraJavaOptions=-Djava.net.preferIPv4Stack=true
spark.yarn.am.port=8008
spark.yarn.am.waitTime=200000
spark.yarn.preserve.staging.files=true
15/05/19 19:52:38 WARN util.NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
15/05/19 19:52:39 INFO spark.SecurityManager: Changing view acls to: testuser
15/05/19 19:52:39 INFO spark.SecurityManager: Changing modify acls to: testuser
15/05/19 19:52:39 INFO spark.SecurityManager: adding secret to credentials in yarn mode
15/05/19 19:52:39 INFO spark.SecurityManager: SecurityManager: authentication enabled; ui acls disabled; users with view permissions: Set(testuser); users with modify permissions: Set(testuser)
15/05/19 19:52:40 INFO slf4j.Slf4jLogger: Slf4jLogger started
15/05/19 19:52:41 INFO Remoting: Starting remoting
15/05/19 19:52:41 INFO Remoting: Remoting started; listening on addresses :[akka.tcp://sparkDriver@ip-10-10-127-10.ec2.internal:8001]
15/05/19 19:52:41 INFO util.Utils: Successfully started service 'sparkDriver' on port 8001.
15/05/19 19:52:41 INFO spark.SparkEnv: Registering MapOutputTracker
15/05/19 19:52:41 INFO spark.SparkEnv: Registering BlockManagerMaster
15/05/19 19:52:41 INFO storage.DiskBlockManager: Created local directory at /scratch/spark-17902ac9-b400-4698-97c3-069d804a29e3/blockmgr-d95b3bdf-9c4d-4b48-97a5-4983dd2ab66d
15/05/19 19:52:41 INFO storage.MemoryStore: MemoryStore started with capacity 267.3 MB
15/05/19 19:52:42 INFO spark.HttpFileServer: HTTP File server directory is /scratch/spark-d2fb7948-5ab3-4f22-804c-7485d209bd3e/httpd-91938877-b371-4c0c-ba3d-bb7ee8ec4e09
15/05/19 19:52:42 INFO spark.HttpServer: Starting HTTP Server
15/05/19 19:52:42 INFO server.Server: jetty-8.y.z-SNAPSHOT
15/05/19 19:52:42 INFO server.AbstractConnector: Started SocketConnector@0.0.0.0:8003
15/05/19 19:52:42 INFO util.Utils: Successfully started service 'HTTP file server' on port 8003.
15/05/19 19:52:42 INFO spark.SparkEnv: Registering OutputCommitCoordinator
15/05/19 19:52:42 INFO server.Server: jetty-8.y.z-SNAPSHOT
15/05/19 19:52:42 INFO server.AbstractConnector: Started SelectChannelConnector@0.0.0.0:4040
15/05/19 19:52:42 INFO util.Utils: Successfully started service 'SparkUI' on port 4040.
15/05/19 19:52:42 INFO ui.SparkUI: Started SparkUI at http://ip-10-10-127-10.ec2.internal:4040
15/05/19 19:52:43 INFO spark.SparkContext: Added JAR file:/home/testuser/spark/lib/spark-examples-1.3.1-hadoop2.6.0.jar at http://10.10.127.10:8003/jars/spark-examples-1.3.1-hadoop2.6.0.jar with timestamp 1432065163724
15/05/19 19:52:44 INFO client.RMProxy: Connecting to ResourceManager at ip-10-10-127-10.ec2.internal/10.10.127.10:8032
15/05/19 19:52:45 INFO yarn.Client: Requesting a new application from cluster with 1 NodeManagers
15/05/19 19:52:45 INFO yarn.Client: Verifying our application has not requested more than the maximum memory capability of the cluster (8192 MB per container)
15/05/19 19:52:45 INFO yarn.Client: Will allocate AM container, with 896 MB memory including 384 MB overhead
15/05/19 19:52:45 INFO yarn.Client: Setting up container launch context for our AM
15/05/19 19:52:45 INFO yarn.Client: Preparing resources for our AM container
15/05/19 19:52:46 INFO hdfs.DFSClient: Created HDFS_DELEGATION_TOKEN token 5 for testuser on 10.10.10.10:8020
15/05/19 19:52:46 INFO yarn.Client: Uploading resource file:/home/testuser/spark/lib/spark-assembly-1.3.1-hadoop2.6.0.jar -> hdfs://10.10.10.10:8020/user/testuser/.sparkStaging/application_1432064564266_0003/spark-assembly-1.3.1-hadoop2.6.0.jar
15/05/19 19:52:52 INFO yarn.Client: Setting up the launch environment for our AM container
15/05/19 19:52:52 INFO spark.SecurityManager: Changing view acls to: testuser
15/05/19 19:52:52 INFO spark.SecurityManager: Changing modify acls to: testuser
15/05/19 19:52:52 INFO spark.SecurityManager: SecurityManager: authentication enabled; ui acls disabled; users with view permissions: Set(testuser); users with modify permissions: Set(testuser)
15/05/19 19:52:52 INFO yarn.Client: Submitting application 3 to ResourceManager
15/05/19 19:52:53 INFO impl.YarnClientImpl: Submitted application application_1432064564266_0003
15/05/19 19:52:54 INFO yarn.Client: Application report for application_1432064564266_0003 (state: ACCEPTED)
15/05/19 19:52:54 INFO yarn.Client:
         client token: Token { kind: YARN_CLIENT_TOKEN, service:  }
         diagnostics: N/A
         ApplicationMaster host: N/A
         ApplicationMaster RPC port: -1
         queue: default
         start time: 1432065172758
         final status: UNDEFINED
         tracking URL: https://ip-10-10-127-10.ec2.internal:8090/proxy/application_1432064564266_0003/
         user: testuser
15/05/19 19:52:55 INFO yarn.Client: Application report for application_1432064564266_0003 (state: ACCEPTED)
15/05/19 19:52:56 INFO yarn.Client: Application report for application_1432064564266_0003 (state: ACCEPTED)
15/05/19 19:52:57 INFO yarn.Client: Application report for application_1432064564266_0003 (state: ACCEPTED)
15/05/19 19:52:58 INFO yarn.Client: Application report for application_1432064564266_0003 (state: ACCEPTED)
15/05/19 19:52:59 INFO yarn.Client: Application report for application_1432064564266_0003 (state: ACCEPTED)
15/05/19 19:53:00 INFO yarn.Client: Application report for application_1432064564266_0003 (state: ACCEPTED)
15/05/19 19:53:01 INFO yarn.Client: Application report for application_1432064564266_0003 (state: ACCEPTED)
15/05/19 19:53:02 INFO yarn.Client: Application report for application_1432064564266_0003 (state: ACCEPTED)
15/05/19 19:53:03 INFO yarn.Client: Application report for application_1432064564266_0003 (state: ACCEPTED)
15/05/19 19:53:04 INFO yarn.Client: Application report for application_1432064564266_0003 (state: ACCEPTED)
15/05/19 19:53:05 INFO yarn.Client: Application report for application_1432064564266_0003 (state: ACCEPTED)
15/05/19 19:53:06 INFO yarn.Client: Application report for application_1432064564266_0003 (state: ACCEPTED)
15/05/19 19:53:07 INFO yarn.Client: Application report for application_1432064564266_0003 (state: ACCEPTED)
15/05/19 19:53:08 INFO yarn.Client: Application report for application_1432064564266_0003 (state: ACCEPTED)
15/05/19 19:53:09 INFO yarn.Client: Application report for application_1432064564266_0003 (state: ACCEPTED)
15/05/19 19:53:10 INFO yarn.Client: Application report for application_1432064564266_0003 (state: ACCEPTED)
15/05/19 19:53:11 INFO yarn.Client: Application report for application_1432064564266_0003 (state: ACCEPTED)
15/05/19 19:53:12 INFO yarn.Client: Application report for application_1432064564266_0003 (state: ACCEPTED)
15/05/19 19:53:13 INFO yarn.Client: Application report for application_1432064564266_0003 (state: ACCEPTED)
15/05/19 19:53:14 INFO yarn.Client: Application report for application_1432064564266_0003 (state: ACCEPTED)
15/05/19 19:53:15 INFO yarn.Client: Application report for application_1432064564266_0003 (state: ACCEPTED)
15/05/19 19:53:16 INFO yarn.Client: Application report for application_1432064564266_0003 (state: ACCEPTED)
15/05/19 19:53:17 INFO cluster.YarnClientSchedulerBackend: ApplicationMaster registered as Actor[akka.tcp://sparkYarnAM@ip-10-10-128-10.ec2.internal:56977/user/YarnAM#-1453228800]
15/05/19 19:53:17 INFO cluster.YarnClientSchedulerBackend: Add WebUI Filter. org.apache.hadoop.yarn.server.webproxy.amfilter.AmIpFilter, Map(PROXY_HOSTS -> ip-10-10-127-10.ec2.internal, PROXY_URI_BASES -> https://ip-10-10-127-10.ec2.internal:8090/proxy/application_1432064564266_0003), /proxy/application_1432064564266_0003
15/05/19 19:53:17 INFO ui.JettyUtils: Adding filter: org.apache.hadoop.yarn.server.webproxy.amfilter.AmIpFilter
15/05/19 19:53:17 INFO yarn.Client: Application report for application_1432064564266_0003 (state: RUNNING)
15/05/19 19:53:17 INFO yarn.Client:
         client token: Token { kind: YARN_CLIENT_TOKEN, service:  }
         diagnostics: N/A
         ApplicationMaster host: ip-10-10-128-10.ec2.internal
         ApplicationMaster RPC port: 0
         queue: default
         start time: 1432065172758
         final status: UNDEFINED
         tracking URL: https://ip-10-10-127-10.ec2.internal:8090/proxy/application_1432064564266_0003/
         user: testuser
15/05/19 19:53:17 INFO cluster.YarnClientSchedulerBackend: Application application_1432064564266_0003 has started running.
15/05/19 19:53:17 INFO netty.NettyBlockTransferService: Server created on 8007
15/05/19 19:53:17 INFO storage.BlockManagerMaster: Trying to register BlockManager
15/05/19 19:53:17 INFO storage.BlockManagerMasterActor: Registering block manager ip-10-10-127-10.ec2.internal:8007 with 267.3 MB RAM, BlockManagerId(<driver>, ip-10-10-127-10.ec2.internal, 8007)
15/05/19 19:53:17 INFO storage.BlockManagerMaster: Registered BlockManager
15/05/19 19:53:18 INFO cluster.YarnClientSchedulerBackend: SchedulerBackend is ready for scheduling beginning after waiting maxRegisteredResourcesWaitingTime: 30000(ms)
15/05/19 19:53:18 INFO spark.SparkContext: Starting job: reduce at SparkPi.scala:35
15/05/19 19:53:18 INFO scheduler.DAGScheduler: Got job 0 (reduce at SparkPi.scala:35) with 10 output partitions (allowLocal=false)
15/05/19 19:53:18 INFO scheduler.DAGScheduler: Final stage: Stage 0(reduce at SparkPi.scala:35)
15/05/19 19:53:18 INFO scheduler.DAGScheduler: Parents of final stage: List()
15/05/19 19:53:18 INFO scheduler.DAGScheduler: Missing parents: List()
15/05/19 19:53:18 INFO scheduler.DAGScheduler: Submitting Stage 0 (MapPartitionsRDD[1] at map at SparkPi.scala:31), which has no missing parents
15/05/19 19:53:18 INFO cluster.YarnScheduler: Cancelling stage 0
15/05/19 19:53:18 INFO scheduler.DAGScheduler: Stage 0 (reduce at SparkPi.scala:35) failed in Unknown s
15/05/19 19:53:18 INFO scheduler.DAGScheduler: Job 0 failed: reduce at SparkPi.scala:35, took 0.258029 s
Exception in thread "main" org.apache.spark.SparkException: Job aborted due to stage failure: Task serialization failed: java.lang.reflect.InvocationTargetException
sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:57)
sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
java.lang.reflect.Constructor.newInstance(Constructor.java:526)
org.apache.spark.io.CompressionCodec$.createCodec(CompressionCodec.scala:68)
org.apache.spark.io.CompressionCodec$.createCodec(CompressionCodec.scala:60)
org.apache.spark.broadcast.TorrentBroadcast.org$apache$spark$broadcast$TorrentBroadcast$$setConf(TorrentBroadcast.scala:73)
org.apache.spark.broadcast.TorrentBroadcast.<init>(TorrentBroadcast.scala:79)
org.apache.spark.broadcast.TorrentBroadcastFactory.newBroadcast(TorrentBroadcastFactory.scala:34)
org.apache.spark.broadcast.TorrentBroadcastFactory.newBroadcast(TorrentBroadcastFactory.scala:29)
org.apache.spark.broadcast.BroadcastManager.newBroadcast(BroadcastManager.scala:62)
org.apache.spark.SparkContext.broadcast(SparkContext.scala:1051)
org.apache.spark.scheduler.DAGScheduler.org$apache$spark$scheduler$DAGScheduler$$submitMissingTasks(DAGScheduler.scala:839)
org.apache.spark.scheduler.DAGScheduler.org$apache$spark$scheduler$DAGScheduler$$submitStage(DAGScheduler.scala:778)
org.apache.spark.scheduler.DAGScheduler.handleJobSubmitted(DAGScheduler.scala:762)
org.apache.spark.scheduler.DAGSchedulerEventProcessLoop.onReceive(DAGScheduler.scala:1362)
org.apache.spark.scheduler.DAGSchedulerEventProcessLoop.onReceive(DAGScheduler.scala:1354)
org.apache.spark.util.EventLoop$$anon$1.run(EventLoop.scala:48)

        at org.apache.spark.scheduler.DAGScheduler.org$apache$spark$scheduler$DAGScheduler$$failJobAndIndependentStages(DAGScheduler.scala:1204)
        at org.apache.spark.scheduler.DAGScheduler$$anonfun$abortStage$1.apply(DAGScheduler.scala:1193)
        at org.apache.spark.scheduler.DAGScheduler$$anonfun$abortStage$1.apply(DAGScheduler.scala:1192)
        at scala.collection.mutable.ResizableArray$class.foreach(ResizableArray.scala:59)
        at scala.collection.mutable.ArrayBuffer.foreach(ArrayBuffer.scala:47)
        at org.apache.spark.scheduler.DAGScheduler.abortStage(DAGScheduler.scala:1192)
        at org.apache.spark.scheduler.DAGScheduler.org$apache$spark$scheduler$DAGScheduler$$submitMissingTasks(DAGScheduler.scala:847)
        at org.apache.spark.scheduler.DAGScheduler.org$apache$spark$scheduler$DAGScheduler$$submitStage(DAGScheduler.scala:778)
        at org.apache.spark.scheduler.DAGScheduler.handleJobSubmitted(DAGScheduler.scala:762)
        at org.apache.spark.scheduler.DAGSchedulerEventProcessLoop.onReceive(DAGScheduler.scala:1362)
        at org.apache.spark.scheduler.DAGSchedulerEventProcessLoop.onReceive(DAGScheduler.scala:1354)
        at org.apache.spark.util.EventLoop$$anon$1.run(EventLoop.scala:48)
[testuser@ip-10-10-127-10 spark]$



###: SPARK JOB USERLOG RESULTS :###
###################################
[yarn@ip-10-10-128-10 container_1432064564266_0003_01_000001]$ cat stderr
Picked up _JAVA_OPTIONS: -Djava.net.preferIPv4Stack=true
SLF4J: Class path contains multiple SLF4J bindings.
SLF4J: Found binding in [jar:file:/var/hadoop/tmp/yarn/nm-local-dir/usercache/testuser/filecache/13/spark-assembly-1.3.1-hadoop2.6.0.jar!/org/slf4j/impl/StaticLoggerBinder.class]
SLF4J: Found binding in [jar:file:/var/hadoop/share/hadoop/common/lib/slf4j-log4j12-1.7.5.jar!/org/slf4j/impl/StaticLoggerBinder.class]
SLF4J: See http://www.slf4j.org/codes.html#multiple_bindings for an explanation.
SLF4J: Actual binding is of type [org.slf4j.impl.Log4jLoggerFactory]
15/05/19 19:53:09 INFO yarn.ApplicationMaster: Registered signal handlers for [TERM, HUP, INT]
15/05/19 19:53:11 WARN util.NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
15/05/19 19:53:12 INFO yarn.ApplicationMaster: ApplicationAttemptId: appattempt_1432064564266_0003_000001
15/05/19 19:53:14 INFO spark.SecurityManager: Changing view acls to: testuser
15/05/19 19:53:14 INFO spark.SecurityManager: Changing modify acls to: testuser
15/05/19 19:53:14 INFO spark.SecurityManager: SecurityManager: authentication enabled; ui acls disabled; users with view permissions: Set(testuser); users with modify permissions: Set(testuser)
15/05/19 19:53:15 INFO slf4j.Slf4jLogger: Slf4jLogger started
15/05/19 19:53:15 INFO Remoting: Starting remoting
15/05/19 19:53:16 INFO Remoting: Remoting started; listening on addresses :[akka.tcp://sparkYarnAM@ip-10-10-128-10.ec2.internal:56977]
15/05/19 19:53:16 INFO util.Utils: Successfully started service 'sparkYarnAM' on port 56977.
15/05/19 19:53:16 INFO yarn.ApplicationMaster: Waiting for Spark driver to be reachable.
15/05/19 19:53:16 INFO yarn.ApplicationMaster: Driver now available: ip-10-10-127-10.ec2.internal:8001
15/05/19 19:53:16 INFO yarn.ApplicationMaster: Listen to driver: akka.tcp://sparkDriver@ip-10-10-127-10.ec2.internal:8001/user/YarnScheduler
15/05/19 19:53:16 INFO yarn.ApplicationMaster: Add WebUI Filter. AddWebUIFilter(org.apache.hadoop.yarn.server.webproxy.amfilter.AmIpFilter,Map(PROXY_HOSTS -> ip-10-10-127-10.ec2.internal, PROXY_URI_BASES -> https://ip-10-10-127-10.ec2.internal:8090/proxy/application_1432064564266_0003),/proxy/application_1432064564266_0003)
15/05/19 19:53:16 INFO client.RMProxy: Connecting to ResourceManager at /10.10.127.10:8030
15/05/19 19:53:16 INFO yarn.YarnRMClient: Registering the ApplicationMaster
15/05/19 19:53:17 INFO yarn.YarnAllocator: Will request 1 executor containers, each with 1 cores and 1408 MB memory including 384 MB overhead
15/05/19 19:53:17 INFO yarn.YarnAllocator: Container request (host: Any, capability: <memory:1408, vCores:1>)
15/05/19 19:53:17 INFO yarn.ApplicationMaster: Started progress reporter thread - sleep time : 5000
15/05/19 19:53:18 INFO yarn.ApplicationMaster: Driver terminated or disconnected! Shutting down. Disassociated [akka.tcp://sparkYarnAM@ip-10-10-128-10.ec2.internal:56977] -> [akka.tcp://sparkDriver@ip-10-10-127-10.ec2.internal:8001]
15/05/19 19:53:18 INFO yarn.ApplicationMaster: Final app status: SUCCEEDED, exitCode: 0
15/05/19 19:53:18 WARN remote.ReliableDeliverySupervisor: Association with remote system [akka.tcp://sparkDriver@ip-10-10-127-10.ec2.internal:8001] has failed, address is now gated for [5000] ms. Reason is: [Disassociated].
15/05/19 19:53:18 INFO yarn.ApplicationMaster: Driver terminated or disconnected! Shutting down. Disassociated [akka.tcp://sparkYarnAM@ip-10-10-128-10.ec2.internal:56977] -> [akka.tcp://sparkDriver@ip-10-10-127-10.ec2.internal:8001]
15/05/19 19:53:18 INFO yarn.ApplicationMaster: Unregistering ApplicationMaster with SUCCEEDED
15/05/19 19:53:18 INFO impl.AMRMClientImpl: Waiting for application to be successfully unregistered.
[yarn@ip-10-10-128-10 container_1432064564266_0003_01_000001]$



###: YARN NODEMANAGER LOG RESULTS :###
######################################
2015-05-19 19:52:53,729 INFO SecurityLogger.org.apache.hadoop.ipc.Server: Auth successful for appattempt_1432064564266_0003_000001 (auth:SIMPLE)
2015-05-19 19:52:53,746 WARN org.apache.hadoop.security.ShellBasedUnixGroupsMapping: got exception trying to get groups for user appattempt_1432064564266_0003_000001: id: appattempt_1432064564266_0003_000001: no such user

2015-05-19 19:52:53,746 WARN org.apache.hadoop.security.UserGroupInformation: No groups available for user appattempt_1432064564266_0003_000001
2015-05-19 19:52:53,746 INFO SecurityLogger.org.apache.hadoop.security.authorize.ServiceAuthorizationManager: Authorization successful for appattempt_1432064564266_0003_000001 (auth:TOKEN) for protocol=interface org.apache.hadoop.yarn.api.ContainerManagementProtocolPB
2015-05-19 19:52:53,747 INFO org.apache.hadoop.yarn.server.nodemanager.containermanager.ContainerManagerImpl: Start request for container_1432064564266_0003_01_000001 by user testuser
2015-05-19 19:52:53,747 INFO org.apache.hadoop.yarn.server.nodemanager.containermanager.ContainerManagerImpl: Creating a new application reference for app application_1432064564266_0003
2015-05-19 19:52:53,747 INFO org.apache.hadoop.yarn.server.nodemanager.NMAuditLogger: USER=testuser     IP=10.10.127.10 OPERATION=Start Container Request       TARGET=ContainerManageImpl  RESULT=SUCCESS   APPID=application_1432064564266_0003    CONTAINERID=container_1432064564266_0003_01_000001
2015-05-19 19:52:53,748 INFO org.apache.hadoop.yarn.server.nodemanager.containermanager.application.Application: Application application_1432064564266_0003 transitioned from NEW to INITING
2015-05-19 19:52:53,748 INFO org.apache.hadoop.yarn.server.nodemanager.containermanager.application.Application: Adding container_1432064564266_0003_01_000001 to application application_1432064564266_0003
2015-05-19 19:52:53,748 INFO org.apache.hadoop.yarn.server.nodemanager.containermanager.application.Application: Application application_1432064564266_0003 transitioned from INITING to RUNNING
2015-05-19 19:52:53,748 INFO org.apache.hadoop.yarn.server.nodemanager.containermanager.container.Container: Container container_1432064564266_0003_01_000001 transitioned from NEW to LOCALIZING
2015-05-19 19:52:53,748 INFO org.apache.hadoop.yarn.server.nodemanager.containermanager.AuxServices: Got event CONTAINER_INIT for appId application_1432064564266_0003
2015-05-19 19:52:53,748 INFO org.apache.hadoop.yarn.server.nodemanager.containermanager.localizer.LocalizedResource: Resource hdfs://10.10.10.10:8020/user/testuser/.sparkStaging/application_1432064564266_0003/spark-assembly-1.3.1-hadoop2.6.0.jar transitioned from INIT to DOWNLOADING
2015-05-19 19:52:53,748 INFO org.apache.hadoop.yarn.server.nodemanager.containermanager.localizer.ResourceLocalizationService: Created localizer for container_1432064564266_0003_01_000001
2015-05-19 19:52:53,763 INFO org.apache.hadoop.yarn.server.nodemanager.containermanager.localizer.ResourceLocalizationService: Writing credentials to the nmPrivate file /var/hadoop/tmp/yarn/nm-local-dir/nmPrivate/container_1432064564266_0003_01_000001.tokens. Credentials list:
2015-05-19 19:52:58,614 INFO SecurityLogger.org.apache.hadoop.ipc.Server: Auth successful for testing (auth:SIMPLE)
2015-05-19 19:52:58,685 INFO SecurityLogger.org.apache.hadoop.security.authorize.ServiceAuthorizationManager: Authorization successful for testing (auth:TOKEN) for protocol=interface org.apache.hadoop.yarn.server.nodemanager.api.LocalizationProtocolPB
2015-05-19 19:53:08,133 INFO org.apache.hadoop.yarn.server.nodemanager.containermanager.localizer.LocalizedResource: Resource hdfs://10.10.10.10:8020/user/testuser/.sparkStaging/application_1432064564266_0003/spark-assembly-1.3.1-hadoop2.6.0.jar(->/var/hadoop/tmp/yarn/nm-local-dir/usercache/testuser/filecache/13/spark-assembly-1.3.1-hadoop2.6.0.jar) transitioned from DOWNLOADING to LOCALIZED
2015-05-19 19:53:08,134 INFO org.apache.hadoop.yarn.server.nodemanager.containermanager.container.Container: Container container_1432064564266_0003_01_000001 transitioned from LOCALIZING to LOCALIZED
2015-05-19 19:53:08,217 INFO org.apache.hadoop.yarn.server.nodemanager.containermanager.container.Container: Container container_1432064564266_0003_01_000001 transitioned from LOCALIZED to RUNNING
2015-05-19 19:53:10,044 INFO org.apache.hadoop.yarn.server.nodemanager.containermanager.monitor.ContainersMonitorImpl: Starting resource-monitoring for container_1432064564266_0003_01_000001
2015-05-19 19:53:10,092 INFO org.apache.hadoop.yarn.server.nodemanager.containermanager.monitor.ContainersMonitorImpl: Memory usage of ProcessTree 20620 for container-id container_1432064564266_0003_01_000001: 79.5 MB of 1 GB physical memory used; 1.1 GB of 2.1 GB virtual memory used
2015-05-19 19:53:13,105 INFO org.apache.hadoop.yarn.server.nodemanager.containermanager.monitor.ContainersMonitorImpl: Memory usage of ProcessTree 20620 for container-id container_1432064564266_0003_01_000001: 99.8 MB of 1 GB physical memory used; 1.1 GB of 2.1 GB virtual memory used
2015-05-19 19:53:16,122 INFO org.apache.hadoop.yarn.server.nodemanager.containermanager.monitor.ContainersMonitorImpl: Memory usage of ProcessTree 20620 for container-id container_1432064564266_0003_01_000001: 121.4 MB of 1 GB physical memory used; 1.1 GB of 2.1 GB virtual memory used
2015-05-19 19:53:19,129 INFO org.apache.hadoop.yarn.server.nodemanager.containermanager.monitor.ContainersMonitorImpl: Memory usage of ProcessTree 20620 for container-id container_1432064564266_0003_01_000001: 125.8 MB of 1 GB physical memory used; 1.1 GB of 2.1 GB virtual memory used
2015-05-19 19:53:19,402 INFO org.apache.hadoop.yarn.server.nodemanager.containermanager.launcher.ContainerLaunch: Container container_1432064564266_0003_01_000001 succeeded
2015-05-19 19:53:19,402 INFO org.apache.hadoop.yarn.server.nodemanager.containermanager.container.Container: Container container_1432064564266_0003_01_000001 transitioned from RUNNING to EXITED_WITH_SUCCESS
2015-05-19 19:53:19,402 INFO org.apache.hadoop.yarn.server.nodemanager.containermanager.launcher.ContainerLaunch: Cleaning up container container_1432064564266_0003_01_000001
2015-05-19 19:53:19,457 INFO org.apache.hadoop.yarn.server.nodemanager.NMAuditLogger: USER=testuser     OPERATION=Container Finished - Succeeded        TARGET=ContainerImpl    RESULT=SUCCESS     APPID=application_1432064564266_0003    CONTAINERID=container_1432064564266_0003_01_000001
2015-05-19 19:53:19,457 INFO org.apache.hadoop.yarn.server.nodemanager.containermanager.container.Container: Container container_1432064564266_0003_01_000001 transitioned from EXITED_WITH_SUCCESS to DONE
2015-05-19 19:53:19,457 INFO org.apache.hadoop.yarn.server.nodemanager.containermanager.application.Application: Removing container_1432064564266_0003_01_000001 from application application_1432064564266_0003
2015-05-19 19:53:19,458 INFO org.apache.hadoop.yarn.server.nodemanager.containermanager.AuxServices: Got event CONTAINER_STOP for appId application_1432064564266_0003
2015-05-19 19:53:19,458 INFO org.apache.hadoop.yarn.server.nodemanager.LinuxContainerExecutor: Deleting absolute path : /var/hadoop/tmp/yarn/nm-local-dir/usercache/testuser/appcache/application_1432064564266_0003/container_1432064564266_0003_01_000001
2015-05-19 19:53:20,428 INFO SecurityLogger.org.apache.hadoop.ipc.Server: Auth successful for appattempt_1432064564266_0003_000001 (auth:SIMPLE)
2015-05-19 19:53:20,432 WARN org.apache.hadoop.security.UserGroupInformation: No groups available for user appattempt_1432064564266_0003_000001
2015-05-19 19:53:20,432 INFO SecurityLogger.org.apache.hadoop.security.authorize.ServiceAuthorizationManager: Authorization successful for appattempt_1432064564266_0003_000001 (auth:TOKEN) for protocol=interface org.apache.hadoop.yarn.api.ContainerManagementProtocolPB
2015-05-19 19:53:20,432 INFO org.apache.hadoop.yarn.server.nodemanager.containermanager.ContainerManagerImpl: Stopping container with container Id: container_1432064564266_0003_01_000001
2015-05-19 19:53:20,432 INFO org.apache.hadoop.yarn.server.nodemanager.NMAuditLogger: USER=testuser     IP=10.10.127.10 OPERATION=Stop Container Request        TARGET=ContainerManageImpl  RESULT=SUCCESS   APPID=application_1432064564266_0003    CONTAINERID=container_1432064564266_0003_01_000001
2015-05-19 19:53:20,442 INFO org.apache.hadoop.yarn.server.nodemanager.NodeStatusUpdaterImpl: Removed completed containers from NM context: [container_1432064564266_0003_01_000001]
2015-05-19 19:53:20,442 INFO org.apache.hadoop.yarn.server.nodemanager.containermanager.application.Application: Application application_1432064564266_0003 transitioned from RUNNING to APPLICATION_RESOURCES_CLEANINGUP
2015-05-19 19:53:20,442 INFO org.apache.hadoop.yarn.server.nodemanager.containermanager.AuxServices: Got event APPLICATION_STOP for appId application_1432064564266_0003
2015-05-19 19:53:20,442 INFO org.apache.hadoop.yarn.server.nodemanager.containermanager.application.Application: Application application_1432064564266_0003 transitioned from APPLICATION_RESOURCES_CLEANINGUP to FINISHED
2015-05-19 19:53:20,443 INFO org.apache.hadoop.yarn.server.nodemanager.containermanager.loghandler.NonAggregatingLogHandler: Scheduling Log Deletion for application: application_1432064564266_0003, with delay of 10800 seconds
2015-05-19 19:53:20,443 INFO org.apache.hadoop.yarn.server.nodemanager.LinuxContainerExecutor: Deleting absolute path : /var/hadoop/tmp/yarn/nm-local-dir/usercache/testuser/appcache/application_1432064564266_0003
2015-05-19 19:53:22,130 INFO org.apache.hadoop.yarn.server.nodemanager.containermanager.monitor.ContainersMonitorImpl: Stopping resource-monitoring for container_1432064564266_0003_01_000001
root@ip-10-10-128-10:/var/log/hadoop>


###: YARN RESOURCE MANAGER LOGS :###
####################################
2015-05-19 19:52:45,408 INFO SecurityLogger.org.apache.hadoop.ipc.Server: Auth successful for testuser@MALARD.LOCAL (auth:KERBEROS)
2015-05-19 19:52:45,447 INFO SecurityLogger.org.apache.hadoop.security.authorize.ServiceAuthorizationManager: Authorization successful for testuser@MALARD.LOCAL (auth:KERBEROS) for protocol=interface org.apache.hadoop.yarn.api.ApplicationClientProtocolPB
2015-05-19 19:52:45,494 INFO org.apache.hadoop.yarn.server.resourcemanager.ClientRMService: Allocated new applicationId: 3
2015-05-19 19:52:52,758 WARN org.apache.hadoop.yarn.server.resourcemanager.rmapp.RMAppImpl: The specific max attempts: 0 for application: 3 is invalid, because it is out of the range [1, 2]. Use the global max attempts instead.
2015-05-19 19:52:52,758 INFO org.apache.hadoop.yarn.server.resourcemanager.ClientRMService: Application with id 3 submitted by user testuser
2015-05-19 19:52:52,758 INFO org.apache.hadoop.yarn.server.resourcemanager.RMAuditLogger: USER=testuser IP=10.10.127.10 OPERATION=Submit Application Request    TARGET=ClientRMService  RESULT=SUCCESS     APPID=application_1432064564266_0003
2015-05-19 19:52:52,803 INFO org.apache.hadoop.yarn.server.resourcemanager.security.DelegationTokenRenewer: application_1432064564266_0003 found existing hdfs token Kind: HDFS_DELEGATION_TOKEN, Service: 10.10.10.10:8020, Ident: (HDFS_DELEGATION_TOKEN token 5 for testuser)
2015-05-19 19:52:52,928 INFO org.apache.hadoop.yarn.server.resourcemanager.security.DelegationTokenRenewer: Renewed delegation-token= [Kind: HDFS_DELEGATION_TOKEN, Service: 10.10.10.10:8020, Ident: (HDFS_DELEGATION_TOKEN token 5 for testuser);exp=1432151572908], for application_1432064564266_0003
2015-05-19 19:52:52,928 INFO org.apache.hadoop.yarn.server.resourcemanager.security.DelegationTokenRenewer: Renew Kind: HDFS_DELEGATION_TOKEN, Service: 10.10.10.10:8020, Ident: (HDFS_DELEGATION_TOKEN token 5 for testuser);exp=1432151572908 in 86399980 ms, appId = application_1432064564266_0003
2015-05-19 19:52:52,928 INFO org.apache.hadoop.yarn.server.resourcemanager.rmapp.RMAppImpl: Storing application with id application_1432064564266_0003
2015-05-19 19:52:52,928 INFO org.apache.hadoop.yarn.server.resourcemanager.rmapp.RMAppImpl: application_1432064564266_0003 State change from NEW to NEW_SAVING
2015-05-19 19:52:52,928 INFO org.apache.hadoop.yarn.server.resourcemanager.recovery.RMStateStore: Storing info for app: application_1432064564266_0003
2015-05-19 19:52:52,928 INFO org.apache.hadoop.yarn.server.resourcemanager.rmapp.RMAppImpl: application_1432064564266_0003 State change from NEW_SAVING to SUBMITTED
2015-05-19 19:52:52,929 INFO org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.ParentQueue: Application added - appId: application_1432064564266_0003 user: testuser leaf-queue of parent: root #applications: 1
2015-05-19 19:52:52,929 INFO org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.CapacityScheduler: Accepted application application_1432064564266_0003 from user: testuser, in queue: default
2015-05-19 19:52:52,929 INFO org.apache.hadoop.yarn.server.resourcemanager.rmapp.RMAppImpl: application_1432064564266_0003 State change from SUBMITTED to ACCEPTED
2015-05-19 19:52:52,929 INFO org.apache.hadoop.yarn.server.resourcemanager.ApplicationMasterService: Registering app attempt : appattempt_1432064564266_0003_000001
2015-05-19 19:52:52,929 INFO org.apache.hadoop.yarn.server.resourcemanager.rmapp.attempt.RMAppAttemptImpl: appattempt_1432064564266_0003_000001 State change from NEW to SUBMITTED
2015-05-19 19:52:52,929 INFO org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.LeafQueue: Application application_1432064564266_0003 from user: testuser activated in queue: default
2015-05-19 19:52:52,929 INFO org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.LeafQueue: Application added - appId: application_1432064564266_0003 user: org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.LeafQueue$User@72fa0d32, leaf-queue: default #user-pending-applications: 0 #user-active-applications: 1 #queue-pending-applications: 0 #queue-active-applications: 1
2015-05-19 19:52:52,929 INFO org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.CapacityScheduler: Added Application Attempt appattempt_1432064564266_0003_000001 to scheduler from user testuser in queue default
2015-05-19 19:52:52,930 INFO org.apache.hadoop.yarn.server.resourcemanager.rmapp.attempt.RMAppAttemptImpl: appattempt_1432064564266_0003_000001 State change from SUBMITTED to SCHEDULED
2015-05-19 19:52:53,718 INFO org.apache.hadoop.yarn.server.resourcemanager.rmcontainer.RMContainerImpl: container_1432064564266_0003_01_000001 Container Transitioned from NEW to ALLOCATED
2015-05-19 19:52:53,718 INFO org.apache.hadoop.yarn.server.resourcemanager.RMAuditLogger: USER=testuser OPERATION=AM Allocated Container        TARGET=SchedulerApp     RESULT=SUCCESS  APPID=application_1432064564266_0003       CONTAINERID=container_1432064564266_0003_01_000001
2015-05-19 19:52:53,718 INFO org.apache.hadoop.yarn.server.resourcemanager.scheduler.SchedulerNode: Assigned container container_1432064564266_0003_01_000001 of capacity <memory:1024, vCores:1> on host ip-10-10-128-10.ec2.internal:9032, which has 1 containers, <memory:1024, vCores:1> used and <memory:7168, vCores:7> available after allocation
2015-05-19 19:52:53,718 INFO org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.LeafQueue: assignedContainer application attempt=appattempt_1432064564266_0003_000001 container=Container: [ContainerId: container_1432064564266_0003_01_000001, NodeId: ip-10-10-128-10.ec2.internal:9032, NodeHttpAddress: ip-10-10-128-10.ec2.internal:8090, Resource: <memory:1024, vCores:1>, Priority: 0, Token: null, ] queue=default: capacity=1.0, absoluteCapacity=1.0, usedResources=<memory:0, vCores:0>, usedCapacity=0.0, absoluteUsedCapacity=0.0, numApps=1, numContainers=0 clusterResource=<memory:8192, vCores:8>
2015-05-19 19:52:53,718 INFO org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.ParentQueue: Re-sorting assigned queue: root.default stats: default: capacity=1.0, absoluteCapacity=1.0, usedResources=<memory:1024, vCores:1>, usedCapacity=0.125, absoluteUsedCapacity=0.125, numApps=1, numContainers=1
2015-05-19 19:52:53,718 INFO org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.ParentQueue: assignedContainer queue=root usedCapacity=0.125 absoluteUsedCapacity=0.125 used=<memory:1024, vCores:1> cluster=<memory:8192, vCores:8>
2015-05-19 19:52:53,719 INFO org.apache.hadoop.yarn.server.resourcemanager.security.NMTokenSecretManagerInRM: Sending NMToken for nodeId : ip-10-10-128-10.ec2.internal:9032 for container : container_1432064564266_0003_01_000001
2015-05-19 19:52:53,720 INFO org.apache.hadoop.yarn.server.resourcemanager.rmcontainer.RMContainerImpl: container_1432064564266_0003_01_000001 Container Transitioned from ALLOCATED to ACQUIRED
2015-05-19 19:52:53,720 INFO org.apache.hadoop.yarn.server.resourcemanager.security.NMTokenSecretManagerInRM: Clear node set for appattempt_1432064564266_0003_000001
2015-05-19 19:52:53,721 INFO org.apache.hadoop.yarn.server.resourcemanager.rmapp.attempt.RMAppAttemptImpl: Storing attempt: AppId: application_1432064564266_0003 AttemptId: appattempt_1432064564266_0003_000001 MasterContainer: Container: [ContainerId: container_1432064564266_0003_01_000001, NodeId: ip-10-10-128-10.ec2.internal:9032, NodeHttpAddress: ip-10-10-128-10.ec2.internal:8090, Resource: <memory:1024, vCores:1>, Priority: 0, Token: Token { kind: ContainerToken, service: 10.10.128.10:9032 }, ]
2015-05-19 19:52:53,721 INFO org.apache.hadoop.yarn.server.resourcemanager.rmapp.attempt.RMAppAttemptImpl: appattempt_1432064564266_0003_000001 State change from SCHEDULED to ALLOCATED_SAVING
2015-05-19 19:52:53,721 INFO org.apache.hadoop.yarn.server.resourcemanager.rmapp.attempt.RMAppAttemptImpl: appattempt_1432064564266_0003_000001 State change from ALLOCATED_SAVING to ALLOCATED
2015-05-19 19:52:53,723 INFO org.apache.hadoop.yarn.server.resourcemanager.amlauncher.AMLauncher: Launching masterappattempt_1432064564266_0003_000001
2015-05-19 19:52:53,725 INFO org.apache.hadoop.yarn.server.resourcemanager.amlauncher.AMLauncher: Setting up container Container: [ContainerId: container_1432064564266_0003_01_000001, NodeId: ip-10-10-128-10.ec2.internal:9032, NodeHttpAddress: ip-10-10-128-10.ec2.internal:8090, Resource: <memory:1024, vCores:1>, Priority: 0, Token: Token { kind: ContainerToken, service: 10.10.128.10:9032 }, ] for AM appattempt_1432064564266_0003_000001
2015-05-19 19:52:53,725 INFO org.apache.hadoop.yarn.server.resourcemanager.amlauncher.AMLauncher: Command to launch container container_1432064564266_0003_01_000001 : {{JAVA_HOME}}/bin/java,-server,-Xmx512m,-Djava.io.tmpdir={{PWD}}/tmp,'-Dspark.fileserver.uri=http://10.10.127.10:8003','-Dspark.broadcast.port=8004','-Dspark.executor.port=8002','-Dspark.driver.appUIAddress=http://ip-10-10-127-10.ec2.internal:4040','-Dspark.fileserver.port=8003','-Dspark.ui.port=4040','-Dspark.tachyonStore.folderName=spark-ea0a49a2-1643-4410-892d-690c62cb6857','-Dspark.driver.extraJavaOptions=-Djava.net.preferIPv4Stack=true','-Dspark.jars=file:/home/testuser/spark/lib/spark-examples-1.3.1-hadoop2.6.0.jar','-Dspark.app.name=Spark Pi','-Dspark.akka.threads=1','-Dspark.authenticate.secret=fubar','-Dspark.eventlog.enabled=true','-Dspark.driver.host=ip-10-10-127-10.ec2.internal','-Dspark.blockManager.port=8007','-Dspark.history.ui.port=18080','-Dspark.replClassServer.port=8006','-Dspark.worker.ui.port=8081','-Dspark.master=yarn-client','-Dspark.yarn.preserve.staging.files=true','-Dspark.executor.extraJavaOptions=-XX:+PrintGCDetails -Dkey=value -Dnumbers=\"one two three\"','-Dspark.yarn.access.namenodes=hdfs://10.10.10.10:8020','-Dspark.logConf=true','-Dspark.driver.port=8001','-Dspark.executor.id=<driver>','-Dspark.akka.heartbeat.interval=100','-Dspark.master.ui.port=8080','-Dspark.yarn.am.waitTime=200000','-Dspark.yarn.am.extraJavaOptions=-Djava.net.preferIPv4Stack=true','-Dspark.executor.instances=1','-Dspark.yarn.am.port=8008','-Dspark.eventlog.dir=hdfs://10.10.10.10:8020/user/testuser/spark/eventlog','-Dspark.executor.cores=1','-Dspark.authenticate=true','-Djava.net.preferIPv4Stack=true',-Dspark.yarn.app.container.log.dir=<LOG_DIR>,org.apache.spark.deploy.yarn.ExecutorLauncher,--arg,'ip-10-10-127-10.ec2.internal:8001',--executor-memory,1024m,--executor-cores,1,--num-executors ,1,1>,<LOG_DIR>/stdout,2>,<LOG_DIR>/stderr
2015-05-19 19:52:53,725 INFO org.apache.hadoop.yarn.server.resourcemanager.security.AMRMTokenSecretManager: Create AMRMToken for ApplicationAttempt: appattempt_1432064564266_0003_000001
2015-05-19 19:52:53,725 INFO org.apache.hadoop.yarn.server.resourcemanager.security.AMRMTokenSecretManager: Creating password for appattempt_1432064564266_0003_000001
2015-05-19 19:52:53,750 INFO org.apache.hadoop.yarn.server.resourcemanager.amlauncher.AMLauncher: Done launching container Container: [ContainerId: container_1432064564266_0003_01_000001, NodeId: ip-10-10-128-10.ec2.internal:9032, NodeHttpAddress: ip-10-10-128-10.ec2.internal:8090, Resource: <memory:1024, vCores:1>, Priority: 0, Token: Token { kind: ContainerToken, service: 10.10.128.10:9032 }, ] for AM appattempt_1432064564266_0003_000001
2015-05-19 19:52:53,750 INFO org.apache.hadoop.yarn.server.resourcemanager.rmapp.attempt.RMAppAttemptImpl: appattempt_1432064564266_0003_000001 State change from ALLOCATED to LAUNCHED
2015-05-19 19:52:54,733 INFO org.apache.hadoop.yarn.server.resourcemanager.rmcontainer.RMContainerImpl: container_1432064564266_0003_01_000001 Container Transitioned from ACQUIRED to RUNNING
2015-05-19 19:53:17,318 INFO SecurityLogger.org.apache.hadoop.ipc.Server: Auth successful for appattempt_1432064564266_0003_000001 (auth:SIMPLE)
2015-05-19 19:53:17,376 WARN org.apache.hadoop.security.ShellBasedUnixGroupsMapping: got exception trying to get groups for user appattempt_1432064564266_0003_000001: id: appattempt_1432064564266_0003_000001: no such user

2015-05-19 19:53:17,376 WARN org.apache.hadoop.security.UserGroupInformation: No groups available for user appattempt_1432064564266_0003_000001
2015-05-19 19:53:17,376 INFO SecurityLogger.org.apache.hadoop.security.authorize.ServiceAuthorizationManager: Authorization successful for appattempt_1432064564266_0003_000001 (auth:TOKEN) for protocol=interface org.apache.hadoop.yarn.api.ApplicationMasterProtocolPB
2015-05-19 19:53:17,377 INFO org.apache.hadoop.yarn.server.resourcemanager.ApplicationMasterService: AM registration appattempt_1432064564266_0003_000001
2015-05-19 19:53:17,377 INFO org.apache.hadoop.yarn.server.resourcemanager.RMAuditLogger: USER=testuser IP=10.10.128.10 OPERATION=Register App Master   TARGET=ApplicationMasterService RESULT=SUCCESS     APPID=application_1432064564266_0003    APPATTEMPTID=appattempt_1432064564266_0003_000001
2015-05-19 19:53:17,377 INFO org.apache.hadoop.yarn.server.resourcemanager.ApplicationMasterService: Setting client token master key
2015-05-19 19:53:17,378 INFO org.apache.hadoop.yarn.server.resourcemanager.rmapp.attempt.RMAppAttemptImpl: appattempt_1432064564266_0003_000001 State change from LAUNCHED to RUNNING
2015-05-19 19:53:17,378 INFO org.apache.hadoop.yarn.server.resourcemanager.rmapp.RMAppImpl: application_1432064564266_0003 State change from ACCEPTED to RUNNING
2015-05-19 19:53:18,377 INFO org.apache.hadoop.yarn.server.resourcemanager.rmcontainer.RMContainerImpl: container_1432064564266_0003_01_000002 Container Transitioned from NEW to ALLOCATED
2015-05-19 19:53:18,377 INFO org.apache.hadoop.yarn.server.resourcemanager.RMAuditLogger: USER=testuser OPERATION=AM Allocated Container        TARGET=SchedulerApp     RESULT=SUCCESS  APPID=application_1432064564266_0003       CONTAINERID=container_1432064564266_0003_01_000002
2015-05-19 19:53:18,377 INFO org.apache.hadoop.yarn.server.resourcemanager.scheduler.SchedulerNode: Assigned container container_1432064564266_0003_01_000002 of capacity <memory:2048, vCores:1> on host ip-10-10-128-10.ec2.internal:9032, which has 2 containers, <memory:3072, vCores:2> used and <memory:5120, vCores:6> available after allocation
2015-05-19 19:53:18,377 INFO org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.LeafQueue: assignedContainer application attempt=appattempt_1432064564266_0003_000001 container=Container: [ContainerId: container_1432064564266_0003_01_000002, NodeId: ip-10-10-128-10.ec2.internal:9032, NodeHttpAddress: ip-10-10-128-10.ec2.internal:8090, Resource: <memory:2048, vCores:1>, Priority: 1, Token: null, ] queue=default: capacity=1.0, absoluteCapacity=1.0, usedResources=<memory:1024, vCores:1>, usedCapacity=0.125, absoluteUsedCapacity=0.125, numApps=1, numContainers=1 clusterResource=<memory:8192, vCores:8>
2015-05-19 19:53:18,377 INFO org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.ParentQueue: Re-sorting assigned queue: root.default stats: default: capacity=1.0, absoluteCapacity=1.0, usedResources=<memory:3072, vCores:2>, usedCapacity=0.375, absoluteUsedCapacity=0.375, numApps=1, numContainers=2
2015-05-19 19:53:18,377 INFO org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.ParentQueue: assignedContainer queue=root usedCapacity=0.375 absoluteUsedCapacity=0.375 used=<memory:3072, vCores:2> cluster=<memory:8192, vCores:8>
2015-05-19 19:53:18,967 INFO org.apache.hadoop.yarn.server.resourcemanager.rmapp.attempt.RMAppAttemptImpl: Updating application attempt appattempt_1432064564266_0003_000001 with final state: FINISHING, and exit status: -1000
2015-05-19 19:53:18,967 INFO org.apache.hadoop.yarn.server.resourcemanager.rmapp.attempt.RMAppAttemptImpl: appattempt_1432064564266_0003_000001 State change from RUNNING to FINAL_SAVING
2015-05-19 19:53:18,967 INFO org.apache.hadoop.yarn.server.resourcemanager.rmapp.RMAppImpl: Updating application application_1432064564266_0003 with final state: FINISHING
2015-05-19 19:53:18,967 INFO org.apache.hadoop.yarn.server.resourcemanager.rmapp.RMAppImpl: application_1432064564266_0003 State change from RUNNING to FINAL_SAVING
2015-05-19 19:53:18,967 INFO org.apache.hadoop.yarn.server.resourcemanager.recovery.RMStateStore: Updating info for app: application_1432064564266_0003
2015-05-19 19:53:18,968 INFO org.apache.hadoop.yarn.server.resourcemanager.rmapp.attempt.RMAppAttemptImpl: appattempt_1432064564266_0003_000001 State change from FINAL_SAVING to FINISHING
2015-05-19 19:53:18,968 INFO org.apache.hadoop.yarn.server.resourcemanager.rmapp.RMAppImpl: application_1432064564266_0003 State change from FINAL_SAVING to FINISHING
2015-05-19 19:53:19,076 INFO org.apache.hadoop.yarn.server.resourcemanager.ApplicationMasterService: application_1432064564266_0003 unregistered successfully.
2015-05-19 19:53:20,383 INFO org.apache.hadoop.yarn.server.resourcemanager.rmcontainer.RMContainerImpl: container_1432064564266_0003_01_000001 Container Transitioned from RUNNING to COMPLETED
2015-05-19 19:53:20,384 INFO org.apache.hadoop.yarn.server.resourcemanager.scheduler.common.fica.FiCaSchedulerApp: Completed container: container_1432064564266_0003_01_000001 in state: COMPLETED event:FINISHED
2015-05-19 19:53:20,384 INFO org.apache.hadoop.yarn.server.resourcemanager.RMAuditLogger: USER=testuser OPERATION=AM Released Container TARGET=SchedulerApp     RESULT=SUCCESS  APPID=application_1432064564266_0003       CONTAINERID=container_1432064564266_0003_01_000001
2015-05-19 19:53:20,384 INFO org.apache.hadoop.yarn.server.resourcemanager.scheduler.SchedulerNode: Released container container_1432064564266_0003_01_000001 of capacity <memory:1024, vCores:1> on host ip-10-10-128-10.ec2.internal:9032, which currently has 1 containers, <memory:2048, vCores:1> used and <memory:6144, vCores:7> available, release resources=true
2015-05-19 19:53:20,384 INFO org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.LeafQueue: default used=<memory:2048, vCores:1> numContainers=1 user=testuser user-resources=<memory:2048, vCores:1>
2015-05-19 19:53:20,384 INFO org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.LeafQueue: completedContainer container=Container: [ContainerId: container_1432064564266_0003_01_000001, NodeId: ip-10-10-128-10.ec2.internal:9032, NodeHttpAddress: ip-10-10-128-10.ec2.internal:8090, Resource: <memory:1024, vCores:1>, Priority: 0, Token: Token { kind: ContainerToken, service: 10.10.128.10:9032 }, ] queue=default: capacity=1.0, absoluteCapacity=1.0, usedResources=<memory:2048, vCores:1>, usedCapacity=0.25, absoluteUsedCapacity=0.25, numApps=1, numContainers=1 cluster=<memory:8192, vCores:8>
2015-05-19 19:53:20,384 INFO org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.ParentQueue: completedContainer queue=root usedCapacity=0.25 absoluteUsedCapacity=0.25 used=<memory:2048, vCores:1> cluster=<memory:8192, vCores:8>
2015-05-19 19:53:20,384 INFO org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.ParentQueue: Re-sorting completed queue: root.default stats: default: capacity=1.0, absoluteCapacity=1.0, usedResources=<memory:2048, vCores:1>, usedCapacity=0.25, absoluteUsedCapacity=0.25, numApps=1, numContainers=1
2015-05-19 19:53:20,384 INFO org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.CapacityScheduler: Application attempt appattempt_1432064564266_0003_000001 released container container_1432064564266_0003_01_000001 on node: host: ip-10-10-128-10.ec2.internal:9032 #containers=1 available=6144 used=2048 with event: FINISHED
2015-05-19 19:53:20,384 INFO org.apache.hadoop.yarn.server.resourcemanager.ApplicationMasterService: Unregistering app attempt : appattempt_1432064564266_0003_000001
2015-05-19 19:53:20,385 INFO org.apache.hadoop.yarn.server.resourcemanager.security.AMRMTokenSecretManager: Application finished, removing password for appattempt_1432064564266_0003_000001
2015-05-19 19:53:20,385 INFO org.apache.hadoop.yarn.server.resourcemanager.rmapp.attempt.RMAppAttemptImpl: appattempt_1432064564266_0003_000001 State change from FINISHING to FINISHED
2015-05-19 19:53:20,385 INFO org.apache.hadoop.yarn.server.resourcemanager.rmapp.RMAppImpl: application_1432064564266_0003 State change from FINISHING to FINISHED
2015-05-19 19:53:20,385 INFO org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.CapacityScheduler: Application Attempt appattempt_1432064564266_0003_000001 is done. finalState=FINISHED
2015-05-19 19:53:20,387 INFO org.apache.hadoop.yarn.server.resourcemanager.rmcontainer.RMContainerImpl: container_1432064564266_0003_01_000002 Container Transitioned from ALLOCATED to KILLED
2015-05-19 19:53:20,387 INFO org.apache.hadoop.yarn.server.resourcemanager.scheduler.common.fica.FiCaSchedulerApp: Completed container: container_1432064564266_0003_01_000002 in state: KILLED event:KILL
2015-05-19 19:53:20,387 INFO org.apache.hadoop.yarn.server.resourcemanager.RMAuditLogger: USER=testuser OPERATION=AM Released Container TARGET=SchedulerApp     RESULT=SUCCESS  APPID=application_1432064564266_0003       CONTAINERID=container_1432064564266_0003_01_000002
2015-05-19 19:53:20,387 INFO org.apache.hadoop.yarn.server.resourcemanager.scheduler.SchedulerNode: Released container container_1432064564266_0003_01_000002 of capacity <memory:2048, vCores:1> on host ip-10-10-128-10.ec2.internal:9032, which currently has 0 containers, <memory:0, vCores:0> used and <memory:8192, vCores:8> available, release resources=true
2015-05-19 19:53:20,387 INFO org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.LeafQueue: default used=<memory:0, vCores:0> numContainers=0 user=testuser user-resources=<memory:0, vCores:0>
2015-05-19 19:53:20,387 INFO org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.LeafQueue: completedContainer container=Container: [ContainerId: container_1432064564266_0003_01_000002, NodeId: ip-10-10-128-10.ec2.internal:9032, NodeHttpAddress: ip-10-10-128-10.ec2.internal:8090, Resource: <memory:2048, vCores:1>, Priority: 1, Token: null, ] queue=default: capacity=1.0, absoluteCapacity=1.0, usedResources=<memory:0, vCores:0>, usedCapacity=0.0, absoluteUsedCapacity=0.0, numApps=1, numContainers=0 cluster=<memory:8192, vCores:8>
2015-05-19 19:53:20,387 INFO org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.ParentQueue: completedContainer queue=root usedCapacity=0.0 absoluteUsedCapacity=0.0 used=<memory:0, vCores:0> cluster=<memory:8192, vCores:8>
2015-05-19 19:53:20,387 INFO org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.ParentQueue: Re-sorting completed queue: root.default stats: default: capacity=1.0, absoluteCapacity=1.0, usedResources=<memory:0, vCores:0>, usedCapacity=0.0, absoluteUsedCapacity=0.0, numApps=1, numContainers=0
2015-05-19 19:53:20,387 INFO org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.CapacityScheduler: Application attempt appattempt_1432064564266_0003_000001 released container container_1432064564266_0003_01_000002 on node: host: ip-10-10-128-10.ec2.internal:9032 #containers=0 available=8192 used=0 with event: KILL
2015-05-19 19:53:20,387 INFO org.apache.hadoop.yarn.server.resourcemanager.scheduler.AppSchedulingInfo: Application application_1432064564266_0003 requests cleared
2015-05-19 19:53:20,388 INFO org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.LeafQueue: Application removed - appId: application_1432064564266_0003 user: testuser queue: default #user-pending-applications: 0 #user-active-applications: 0 #queue-pending-applications: 0 #queue-active-applications: 0
2015-05-19 19:53:20,388 INFO org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.ParentQueue: Application removed - appId: application_1432064564266_0003 user: testuser leaf-queue of parent: root #applications: 0
2015-05-19 19:53:20,388 INFO org.apache.hadoop.yarn.server.resourcemanager.RMAuditLogger: USER=testuser OPERATION=Application Finished - Succeeded      TARGET=RMAppManager     RESULT=SUCCESS     APPID=application_1432064564266_0003
2015-05-19 19:53:20,389 INFO org.apache.hadoop.yarn.server.resourcemanager.RMAppManager$ApplicationSummary: appId=application_1432064564266_0003,name=Spark Pi,user=testuser,queue=default,state=FINISHED,trackingUrl=https://ip-10-10-127-10.ec2.internal:8090/proxy/application_1432064564266_0003/A,appMasterHost=ip-10-10-128-10.ec2.internal,startTime=1432065172758,finishTime=1432065198967,finalStatus=SUCCEEDED
2015-05-19 19:53:20,389 INFO org.apache.hadoop.hdfs.DFSClient: Cancelling HDFS_DELEGATION_TOKEN token 5 for testuser on 10.10.10.10:8020
2015-05-19 19:53:20,390 INFO org.apache.hadoop.yarn.server.resourcemanager.amlauncher.AMLauncher: Cleaning master appattempt_1432064564266_0003_000001
2015-05-19 19:53:20,442 INFO org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.CapacityScheduler: Null container completed...
root@ip-10-10-127-10:/var/log/hadoop>

Keith Nance
Sr. Software Engineer
Email: knance@smartronix.com
Cell: 808-343-0071
www.smartronix.com


RE: SparkPi fails with Job aborted due to stage failure: Task serialization failed:

Posted by "Nance, Keith" <kn...@smartronix.com>.
Thank you, Rohith, for the reply; I wasn't exactly sure what to make of those log entries. I had assumed that, from YARN's perspective, the container was indeed set up and torn down cleanly. I'll send this to the Spark user mailing list, since the question makes better sense in that forum.
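
For anyone who hits the same CompressionCodec$.createCodec InvocationTargetException on the driver, one cheap check (an assumption on my part, not something confirmed in this thread) is to take the default snappy codec out of the picture and rerun the example with the pure-Java lzf codec, since the codec being constructed inside TorrentBroadcast is controlled by spark.io.compression.codec:

[testuser@ip-10-10-127-10 spark]$ ./bin/spark-submit --verbose --class org.apache.spark.examples.SparkPi --master yarn-client --num-executors 1 --executor-cores 1 --conf spark.io.compression.codec=lzf lib/spark-examples*.jar 10

If the job completes with that override, the failure points at snappy native-library initialization on the driver (for example an unwritable java.io.tmpdir) rather than at anything YARN does, which would be consistent with the ResourceManager reporting the application itself as SUCCEEDED.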

Regards,
Keith

From: Rohith Sharma K S [mailto:rohithsharmaks@huawei.com]
Sent: Wednesday, May 20, 2015 2:06 AM
To: user@hadoop.apache.org
Subject: RE: SparkPi fails with Job aborted due to stage failure: Task serialization failed:

Hi,

From the ResourceManager log it is very clear that the job has succeeded; from YARN's side there is no problem running Spark applications.

2015-05-19 19:53:20,388 INFO org.apache.hadoop.yarn.server.resourcemanager.RMAuditLogger: USER=testuser OPERATION=Application Finished - Succeeded      TARGET=RMAppManager     RESULT=SUCCESS     APPID=application_1432064564266_0003
2015-05-19 19:53:20,389 INFO org.apache.hadoop.yarn.server.resourcemanager.RMAppManager$ApplicationSummary: appId=application_1432064564266_0003,name=Spark Pi,user=testuser,queue=default,state=FINISHED,trackingUrl=https://ip-10-10-127-10.ec2.internal:8090/proxy/application_1432064564266_0003/A,appMasterHost=ip-10-10-128-10.ec2.internal,startTime=1432065172758,finishTime=1432065198967,finalStatus=SUCCEEDED
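
The same conclusion can be read back from the command line (assuming the YARN client scripts are available on that cluster), for example:

yarn application -status application_1432064564266_0003

which should report State: FINISHED and Final-State: SUCCEEDED for this application, matching the audit log entry above.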

But I have no idea what Spark is doing internally. Would you mind sending a mail to the Spark user mailing list?


Thanks & Regards
Rohith Sharma K S

From: Nance, Keith [mailto:knance@smartronix.com]
Sent: 20 May 2015 02:05
To: user@hadoop.apache.org
Subject: SparkPi fails with Job aborted due to stage failure: Task serialization failed:

All, unable to find any reference to my issue with spark.  Any ideas?  Thanks for any and all help
Attached are logs from the Spark job (SparkPi) results, Userlog, Nodemanager, and Resourcemanager.
###: SPARK JOB RESULTS :###
###########################
[testuser@ip-10-10-127-10 spark]$ ./bin/spark-submit --verbose --class org.apache.spark.examples.SparkPi --master yarn-client --num-executors 1 --executor-cores 1 lib/spark-examples*.jar 10
/home/testuser/spark/conf/spark-env.sh: line 55: -Dspark.history.kerberos.principal=spark/ip-10-10-127-10.ec2.internal@MALARD.LOCAL: No such file or directory
Using properties file: /home/testuser/spark/conf/spark-defaults.conf
Adding default property: spark.broadcast.port=8004
Adding default property: spark.executor.extraJavaOptions=-XX:+PrintGCDetails -Dkey=value -Dnumbers="one two three"
Adding default property: spark.akka.threads=1
Adding default property: spark.ui.port=4040
Adding default property: spark.driver.port=8001
Adding default property: spark.akka.heartbeat.interval=100
Adding default property: spark.yarn.access.namenodes=hdfs://10.10.10.10:8020
Adding default property: spark.executor.port=8002
Adding default property: spark.logConf=true
Adding default property: spark.yarn.am.extraJavaOptions=-Djava.net.preferIPv4Stack=true
Adding default property: spark.worker.ui.port=8081
Adding default property: spark.replClassServer.port=8006
Adding default property: spark.eventlog.dir=hdfs://10.10.10.10:8020/user/testuser/spark/eventlog
Adding default property: spark.driver.extraJavaOptions=-Djava.net.preferIPv4Stack=true
Adding default property: spark.blockManager.port=8007
Adding default property: spark.yarn.am.waitTime=200000
Adding default property: spark.master=yarn-client
Adding default property: spark.yarn.preserve.staging.files=true
Adding default property: spark.fileserver.port=8003
Adding default property: spark.authenticate=true
Adding default property: spark.yarn.am.port=8008
Adding default property: spark.authenticate.secret=fubar
Adding default property: spark.master.ui.port=8080
Adding default property: spark.history.ui.port=18080
Adding default property: spark.eventlog.enabled=true
Parsed arguments:
  master                  yarn-client
  deployMode              null
  executorMemory          null
  executorCores           1
  totalExecutorCores      null
  propertiesFile          /home/testuser/spark/conf/spark-defaults.conf
  driverMemory            null
  driverCores             null
  driverExtraClassPath    null
  driverExtraLibraryPath  null
 driverExtraJavaOptions  -Djava.net.preferIPv4Stack=true
  supervise               false
  queue                   null
  numExecutors            1
  files                   null
  pyFiles                 null
  archives                null
  mainClass               org.apache.spark.examples.SparkPi
  primaryResource         file:/home/testuser/spark/lib/spark-examples-1.3.1-hadoop2.6.0.jar
  name                    org.apache.spark.examples.SparkPi
  childArgs               [10]
  jars                    null
  packages                null
  repositories            null
  verbose                 true

Spark properties used, including those specified through
--conf and those from the properties file /home/testuser/spark/conf/spark-defaults.conf:
  spark.blockManager.port -> 8007
  spark.yarn.access.namenodes -> hdfs://10.10.10.10:8020
  spark.broadcast.port -> 8004
  spark.authenticate.secret -> fubar
  spark.authenticate -> true
  spark.eventlog.dir -> hdfs://10.10.10.10:8020/user/testuser/spark/eventlog
  spark.logConf -> true
  spark.replClassServer.port -> 8006
  spark.history.ui.port -> 18080
  spark.fileserver.port -> 8003
  spark.ui.port -> 4040
  spark.executor.extraJavaOptions -> -XX:+PrintGCDetails -Dkey=value -Dnumbers="one two three"
  spark.driver.extraJavaOptions -> -Djava.net.preferIPv4Stack=true
  spark.yarn.am.waitTime -> 200000
  spark.yarn.am.extraJavaOptions -> -Djava.net.preferIPv4Stack=true
  spark.worker.ui.port -> 8081
  spark.driver.port -> 8001
  spark.master -> yarn-client
  spark.yarn.preserve.staging.files -> true
  spark.yarn.am.port -> 8008
  spark.akka.heartbeat.interval -> 100
  spark.executor.port -> 8002
  spark.master.ui.port -> 8080
  spark.eventlog.enabled -> true
  spark.akka.threads -> 1


Main class:
org.apache.spark.examples.SparkPi
Arguments:
10
System properties:
spark.blockManager.port -> 8007
spark.yarn.access.namenodes -> hdfs://10.10.10.10:8020
spark.broadcast.port -> 8004
spark.authenticate.secret -> fubar
spark.eventlog.dir -> hdfs://10.10.10.10:8020/user/testuser/spark/eventlog
spark.authenticate -> true
spark.executor.instances -> 1
spark.logConf -> true
spark.replClassServer.port -> 8006
spark.history.ui.port -> 18080
spark.fileserver.port -> 8003
SPARK_SUBMIT -> true
spark.ui.port -> 4040
spark.executor.extraJavaOptions -> -XX:+PrintGCDetails -Dkey=value -Dnumbers="one two three"
spark.app.name -> org.apache.spark.examples.SparkPi
spark.driver.extraJavaOptions -> -Djava.net.preferIPv4Stack=true
spark.jars -> file:/home/testuser/spark/lib/spark-examples-1.3.1-hadoop2.6.0.jar
spark.yarn.am.waitTime -> 200000
spark.yarn.am.extraJavaOptions -> -Djava.net.preferIPv4Stack=true
spark.master -> yarn-client
spark.worker.ui.port -> 8081
spark.driver.port -> 8001
spark.yarn.preserve.staging.files -> true
spark.yarn.am.port -> 8008
spark.akka.heartbeat.interval -> 100
spark.executor.port -> 8002
spark.executor.cores -> 1
spark.eventlog.enabled -> true
spark.master.ui.port -> 8080
spark.akka.threads -> 1
Classpath elements:
file:/home/testuser/spark/lib/spark-examples-1.3.1-hadoop2.6.0.jar


15/05/19 19:52:37 INFO spark.SparkContext: Running Spark version 1.3.1
15/05/19 19:52:37 INFO spark.SparkContext: Spark configuration:
spark.akka.heartbeat.interval=100
spark.akka.threads=1
spark.app.name=Spark Pi
spark.authenticate=true
spark.authenticate.secret=fubar
spark.blockManager.port=8007
spark.broadcast.port=8004
spark.driver.extraJavaOptions=-Djava.net.preferIPv4Stack=true
spark.driver.port=8001
spark.eventlog.dir=hdfs://10.10.10.10:8020/user/testuser/spark/eventlog
spark.eventlog.enabled=true
spark.executor.cores=1
spark.executor.extraJavaOptions=-XX:+PrintGCDetails -Dkey=value -Dnumbers="one two three"
spark.executor.instances=1
spark.executor.port=8002
spark.fileserver.port=8003
spark.history.ui.port=18080
spark.jars=file:/home/testuser/spark/lib/spark-examples-1.3.1-hadoop2.6.0.jar
spark.logConf=true
spark.master=yarn-client
spark.master.ui.port=8080
spark.replClassServer.port=8006
spark.ui.port=4040
spark.worker.ui.port=8081
spark.yarn.access.namenodes=hdfs://10.10.10.10:8020
spark.yarn.am.extraJavaOptions=-Djava.net.preferIPv4Stack=true
spark.yarn.am.port=8008
spark.yarn.am.waitTime=200000
spark.yarn.preserve.staging.files=true
15/05/19 19:52:38 WARN util.NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
15/05/19 19:52:39 INFO spark.SecurityManager: Changing view acls to: testuser
15/05/19 19:52:39 INFO spark.SecurityManager: Changing modify acls to: testuser
15/05/19 19:52:39 INFO spark.SecurityManager: adding secret to credentials in yarn mode
15/05/19 19:52:39 INFO spark.SecurityManager: SecurityManager: authentication enabled; ui acls disabled; users with view permissions: Set(testuser); users with modify permissions: Set(testuser)
15/05/19 19:52:40 INFO slf4j.Slf4jLogger: Slf4jLogger started
15/05/19 19:52:41 INFO Remoting: Starting remoting
15/05/19 19:52:41 INFO Remoting: Remoting started; listening on addresses :[akka.tcp://sparkDriver@ip-10-10-127-10.ec2.internal:8001]
15/05/19 19:52:41 INFO util.Utils: Successfully started service 'sparkDriver' on port 8001.
15/05/19 19:52:41 INFO spark.SparkEnv: Registering MapOutputTracker
15/05/19 19:52:41 INFO spark.SparkEnv: Registering BlockManagerMaster
15/05/19 19:52:41 INFO storage.DiskBlockManager: Created local directory at /scratch/spark-17902ac9-b400-4698-97c3-069d804a29e3/blockmgr-d95b3bdf-9c4d-4b48-97a5-4983dd2ab66d
15/05/19 19:52:41 INFO storage.MemoryStore: MemoryStore started with capacity 267.3 MB
15/05/19 19:52:42 INFO spark.HttpFileServer: HTTP File server directory is /scratch/spark-d2fb7948-5ab3-4f22-804c-7485d209bd3e/httpd-91938877-b371-4c0c-ba3d-bb7ee8ec4e09
15/05/19 19:52:42 INFO spark.HttpServer: Starting HTTP Server
15/05/19 19:52:42 INFO server.Server: jetty-8.y.z-SNAPSHOT
15/05/19 19:52:42 INFO server.AbstractConnector: Started SocketConnector@0.0.0.0:8003
15/05/19 19:52:42 INFO util.Utils: Successfully started service 'HTTP file server' on port 8003.
15/05/19 19:52:42 INFO spark.SparkEnv: Registering OutputCommitCoordinator
15/05/19 19:52:42 INFO server.Server: jetty-8.y.z-SNAPSHOT
15/05/19 19:52:42 INFO server.AbstractConnector: Started SelectChannelConnector@0.0.0.0:4040
15/05/19 19:52:42 INFO util.Utils: Successfully started service 'SparkUI' on port 4040.
15/05/19 19:52:42 INFO ui.SparkUI: Started SparkUI at http://ip-10-10-127-10.ec2.internal:4040
15/05/19 19:52:43 INFO spark.SparkContext: Added JAR file:/home/testuser/spark/lib/spark-examples-1.3.1-hadoop2.6.0.jar at http://10.10.127.10:8003/jars/spark-examples-1.3.1-hadoop2.6.0.jar with timestamp 1432065163724
15/05/19 19:52:44 INFO client.RMProxy: Connecting to ResourceManager at ip-10-10-127-10.ec2.internal/10.10.127.10:8032
15/05/19 19:52:45 INFO yarn.Client: Requesting a new application from cluster with 1 NodeManagers
15/05/19 19:52:45 INFO yarn.Client: Verifying our application has not requested more than the maximum memory capability of the cluster (8192 MB per container)
15/05/19 19:52:45 INFO yarn.Client: Will allocate AM container, with 896 MB memory including 384 MB overhead
15/05/19 19:52:45 INFO yarn.Client: Setting up container launch context for our AM
15/05/19 19:52:45 INFO yarn.Client: Preparing resources for our AM container
15/05/19 19:52:46 INFO hdfs.DFSClient: Created HDFS_DELEGATION_TOKEN token 5 for testuser on 10.10.10.10:8020
15/05/19 19:52:46 INFO yarn.Client: Uploading resource file:/home/testuser/spark/lib/spark-assembly-1.3.1-hadoop2.6.0.jar -> hdfs://10.10.10.10:8020/user/testuser/.sparkStaging/application_1432064564266_0003/spark-assembly-1.3.1-hadoop2.6.0.jar
15/05/19 19:52:52 INFO yarn.Client: Setting up the launch environment for our AM container
15/05/19 19:52:52 INFO spark.SecurityManager: Changing view acls to: testuser
15/05/19 19:52:52 INFO spark.SecurityManager: Changing modify acls to: testuser
15/05/19 19:52:52 INFO spark.SecurityManager: SecurityManager: authentication enabled; ui acls disabled; users with view permissions: Set(testuser); users with modify permissions: Set(testuser)
15/05/19 19:52:52 INFO yarn.Client: Submitting application 3 to ResourceManager
15/05/19 19:52:53 INFO impl.YarnClientImpl: Submitted application application_1432064564266_0003
15/05/19 19:52:54 INFO yarn.Client: Application report for application_1432064564266_0003 (state: ACCEPTED)
15/05/19 19:52:54 INFO yarn.Client:
         client token: Token { kind: YARN_CLIENT_TOKEN, service:  }
         diagnostics: N/A
         ApplicationMaster host: N/A
         ApplicationMaster RPC port: -1
         queue: default
         start time: 1432065172758
         final status: UNDEFINED
         tracking URL: https://ip-10-10-127-10.ec2.internal:8090/proxy/application_1432064564266_0003/
         user: testuser
15/05/19 19:52:55 INFO yarn.Client: Application report for application_1432064564266_0003 (state: ACCEPTED)
15/05/19 19:52:56 INFO yarn.Client: Application report for application_1432064564266_0003 (state: ACCEPTED)
15/05/19 19:52:57 INFO yarn.Client: Application report for application_1432064564266_0003 (state: ACCEPTED)
15/05/19 19:52:58 INFO yarn.Client: Application report for application_1432064564266_0003 (state: ACCEPTED)
15/05/19 19:52:59 INFO yarn.Client: Application report for application_1432064564266_0003 (state: ACCEPTED)
15/05/19 19:53:00 INFO yarn.Client: Application report for application_1432064564266_0003 (state: ACCEPTED)
15/05/19 19:53:01 INFO yarn.Client: Application report for application_1432064564266_0003 (state: ACCEPTED)
15/05/19 19:53:02 INFO yarn.Client: Application report for application_1432064564266_0003 (state: ACCEPTED)
15/05/19 19:53:03 INFO yarn.Client: Application report for application_1432064564266_0003 (state: ACCEPTED)
15/05/19 19:53:04 INFO yarn.Client: Application report for application_1432064564266_0003 (state: ACCEPTED)
15/05/19 19:53:05 INFO yarn.Client: Application report for application_1432064564266_0003 (state: ACCEPTED)
15/05/19 19:53:06 INFO yarn.Client: Application report for application_1432064564266_0003 (state: ACCEPTED)
15/05/19 19:53:07 INFO yarn.Client: Application report for application_1432064564266_0003 (state: ACCEPTED)
15/05/19 19:53:08 INFO yarn.Client: Application report for application_1432064564266_0003 (state: ACCEPTED)
15/05/19 19:53:09 INFO yarn.Client: Application report for application_1432064564266_0003 (state: ACCEPTED)
15/05/19 19:53:10 INFO yarn.Client: Application report for application_1432064564266_0003 (state: ACCEPTED)
15/05/19 19:53:11 INFO yarn.Client: Application report for application_1432064564266_0003 (state: ACCEPTED)
15/05/19 19:53:12 INFO yarn.Client: Application report for application_1432064564266_0003 (state: ACCEPTED)
15/05/19 19:53:13 INFO yarn.Client: Application report for application_1432064564266_0003 (state: ACCEPTED)
15/05/19 19:53:14 INFO yarn.Client: Application report for application_1432064564266_0003 (state: ACCEPTED)
15/05/19 19:53:15 INFO yarn.Client: Application report for application_1432064564266_0003 (state: ACCEPTED)
15/05/19 19:53:16 INFO yarn.Client: Application report for application_1432064564266_0003 (state: ACCEPTED)
15/05/19 19:53:17 INFO cluster.YarnClientSchedulerBackend: ApplicationMaster registered as Actor[akka.tcp://sparkYarnAM@ip-10-10-128-10.ec2.internal:56977/user/YarnAM#-1453228800]
15/05/19 19:53:17 INFO cluster.YarnClientSchedulerBackend: Add WebUI Filter. org.apache.hadoop.yarn.server.webproxy.amfilter.AmIpFilter, Map(PROXY_HOSTS -> ip-10-10-127-10.ec2.internal, PROXY_URI_BASES -> https://ip-10-10-127-10.ec2.internal:8090/proxy/application_1432064564266_0003), /proxy/application_1432064564266_0003
15/05/19 19:53:17 INFO ui.JettyUtils: Adding filter: org.apache.hadoop.yarn.server.webproxy.amfilter.AmIpFilter
15/05/19 19:53:17 INFO yarn.Client: Application report for application_1432064564266_0003 (state: RUNNING)
15/05/19 19:53:17 INFO yarn.Client:
         client token: Token { kind: YARN_CLIENT_TOKEN, service:  }
         diagnostics: N/A
         ApplicationMaster host: ip-10-10-128-10.ec2.internal
         ApplicationMaster RPC port: 0
         queue: default
         start time: 1432065172758
         final status: UNDEFINED
         tracking URL: https://ip-10-10-127-10.ec2.internal:8090/proxy/application_1432064564266_0003/
         user: testuser
15/05/19 19:53:17 INFO cluster.YarnClientSchedulerBackend: Application application_1432064564266_0003 has started running.
15/05/19 19:53:17 INFO netty.NettyBlockTransferService: Server created on 8007
15/05/19 19:53:17 INFO storage.BlockManagerMaster: Trying to register BlockManager
15/05/19 19:53:17 INFO storage.BlockManagerMasterActor: Registering block manager ip-10-10-127-10.ec2.internal:8007 with 267.3 MB RAM, BlockManagerId(<driver>, ip-10-10-127-10.ec2.internal, 8007)
15/05/19 19:53:17 INFO storage.BlockManagerMaster: Registered BlockManager
15/05/19 19:53:18 INFO cluster.YarnClientSchedulerBackend: SchedulerBackend is ready for scheduling beginning after waiting maxRegisteredResourcesWaitingTime: 30000(ms)
15/05/19 19:53:18 INFO spark.SparkContext: Starting job: reduce at SparkPi.scala:35
15/05/19 19:53:18 INFO scheduler.DAGScheduler: Got job 0 (reduce at SparkPi.scala:35) with 10 output partitions (allowLocal=false)
15/05/19 19:53:18 INFO scheduler.DAGScheduler: Final stage: Stage 0(reduce at SparkPi.scala:35)
15/05/19 19:53:18 INFO scheduler.DAGScheduler: Parents of final stage: List()
15/05/19 19:53:18 INFO scheduler.DAGScheduler: Missing parents: List()
15/05/19 19:53:18 INFO scheduler.DAGScheduler: Submitting Stage 0 (MapPartitionsRDD[1] at map at SparkPi.scala:31), which has no missing parents
15/05/19 19:53:18 INFO cluster.YarnScheduler: Cancelling stage 0
15/05/19 19:53:18 INFO scheduler.DAGScheduler: Stage 0 (reduce at SparkPi.scala:35) failed in Unknown s
15/05/19 19:53:18 INFO scheduler.DAGScheduler: Job 0 failed: reduce at SparkPi.scala:35, took 0.258029 s
Exception in thread "main" org.apache.spark.SparkException: Job aborted due to stage failure: Task serialization failed: java.lang.reflect.InvocationTargetException
sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:57)
sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
java.lang.reflect.Constructor.newInstance(Constructor.java:526)
org.apache.spark.io.CompressionCodec$.createCodec(CompressionCodec.scala:68)
org.apache.spark.io.CompressionCodec$.createCodec(CompressionCodec.scala:60)
org.apache.spark.broadcast.TorrentBroadcast.org$apache$spark$broadcast$TorrentBroadcast$$setConf(TorrentBroadcast.scala:73)
org.apache.spark.broadcast.TorrentBroadcast.<init>(TorrentBroadcast.scala:79)
org.apache.spark.broadcast.TorrentBroadcastFactory.newBroadcast(TorrentBroadcastFactory.scala:34)
org.apache.spark.broadcast.TorrentBroadcastFactory.newBroadcast(TorrentBroadcastFactory.scala:29)
org.apache.spark.broadcast.BroadcastManager.newBroadcast(BroadcastManager.scala:62)
org.apache.spark.SparkContext.broadcast(SparkContext.scala:1051)
org.apache.spark.scheduler.DAGScheduler.org$apache$spark$scheduler$DAGScheduler$$submitMissingTasks(DAGScheduler.scala:839)
org.apache.spark.scheduler.DAGScheduler.org$apache$spark$scheduler$DAGScheduler$$submitStage(DAGScheduler.scala:778)
org.apache.spark.scheduler.DAGScheduler.handleJobSubmitted(DAGScheduler.scala:762)
org.apache.spark.scheduler.DAGSchedulerEventProcessLoop.onReceive(DAGScheduler.scala:1362)
org.apache.spark.scheduler.DAGSchedulerEventProcessLoop.onReceive(DAGScheduler.scala:1354)
org.apache.spark.util.EventLoop$$anon$1.run(EventLoop.scala:48)

        at org.apache.spark.scheduler.DAGScheduler.org$apache$spark$scheduler$DAGScheduler$$failJobAndIndependentStages(DAGScheduler.scala:1204)
        at org.apache.spark.scheduler.DAGScheduler$$anonfun$abortStage$1.apply(DAGScheduler.scala:1193)
        at org.apache.spark.scheduler.DAGScheduler$$anonfun$abortStage$1.apply(DAGScheduler.scala:1192)
        at scala.collection.mutable.ResizableArray$class.foreach(ResizableArray.scala:59)
        at scala.collection.mutable.ArrayBuffer.foreach(ArrayBuffer.scala:47)
        at org.apache.spark.scheduler.DAGScheduler.abortStage(DAGScheduler.scala:1192)
        at org.apache.spark.scheduler.DAGScheduler.org$apache$spark$scheduler$DAGScheduler$$submitMissingTasks(DAGScheduler.scala:847)
        at org.apache.spark.scheduler.DAGScheduler.org$apache$spark$scheduler$DAGScheduler$$submitStage(DAGScheduler.scala:778)
        at org.apache.spark.scheduler.DAGScheduler.handleJobSubmitted(DAGScheduler.scala:762)
        at org.apache.spark.scheduler.DAGSchedulerEventProcessLoop.onReceive(DAGScheduler.scala:1362)
        at org.apache.spark.scheduler.DAGSchedulerEventProcessLoop.onReceive(DAGScheduler.scala:1354)
        at org.apache.spark.util.EventLoop$$anon$1.run(EventLoop.scala:48)
[testuser@ip-10-10-127-10 spark]$



###: SPARK JOB USERLOG RESULTS :###
###################################
[yarn@ip-10-10-128-10 container_1432064564266_0003_01_000001]$ cat stderr
Picked up _JAVA_OPTIONS: -Djava.net.preferIPv4Stack=true
SLF4J: Class path contains multiple SLF4J bindings.
SLF4J: Found binding in [jar:file:/var/hadoop/tmp/yarn/nm-local-dir/usercache/testuser/filecache/13/spark-assembly-1.3.1-hadoop2.6.0.jar!/org/slf4j/impl/StaticLoggerBinder.class]
SLF4J: Found binding in [jar:file:/var/hadoop/share/hadoop/common/lib/slf4j-log4j12-1.7.5.jar!/org/slf4j/impl/StaticLoggerBinder.class]
SLF4J: See http://www.slf4j.org/codes.html#multiple_bindings for an explanation.
SLF4J: Actual binding is of type [org.slf4j.impl.Log4jLoggerFactory]
15/05/19 19:53:09 INFO yarn.ApplicationMaster: Registered signal handlers for [TERM, HUP, INT]
15/05/19 19:53:11 WARN util.NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
15/05/19 19:53:12 INFO yarn.ApplicationMaster: ApplicationAttemptId: appattempt_1432064564266_0003_000001
15/05/19 19:53:14 INFO spark.SecurityManager: Changing view acls to: testuser
15/05/19 19:53:14 INFO spark.SecurityManager: Changing modify acls to: testuser
15/05/19 19:53:14 INFO spark.SecurityManager: SecurityManager: authentication enabled; ui acls disabled; users with view permissions: Set(testuser); users with modify permissions: Set(testuser)
15/05/19 19:53:15 INFO slf4j.Slf4jLogger: Slf4jLogger started
15/05/19 19:53:15 INFO Remoting: Starting remoting
15/05/19 19:53:16 INFO Remoting: Remoting started; listening on addresses :[akka.tcp://sparkYarnAM@ip-10-10-128-10.ec2.internal:56977]
15/05/19 19:53:16 INFO util.Utils: Successfully started service 'sparkYarnAM' on port 56977.
15/05/19 19:53:16 INFO yarn.ApplicationMaster: Waiting for Spark driver to be reachable.
15/05/19 19:53:16 INFO yarn.ApplicationMaster: Driver now available: ip-10-10-127-10.ec2.internal:8001
15/05/19 19:53:16 INFO yarn.ApplicationMaster: Listen to driver: akka.tcp://sparkDriver@ip-10-10-127-10.ec2.internal:8001/user/YarnScheduler
15/05/19 19:53:16 INFO yarn.ApplicationMaster: Add WebUI Filter. AddWebUIFilter(org.apache.hadoop.yarn.server.webproxy.amfilter.AmIpFilter,Map(PROXY_HOSTS -> ip-10-10-127-10.ec2.internal, PROXY_URI_BASES -> https://ip-10-10-127-10.ec2.internal:8090/proxy/application_1432064564266_0003),/proxy/application_1432064564266_0003)
15/05/19 19:53:16 INFO client.RMProxy: Connecting to ResourceManager at /10.10.127.10:8030
15/05/19 19:53:16 INFO yarn.YarnRMClient: Registering the ApplicationMaster
15/05/19 19:53:17 INFO yarn.YarnAllocator: Will request 1 executor containers, each with 1 cores and 1408 MB memory including 384 MB overhead
15/05/19 19:53:17 INFO yarn.YarnAllocator: Container request (host: Any, capability: <memory:1408, vCores:1>)
15/05/19 19:53:17 INFO yarn.ApplicationMaster: Started progress reporter thread - sleep time : 5000
15/05/19 19:53:18 INFO yarn.ApplicationMaster: Driver terminated or disconnected! Shutting down. Disassociated [akka.tcp://sparkYarnAM@ip-10-10-128-10.ec2.internal:56977] -> [akka.tcp://sparkDriver@ip-10-10-127-10.ec2.internal:8001]
15/05/19 19:53:18 INFO yarn.ApplicationMaster: Final app status: SUCCEEDED, exitCode: 0
15/05/19 19:53:18 WARN remote.ReliableDeliverySupervisor: Association with remote system [akka.tcp://sparkDriver@ip-10-10-127-10.ec2.internal:8001] has failed, address is now gated for [5000] ms. Reason is: [Disassociated].
15/05/19 19:53:18 INFO yarn.ApplicationMaster: Driver terminated or disconnected! Shutting down. Disassociated [akka.tcp://sparkYarnAM@ip-10-10-128-10.ec2.internal:56977] -> [akka.tcp://sparkDriver@ip-10-10-127-10.ec2.internal:8001]
15/05/19 19:53:18 INFO yarn.ApplicationMaster: Unregistering ApplicationMaster with SUCCEEDED
15/05/19 19:53:18 INFO impl.AMRMClientImpl: Waiting for application to be successfully unregistered.
[yarn@ip-10-10-128-10 container_1432064564266_0003_01_000001]$



###: YARN NODEMANAGER LOG RESULTS :###
######################################
2015-05-19 19:52:53,729 INFO SecurityLogger.org.apache.hadoop.ipc.Server: Auth successful for appattempt_1432064564266_0003_000001 (auth:SIMPLE)
2015-05-19 19:52:53,746 WARN org.apache.hadoop.security.ShellBasedUnixGroupsMapping: got exception trying to get groups for user appattempt_1432064564266_0003_000001: id: appattempt_1432064564266_0003_000001: no such user

2015-05-19 19:52:53,746 WARN org.apache.hadoop.security.UserGroupInformation: No groups available for user appattempt_1432064564266_0003_000001
2015-05-19 19:52:53,746 INFO SecurityLogger.org.apache.hadoop.security.authorize.ServiceAuthorizationManager: Authorization successful for appattempt_1432064564266_0003_000001 (auth:TOKEN) for protocol=interface org.apache.hadoop.yarn.api.ContainerManagementProtocolPB
2015-05-19 19:52:53,747 INFO org.apache.hadoop.yarn.server.nodemanager.containermanager.ContainerManagerImpl: Start request for container_1432064564266_0003_01_000001 by user testuser
2015-05-19 19:52:53,747 INFO org.apache.hadoop.yarn.server.nodemanager.containermanager.ContainerManagerImpl: Creating a new application reference for app application_1432064564266_0003
2015-05-19 19:52:53,747 INFO org.apache.hadoop.yarn.server.nodemanager.NMAuditLogger: USER=testuser     IP=10.10.127.10 OPERATION=Start Container Request       TARGET=ContainerManageImplRESULT=SUCCESS   APPID=application_1432064564266_0003    CONTAINERID=container_1432064564266_0003_01_000001
2015-05-19 19:52:53,748 INFO org.apache.hadoop.yarn.server.nodemanager.containermanager.application.Application: Application application_1432064564266_0003 transitioned from NEW to INITING
2015-05-19 19:52:53,748 INFO org.apache.hadoop.yarn.server.nodemanager.containermanager.application.Application: Adding container_1432064564266_0003_01_000001 to application application_1432064564266_0003
2015-05-19 19:52:53,748 INFO org.apache.hadoop.yarn.server.nodemanager.containermanager.application.Application: Application application_1432064564266_0003 transitioned from INITING to RUNNING
2015-05-19 19:52:53,748 INFO org.apache.hadoop.yarn.server.nodemanager.containermanager.container.Container: Container container_1432064564266_0003_01_000001 transitioned from NEW to LOCALIZING
2015-05-19 19:52:53,748 INFO org.apache.hadoop.yarn.server.nodemanager.containermanager.AuxServices: Got event CONTAINER_INIT for appId application_1432064564266_0003
2015-05-19 19:52:53,748 INFO org.apache.hadoop.yarn.server.nodemanager.containermanager.localizer.LocalizedResource: Resource hdfs://10.10.10.10:8020/user/testuser/.sparkStaging/application_1432064564266_0003/spark-assembly-1.3.1-hadoop2.6.0.jar transitioned from INIT to DOWNLOADING
2015-05-19 19:52:53,748 INFO org.apache.hadoop.yarn.server.nodemanager.containermanager.localizer.ResourceLocalizationService: Created localizer for container_1432064564266_0003_01_000001
2015-05-19 19:52:53,763 INFO org.apache.hadoop.yarn.server.nodemanager.containermanager.localizer.ResourceLocalizationService: Writing credentials to the nmPrivate file /var/hadoop/tmp/yarn/nm-local-dir/nmPrivate/container_1432064564266_0003_01_000001.tokens. Credentials list:
2015-05-19 19:52:58,614 INFO SecurityLogger.org.apache.hadoop.ipc.Server: Auth successful for testing (auth:SIMPLE)
2015-05-19 19:52:58,685 INFO SecurityLogger.org.apache.hadoop.security.authorize.ServiceAuthorizationManager: Authorization successful for testing (auth:TOKEN) for protocol=interface org.apache.hadoop.yarn.server.nodemanager.api.LocalizationProtocolPB
2015-05-19 19:53:08,133 INFO org.apache.hadoop.yarn.server.nodemanager.containermanager.localizer.LocalizedResource: Resource hdfs://10.10.10.10:8020/user/testuser/.sparkStaging/application_1432064564266_0003/spark-assembly-1.3.1-hadoop2.6.0.jar(->/var/hadoop/tmp/yarn/nm-local-dir/usercache/testuser/filecache/13/spark-assembly-1.3.1-hadoop2.6.0.jar) transitioned from DOWNLOADING to LOCALIZED
2015-05-19 19:53:08,134 INFO org.apache.hadoop.yarn.server.nodemanager.containermanager.container.Container: Container container_1432064564266_0003_01_000001 transitioned from LOCALIZING to LOCALIZED
2015-05-19 19:53:08,217 INFO org.apache.hadoop.yarn.server.nodemanager.containermanager.container.Container: Container container_1432064564266_0003_01_000001 transitioned from LOCALIZED to RUNNING
2015-05-19 19:53:10,044 INFO org.apache.hadoop.yarn.server.nodemanager.containermanager.monitor.ContainersMonitorImpl: Starting resource-monitoring for container_1432064564266_0003_01_000001
2015-05-19 19:53:10,092 INFO org.apache.hadoop.yarn.server.nodemanager.containermanager.monitor.ContainersMonitorImpl: Memory usage of ProcessTree 20620 for container-id container_1432064564266_0003_01_000001: 79.5 MB of 1 GB physical memory used; 1.1 GB of 2.1 GB virtual memory used
2015-05-19 19:53:13,105 INFO org.apache.hadoop.yarn.server.nodemanager.containermanager.monitor.ContainersMonitorImpl: Memory usage of ProcessTree 20620 for container-id container_1432064564266_0003_01_000001: 99.8 MB of 1 GB physical memory used; 1.1 GB of 2.1 GB virtual memory used
2015-05-19 19:53:16,122 INFO org.apache.hadoop.yarn.server.nodemanager.containermanager.monitor.ContainersMonitorImpl: Memory usage of ProcessTree 20620 for container-id container_1432064564266_0003_01_000001: 121.4 MB of 1 GB physical memory used; 1.1 GB of 2.1 GB virtual memory used
2015-05-19 19:53:19,129 INFO org.apache.hadoop.yarn.server.nodemanager.containermanager.monitor.ContainersMonitorImpl: Memory usage of ProcessTree 20620 for container-id container_1432064564266_0003_01_000001: 125.8 MB of 1 GB physical memory used; 1.1 GB of 2.1 GB virtual memory used
2015-05-19 19:53:19,402 INFO org.apache.hadoop.yarn.server.nodemanager.containermanager.launcher.ContainerLaunch: Container container_1432064564266_0003_01_000001 succeeded
2015-05-19 19:53:19,402 INFO org.apache.hadoop.yarn.server.nodemanager.containermanager.container.Container: Container container_1432064564266_0003_01_000001 transitioned from RUNNING to EXITED_WITH_SUCCESS
2015-05-19 19:53:19,402 INFO org.apache.hadoop.yarn.server.nodemanager.containermanager.launcher.ContainerLaunch: Cleaning up container container_1432064564266_0003_01_000001
2015-05-19 19:53:19,457 INFO org.apache.hadoop.yarn.server.nodemanager.NMAuditLogger: USER=testuser     OPERATION=Container Finished - Succeeded        TARGET=ContainerImpl    RESULT=SUCCESS     APPID=application_1432064564266_0003    CONTAINERID=container_1432064564266_0003_01_000001
2015-05-19 19:53:19,457 INFO org.apache.hadoop.yarn.server.nodemanager.containermanager.container.Container: Container container_1432064564266_0003_01_000001 transitioned from EXITED_WITH_SUCCESS to DONE
2015-05-19 19:53:19,457 INFO org.apache.hadoop.yarn.server.nodemanager.containermanager.application.Application: Removing container_1432064564266_0003_01_000001 from application application_1432064564266_0003
2015-05-19 19:53:19,458 INFO org.apache.hadoop.yarn.server.nodemanager.containermanager.AuxServices: Got event CONTAINER_STOP for appId application_1432064564266_0003
2015-05-19 19:53:19,458 INFO org.apache.hadoop.yarn.server.nodemanager.LinuxContainerExecutor: Deleting absolute path : /var/hadoop/tmp/yarn/nm-local-dir/usercache/testuser/appcache/application_1432064564266_0003/container_1432064564266_0003_01_000001
2015-05-19 19:53:20,428 INFO SecurityLogger.org.apache.hadoop.ipc.Server: Auth successful for appattempt_1432064564266_0003_000001 (auth:SIMPLE)
2015-05-19 19:53:20,432 WARN org.apache.hadoop.security.UserGroupInformation: No groups available for user appattempt_1432064564266_0003_000001
2015-05-19 19:53:20,432 INFO SecurityLogger.org.apache.hadoop.security.authorize.ServiceAuthorizationManager: Authorization successful for appattempt_1432064564266_0003_000001 (auth:TOKEN) for protocol=interface org.apache.hadoop.yarn.api.ContainerManagementProtocolPB
2015-05-19 19:53:20,432 INFO org.apache.hadoop.yarn.server.nodemanager.containermanager.ContainerManagerImpl: Stopping container with container Id: container_1432064564266_0003_01_000001
2015-05-19 19:53:20,432 INFO org.apache.hadoop.yarn.server.nodemanager.NMAuditLogger: USER=testuser     IP=10.10.127.10 OPERATION=Stop Container Request        TARGET=ContainerManageImplRESULT=SUCCESS   APPID=application_1432064564266_0003    CONTAINERID=container_1432064564266_0003_01_000001
2015-05-19 19:53:20,442 INFO org.apache.hadoop.yarn.server.nodemanager.NodeStatusUpdaterImpl: Removed completed containers from NM context: [container_1432064564266_0003_01_000001]
2015-05-19 19:53:20,442 INFO org.apache.hadoop.yarn.server.nodemanager.containermanager.application.Application: Application application_1432064564266_0003 transitioned from RUNNING to APPLICATION_RESOURCES_CLEANINGUP
2015-05-19 19:53:20,442 INFO org.apache.hadoop.yarn.server.nodemanager.containermanager.AuxServices: Got event APPLICATION_STOP for appId application_1432064564266_0003
2015-05-19 19:53:20,442 INFO org.apache.hadoop.yarn.server.nodemanager.containermanager.application.Application: Application application_1432064564266_0003 transitioned from APPLICATION_RESOURCES_CLEANINGUP to FINISHED
2015-05-19 19:53:20,443 INFO org.apache.hadoop.yarn.server.nodemanager.containermanager.loghandler.NonAggregatingLogHandler: Scheduling Log Deletion for application: application_1432064564266_0003, with delay of 10800 seconds
2015-05-19 19:53:20,443 INFO org.apache.hadoop.yarn.server.nodemanager.LinuxContainerExecutor: Deleting absolute path : /var/hadoop/tmp/yarn/nm-local-dir/usercache/testuser/appcache/application_1432064564266_0003
2015-05-19 19:53:22,130 INFO org.apache.hadoop.yarn.server.nodemanager.containermanager.monitor.ContainersMonitorImpl: Stopping resource-monitoring for container_1432064564266_0003_01_000001
root@ip-10-10-128-10:/var/log/hadoop>


###: YARN RESOURCE MANAGER LOGS :###
####################################
2015-05-19 19:52:45,408 INFO SecurityLogger.org.apache.hadoop.ipc.Server: Auth successful for testuser@MALARD.LOCAL (auth:KERBEROS)
2015-05-19 19:52:45,447 INFO SecurityLogger.org.apache.hadoop.security.authorize.ServiceAuthorizationManager: Authorization successful for testuser@MALARD.LOCAL (auth:KERBEROS) for protocol=interface org.apache.hadoop.yarn.api.ApplicationClientProtocolPB
2015-05-19 19:52:45,494 INFO org.apache.hadoop.yarn.server.resourcemanager.ClientRMService: Allocated new applicationId: 3
2015-05-19 19:52:52,758 WARN org.apache.hadoop.yarn.server.resourcemanager.rmapp.RMAppImpl: The specific max attempts: 0 for application: 3 is invalid, because it is out of the range [1, 2]. Use the global max attempts instead.
2015-05-19 19:52:52,758 INFO org.apache.hadoop.yarn.server.resourcemanager.ClientRMService: Application with id 3 submitted by user testuser
2015-05-19 19:52:52,758 INFO org.apache.hadoop.yarn.server.resourcemanager.RMAuditLogger: USER=testuser IP=10.10.127.10 OPERATION=Submit Application Request    TARGET=ClientRMService  RESULT=SUCCESS     APPID=application_1432064564266_0003
2015-05-19 19:52:52,803 INFO org.apache.hadoop.yarn.server.resourcemanager.security.DelegationTokenRenewer: application_1432064564266_0003 found existing hdfs token Kind: HDFS_DELEGATION_TOKEN, Service: 10.10.10.10:8020, Ident: (HDFS_DELEGATION_TOKEN token 5 for testuser)
2015-05-19 19:52:52,928 INFO org.apache.hadoop.yarn.server.resourcemanager.security.DelegationTokenRenewer: Renewed delegation-token= [Kind: HDFS_DELEGATION_TOKEN, Service: 10.10.10.10:8020, Ident: (HDFS_DELEGATION_TOKEN token 5 for testuser);exp=1432151572908], for application_1432064564266_0003
2015-05-19 19:52:52,928 INFO org.apache.hadoop.yarn.server.resourcemanager.security.DelegationTokenRenewer: Renew Kind: HDFS_DELEGATION_TOKEN, Service: 10.10.10.10:8020, Ident: (HDFS_DELEGATION_TOKEN token 5 for testuser);exp=1432151572908 in 86399980 ms, appId = application_1432064564266_0003
2015-05-19 19:52:52,928 INFO org.apache.hadoop.yarn.server.resourcemanager.rmapp.RMAppImpl: Storing application with id application_1432064564266_0003
2015-05-19 19:52:52,928 INFO org.apache.hadoop.yarn.server.resourcemanager.rmapp.RMAppImpl: application_1432064564266_0003 State change from NEW to NEW_SAVING
2015-05-19 19:52:52,928 INFO org.apache.hadoop.yarn.server.resourcemanager.recovery.RMStateStore: Storing info for app: application_1432064564266_0003
2015-05-19 19:52:52,928 INFO org.apache.hadoop.yarn.server.resourcemanager.rmapp.RMAppImpl: application_1432064564266_0003 State change from NEW_SAVING to SUBMITTED
2015-05-19 19:52:52,929 INFO org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.ParentQueue: Application added - appId: application_1432064564266_0003 user: testuser leaf-queue of parent: root #applications: 1
2015-05-19 19:52:52,929 INFO org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.CapacityScheduler: Accepted application application_1432064564266_0003 from user: testuser, in queue: default
2015-05-19 19:52:52,929 INFO org.apache.hadoop.yarn.server.resourcemanager.rmapp.RMAppImpl: application_1432064564266_0003 State change from SUBMITTED to ACCEPTED
2015-05-19 19:52:52,929 INFO org.apache.hadoop.yarn.server.resourcemanager.ApplicationMasterService: Registering app attempt : appattempt_1432064564266_0003_000001
2015-05-19 19:52:52,929 INFO org.apache.hadoop.yarn.server.resourcemanager.rmapp.attempt.RMAppAttemptImpl: appattempt_1432064564266_0003_000001 State change from NEW to SUBMITTED
2015-05-19 19:52:52,929 INFO org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.LeafQueue: Application application_1432064564266_0003 from user: testuser activated in queue: default
2015-05-19 19:52:52,929 INFO org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.LeafQueue: Application added - appId: application_1432064564266_0003 user: org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.LeafQueue$User@72fa0d32, leaf-queue: default #user-pending-applications: 0 #user-active-applications: 1 #queue-pending-applications: 0 #queue-active-applications: 1
2015-05-19 19:52:52,929 INFO org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.CapacityScheduler: Added Application Attempt appattempt_1432064564266_0003_000001 to scheduler from user testuser in queue default
2015-05-19 19:52:52,930 INFO org.apache.hadoop.yarn.server.resourcemanager.rmapp.attempt.RMAppAttemptImpl: appattempt_1432064564266_0003_000001 State change from SUBMITTED to SCHEDULED
2015-05-19 19:52:53,718 INFO org.apache.hadoop.yarn.server.resourcemanager.rmcontainer.RMContainerImpl: container_1432064564266_0003_01_000001 Container Transitioned from NEW to ALLOCATED
2015-05-19 19:52:53,718 INFO org.apache.hadoop.yarn.server.resourcemanager.RMAuditLogger: USER=testuser OPERATION=AM Allocated Container        TARGET=SchedulerApp     RESULT=SUCCESS  APPID=application_1432064564266_0003       CONTAINERID=container_1432064564266_0003_01_000001
2015-05-19 19:52:53,718 INFO org.apache.hadoop.yarn.server.resourcemanager.scheduler.SchedulerNode: Assigned container container_1432064564266_0003_01_000001 of capacity <memory:1024, vCores:1> on host ip-10-10-128-10.ec2.internal:9032, which has 1 containers, <memory:1024, vCores:1> used and <memory:7168, vCores:7> available after allocation
2015-05-19 19:52:53,718 INFO org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.LeafQueue: assignedContainer application attempt=appattempt_1432064564266_0003_000001 container=Container: [ContainerId: container_1432064564266_0003_01_000001, NodeId: ip-10-10-128-10.ec2.internal:9032, NodeHttpAddress: ip-10-10-128-10.ec2.internal:8090, Resource: <memory:1024, vCores:1>, Priority: 0, Token: null, ] queue=default: capacity=1.0, absoluteCapacity=1.0, usedResources=<memory:0, vCores:0>, usedCapacity=0.0, absoluteUsedCapacity=0.0, numApps=1, numContainers=0 clusterResource=<memory:8192, vCores:8>
2015-05-19 19:52:53,718 INFO org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.ParentQueue: Re-sorting assigned queue: root.default stats: default: capacity=1.0, absoluteCapacity=1.0, usedResources=<memory:1024, vCores:1>, usedCapacity=0.125, absoluteUsedCapacity=0.125, numApps=1, numContainers=1
2015-05-19 19:52:53,718 INFO org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.ParentQueue: assignedContainer queue=root usedCapacity=0.125 absoluteUsedCapacity=0.125 used=<memory:1024, vCores:1> cluster=<memory:8192, vCores:8>
2015-05-19 19:52:53,719 INFO org.apache.hadoop.yarn.server.resourcemanager.security.NMTokenSecretManagerInRM: Sending NMToken for nodeId : ip-10-10-128-10.ec2.internal:9032 for container : container_1432064564266_0003_01_000001
2015-05-19 19:52:53,720 INFO org.apache.hadoop.yarn.server.resourcemanager.rmcontainer.RMContainerImpl: container_1432064564266_0003_01_000001 Container Transitioned from ALLOCATED to ACQUIRED
2015-05-19 19:52:53,720 INFO org.apache.hadoop.yarn.server.resourcemanager.security.NMTokenSecretManagerInRM: Clear node set for appattempt_1432064564266_0003_000001
2015-05-19 19:52:53,721 INFO org.apache.hadoop.yarn.server.resourcemanager.rmapp.attempt.RMAppAttemptImpl: Storing attempt: AppId: application_1432064564266_0003 AttemptId: appattempt_1432064564266_0003_000001 MasterContainer: Container: [ContainerId: container_1432064564266_0003_01_000001, NodeId: ip-10-10-128-10.ec2.internal:9032, NodeHttpAddress: ip-10-10-128-10.ec2.internal:8090, Resource: <memory:1024, vCores:1>, Priority: 0, Token: Token { kind: ContainerToken, service: 10.10.128.10:9032 }, ]
2015-05-19 19:52:53,721 INFO org.apache.hadoop.yarn.server.resourcemanager.rmapp.attempt.RMAppAttemptImpl: appattempt_1432064564266_0003_000001 State change from SCHEDULED to ALLOCATED_SAVING
2015-05-19 19:52:53,721 INFO org.apache.hadoop.yarn.server.resourcemanager.rmapp.attempt.RMAppAttemptImpl: appattempt_1432064564266_0003_000001 State change from ALLOCATED_SAVING to ALLOCATED
2015-05-19 19:52:53,723 INFO org.apache.hadoop.yarn.server.resourcemanager.amlauncher.AMLauncher: Launching masterappattempt_1432064564266_0003_000001
2015-05-19 19:52:53,725 INFO org.apache.hadoop.yarn.server.resourcemanager.amlauncher.AMLauncher: Setting up container Container: [ContainerId: container_1432064564266_0003_01_000001, NodeId: ip-10-10-128-10.ec2.internal:9032, NodeHttpAddress: ip-10-10-128-10.ec2.internal:8090, Resource: <memory:1024, vCores:1>, Priority: 0, Token: Token { kind: ContainerToken, service: 10.10.128.10:9032 }, ] for AM appattempt_1432064564266_0003_000001
2015-05-19 19:52:53,725 INFO org.apache.hadoop.yarn.server.resourcemanager.amlauncher.AMLauncher: Command to launch container container_1432064564266_0003_01_000001 : {{JAVA_HOME}}/bin/java,-server,-Xmx512m,-Djava.io.tmpdir={{PWD}}/tmp,'-Dspark.fileserver.uri=http://10.10.127.10:8003','-Dspark.broadcast.port=8004','-Dspark.executor.port=8002','-Dspark.driver.appUIAddress=http://ip-10-10-127-10.ec2.internal:4040','-Dspark.fileserver.port=8003','-Dspark.ui.port=4040','-Dspark.tachyonStore.folderName=spark-ea0a49a2-1643-4410-892d-690c62cb6857','-Dspark.driver.extraJavaOptions=-Djava.net.preferIPv4Stack=true','-Dspark.jars=file:/home/testuser/spark/lib/spark-examples-1.3.1-hadoop2.6.0.jar','-Dspark.app.name=Spark Pi','-Dspark.akka.threads=1','-Dspark.authenticate.secret=fubar','-Dspark.eventlog.enabled=true','-Dspark.driver.host=ip-10-10-127-10.ec2.internal','-Dspark.blockManager.port=8007','-Dspark.history.ui.port=18080','-Dspark.replClassServer.port=8006','-Dspark.worker.ui.port=8081','-Dspark.master=yarn-client','-Dspark.yarn.preserve.staging.files=true','-Dspark.executor.extraJavaOptions=-XX:+PrintGCDetails -Dkey=value -Dnumbers=\"one two three\"','-Dspark.yarn.access.namenodes=hdfs://10.10.10.10:8020','-Dspark.logConf=true','-Dspark.driver.port=8001','-Dspark.executor.id=<driver>','-Dspark.akka.heartbeat.interval=100','-Dspark.master.ui.port=8080','-Dspark.yarn.am.waitTime=200000','-Dspark.yarn.am.extraJavaOptions=-Djava.net.preferIPv4Stack=true','-Dspark.executor.instances=1','-Dspark.yarn.am.port=8008','-Dspark.eventlog.dir=hdfs://10.10.10.10:8020/user/testuser/spark/eventlog','-Dspark.executor.cores=1','-Dspark.authenticate=true','-Djava.net.preferIPv4Stack=true',-Dspark.yarn.app.container.log.dir=<LOG_DIR>,org.apache.spark.deploy.yarn.ExecutorLauncher,--arg,'ip-10-10-127-10.ec2.internal:8001',--executor-memory,1024m,--executor-cores,1,--num-executors ,1,1>,<LOG_DIR>/stdout,2>,<LOG_DIR>/stderr
2015-05-19 19:52:53,725 INFO org.apache.hadoop.yarn.server.resourcemanager.security.AMRMTokenSecretManager: Create AMRMToken for ApplicationAttempt: appattempt_1432064564266_0003_000001
2015-05-19 19:52:53,725 INFO org.apache.hadoop.yarn.server.resourcemanager.security.AMRMTokenSecretManager: Creating password for appattempt_1432064564266_0003_000001
2015-05-19 19:52:53,750 INFO org.apache.hadoop.yarn.server.resourcemanager.amlauncher.AMLauncher: Done launching container Container: [ContainerId: container_1432064564266_0003_01_000001, NodeId: ip-10-10-128-10.ec2.internal:9032, NodeHttpAddress: ip-10-10-128-10.ec2.internal:8090, Resource: <memory:1024, vCores:1>, Priority: 0, Token: Token { kind: ContainerToken, service: 10.10.128.10:9032 }, ] for AM appattempt_1432064564266_0003_000001
2015-05-19 19:52:53,750 INFO org.apache.hadoop.yarn.server.resourcemanager.rmapp.attempt.RMAppAttemptImpl: appattempt_1432064564266_0003_000001 State change from ALLOCATED to LAUNCHED
2015-05-19 19:52:54,733 INFO org.apache.hadoop.yarn.server.resourcemanager.rmcontainer.RMContainerImpl: container_1432064564266_0003_01_000001 Container Transitioned from ACQUIRED to RUNNING
2015-05-19 19:53:17,318 INFO SecurityLogger.org.apache.hadoop.ipc.Server: Auth successful for appattempt_1432064564266_0003_000001 (auth:SIMPLE)
2015-05-19 19:53:17,376 WARN org.apache.hadoop.security.ShellBasedUnixGroupsMapping: got exception trying to get groups for user appattempt_1432064564266_0003_000001: id: appattempt_1432064564266_0003_000001: no such user

2015-05-19 19:53:17,376 WARN org.apache.hadoop.security.UserGroupInformation: No groups available for user appattempt_1432064564266_0003_000001
2015-05-19 19:53:17,376 INFO SecurityLogger.org.apache.hadoop.security.authorize.ServiceAuthorizationManager: Authorization successful for appattempt_1432064564266_0003_000001 (auth:TOKEN) for protocol=interface org.apache.hadoop.yarn.api.ApplicationMasterProtocolPB
2015-05-19 19:53:17,377 INFO org.apache.hadoop.yarn.server.resourcemanager.ApplicationMasterService: AM registration appattempt_1432064564266_0003_000001
2015-05-19 19:53:17,377 INFO org.apache.hadoop.yarn.server.resourcemanager.RMAuditLogger: USER=testuser IP=10.10.128.10 OPERATION=Register App Master   TARGET=ApplicationMasterService RESULT=SUCCESS     APPID=application_1432064564266_0003    APPATTEMPTID=appattempt_1432064564266_0003_000001
2015-05-19 19:53:17,377 INFO org.apache.hadoop.yarn.server.resourcemanager.ApplicationMasterService: Setting client token master key
2015-05-19 19:53:17,378 INFO org.apache.hadoop.yarn.server.resourcemanager.rmapp.attempt.RMAppAttemptImpl: appattempt_1432064564266_0003_000001 State change from LAUNCHED to RUNNING
2015-05-19 19:53:17,378 INFO org.apache.hadoop.yarn.server.resourcemanager.rmapp.RMAppImpl: application_1432064564266_0003 State change from ACCEPTED to RUNNING
2015-05-19 19:53:18,377 INFO org.apache.hadoop.yarn.server.resourcemanager.rmcontainer.RMContainerImpl: container_1432064564266_0003_01_000002 Container Transitioned from NEW to ALLOCATED
2015-05-19 19:53:18,377 INFO org.apache.hadoop.yarn.server.resourcemanager.RMAuditLogger: USER=testuser OPERATION=AM Allocated Container        TARGET=SchedulerApp     RESULT=SUCCESS  APPID=application_1432064564266_0003       CONTAINERID=container_1432064564266_0003_01_000002
2015-05-19 19:53:18,377 INFO org.apache.hadoop.yarn.server.resourcemanager.scheduler.SchedulerNode: Assigned container container_1432064564266_0003_01_000002 of capacity <memory:2048, vCores:1> on host ip-10-10-128-10.ec2.internal:9032, which has 2 containers, <memory:3072, vCores:2> used and <memory:5120, vCores:6> available after allocation
2015-05-19 19:53:18,377 INFO org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.LeafQueue: assignedContainer application attempt=appattempt_1432064564266_0003_000001 container=Container: [ContainerId: container_1432064564266_0003_01_000002, NodeId: ip-10-10-128-10.ec2.internal:9032, NodeHttpAddress: ip-10-10-128-10.ec2.internal:8090, Resource: <memory:2048, vCores:1>, Priority: 1, Token: null, ] queue=default: capacity=1.0, absoluteCapacity=1.0, usedResources=<memory:1024, vCores:1>, usedCapacity=0.125, absoluteUsedCapacity=0.125, numApps=1, numContainers=1 clusterResource=<memory:8192, vCores:8>
2015-05-19 19:53:18,377 INFO org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.ParentQueue: Re-sorting assigned queue: root.default stats: default: capacity=1.0, absoluteCapacity=1.0, usedResources=<memory:3072, vCores:2>, usedCapacity=0.375, absoluteUsedCapacity=0.375, numApps=1, numContainers=2
2015-05-19 19:53:18,377 INFO org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.ParentQueue: assignedContainer queue=root usedCapacity=0.375 absoluteUsedCapacity=0.375 used=<memory:3072, vCores:2> cluster=<memory:8192, vCores:8>
2015-05-19 19:53:18,967 INFO org.apache.hadoop.yarn.server.resourcemanager.rmapp.attempt.RMAppAttemptImpl: Updating application attempt appattempt_1432064564266_0003_000001 with final state: FINISHING, and exit status: -1000
2015-05-19 19:53:18,967 INFO org.apache.hadoop.yarn.server.resourcemanager.rmapp.attempt.RMAppAttemptImpl: appattempt_1432064564266_0003_000001 State change from RUNNING to FINAL_SAVING
2015-05-19 19:53:18,967 INFO org.apache.hadoop.yarn.server.resourcemanager.rmapp.RMAppImpl: Updating application application_1432064564266_0003 with final state: FINISHING
2015-05-19 19:53:18,967 INFO org.apache.hadoop.yarn.server.resourcemanager.rmapp.RMAppImpl: application_1432064564266_0003 State change from RUNNING to FINAL_SAVING
2015-05-19 19:53:18,967 INFO org.apache.hadoop.yarn.server.resourcemanager.recovery.RMStateStore: Updating info for app: application_1432064564266_0003
2015-05-19 19:53:18,968 INFO org.apache.hadoop.yarn.server.resourcemanager.rmapp.attempt.RMAppAttemptImpl: appattempt_1432064564266_0003_000001 State change from FINAL_SAVING to FINISHING
2015-05-19 19:53:18,968 INFO org.apache.hadoop.yarn.server.resourcemanager.rmapp.RMAppImpl: application_1432064564266_0003 State change from FINAL_SAVING to FINISHING
2015-05-19 19:53:19,076 INFO org.apache.hadoop.yarn.server.resourcemanager.ApplicationMasterService: application_1432064564266_0003 unregistered successfully.
2015-05-19 19:53:20,383 INFO org.apache.hadoop.yarn.server.resourcemanager.rmcontainer.RMContainerImpl: container_1432064564266_0003_01_000001 Container Transitioned from RUNNING to COMPLETED
2015-05-19 19:53:20,384 INFO org.apache.hadoop.yarn.server.resourcemanager.scheduler.common.fica.FiCaSchedulerApp: Completed container: container_1432064564266_0003_01_000001 in state: COMPLETED event:FINISHED
2015-05-19 19:53:20,384 INFO org.apache.hadoop.yarn.server.resourcemanager.RMAuditLogger: USER=testuser OPERATION=AM Released Container TARGET=SchedulerApp     RESULT=SUCCESS  APPID=application_1432064564266_0003       CONTAINERID=container_1432064564266_0003_01_000001
2015-05-19 19:53:20,384 INFO org.apache.hadoop.yarn.server.resourcemanager.scheduler.SchedulerNode: Released container container_1432064564266_0003_01_000001 of capacity <memory:1024, vCores:1> on host ip-10-10-128-10.ec2.internal:9032, which currently has 1 containers, <memory:2048, vCores:1> used and <memory:6144, vCores:7> available, release resources=true
2015-05-19 19:53:20,384 INFO org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.LeafQueue: default used=<memory:2048, vCores:1> numContainers=1 user=testuser user-resources=<memory:2048, vCores:1>
2015-05-19 19:53:20,384 INFO org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.LeafQueue: completedContainer container=Container: [ContainerId: container_1432064564266_0003_01_000001, NodeId: ip-10-10-128-10.ec2.internal:9032, NodeHttpAddress: ip-10-10-128-10.ec2.internal:8090, Resource: <memory:1024, vCores:1>, Priority: 0, Token: Token { kind: ContainerToken, service: 10.10.128.10:9032 }, ] queue=default: capacity=1.0, absoluteCapacity=1.0, usedResources=<memory:2048, vCores:1>, usedCapacity=0.25, absoluteUsedCapacity=0.25, numApps=1, numContainers=1 cluster=<memory:8192, vCores:8>
2015-05-19 19:53:20,384 INFO org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.ParentQueue: completedContainer queue=root usedCapacity=0.25 absoluteUsedCapacity=0.25 used=<memory:2048, vCores:1> cluster=<memory:8192, vCores:8>
2015-05-19 19:53:20,384 INFO org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.ParentQueue: Re-sorting completed queue: root.default stats: default: capacity=1.0, absoluteCapacity=1.0, usedResources=<memory:2048, vCores:1>, usedCapacity=0.25, absoluteUsedCapacity=0.25, numApps=1, numContainers=1
2015-05-19 19:53:20,384 INFO org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.CapacityScheduler: Application attempt appattempt_1432064564266_0003_000001 released container container_1432064564266_0003_01_000001 on node: host: ip-10-10-128-10.ec2.internal:9032 #containers=1 available=6144 used=2048 with event: FINISHED
2015-05-19 19:53:20,384 INFO org.apache.hadoop.yarn.server.resourcemanager.ApplicationMasterService: Unregistering app attempt : appattempt_1432064564266_0003_000001
2015-05-19 19:53:20,385 INFO org.apache.hadoop.yarn.server.resourcemanager.security.AMRMTokenSecretManager: Application finished, removing password for appattempt_1432064564266_0003_000001
2015-05-19 19:53:20,385 INFO org.apache.hadoop.yarn.server.resourcemanager.rmapp.attempt.RMAppAttemptImpl: appattempt_1432064564266_0003_000001 State change from FINISHING to FINISHED
2015-05-19 19:53:20,385 INFO org.apache.hadoop.yarn.server.resourcemanager.rmapp.RMAppImpl: application_1432064564266_0003 State change from FINISHING to FINISHED
2015-05-19 19:53:20,385 INFO org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.CapacityScheduler: Application Attempt appattempt_1432064564266_0003_000001 is done. finalState=FINISHED
2015-05-19 19:53:20,387 INFO org.apache.hadoop.yarn.server.resourcemanager.rmcontainer.RMContainerImpl: container_1432064564266_0003_01_000002 Container Transitioned from ALLOCATED to KILLED
2015-05-19 19:53:20,387 INFO org.apache.hadoop.yarn.server.resourcemanager.scheduler.common.fica.FiCaSchedulerApp: Completed container: container_1432064564266_0003_01_000002 in state: KILLED event:KILL
2015-05-19 19:53:20,387 INFO org.apache.hadoop.yarn.server.resourcemanager.RMAuditLogger: USER=testuser OPERATION=AM Released Container TARGET=SchedulerApp     RESULT=SUCCESS  APPID=application_1432064564266_0003       CONTAINERID=container_1432064564266_0003_01_000002
2015-05-19 19:53:20,387 INFO org.apache.hadoop.yarn.server.resourcemanager.scheduler.SchedulerNode: Released container container_1432064564266_0003_01_000002 of capacity <memory:2048, vCores:1> on host ip-10-10-128-10.ec2.internal:9032, which currently has 0 containers, <memory:0, vCores:0> used and <memory:8192, vCores:8> available, release resources=true
2015-05-19 19:53:20,387 INFO org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.LeafQueue: default used=<memory:0, vCores:0> numContainers=0 user=testuser user-resources=<memory:0, vCores:0>
2015-05-19 19:53:20,387 INFO org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.LeafQueue: completedContainer container=Container: [ContainerId: container_1432064564266_0003_01_000002, NodeId: ip-10-10-128-10.ec2.internal:9032, NodeHttpAddress: ip-10-10-128-10.ec2.internal:8090, Resource: <memory:2048, vCores:1>, Priority: 1, Token: null, ] queue=default: capacity=1.0, absoluteCapacity=1.0, usedResources=<memory:0, vCores:0>, usedCapacity=0.0, absoluteUsedCapacity=0.0, numApps=1, numContainers=0 cluster=<memory:8192, vCores:8>
2015-05-19 19:53:20,387 INFO org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.ParentQueue: completedContainer queue=root usedCapacity=0.0 absoluteUsedCapacity=0.0 used=<memory:0, vCores:0> cluster=<memory:8192, vCores:8>
2015-05-19 19:53:20,387 INFO org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.ParentQueue: Re-sorting completed queue: root.default stats: default: capacity=1.0, absoluteCapacity=1.0, usedResources=<memory:0, vCores:0>, usedCapacity=0.0, absoluteUsedCapacity=0.0, numApps=1, numContainers=0
2015-05-19 19:53:20,387 INFO org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.CapacityScheduler: Application attempt appattempt_1432064564266_0003_000001 released container container_1432064564266_0003_01_000002 on node: host: ip-10-10-128-10.ec2.internal:9032 #containers=0 available=8192 used=0 with event: KILL
2015-05-19 19:53:20,387 INFO org.apache.hadoop.yarn.server.resourcemanager.scheduler.AppSchedulingInfo: Application application_1432064564266_0003 requests cleared
2015-05-19 19:53:20,388 INFO org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.LeafQueue: Application removed - appId: application_1432064564266_0003 user: testuser queue: default #user-pending-applications: 0 #user-active-applications: 0 #queue-pending-applications: 0 #queue-active-applications: 0
2015-05-19 19:53:20,388 INFO org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.ParentQueue: Application removed - appId: application_1432064564266_0003 user: testuser leaf-queue of parent: root #applications: 0
2015-05-19 19:53:20,388 INFO org.apache.hadoop.yarn.server.resourcemanager.RMAuditLogger: USER=testuser OPERATION=Application Finished - Succeeded      TARGET=RMAppManager     RESULT=SUCCESS     APPID=application_1432064564266_0003
2015-05-19 19:53:20,389 INFO org.apache.hadoop.yarn.server.resourcemanager.RMAppManager$ApplicationSummary: appId=application_1432064564266_0003,name=Spark Pi,user=testuser,queue=default,state=FINISHED,trackingUrl=https://ip-10-10-127-10.ec2.internal:8090/proxy/application_1432064564266_0003/A,appMasterHost=ip-10-10-128-10.ec2.internal,startTime=1432065172758,finishTime=1432065198967,finalStatus=SUCCEEDED
2015-05-19 19:53:20,389 INFO org.apache.hadoop.hdfs.DFSClient: Cancelling HDFS_DELEGATION_TOKEN token 5 for testuser on 10.10.10.10:8020
2015-05-19 19:53:20,390 INFO org.apache.hadoop.yarn.server.resourcemanager.amlauncher.AMLauncher: Cleaning master appattempt_1432064564266_0003_000001
2015-05-19 19:53:20,442 INFO org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.CapacityScheduler: Null container completed...
root@ip-10-10-127-10:/var/log/hadoop>

Keith Nance
Sr. Software Engineer
Email: knance@smartronix.com
Cell: 808-343-0071
www.smartronix.com


RE: SparkPi fails with Job aborted due to stage failure: Task serialization failed:

Posted by "Nance, Keith" <kn...@smartronix.com>.
Thank you, Rohith, for the reply; I wasn't exactly sure what to make of those log entries.  I had assumed that, from YARN's perspective, the container was set up and torn down completely.  I'll send this to the Spark user mailing list, since it makes better sense to be in that forum.

Regards,
Keith

From: Rohith Sharma K S [mailto:rohithsharmaks@huawei.com]
Sent: Wednesday, May 20, 2015 2:06 AM
To: user@hadoop.apache.org
Subject: RE: SparkPi fails with Job aborted due to stage failure: Task serialization failed:

Hi,

From the ResourceManager log, it is very clear that the job has succeeded. There is no problem running Spark applications.

2015-05-19 19:53:20,388 INFO org.apache.hadoop.yarn.server.resourcemanager.RMAuditLogger: USER=testuser OPERATION=Application Finished - Succeeded      TARGET=RMAppManager     RESULT=SUCCESS     APPID=application_1432064564266_0003
2015-05-19 19:53:20,389 INFO org.apache.hadoop.yarn.server.resourcemanager.RMAppManager$ApplicationSummary: appId=application_1432064564266_0003,name=Spark Pi,user=testuser,queue=default,state=FINISHED,trackingUrl=https://ip-10-10-127-10.ec2.internal:8090/proxy/application_1432064564266_0003/A,appMasterHost=ip-10-10-128-10.ec2.internal,startTime=1432065172758,finishTime=1432065198967,finalStatus=SUCCEEDED

But I do not have any idea what Spark is doing internally.  Would you mind sending a mail to the Spark user mailing list?


Thanks & Regards
Rohith Sharma K S
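
A quick way to cross-check the YARN-side result Rohith points to above is to ask the ResourceManager directly. This is only a sketch: it assumes the stock yarn CLI is available to a user allowed to query the RM, and that the ResourceManager log lives under /var/log/hadoop as the shell prompts in this thread suggest (the exact file name will differ per install).

  # Report the state / final status the ResourceManager recorded for this run
  yarn application -status application_1432064564266_0003

  # Or pull the one-line application summary out of the ResourceManager log
  grep "appId=application_1432064564266_0003" /var/log/hadoop/*resourcemanager*.log

If the report shows the application FINISHED with final status SUCCEEDED, the failure is happening on the Spark driver side rather than in YARN, which matches the advice to take this to the Spark user list.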

From: Nance, Keith [mailto:knance@smartronix.com]
Sent: 20 May 2015 02:05
To: user@hadoop.apache.org
Subject: SparkPi fails with Job aborted due to stage failure: Task serialization failed:

All, unable to find any reference to my issue with spark.  Any ideas?  Thanks for any and all help
Attached are logs from the Spark job (SparkPi) results, Userlog, Nodemanager, and Resourcemanager.
###: SPARK JOB RESULTS :###
###########################
[testuser@ip-10-10-127-10 spark]$ ./bin/spark-submit --verbose --class org.apache.spark.examples.SparkPi --master yarn-client --num-executors 1 --executor-cores 1 lib/spark-examples*.jar 10
/home/testuser/spark/conf/spark-env.sh: line 55: -Dspark.history.kerberos.principal=spark/ip-10-10-127-10.ec2.internal@MALARD.LOCAL: No such file or directory
Using properties file: /home/testuser/spark/conf/spark-defaults.conf
Adding default property: spark.broadcast.port=8004
Adding default property: spark.executor.extraJavaOptions=-XX:+PrintGCDetails -Dkey=value -Dnumbers="one two three"
Adding default property: spark.akka.threads=1
Adding default property: spark.ui.port=4040
Adding default property: spark.driver.port=8001
Adding default property: spark.akka.heartbeat.interval=100
Adding default property: spark.yarn.access.namenodes=hdfs://10.10.10.10:8020
Adding default property: spark.executor.port=8002
Adding default property: spark.logConf=true
Adding default property: spark.yarn.am.extraJavaOptions=-Djava.net.preferIPv4Stack=true
Adding default property: spark.worker.ui.port=8081
Adding default property: spark.replClassServer.port=8006
Adding default property: spark.eventlog.dir=hdfs://10.10.10.10:8020/user/testuser/spark/eventlog
Adding default property: spark.driver.extraJavaOptions=-Djava.net.preferIPv4Stack=true
Adding default property: spark.blockManager.port=8007
Adding default property: spark.yarn.am.waitTime=200000
Adding default property: spark.master=yarn-client
Adding default property: spark.yarn.preserve.staging.files=true
Adding default property: spark.fileserver.port=8003
Adding default property: spark.authenticate=true
Adding default property: spark.yarn.am.port=8008
Adding default property: spark.authenticate.secret=fubar
Adding default property: spark.master.ui.port=8080
Adding default property: spark.history.ui.port=18080
Adding default property: spark.eventlog.enabled=true
Parsed arguments:
  master                  yarn-client
  deployMode              null
  executorMemory          null
  executorCores           1
  totalExecutorCores      null
  propertiesFile          /home/testuser/spark/conf/spark-defaults.conf
  driverMemory            null
  driverCores             null
  driverExtraClassPath    null
  driverExtraLibraryPath  null
 driverExtraJavaOptions  -Djava.net.preferIPv4Stack=true
  supervise               false
  queue                   null
  numExecutors            1
  files                   null
  pyFiles                 null
  archives                null
  mainClass               org.apache.spark.examples.SparkPi
  primaryResource         file:/home/testuser/spark/lib/spark-examples-1.3.1-hadoop2.6.0.jar
  name                    org.apache.spark.examples.SparkPi
  childArgs               [10]
  jars                    null
  packages                null
  repositories            null
  verbose                 true

Spark properties used, including those specified through
--conf and those from the properties file /home/testuser/spark/conf/spark-defaults.conf:
  spark.blockManager.port -> 8007
  spark.yarn.access.namenodes -> hdfs://10.10.10.10:8020
  spark.broadcast.port -> 8004
  spark.authenticate.secret -> fubar
  spark.authenticate -> true
  spark.eventlog.dir -> hdfs://10.10.10.10:8020/user/testuser/spark/eventlog
  spark.logConf -> true
  spark.replClassServer.port -> 8006
  spark.history.ui.port -> 18080
  spark.fileserver.port -> 8003
  spark.ui.port -> 4040
  spark.executor.extraJavaOptions -> -XX:+PrintGCDetails -Dkey=value -Dnumbers="one two three"
  spark.driver.extraJavaOptions -> -Djava.net.preferIPv4Stack=true
  spark.yarn.am.waitTime -> 200000
  spark.yarn.am.extraJavaOptions -> -Djava.net.preferIPv4Stack=true
  spark.worker.ui.port -> 8081
  spark.driver.port -> 8001
  spark.master -> yarn-client
  spark.yarn.preserve.staging.files -> true
  spark.yarn.am.port -> 8008
  spark.akka.heartbeat.interval -> 100
  spark.executor.port -> 8002
  spark.master.ui.port -> 8080
  spark.eventlog.enabled -> true
  spark.akka.threads -> 1


Main class:
org.apache.spark.examples.SparkPi
Arguments:
10
System properties:
spark.blockManager.port -> 8007
spark.yarn.access.namenodes -> hdfs://10.10.10.10:8020
spark.broadcast.port -> 8004
spark.authenticate.secret -> fubar
spark.eventlog.dir -> hdfs://10.10.10.10:8020/user/testuser/spark/eventlog
spark.authenticate -> true
spark.executor.instances -> 1
spark.logConf -> true
spark.replClassServer.port -> 8006
spark.history.ui.port -> 18080
spark.fileserver.port -> 8003
SPARK_SUBMIT -> true
spark.ui.port -> 4040
spark.executor.extraJavaOptions -> -XX:+PrintGCDetails -Dkey=value -Dnumbers="one two three"
spark.app.name -> org.apache.spark.examples.SparkPi
spark.driver.extraJavaOptions -> -Djava.net.preferIPv4Stack=true
spark.jars -> file:/home/testuser/spark/lib/spark-examples-1.3.1-hadoop2.6.0.jar
spark.yarn.am.waitTime -> 200000
spark.yarn.am.extraJavaOptions -> -Djava.net.preferIPv4Stack=true
spark.master -> yarn-client
spark.worker.ui.port -> 8081
spark.driver.port -> 8001
spark.yarn.preserve.staging.files -> true
spark.yarn.am.port -> 8008
spark.akka.heartbeat.interval -> 100
spark.executor.port -> 8002
spark.executor.cores -> 1
spark.eventlog.enabled -> true
spark.master.ui.port -> 8080
spark.akka.threads -> 1
Classpath elements:
file:/home/testuser/spark/lib/spark-examples-1.3.1-hadoop2.6.0.jar


15/05/19 19:52:37 INFO spark.SparkContext: Running Spark version 1.3.1
15/05/19 19:52:37 INFO spark.SparkContext: Spark configuration:
spark.akka.heartbeat.interval=100
spark.akka.threads=1
spark.app.name=Spark Pi
spark.authenticate=true
spark.authenticate.secret=fubar
spark.blockManager.port=8007
spark.broadcast.port=8004
spark.driver.extraJavaOptions=-Djava.net.preferIPv4Stack=true
spark.driver.port=8001
spark.eventlog.dir=hdfs://10.10.10.10:8020/user/testuser/spark/eventlog
spark.eventlog.enabled=true
spark.executor.cores=1
spark.executor.extraJavaOptions=-XX:+PrintGCDetails -Dkey=value -Dnumbers="one two three"
spark.executor.instances=1
spark.executor.port=8002
spark.fileserver.port=8003
spark.history.ui.port=18080
spark.jars=file:/home/testuser/spark/lib/spark-examples-1.3.1-hadoop2.6.0.jar
spark.logConf=true
spark.master=yarn-client
spark.master.ui.port=8080
spark.replClassServer.port=8006
spark.ui.port=4040
spark.worker.ui.port=8081
spark.yarn.access.namenodes=hdfs://10.10.10.10:8020
spark.yarn.am.extraJavaOptions=-Djava.net.preferIPv4Stack=true
spark.yarn.am.port=8008
spark.yarn.am.waitTime=200000
spark.yarn.preserve.staging.files=true
15/05/19 19:52:38 WARN util.NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
15/05/19 19:52:39 INFO spark.SecurityManager: Changing view acls to: testuser
15/05/19 19:52:39 INFO spark.SecurityManager: Changing modify acls to: testuser
15/05/19 19:52:39 INFO spark.SecurityManager: adding secret to credentials in yarn mode
15/05/19 19:52:39 INFO spark.SecurityManager: SecurityManager: authentication enabled; ui acls disabled; users with view permissions: Set(testuser); users with modify permissions: Set(testuser)
15/05/19 19:52:40 INFO slf4j.Slf4jLogger: Slf4jLogger started
15/05/19 19:52:41 INFO Remoting: Starting remoting
15/05/19 19:52:41 INFO Remoting: Remoting started; listening on addresses :[akka.tcp://sparkDriver@ip-10-10-127-10.ec2.internal:8001]
15/05/19 19:52:41 INFO util.Utils: Successfully started service 'sparkDriver' on port 8001.
15/05/19 19:52:41 INFO spark.SparkEnv: Registering MapOutputTracker
15/05/19 19:52:41 INFO spark.SparkEnv: Registering BlockManagerMaster
15/05/19 19:52:41 INFO storage.DiskBlockManager: Created local directory at /scratch/spark-17902ac9-b400-4698-97c3-069d804a29e3/blockmgr-d95b3bdf-9c4d-4b48-97a5-4983dd2ab66d
15/05/19 19:52:41 INFO storage.MemoryStore: MemoryStore started with capacity 267.3 MB
15/05/19 19:52:42 INFO spark.HttpFileServer: HTTP File server directory is /scratch/spark-d2fb7948-5ab3-4f22-804c-7485d209bd3e/httpd-91938877-b371-4c0c-ba3d-bb7ee8ec4e09
15/05/19 19:52:42 INFO spark.HttpServer: Starting HTTP Server
15/05/19 19:52:42 INFO server.Server: jetty-8.y.z-SNAPSHOT
15/05/19 19:52:42 INFO server.AbstractConnector: Started SocketConnector@0.0.0.0:8003
15/05/19 19:52:42 INFO util.Utils: Successfully started service 'HTTP file server' on port 8003.
15/05/19 19:52:42 INFO spark.SparkEnv: Registering OutputCommitCoordinator
15/05/19 19:52:42 INFO server.Server: jetty-8.y.z-SNAPSHOT
15/05/19 19:52:42 INFO server.AbstractConnector: Started SelectChannelConnector@0.0.0.0:4040
15/05/19 19:52:42 INFO util.Utils: Successfully started service 'SparkUI' on port 4040.
15/05/19 19:52:42 INFO ui.SparkUI: Started SparkUI at http://ip-10-10-127-10.ec2.internal:4040
15/05/19 19:52:43 INFO spark.SparkContext: Added JAR file:/home/testuser/spark/lib/spark-examples-1.3.1-hadoop2.6.0.jar at http://10.10.127.10:8003/jars/spark-examples-1.3.1-hadoop2.6.0.jar with timestamp 1432065163724
15/05/19 19:52:44 INFO client.RMProxy: Connecting to ResourceManager at ip-10-10-127-10.ec2.internal/10.10.127.10:8032
15/05/19 19:52:45 INFO yarn.Client: Requesting a new application from cluster with 1 NodeManagers
15/05/19 19:52:45 INFO yarn.Client: Verifying our application has not requested more than the maximum memory capability of the cluster (8192 MB per container)
15/05/19 19:52:45 INFO yarn.Client: Will allocate AM container, with 896 MB memory including 384 MB overhead
15/05/19 19:52:45 INFO yarn.Client: Setting up container launch context for our AM
15/05/19 19:52:45 INFO yarn.Client: Preparing resources for our AM container
15/05/19 19:52:46 INFO hdfs.DFSClient: Created HDFS_DELEGATION_TOKEN token 5 for testuser on 10.10.10.10:8020
15/05/19 19:52:46 INFO yarn.Client: Uploading resource file:/home/testuser/spark/lib/spark-assembly-1.3.1-hadoop2.6.0.jar -> hdfs://10.10.10.10:8020/user/testuser/.sparkStaging/application_1432064564266_0003/spark-assembly-1.3.1-hadoop2.6.0.jar
15/05/19 19:52:52 INFO yarn.Client: Setting up the launch environment for our AM container
15/05/19 19:52:52 INFO spark.SecurityManager: Changing view acls to: testuser
15/05/19 19:52:52 INFO spark.SecurityManager: Changing modify acls to: testuser
15/05/19 19:52:52 INFO spark.SecurityManager: SecurityManager: authentication enabled; ui acls disabled; users with view permissions: Set(testuser); users with modify permissions: Set(testuser)
15/05/19 19:52:52 INFO yarn.Client: Submitting application 3 to ResourceManager
15/05/19 19:52:53 INFO impl.YarnClientImpl: Submitted application application_1432064564266_0003
15/05/19 19:52:54 INFO yarn.Client: Application report for application_1432064564266_0003 (state: ACCEPTED)
15/05/19 19:52:54 INFO yarn.Client:
         client token: Token { kind: YARN_CLIENT_TOKEN, service:  }
         diagnostics: N/A
         ApplicationMaster host: N/A
         ApplicationMaster RPC port: -1
         queue: default
         start time: 1432065172758
         final status: UNDEFINED
         tracking URL: https://ip-10-10-127-10.ec2.internal:8090/proxy/application_1432064564266_0003/
         user: testuser
15/05/19 19:52:55 INFO yarn.Client: Application report for application_1432064564266_0003 (state: ACCEPTED)
15/05/19 19:52:56 INFO yarn.Client: Application report for application_1432064564266_0003 (state: ACCEPTED)
15/05/19 19:52:57 INFO yarn.Client: Application report for application_1432064564266_0003 (state: ACCEPTED)
15/05/19 19:52:58 INFO yarn.Client: Application report for application_1432064564266_0003 (state: ACCEPTED)
15/05/19 19:52:59 INFO yarn.Client: Application report for application_1432064564266_0003 (state: ACCEPTED)
15/05/19 19:53:00 INFO yarn.Client: Application report for application_1432064564266_0003 (state: ACCEPTED)
15/05/19 19:53:01 INFO yarn.Client: Application report for application_1432064564266_0003 (state: ACCEPTED)
15/05/19 19:53:02 INFO yarn.Client: Application report for application_1432064564266_0003 (state: ACCEPTED)
15/05/19 19:53:03 INFO yarn.Client: Application report for application_1432064564266_0003 (state: ACCEPTED)
15/05/19 19:53:04 INFO yarn.Client: Application report for application_1432064564266_0003 (state: ACCEPTED)
15/05/19 19:53:05 INFO yarn.Client: Application report for application_1432064564266_0003 (state: ACCEPTED)
15/05/19 19:53:06 INFO yarn.Client: Application report for application_1432064564266_0003 (state: ACCEPTED)
15/05/19 19:53:07 INFO yarn.Client: Application report for application_1432064564266_0003 (state: ACCEPTED)
15/05/19 19:53:08 INFO yarn.Client: Application report for application_1432064564266_0003 (state: ACCEPTED)
15/05/19 19:53:09 INFO yarn.Client: Application report for application_1432064564266_0003 (state: ACCEPTED)
15/05/19 19:53:10 INFO yarn.Client: Application report for application_1432064564266_0003 (state: ACCEPTED)
15/05/19 19:53:11 INFO yarn.Client: Application report for application_1432064564266_0003 (state: ACCEPTED)
15/05/19 19:53:12 INFO yarn.Client: Application report for application_1432064564266_0003 (state: ACCEPTED)
15/05/19 19:53:13 INFO yarn.Client: Application report for application_1432064564266_0003 (state: ACCEPTED)
15/05/19 19:53:14 INFO yarn.Client: Application report for application_1432064564266_0003 (state: ACCEPTED)
15/05/19 19:53:15 INFO yarn.Client: Application report for application_1432064564266_0003 (state: ACCEPTED)
15/05/19 19:53:16 INFO yarn.Client: Application report for application_1432064564266_0003 (state: ACCEPTED)
15/05/19 19:53:17 INFO cluster.YarnClientSchedulerBackend: ApplicationMaster registered as Actor[akka.tcp://sparkYarnAM@ip-10-10-128-10.ec2.internal:56977/user/YarnAM#-1453228800]
15/05/19 19:53:17 INFO cluster.YarnClientSchedulerBackend: Add WebUI Filter. org.apache.hadoop.yarn.server.webproxy.amfilter.AmIpFilter, Map(PROXY_HOSTS -> ip-10-10-127-10.ec2.internal, PROXY_URI_BASES -> https://ip-10-10-127-10.ec2.internal:8090/proxy/application_1432064564266_0003), /proxy/application_1432064564266_0003
15/05/19 19:53:17 INFO ui.JettyUtils: Adding filter: org.apache.hadoop.yarn.server.webproxy.amfilter.AmIpFilter
15/05/19 19:53:17 INFO yarn.Client: Application report for application_1432064564266_0003 (state: RUNNING)
15/05/19 19:53:17 INFO yarn.Client:
         client token: Token { kind: YARN_CLIENT_TOKEN, service:  }
         diagnostics: N/A
         ApplicationMaster host: ip-10-10-128-10.ec2.internal
         ApplicationMaster RPC port: 0
         queue: default
         start time: 1432065172758
         final status: UNDEFINED
         tracking URL: https://ip-10-10-127-10.ec2.internal:8090/proxy/application_1432064564266_0003/
         user: testuser
15/05/19 19:53:17 INFO cluster.YarnClientSchedulerBackend: Application application_1432064564266_0003 has started running.
15/05/19 19:53:17 INFO netty.NettyBlockTransferService: Server created on 8007
15/05/19 19:53:17 INFO storage.BlockManagerMaster: Trying to register BlockManager
15/05/19 19:53:17 INFO storage.BlockManagerMasterActor: Registering block manager ip-10-10-127-10.ec2.internal:8007 with 267.3 MB RAM, BlockManagerId(<driver>, ip-10-10-127-10.ec2.internal, 8007)
15/05/19 19:53:17 INFO storage.BlockManagerMaster: Registered BlockManager
15/05/19 19:53:18 INFO cluster.YarnClientSchedulerBackend: SchedulerBackend is ready for scheduling beginning after waiting maxRegisteredResourcesWaitingTime: 30000(ms)
15/05/19 19:53:18 INFO spark.SparkContext: Starting job: reduce at SparkPi.scala:35
15/05/19 19:53:18 INFO scheduler.DAGScheduler: Got job 0 (reduce at SparkPi.scala:35) with 10 output partitions (allowLocal=false)
15/05/19 19:53:18 INFO scheduler.DAGScheduler: Final stage: Stage 0(reduce at SparkPi.scala:35)
15/05/19 19:53:18 INFO scheduler.DAGScheduler: Parents of final stage: List()
15/05/19 19:53:18 INFO scheduler.DAGScheduler: Missing parents: List()
15/05/19 19:53:18 INFO scheduler.DAGScheduler: Submitting Stage 0 (MapPartitionsRDD[1] at map at SparkPi.scala:31), which has no missing parents
15/05/19 19:53:18 INFO cluster.YarnScheduler: Cancelling stage 0
15/05/19 19:53:18 INFO scheduler.DAGScheduler: Stage 0 (reduce at SparkPi.scala:35) failed in Unknown s
15/05/19 19:53:18 INFO scheduler.DAGScheduler: Job 0 failed: reduce at SparkPi.scala:35, took 0.258029 s
Exception in thread "main" org.apache.spark.SparkException: Job aborted due to stage failure: Task serialization failed: java.lang.reflect.InvocationTargetException
sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:57)
sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
java.lang.reflect.Constructor.newInstance(Constructor.java:526)
org.apache.spark.io.CompressionCodec$.createCodec(CompressionCodec.scala:68)
org.apache.spark.io.CompressionCodec$.createCodec(CompressionCodec.scala:60)
org.apache.spark.broadcast.TorrentBroadcast.org$apache$spark$broadcast$TorrentBroadcast$$setConf(TorrentBroadcast.scala:73)
org.apache.spark.broadcast.TorrentBroadcast.<init>(TorrentBroadcast.scala:79)
org.apache.spark.broadcast.TorrentBroadcastFactory.newBroadcast(TorrentBroadcastFactory.scala:34)
org.apache.spark.broadcast.TorrentBroadcastFactory.newBroadcast(TorrentBroadcastFactory.scala:29)
org.apache.spark.broadcast.BroadcastManager.newBroadcast(BroadcastManager.scala:62)
org.apache.spark.SparkContext.broadcast(SparkContext.scala:1051)
org.apache.spark.scheduler.DAGScheduler.org$apache$spark$scheduler$DAGScheduler$$submitMissingTasks(DAGScheduler.scala:839)
org.apache.spark.scheduler.DAGScheduler.org$apache$spark$scheduler$DAGScheduler$$submitStage(DAGScheduler.scala:778)
org.apache.spark.scheduler.DAGScheduler.handleJobSubmitted(DAGScheduler.scala:762)
org.apache.spark.scheduler.DAGSchedulerEventProcessLoop.onReceive(DAGScheduler.scala:1362)
org.apache.spark.scheduler.DAGSchedulerEventProcessLoop.onReceive(DAGScheduler.scala:1354)
org.apache.spark.util.EventLoop$$anon$1.run(EventLoop.scala:48)

        at org.apache.spark.scheduler.DAGScheduler.org$apache$spark$scheduler$DAGScheduler$$failJobAndIndependentStages(DAGScheduler.scala:1204)
        at org.apache.spark.scheduler.DAGScheduler$$anonfun$abortStage$1.apply(DAGScheduler.scala:1193)
        at org.apache.spark.scheduler.DAGScheduler$$anonfun$abortStage$1.apply(DAGScheduler.scala:1192)
        at scala.collection.mutable.ResizableArray$class.foreach(ResizableArray.scala:59)
        at scala.collection.mutable.ArrayBuffer.foreach(ArrayBuffer.scala:47)
        at org.apache.spark.scheduler.DAGScheduler.abortStage(DAGScheduler.scala:1192)
        at org.apache.spark.scheduler.DAGScheduler.org$apache$spark$scheduler$DAGScheduler$$submitMissingTasks(DAGScheduler.scala:847)
        at org.apache.spark.scheduler.DAGScheduler.org$apache$spark$scheduler$DAGScheduler$$submitStage(DAGScheduler.scala:778)
        at org.apache.spark.scheduler.DAGScheduler.handleJobSubmitted(DAGScheduler.scala:762)
        at org.apache.spark.scheduler.DAGSchedulerEventProcessLoop.onReceive(DAGScheduler.scala:1362)
        at org.apache.spark.scheduler.DAGSchedulerEventProcessLoop.onReceive(DAGScheduler.scala:1354)
        at org.apache.spark.util.EventLoop$$anon$1.run(EventLoop.scala:48)
[testuser@ip-10-10-127-10 spark]$
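
The quoted trace dies inside CompressionCodec$.createCodec while TorrentBroadcast sets up its codec, i.e. before any task is shipped, and the InvocationTargetException means the codec's constructor itself threw. One cheap experiment (an assumption about the cause, not a verified fix) is to swap the default snappy codec, whose native library can fail to load in a locked-down environment, for the pure-Java lzf codec and rerun the same job:

  # Hypothetical workaround: force the pure-Java lzf codec instead of the default snappy
  echo "spark.io.compression.codec    lzf" >> /home/testuser/spark/conf/spark-defaults.conf
  ./bin/spark-submit --verbose --class org.apache.spark.examples.SparkPi --master yarn-client \
    --num-executors 1 --executor-cores 1 lib/spark-examples*.jar 10

If the job then gets past broadcast creation, the root cause is in the codec/native-library setup rather than in YARN or the cluster configuration.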



###: SPARK JOB USERLOG RESULTS :###
###################################
[yarn@ip-10-10-128-10 container_1432064564266_0003_01_000001]$ cat stderr
Picked up _JAVA_OPTIONS: -Djava.net.preferIPv4Stack=true
SLF4J: Class path contains multiple SLF4J bindings.
SLF4J: Found binding in [jar:file:/var/hadoop/tmp/yarn/nm-local-dir/usercache/testuser/filecache/13/spark-assembly-1.3.1-hadoop2.6.0.jar!/org/slf4j/impl/StaticLoggerBinder.class]
SLF4J: Found binding in [jar:file:/var/hadoop/share/hadoop/common/lib/slf4j-log4j12-1.7.5.jar!/org/slf4j/impl/StaticLoggerBinder.class]
SLF4J: See http://www.slf4j.org/codes.html#multiple_bindings for an explanation.
SLF4J: Actual binding is of type [org.slf4j.impl.Log4jLoggerFactory]
15/05/19 19:53:09 INFO yarn.ApplicationMaster: Registered signal handlers for [TERM, HUP, INT]
15/05/19 19:53:11 WARN util.NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
15/05/19 19:53:12 INFO yarn.ApplicationMaster: ApplicationAttemptId: appattempt_1432064564266_0003_000001
15/05/19 19:53:14 INFO spark.SecurityManager: Changing view acls to: testuser
15/05/19 19:53:14 INFO spark.SecurityManager: Changing modify acls to: testuser
15/05/19 19:53:14 INFO spark.SecurityManager: SecurityManager: authentication enabled; ui acls disabled; users with view permissions: Set(testuser); users with modify permissions: Set(testuser)
15/05/19 19:53:15 INFO slf4j.Slf4jLogger: Slf4jLogger started
15/05/19 19:53:15 INFO Remoting: Starting remoting
15/05/19 19:53:16 INFO Remoting: Remoting started; listening on addresses :[akka.tcp://sparkYarnAM@ip-10-10-128-10.ec2.internal:56977]
15/05/19 19:53:16 INFO util.Utils: Successfully started service 'sparkYarnAM' on port 56977.
15/05/19 19:53:16 INFO yarn.ApplicationMaster: Waiting for Spark driver to be reachable.
15/05/19 19:53:16 INFO yarn.ApplicationMaster: Driver now available: ip-10-10-127-10.ec2.internal:8001
15/05/19 19:53:16 INFO yarn.ApplicationMaster: Listen to driver: akka.tcp://sparkDriver@ip-10-10-127-10.ec2.internal:8001/user/YarnScheduler
15/05/19 19:53:16 INFO yarn.ApplicationMaster: Add WebUI Filter. AddWebUIFilter(org.apache.hadoop.yarn.server.webproxy.amfilter.AmIpFilter,Map(PROXY_HOSTS -> ip-10-10-127-10.ec2.internal, PROXY_URI_BASES -> https://ip-10-10-127-10.ec2.internal:8090/proxy/application_1432064564266_0003),/proxy/application_1432064564266_0003)
15/05/19 19:53:16 INFO client.RMProxy: Connecting to ResourceManager at /10.10.127.10:8030
15/05/19 19:53:16 INFO yarn.YarnRMClient: Registering the ApplicationMaster
15/05/19 19:53:17 INFO yarn.YarnAllocator: Will request 1 executor containers, each with 1 cores and 1408 MB memory including 384 MB overhead
15/05/19 19:53:17 INFO yarn.YarnAllocator: Container request (host: Any, capability: <memory:1408, vCores:1>)
15/05/19 19:53:17 INFO yarn.ApplicationMaster: Started progress reporter thread - sleep time : 5000
15/05/19 19:53:18 INFO yarn.ApplicationMaster: Driver terminated or disconnected! Shutting down. Disassociated [akka.tcp://sparkYarnAM@ip-10-10-128-10.ec2.internal:56977] -> [akka.tcp://sparkDriver@ip-10-10-127-10.ec2.internal:8001]
15/05/19 19:53:18 INFO yarn.ApplicationMaster: Final app status: SUCCEEDED, exitCode: 0
15/05/19 19:53:18 WARN remote.ReliableDeliverySupervisor: Association with remote system [akka.tcp://sparkDriver@ip-10-10-127-10.ec2.internal:8001] has failed, address is now gated for [5000] ms. Reason is: [Disassociated].
15/05/19 19:53:18 INFO yarn.ApplicationMaster: Driver terminated or disconnected! Shutting down. Disassociated [akka.tcp://sparkYarnAM@ip-10-10-128-10.ec2.internal:56977] -> [akka.tcp://sparkDriver@ip-10-10-127-10.ec2.internal:8001]
15/05/19 19:53:18 INFO yarn.ApplicationMaster: Unregistering ApplicationMaster with SUCCEEDED
15/05/19 19:53:18 INFO impl.AMRMClientImpl: Waiting for application to be successfully unregistered.
[yarn@ip-10-10-128-10 container_1432064564266_0003_01_000001]$



###: YARN NODEMANAGER LOG RESULTS :###
######################################
2015-05-19 19:52:53,729 INFO SecurityLogger.org.apache.hadoop.ipc.Server: Auth successful for appattempt_1432064564266_0003_000001 (auth:SIMPLE)
2015-05-19 19:52:53,746 WARN org.apache.hadoop.security.ShellBasedUnixGroupsMapping: got exception trying to get groups for user appattempt_1432064564266_0003_000001: id: appattempt_1432064564266_0003_000001: no such user

2015-05-19 19:52:53,746 WARN org.apache.hadoop.security.UserGroupInformation: No groups available for user appattempt_1432064564266_0003_000001
2015-05-19 19:52:53,746 INFO SecurityLogger.org.apache.hadoop.security.authorize.ServiceAuthorizationManager: Authorization successful for appattempt_1432064564266_0003_000001 (auth:TOKEN) for protocol=interface org.apache.hadoop.yarn.api.ContainerManagementProtocolPB
2015-05-19 19:52:53,747 INFO org.apache.hadoop.yarn.server.nodemanager.containermanager.ContainerManagerImpl: Start request for container_1432064564266_0003_01_000001 by user testuser
2015-05-19 19:52:53,747 INFO org.apache.hadoop.yarn.server.nodemanager.containermanager.ContainerManagerImpl: Creating a new application reference for app application_1432064564266_0003
2015-05-19 19:52:53,747 INFO org.apache.hadoop.yarn.server.nodemanager.NMAuditLogger: USER=testuser     IP=10.10.127.10 OPERATION=Start Container Request       TARGET=ContainerManageImplRESULT=SUCCESS   APPID=application_1432064564266_0003    CONTAINERID=container_1432064564266_0003_01_000001
2015-05-19 19:52:53,748 INFO org.apache.hadoop.yarn.server.nodemanager.containermanager.application.Application: Application application_1432064564266_0003 transitioned from NEW to INITING
2015-05-19 19:52:53,748 INFO org.apache.hadoop.yarn.server.nodemanager.containermanager.application.Application: Adding container_1432064564266_0003_01_000001 to application application_1432064564266_0003
2015-05-19 19:52:53,748 INFO org.apache.hadoop.yarn.server.nodemanager.containermanager.application.Application: Application application_1432064564266_0003 transitioned from INITING to RUNNING
2015-05-19 19:52:53,748 INFO org.apache.hadoop.yarn.server.nodemanager.containermanager.container.Container: Container container_1432064564266_0003_01_000001 transitioned from NEW to LOCALIZING
2015-05-19 19:52:53,748 INFO org.apache.hadoop.yarn.server.nodemanager.containermanager.AuxServices: Got event CONTAINER_INIT for appId application_1432064564266_0003
2015-05-19 19:52:53,748 INFO org.apache.hadoop.yarn.server.nodemanager.containermanager.localizer.LocalizedResource: Resource hdfs://10.10.10.10:8020/user/testuser/.sparkStaging/application_1432064564266_0003/spark-assembly-1.3.1-hadoop2.6.0.jar transitioned from INIT to DOWNLOADING
2015-05-19 19:52:53,748 INFO org.apache.hadoop.yarn.server.nodemanager.containermanager.localizer.ResourceLocalizationService: Created localizer for container_1432064564266_0003_01_000001
2015-05-19 19:52:53,763 INFO org.apache.hadoop.yarn.server.nodemanager.containermanager.localizer.ResourceLocalizationService: Writing credentials to the nmPrivate file /var/hadoop/tmp/yarn/nm-local-dir/nmPrivate/container_1432064564266_0003_01_000001.tokens. Credentials list:
2015-05-19 19:52:58,614 INFO SecurityLogger.org.apache.hadoop.ipc.Server: Auth successful for testing (auth:SIMPLE)
2015-05-19 19:52:58,685 INFO SecurityLogger.org.apache.hadoop.security.authorize.ServiceAuthorizationManager: Authorization successful for testing (auth:TOKEN) for protocol=interface org.apache.hadoop.yarn.server.nodemanager.api.LocalizationProtocolPB
2015-05-19 19:53:08,133 INFO org.apache.hadoop.yarn.server.nodemanager.containermanager.localizer.LocalizedResource: Resource hdfs://10.10.10.10:8020/user/testuser/.sparkStaging/application_1432064564266_0003/spark-assembly-1.3.1-hadoop2.6.0.jar(->/var/hadoop/tmp/yarn/nm-local-dir/usercache/testuser/filecache/13/spark-assembly-1.3.1-hadoop2.6.0.jar) transitioned from DOWNLOADING to LOCALIZED
2015-05-19 19:53:08,134 INFO org.apache.hadoop.yarn.server.nodemanager.containermanager.container.Container: Container container_1432064564266_0003_01_000001 transitioned from LOCALIZING to LOCALIZED
2015-05-19 19:53:08,217 INFO org.apache.hadoop.yarn.server.nodemanager.containermanager.container.Container: Container container_1432064564266_0003_01_000001 transitioned from LOCALIZED to RUNNING
2015-05-19 19:53:10,044 INFO org.apache.hadoop.yarn.server.nodemanager.containermanager.monitor.ContainersMonitorImpl: Starting resource-monitoring for container_1432064564266_0003_01_000001
2015-05-19 19:53:10,092 INFO org.apache.hadoop.yarn.server.nodemanager.containermanager.monitor.ContainersMonitorImpl: Memory usage of ProcessTree 20620 for container-id container_1432064564266_0003_01_000001: 79.5 MB of 1 GB physical memory used; 1.1 GB of 2.1 GB virtual memory used
2015-05-19 19:53:13,105 INFO org.apache.hadoop.yarn.server.nodemanager.containermanager.monitor.ContainersMonitorImpl: Memory usage of ProcessTree 20620 for container-id container_1432064564266_0003_01_000001: 99.8 MB of 1 GB physical memory used; 1.1 GB of 2.1 GB virtual memory used
2015-05-19 19:53:16,122 INFO org.apache.hadoop.yarn.server.nodemanager.containermanager.monitor.ContainersMonitorImpl: Memory usage of ProcessTree 20620 for container-id container_1432064564266_0003_01_000001: 121.4 MB of 1 GB physical memory used; 1.1 GB of 2.1 GB virtual memory used
2015-05-19 19:53:19,129 INFO org.apache.hadoop.yarn.server.nodemanager.containermanager.monitor.ContainersMonitorImpl: Memory usage of ProcessTree 20620 for container-id container_1432064564266_0003_01_000001: 125.8 MB of 1 GB physical memory used; 1.1 GB of 2.1 GB virtual memory used
2015-05-19 19:53:19,402 INFO org.apache.hadoop.yarn.server.nodemanager.containermanager.launcher.ContainerLaunch: Container container_1432064564266_0003_01_000001 succeeded
2015-05-19 19:53:19,402 INFO org.apache.hadoop.yarn.server.nodemanager.containermanager.container.Container: Container container_1432064564266_0003_01_000001 transitioned from RUNNING to EXITED_WITH_SUCCESS
2015-05-19 19:53:19,402 INFO org.apache.hadoop.yarn.server.nodemanager.containermanager.launcher.ContainerLaunch: Cleaning up container container_1432064564266_0003_01_000001
2015-05-19 19:53:19,457 INFO org.apache.hadoop.yarn.server.nodemanager.NMAuditLogger: USER=testuser     OPERATION=Container Finished - Succeeded        TARGET=ContainerImpl    RESULT=SUCCESS     APPID=application_1432064564266_0003    CONTAINERID=container_1432064564266_0003_01_000001
2015-05-19 19:53:19,457 INFO org.apache.hadoop.yarn.server.nodemanager.containermanager.container.Container: Container container_1432064564266_0003_01_000001 transitioned from EXITED_WITH_SUCCESS to DONE
2015-05-19 19:53:19,457 INFO org.apache.hadoop.yarn.server.nodemanager.containermanager.application.Application: Removing container_1432064564266_0003_01_000001 from application application_1432064564266_0003
2015-05-19 19:53:19,458 INFO org.apache.hadoop.yarn.server.nodemanager.containermanager.AuxServices: Got event CONTAINER_STOP for appId application_1432064564266_0003
2015-05-19 19:53:19,458 INFO org.apache.hadoop.yarn.server.nodemanager.LinuxContainerExecutor: Deleting absolute path : /var/hadoop/tmp/yarn/nm-local-dir/usercache/testuser/appcache/application_1432064564266_0003/container_1432064564266_0003_01_000001
2015-05-19 19:53:20,428 INFO SecurityLogger.org.apache.hadoop.ipc.Server: Auth successful for appattempt_1432064564266_0003_000001 (auth:SIMPLE)
2015-05-19 19:53:20,432 WARN org.apache.hadoop.security.UserGroupInformation: No groups available for user appattempt_1432064564266_0003_000001
2015-05-19 19:53:20,432 INFO SecurityLogger.org.apache.hadoop.security.authorize.ServiceAuthorizationManager: Authorization successful for appattempt_1432064564266_0003_000001 (auth:TOKEN) for protocol=interface org.apache.hadoop.yarn.api.ContainerManagementProtocolPB
2015-05-19 19:53:20,432 INFO org.apache.hadoop.yarn.server.nodemanager.containermanager.ContainerManagerImpl: Stopping container with container Id: container_1432064564266_0003_01_000001
2015-05-19 19:53:20,432 INFO org.apache.hadoop.yarn.server.nodemanager.NMAuditLogger: USER=testuser     IP=10.10.127.10 OPERATION=Stop Container Request        TARGET=ContainerManageImplRESULT=SUCCESS   APPID=application_1432064564266_0003    CONTAINERID=container_1432064564266_0003_01_000001
2015-05-19 19:53:20,442 INFO org.apache.hadoop.yarn.server.nodemanager.NodeStatusUpdaterImpl: Removed completed containers from NM context: [container_1432064564266_0003_01_000001]
2015-05-19 19:53:20,442 INFO org.apache.hadoop.yarn.server.nodemanager.containermanager.application.Application: Application application_1432064564266_0003 transitioned from RUNNING to APPLICATION_RESOURCES_CLEANINGUP
2015-05-19 19:53:20,442 INFO org.apache.hadoop.yarn.server.nodemanager.containermanager.AuxServices: Got event APPLICATION_STOP for appId application_1432064564266_0003
2015-05-19 19:53:20,442 INFO org.apache.hadoop.yarn.server.nodemanager.containermanager.application.Application: Application application_1432064564266_0003 transitioned from APPLICATION_RESOURCES_CLEANINGUP to FINISHED
2015-05-19 19:53:20,443 INFO org.apache.hadoop.yarn.server.nodemanager.containermanager.loghandler.NonAggregatingLogHandler: Scheduling Log Deletion for application: application_1432064564266_0003, with delay of 10800 seconds
2015-05-19 19:53:20,443 INFO org.apache.hadoop.yarn.server.nodemanager.LinuxContainerExecutor: Deleting absolute path : /var/hadoop/tmp/yarn/nm-local-dir/usercache/testuser/appcache/application_1432064564266_0003
2015-05-19 19:53:22,130 INFO org.apache.hadoop.yarn.server.nodemanager.containermanager.monitor.ContainersMonitorImpl: Stopping resource-monitoring for container_1432064564266_0003_01_000001
root@ip-10-10-128-10:/var/log/hadoop>


###: YARN RESOURCE MANAGER LOGS :###
####################################
2015-05-19 19:52:45,408 INFO SecurityLogger.org.apache.hadoop.ipc.Server: Auth successful for testuser@MALARD.LOCAL (auth:KERBEROS)
2015-05-19 19:52:45,447 INFO SecurityLogger.org.apache.hadoop.security.authorize.ServiceAuthorizationManager: Authorization successful for testuser@MALARD.LOCAL (auth:KERBEROS) for protocol=interface org.apache.hadoop.yarn.api.ApplicationClientProtocolPB
2015-05-19 19:52:45,494 INFO org.apache.hadoop.yarn.server.resourcemanager.ClientRMService: Allocated new applicationId: 3
2015-05-19 19:52:52,758 WARN org.apache.hadoop.yarn.server.resourcemanager.rmapp.RMAppImpl: The specific max attempts: 0 for application: 3 is invalid, because it is out of the range [1, 2]. Use the global max attempts instead.
2015-05-19 19:52:52,758 INFO org.apache.hadoop.yarn.server.resourcemanager.ClientRMService: Application with id 3 submitted by user testuser
2015-05-19 19:52:52,758 INFO org.apache.hadoop.yarn.server.resourcemanager.RMAuditLogger: USER=testuser IP=10.10.127.10 OPERATION=Submit Application Request    TARGET=ClientRMService  RESULT=SUCCESS     APPID=application_1432064564266_0003
2015-05-19 19:52:52,803 INFO org.apache.hadoop.yarn.server.resourcemanager.security.DelegationTokenRenewer: application_1432064564266_0003 found existing hdfs token Kind: HDFS_DELEGATION_TOKEN, Service: 10.10.10.10:8020, Ident: (HDFS_DELEGATION_TOKEN token 5 for testuser)
2015-05-19 19:52:52,928 INFO org.apache.hadoop.yarn.server.resourcemanager.security.DelegationTokenRenewer: Renewed delegation-token= [Kind: HDFS_DELEGATION_TOKEN, Service: 10.10.10.10:8020, Ident: (HDFS_DELEGATION_TOKEN token 5 for testuser);exp=1432151572908], for application_1432064564266_0003
2015-05-19 19:52:52,928 INFO org.apache.hadoop.yarn.server.resourcemanager.security.DelegationTokenRenewer: Renew Kind: HDFS_DELEGATION_TOKEN, Service: 10.10.10.10:8020, Ident: (HDFS_DELEGATION_TOKEN token 5 for testuser);exp=1432151572908 in 86399980 ms, appId = application_1432064564266_0003
2015-05-19 19:52:52,928 INFO org.apache.hadoop.yarn.server.resourcemanager.rmapp.RMAppImpl: Storing application with id application_1432064564266_0003
2015-05-19 19:52:52,928 INFO org.apache.hadoop.yarn.server.resourcemanager.rmapp.RMAppImpl: application_1432064564266_0003 State change from NEW to NEW_SAVING
2015-05-19 19:52:52,928 INFO org.apache.hadoop.yarn.server.resourcemanager.recovery.RMStateStore: Storing info for app: application_1432064564266_0003
2015-05-19 19:52:52,928 INFO org.apache.hadoop.yarn.server.resourcemanager.rmapp.RMAppImpl: application_1432064564266_0003 State change from NEW_SAVING to SUBMITTED
2015-05-19 19:52:52,929 INFO org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.ParentQueue: Application added - appId: application_1432064564266_0003 user: testuser leaf-queue of parent: root #applications: 1
2015-05-19 19:52:52,929 INFO org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.CapacityScheduler: Accepted application application_1432064564266_0003 from user: testuser, in queue: default
2015-05-19 19:52:52,929 INFO org.apache.hadoop.yarn.server.resourcemanager.rmapp.RMAppImpl: application_1432064564266_0003 State change from SUBMITTED to ACCEPTED
2015-05-19 19:52:52,929 INFO org.apache.hadoop.yarn.server.resourcemanager.ApplicationMasterService: Registering app attempt : appattempt_1432064564266_0003_000001
2015-05-19 19:52:52,929 INFO org.apache.hadoop.yarn.server.resourcemanager.rmapp.attempt.RMAppAttemptImpl: appattempt_1432064564266_0003_000001 State change from NEW to SUBMITTED
2015-05-19 19:52:52,929 INFO org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.LeafQueue: Application application_1432064564266_0003 from user: testuser activated in queue: default
2015-05-19 19:52:52,929 INFO org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.LeafQueue: Application added - appId: application_1432064564266_0003 user: org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.LeafQueue$User@72fa0d32, leaf-queue: default #user-pending-applications: 0 #user-active-applications: 1 #queue-pending-applications: 0 #queue-active-applications: 1
2015-05-19 19:52:52,929 INFO org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.CapacityScheduler: Added Application Attempt appattempt_1432064564266_0003_000001 to scheduler from user testuser in queue default
2015-05-19 19:52:52,930 INFO org.apache.hadoop.yarn.server.resourcemanager.rmapp.attempt.RMAppAttemptImpl: appattempt_1432064564266_0003_000001 State change from SUBMITTED to SCHEDULED
2015-05-19 19:52:53,718 INFO org.apache.hadoop.yarn.server.resourcemanager.rmcontainer.RMContainerImpl: container_1432064564266_0003_01_000001 Container Transitioned from NEW to ALLOCATED
2015-05-19 19:52:53,718 INFO org.apache.hadoop.yarn.server.resourcemanager.RMAuditLogger: USER=testuser OPERATION=AM Allocated Container        TARGET=SchedulerApp     RESULT=SUCCESS  APPID=application_1432064564266_0003       CONTAINERID=container_1432064564266_0003_01_000001
2015-05-19 19:52:53,718 INFO org.apache.hadoop.yarn.server.resourcemanager.scheduler.SchedulerNode: Assigned container container_1432064564266_0003_01_000001 of capacity <memory:1024, vCores:1> on host ip-10-10-128-10.ec2.internal:9032, which has 1 containers, <memory:1024, vCores:1> used and <memory:7168, vCores:7> available after allocation
2015-05-19 19:52:53,718 INFO org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.LeafQueue: assignedContainer application attempt=appattempt_1432064564266_0003_000001 container=Container: [ContainerId: container_1432064564266_0003_01_000001, NodeId: ip-10-10-128-10.ec2.internal:9032, NodeHttpAddress: ip-10-10-128-10.ec2.internal:8090, Resource: <memory:1024, vCores:1>, Priority: 0, Token: null, ] queue=default: capacity=1.0, absoluteCapacity=1.0, usedResources=<memory:0, vCores:0>, usedCapacity=0.0, absoluteUsedCapacity=0.0, numApps=1, numContainers=0 clusterResource=<memory:8192, vCores:8>
2015-05-19 19:52:53,718 INFO org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.ParentQueue: Re-sorting assigned queue: root.default stats: default: capacity=1.0, absoluteCapacity=1.0, usedResources=<memory:1024, vCores:1>, usedCapacity=0.125, absoluteUsedCapacity=0.125, numApps=1, numContainers=1
2015-05-19 19:52:53,718 INFO org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.ParentQueue: assignedContainer queue=root usedCapacity=0.125 absoluteUsedCapacity=0.125 used=<memory:1024, vCores:1> cluster=<memory:8192, vCores:8>
2015-05-19 19:52:53,719 INFO org.apache.hadoop.yarn.server.resourcemanager.security.NMTokenSecretManagerInRM: Sending NMToken for nodeId : ip-10-10-128-10.ec2.internal:9032 for container : container_1432064564266_0003_01_000001
2015-05-19 19:52:53,720 INFO org.apache.hadoop.yarn.server.resourcemanager.rmcontainer.RMContainerImpl: container_1432064564266_0003_01_000001 Container Transitioned from ALLOCATED to ACQUIRED
2015-05-19 19:52:53,720 INFO org.apache.hadoop.yarn.server.resourcemanager.security.NMTokenSecretManagerInRM: Clear node set for appattempt_1432064564266_0003_000001
2015-05-19 19:52:53,721 INFO org.apache.hadoop.yarn.server.resourcemanager.rmapp.attempt.RMAppAttemptImpl: Storing attempt: AppId: application_1432064564266_0003 AttemptId: appattempt_1432064564266_0003_000001 MasterContainer: Container: [ContainerId: container_1432064564266_0003_01_000001, NodeId: ip-10-10-128-10.ec2.internal:9032, NodeHttpAddress: ip-10-10-128-10.ec2.internal:8090, Resource: <memory:1024, vCores:1>, Priority: 0, Token: Token { kind: ContainerToken, service: 10.10.128.10:9032 }, ]
2015-05-19 19:52:53,721 INFO org.apache.hadoop.yarn.server.resourcemanager.rmapp.attempt.RMAppAttemptImpl: appattempt_1432064564266_0003_000001 State change from SCHEDULED to ALLOCATED_SAVING
2015-05-19 19:52:53,721 INFO org.apache.hadoop.yarn.server.resourcemanager.rmapp.attempt.RMAppAttemptImpl: appattempt_1432064564266_0003_000001 State change from ALLOCATED_SAVING to ALLOCATED
2015-05-19 19:52:53,723 INFO org.apache.hadoop.yarn.server.resourcemanager.amlauncher.AMLauncher: Launching masterappattempt_1432064564266_0003_000001
2015-05-19 19:52:53,725 INFO org.apache.hadoop.yarn.server.resourcemanager.amlauncher.AMLauncher: Setting up container Container: [ContainerId: container_1432064564266_0003_01_000001, NodeId: ip-10-10-128-10.ec2.internal:9032, NodeHttpAddress: ip-10-10-128-10.ec2.internal:8090, Resource: <memory:1024, vCores:1>, Priority: 0, Token: Token { kind: ContainerToken, service: 10.10.128.10:9032 }, ] for AM appattempt_1432064564266_0003_000001
2015-05-19 19:52:53,725 INFO org.apache.hadoop.yarn.server.resourcemanager.amlauncher.AMLauncher: Command to launch container container_1432064564266_0003_01_000001 : {{JAVA_HOME}}/bin/java,-server,-Xmx512m,-Djava.io.tmpdir={{PWD}}/tmp,'-Dspark.fileserver.uri=http://10.10.127.10:8003','-Dspark.broadcast.port=8004','-Dspark.executor.port=8002','-Dspark.driver.appUIAddress=http://ip-10-10-127-10.ec2.internal:4040','-Dspark.fileserver.port=8003','-Dspark.ui.port=4040','-Dspark.tachyonStore.folderName=spark-ea0a49a2-1643-4410-892d-690c62cb6857','-Dspark.driver.extraJavaOptions=-Djava.net.preferIPv4Stack=true','-Dspark.jars=file:/home/testuser/spark/lib/spark-examples-1.3.1-hadoop2.6.0.jar','-Dspark.app.name=Spark Pi','-Dspark.akka.threads=1','-Dspark.authenticate.secret=fubar','-Dspark.eventlog.enabled=true','-Dspark.driver.host=ip-10-10-127-10.ec2.internal','-Dspark.blockManager.port=8007','-Dspark.history.ui.port=18080','-Dspark.replClassServer.port=8006','-Dspark.worker.ui.port=8081','-Dspark.master=yarn-client','-Dspark.yarn.preserve.staging.files=true','-Dspark.executor.extraJavaOptions=-XX:+PrintGCDetails -Dkey=value -Dnumbers=\"one two three\"','-Dspark.yarn.access.namenodes=hdfs://10.10.10.10:8020','-Dspark.logConf=true','-Dspark.driver.port=8001','-Dspark.executor.id=<driver>','-Dspark.akka.heartbeat.interval=100','-Dspark.master.ui.port=8080','-Dspark.yarn.am.waitTime=200000','-Dspark.yarn.am.extraJavaOptions=-Djava.net.preferIPv4Stack=true','-Dspark.executor.instances=1','-Dspark.yarn.am.port=8008','-Dspark.eventlog.dir=hdfs://10.10.10.10:8020/user/testuser/spark/eventlog','-Dspark.executor.cores=1','-Dspark.authenticate=true','-Djava.net.preferIPv4Stack=true',-Dspark.yarn.app.container.log.dir=<LOG_DIR>,org.apache.spark.deploy.yarn.ExecutorLauncher,--arg,'ip-10-10-127-10.ec2.internal:8001',--executor-memory,1024m,--executor-cores,1,--num-executors ,1,1>,<LOG_DIR>/stdout,2>,<LOG_DIR>/stderr
2015-05-19 19:52:53,725 INFO org.apache.hadoop.yarn.server.resourcemanager.security.AMRMTokenSecretManager: Create AMRMToken for ApplicationAttempt: appattempt_1432064564266_0003_000001
2015-05-19 19:52:53,725 INFO org.apache.hadoop.yarn.server.resourcemanager.security.AMRMTokenSecretManager: Creating password for appattempt_1432064564266_0003_000001
2015-05-19 19:52:53,750 INFO org.apache.hadoop.yarn.server.resourcemanager.amlauncher.AMLauncher: Done launching container Container: [ContainerId: container_1432064564266_0003_01_000001, NodeId: ip-10-10-128-10.ec2.internal:9032, NodeHttpAddress: ip-10-10-128-10.ec2.internal:8090, Resource: <memory:1024, vCores:1>, Priority: 0, Token: Token { kind: ContainerToken, service: 10.10.128.10:9032 }, ] for AM appattempt_1432064564266_0003_000001
2015-05-19 19:52:53,750 INFO org.apache.hadoop.yarn.server.resourcemanager.rmapp.attempt.RMAppAttemptImpl: appattempt_1432064564266_0003_000001 State change from ALLOCATED to LAUNCHED
2015-05-19 19:52:54,733 INFO org.apache.hadoop.yarn.server.resourcemanager.rmcontainer.RMContainerImpl: container_1432064564266_0003_01_000001 Container Transitioned from ACQUIRED to RUNNING
2015-05-19 19:53:17,318 INFO SecurityLogger.org.apache.hadoop.ipc.Server: Auth successful for appattempt_1432064564266_0003_000001 (auth:SIMPLE)
2015-05-19 19:53:17,376 WARN org.apache.hadoop.security.ShellBasedUnixGroupsMapping: got exception trying to get groups for user appattempt_1432064564266_0003_000001: id: appattempt_1432064564266_0003_000001: no such user

2015-05-19 19:53:17,376 WARN org.apache.hadoop.security.UserGroupInformation: No groups available for user appattempt_1432064564266_0003_000001
2015-05-19 19:53:17,376 INFO SecurityLogger.org.apache.hadoop.security.authorize.ServiceAuthorizationManager: Authorization successful for appattempt_1432064564266_0003_000001 (auth:TOKEN) for protocol=interface org.apache.hadoop.yarn.api.ApplicationMasterProtocolPB
2015-05-19 19:53:17,377 INFO org.apache.hadoop.yarn.server.resourcemanager.ApplicationMasterService: AM registration appattempt_1432064564266_0003_000001
2015-05-19 19:53:17,377 INFO org.apache.hadoop.yarn.server.resourcemanager.RMAuditLogger: USER=testuser IP=10.10.128.10 OPERATION=Register App Master   TARGET=ApplicationMasterService RESULT=SUCCESS     APPID=application_1432064564266_0003    APPATTEMPTID=appattempt_1432064564266_0003_000001
2015-05-19 19:53:17,377 INFO org.apache.hadoop.yarn.server.resourcemanager.ApplicationMasterService: Setting client token master key
2015-05-19 19:53:17,378 INFO org.apache.hadoop.yarn.server.resourcemanager.rmapp.attempt.RMAppAttemptImpl: appattempt_1432064564266_0003_000001 State change from LAUNCHED to RUNNING
2015-05-19 19:53:17,378 INFO org.apache.hadoop.yarn.server.resourcemanager.rmapp.RMAppImpl: application_1432064564266_0003 State change from ACCEPTED to RUNNING
2015-05-19 19:53:18,377 INFO org.apache.hadoop.yarn.server.resourcemanager.rmcontainer.RMContainerImpl: container_1432064564266_0003_01_000002 Container Transitioned from NEW to ALLOCATED
2015-05-19 19:53:18,377 INFO org.apache.hadoop.yarn.server.resourcemanager.RMAuditLogger: USER=testuser OPERATION=AM Allocated Container        TARGET=SchedulerApp     RESULT=SUCCESS  APPID=application_1432064564266_0003       CONTAINERID=container_1432064564266_0003_01_000002
2015-05-19 19:53:18,377 INFO org.apache.hadoop.yarn.server.resourcemanager.scheduler.SchedulerNode: Assigned container container_1432064564266_0003_01_000002 of capacity <memory:2048, vCores:1> on host ip-10-10-128-10.ec2.internal:9032, which has 2 containers, <memory:3072, vCores:2> used and <memory:5120, vCores:6> available after allocation
2015-05-19 19:53:18,377 INFO org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.LeafQueue: assignedContainer application attempt=appattempt_1432064564266_0003_000001 container=Container: [ContainerId: container_1432064564266_0003_01_000002, NodeId: ip-10-10-128-10.ec2.internal:9032, NodeHttpAddress: ip-10-10-128-10.ec2.internal:8090, Resource: <memory:2048, vCores:1>, Priority: 1, Token: null, ] queue=default: capacity=1.0, absoluteCapacity=1.0, usedResources=<memory:1024, vCores:1>, usedCapacity=0.125, absoluteUsedCapacity=0.125, numApps=1, numContainers=1 clusterResource=<memory:8192, vCores:8>
2015-05-19 19:53:18,377 INFO org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.ParentQueue: Re-sorting assigned queue: root.default stats: default: capacity=1.0, absoluteCapacity=1.0, usedResources=<memory:3072, vCores:2>, usedCapacity=0.375, absoluteUsedCapacity=0.375, numApps=1, numContainers=2
2015-05-19 19:53:18,377 INFO org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.ParentQueue: assignedContainer queue=root usedCapacity=0.375 absoluteUsedCapacity=0.375 used=<memory:3072, vCores:2> cluster=<memory:8192, vCores:8>
2015-05-19 19:53:18,967 INFO org.apache.hadoop.yarn.server.resourcemanager.rmapp.attempt.RMAppAttemptImpl: Updating application attempt appattempt_1432064564266_0003_000001 with final state: FINISHING, and exit status: -1000
2015-05-19 19:53:18,967 INFO org.apache.hadoop.yarn.server.resourcemanager.rmapp.attempt.RMAppAttemptImpl: appattempt_1432064564266_0003_000001 State change from RUNNING to FINAL_SAVING
2015-05-19 19:53:18,967 INFO org.apache.hadoop.yarn.server.resourcemanager.rmapp.RMAppImpl: Updating application application_1432064564266_0003 with final state: FINISHING
2015-05-19 19:53:18,967 INFO org.apache.hadoop.yarn.server.resourcemanager.rmapp.RMAppImpl: application_1432064564266_0003 State change from RUNNING to FINAL_SAVING
2015-05-19 19:53:18,967 INFO org.apache.hadoop.yarn.server.resourcemanager.recovery.RMStateStore: Updating info for app: application_1432064564266_0003
2015-05-19 19:53:18,968 INFO org.apache.hadoop.yarn.server.resourcemanager.rmapp.attempt.RMAppAttemptImpl: appattempt_1432064564266_0003_000001 State change from FINAL_SAVING to FINISHING
2015-05-19 19:53:18,968 INFO org.apache.hadoop.yarn.server.resourcemanager.rmapp.RMAppImpl: application_1432064564266_0003 State change from FINAL_SAVING to FINISHING
2015-05-19 19:53:19,076 INFO org.apache.hadoop.yarn.server.resourcemanager.ApplicationMasterService: application_1432064564266_0003 unregistered successfully.
2015-05-19 19:53:20,383 INFO org.apache.hadoop.yarn.server.resourcemanager.rmcontainer.RMContainerImpl: container_1432064564266_0003_01_000001 Container Transitioned from RUNNING to COMPLETED
2015-05-19 19:53:20,384 INFO org.apache.hadoop.yarn.server.resourcemanager.scheduler.common.fica.FiCaSchedulerApp: Completed container: container_1432064564266_0003_01_000001 in state: COMPLETED event:FINISHED
2015-05-19 19:53:20,384 INFO org.apache.hadoop.yarn.server.resourcemanager.RMAuditLogger: USER=testuser OPERATION=AM Released Container TARGET=SchedulerApp     RESULT=SUCCESS  APPID=application_1432064564266_0003       CONTAINERID=container_1432064564266_0003_01_000001
2015-05-19 19:53:20,384 INFO org.apache.hadoop.yarn.server.resourcemanager.scheduler.SchedulerNode: Released container container_1432064564266_0003_01_000001 of capacity <memory:1024, vCores:1> on host ip-10-10-128-10.ec2.internal:9032, which currently has 1 containers, <memory:2048, vCores:1> used and <memory:6144, vCores:7> available, release resources=true
2015-05-19 19:53:20,384 INFO org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.LeafQueue: default used=<memory:2048, vCores:1> numContainers=1 user=testuser user-resources=<memory:2048, vCores:1>
2015-05-19 19:53:20,384 INFO org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.LeafQueue: completedContainer container=Container: [ContainerId: container_1432064564266_0003_01_000001, NodeId: ip-10-10-128-10.ec2.internal:9032, NodeHttpAddress: ip-10-10-128-10.ec2.internal:8090, Resource: <memory:1024, vCores:1>, Priority: 0, Token: Token { kind: ContainerToken, service: 10.10.128.10:9032 }, ] queue=default: capacity=1.0, absoluteCapacity=1.0, usedResources=<memory:2048, vCores:1>, usedCapacity=0.25, absoluteUsedCapacity=0.25, numApps=1, numContainers=1 cluster=<memory:8192, vCores:8>
2015-05-19 19:53:20,384 INFO org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.ParentQueue: completedContainer queue=root usedCapacity=0.25 absoluteUsedCapacity=0.25 used=<memory:2048, vCores:1> cluster=<memory:8192, vCores:8>
2015-05-19 19:53:20,384 INFO org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.ParentQueue: Re-sorting completed queue: root.default stats: default: capacity=1.0, absoluteCapacity=1.0, usedResources=<memory:2048, vCores:1>, usedCapacity=0.25, absoluteUsedCapacity=0.25, numApps=1, numContainers=1
2015-05-19 19:53:20,384 INFO org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.CapacityScheduler: Application attempt appattempt_1432064564266_0003_000001 released container container_1432064564266_0003_01_000001 on node: host: ip-10-10-128-10.ec2.internal:9032 #containers=1 available=6144 used=2048 with event: FINISHED
2015-05-19 19:53:20,384 INFO org.apache.hadoop.yarn.server.resourcemanager.ApplicationMasterService: Unregistering app attempt : appattempt_1432064564266_0003_000001
2015-05-19 19:53:20,385 INFO org.apache.hadoop.yarn.server.resourcemanager.security.AMRMTokenSecretManager: Application finished, removing password for appattempt_1432064564266_0003_000001
2015-05-19 19:53:20,385 INFO org.apache.hadoop.yarn.server.resourcemanager.rmapp.attempt.RMAppAttemptImpl: appattempt_1432064564266_0003_000001 State change from FINISHING to FINISHED
2015-05-19 19:53:20,385 INFO org.apache.hadoop.yarn.server.resourcemanager.rmapp.RMAppImpl: application_1432064564266_0003 State change from FINISHING to FINISHED
2015-05-19 19:53:20,385 INFO org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.CapacityScheduler: Application Attempt appattempt_1432064564266_0003_000001 is done. finalState=FINISHED
2015-05-19 19:53:20,387 INFO org.apache.hadoop.yarn.server.resourcemanager.rmcontainer.RMContainerImpl: container_1432064564266_0003_01_000002 Container Transitioned from ALLOCATED to KILLED
2015-05-19 19:53:20,387 INFO org.apache.hadoop.yarn.server.resourcemanager.scheduler.common.fica.FiCaSchedulerApp: Completed container: container_1432064564266_0003_01_000002 in state: KILLED event:KILL
2015-05-19 19:53:20,387 INFO org.apache.hadoop.yarn.server.resourcemanager.RMAuditLogger: USER=testuser OPERATION=AM Released Container TARGET=SchedulerApp     RESULT=SUCCESS  APPID=application_1432064564266_0003       CONTAINERID=container_1432064564266_0003_01_000002
2015-05-19 19:53:20,387 INFO org.apache.hadoop.yarn.server.resourcemanager.scheduler.SchedulerNode: Released container container_1432064564266_0003_01_000002 of capacity <memory:2048, vCores:1> on host ip-10-10-128-10.ec2.internal:9032, which currently has 0 containers, <memory:0, vCores:0> used and <memory:8192, vCores:8> available, release resources=true
2015-05-19 19:53:20,387 INFO org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.LeafQueue: default used=<memory:0, vCores:0> numContainers=0 user=testuser user-resources=<memory:0, vCores:0>
2015-05-19 19:53:20,387 INFO org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.LeafQueue: completedContainer container=Container: [ContainerId: container_1432064564266_0003_01_000002, NodeId: ip-10-10-128-10.ec2.internal:9032, NodeHttpAddress: ip-10-10-128-10.ec2.internal:8090, Resource: <memory:2048, vCores:1>, Priority: 1, Token: null, ] queue=default: capacity=1.0, absoluteCapacity=1.0, usedResources=<memory:0, vCores:0>, usedCapacity=0.0, absoluteUsedCapacity=0.0, numApps=1, numContainers=0 cluster=<memory:8192, vCores:8>
2015-05-19 19:53:20,387 INFO org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.ParentQueue: completedContainer queue=root usedCapacity=0.0 absoluteUsedCapacity=0.0 used=<memory:0, vCores:0> cluster=<memory:8192, vCores:8>
2015-05-19 19:53:20,387 INFO org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.ParentQueue: Re-sorting completed queue: root.default stats: default: capacity=1.0, absoluteCapacity=1.0, usedResources=<memory:0, vCores:0>, usedCapacity=0.0, absoluteUsedCapacity=0.0, numApps=1, numContainers=0
2015-05-19 19:53:20,387 INFO org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.CapacityScheduler: Application attempt appattempt_1432064564266_0003_000001 released container container_1432064564266_0003_01_000002 on node: host: ip-10-10-128-10.ec2.internal:9032 #containers=0 available=8192 used=0 with event: KILL
2015-05-19 19:53:20,387 INFO org.apache.hadoop.yarn.server.resourcemanager.scheduler.AppSchedulingInfo: Application application_1432064564266_0003 requests cleared
2015-05-19 19:53:20,388 INFO org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.LeafQueue: Application removed - appId: application_1432064564266_0003 user: testuser queue: default #user-pending-applications: 0 #user-active-applications: 0 #queue-pending-applications: 0 #queue-active-applications: 0
2015-05-19 19:53:20,388 INFO org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.ParentQueue: Application removed - appId: application_1432064564266_0003 user: testuser leaf-queue of parent: root #applications: 0
2015-05-19 19:53:20,388 INFO org.apache.hadoop.yarn.server.resourcemanager.RMAuditLogger: USER=testuser OPERATION=Application Finished - Succeeded      TARGET=RMAppManager     RESULT=SUCCESS     APPID=application_1432064564266_0003
2015-05-19 19:53:20,389 INFO org.apache.hadoop.yarn.server.resourcemanager.RMAppManager$ApplicationSummary: appId=application_1432064564266_0003,name=Spark Pi,user=testuser,queue=default,state=FINISHED,trackingUrl=https://ip-10-10-127-10.ec2.internal:8090/proxy/application_1432064564266_0003/A,appMasterHost=ip-10-10-128-10.ec2.internal,startTime=1432065172758,finishTime=1432065198967,finalStatus=SUCCEEDED
2015-05-19 19:53:20,389 INFO org.apache.hadoop.hdfs.DFSClient: Cancelling HDFS_DELEGATION_TOKEN token 5 for testuser on 10.10.10.10:8020
2015-05-19 19:53:20,390 INFO org.apache.hadoop.yarn.server.resourcemanager.amlauncher.AMLauncher: Cleaning master appattempt_1432064564266_0003_000001
2015-05-19 19:53:20,442 INFO org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.CapacityScheduler: Null container completed...
root@ip-10-10-127-10:/var/log/hadoop>

Keith Nance
Sr. Software Engineer
Email: knance@smartronix.com
Cell: 808-343-0071
www.smartronix.com


RE: SparkPi fails with Job aborted due to stage failure: Task serialization failed:

Posted by Rohith Sharma K S <ro...@huawei.com>.
Hi,

From the ResourceManager log, it is very clear that the job has succeeded. There is no problem running Spark applications on YARN.

2015-05-19 19:53:20,388 INFO org.apache.hadoop.yarn.server.resourcemanager.RMAuditLogger: USER=testuser OPERATION=Application Finished - Succeeded      TARGET=RMAppManager     RESULT=SUCCESS     APPID=application_1432064564266_0003
2015-05-19 19:53:20,389 INFO org.apache.hadoop.yarn.server.resourcemanager.RMAppManager$ApplicationSummary: appId=application_1432064564266_0003,name=Spark Pi,user=testuser,queue=default,state=FINISHED,trackingUrl=https://ip-10-10-127-10.ec2.internal:8090/proxy/application_1432064564266_0003/A,appMasterHost=ip-10-10-128-10.ec2.internal,startTime=1432065172758,finishTime=1432065198967,finalStatus=SUCCEEDED
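
(As a quick cross-check, the final status YARN recorded for this run can also be queried from the command line; a minimal example, assuming the yarn CLI is available on the submitting host:

[testuser@ip-10-10-127-10 spark]$ yarn application -status application_1432064564266_0003

The application report it prints should show Final-State : SUCCEEDED for this application id.)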

But I do not have an idea of what Spark is doing internally. Would you mind sending a mail to the Spark user mailing list?
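
(One detail that may help when asking there: the driver-side stack trace fails inside CompressionCodec$.createCodec while TorrentBroadcast is being set up for the task binary, i.e. before any task is ever shipped to YARN, which would explain why YARN itself reports success. Purely as a suggestion for narrowing it down, one could re-run the same command with a non-default broadcast compression codec, for example:

[testuser@ip-10-10-127-10 spark]$ ./bin/spark-submit --verbose --class org.apache.spark.examples.SparkPi --master yarn-client --num-executors 1 --executor-cores 1 --conf spark.io.compression.codec=lzf lib/spark-examples*.jar 10

If that run gets past the broadcast step, the failure is likely in loading the default snappy codec on the driver.)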


Thanks & Regards
Rohith Sharma K S

From: Nance, Keith [mailto:knance@smartronix.com]
Sent: 20 May 2015 02:05
To: user@hadoop.apache.org
Subject: SparkPi fails with Job aborted due to stage failure: Task serialization failed:

All, unable to find any reference to my issue with spark.  Any ideas?  Thanks for any and all help
Attached are logs from the Spark job (SparkPi) results, Userlog, Nodemanager, and Resourcemanager.
###: SPARK JOB RESULTS :###
###########################
[testuser@ip-10-10-127-10 spark]$ ./bin/spark-submit --verbose --class org.apache.spark.examples.SparkPi --master yarn-client --num-executors 1 --executor-cores 1 lib/spark-examples*.jar 10
/home/testuser/spark/conf/spark-env.sh: line 55: -Dspark.history.kerberos.principal=spark/ip-10-10-127-10.ec2.internal@MALARD.LOCAL: No such file or directory
Using properties file: /home/testuser/spark/conf/spark-defaults.conf
Adding default property: spark.broadcast.port=8004
Adding default property: spark.executor.extraJavaOptions=-XX:+PrintGCDetails -Dkey=value -Dnumbers="one two three"
Adding default property: spark.akka.threads=1
Adding default property: spark.ui.port=4040
Adding default property: spark.driver.port=8001
Adding default property: spark.akka.heartbeat.interval=100
Adding default property: spark.yarn.access.namenodes=hdfs://10.10.10.10:8020
Adding default property: spark.executor.port=8002
Adding default property: spark.logConf=true
Adding default property: spark.yarn.am.extraJavaOptions=-Djava.net.preferIPv4Stack=true
Adding default property: spark.worker.ui.port=8081
Adding default property: spark.replClassServer.port=8006
Adding default property: spark.eventlog.dir=hdfs://10.10.10.10:8020/user/testuser/spark/eventlog
Adding default property: spark.driver.extraJavaOptions=-Djava.net.preferIPv4Stack=true
Adding default property: spark.blockManager.port=8007
Adding default property: spark.yarn.am.waitTime=200000
Adding default property: spark.master=yarn-client
Adding default property: spark.yarn.preserve.staging.files=true
Adding default property: spark.fileserver.port=8003
Adding default property: spark.authenticate=true
Adding default property: spark.yarn.am.port=8008
Adding default property: spark.authenticate.secret=fubar
Adding default property: spark.master.ui.port=8080
Adding default property: spark.history.ui.port=18080
Adding default property: spark.eventlog.enabled=true
Parsed arguments:
  master                  yarn-client
  deployMode              null
  executorMemory          null
  executorCores           1
  totalExecutorCores      null
  propertiesFile          /home/testuser/spark/conf/spark-defaults.conf
  driverMemory            null
  driverCores             null
  driverExtraClassPath    null
  driverExtraLibraryPath  null
  driverExtraJavaOptions  -Djava.net.preferIPv4Stack=true
  supervise               false
  queue                   null
  numExecutors            1
  files                   null
  pyFiles                 null
  archives                null
  mainClass               org.apache.spark.examples.SparkPi
  primaryResource         file:/home/testuser/spark/lib/spark-examples-1.3.1-hadoop2.6.0.jar
  name                    org.apache.spark.examples.SparkPi
  childArgs               [10]
  jars                    null
  packages                null
  repositories            null
  verbose                 true

Spark properties used, including those specified through
--conf and those from the properties file /home/testuser/spark/conf/spark-defaults.conf:
  spark.blockManager.port -> 8007
  spark.yarn.access.namenodes -> hdfs://10.10.10.10:8020
  spark.broadcast.port -> 8004
  spark.authenticate.secret -> fubar
  spark.authenticate -> true
  spark.eventlog.dir -> hdfs://10.10.10.10:8020/user/testuser/spark/eventlog
  spark.logConf -> true
  spark.replClassServer.port -> 8006
  spark.history.ui.port -> 18080
  spark.fileserver.port -> 8003
  spark.ui.port -> 4040
  spark.executor.extraJavaOptions -> -XX:+PrintGCDetails -Dkey=value -Dnumbers="one two three"
  spark.driver.extraJavaOptions -> -Djava.net.preferIPv4Stack=true
  spark.yarn.am.waitTime -> 200000
  spark.yarn.am.extraJavaOptions -> -Djava.net.preferIPv4Stack=true
  spark.worker.ui.port -> 8081
  spark.driver.port -> 8001
  spark.master -> yarn-client
  spark.yarn.preserve.staging.files -> true
  spark.yarn.am.port -> 8008
  spark.akka.heartbeat.interval -> 100
  spark.executor.port -> 8002
  spark.master.ui.port -> 8080
  spark.eventlog.enabled -> true
  spark.akka.threads -> 1


Main class:
org.apache.spark.examples.SparkPi
Arguments:
10
System properties:
spark.blockManager.port -> 8007
spark.yarn.access.namenodes -> hdfs://10.10.10.10:8020
spark.broadcast.port -> 8004
spark.authenticate.secret -> fubar
spark.eventlog.dir -> hdfs://10.10.10.10:8020/user/testuser/spark/eventlog
spark.authenticate -> true
spark.executor.instances -> 1
spark.logConf -> true
spark.replClassServer.port -> 8006
spark.history.ui.port -> 18080
spark.fileserver.port -> 8003
SPARK_SUBMIT -> true
spark.ui.port -> 4040
spark.executor.extraJavaOptions -> -XX:+PrintGCDetails -Dkey=value -Dnumbers="one two three"
spark.app.name -> org.apache.spark.examples.SparkPi
spark.driver.extraJavaOptions -> -Djava.net.preferIPv4Stack=true
spark.jars -> file:/home/testuser/spark/lib/spark-examples-1.3.1-hadoop2.6.0.jar
spark.yarn.am.waitTime -> 200000
spark.yarn.am.extraJavaOptions -> -Djava.net.preferIPv4Stack=true
spark.master -> yarn-client
spark.worker.ui.port -> 8081
spark.driver.port -> 8001
spark.yarn.preserve.staging.files -> true
spark.yarn.am.port -> 8008
spark.akka.heartbeat.interval -> 100
spark.executor.port -> 8002
spark.executor.cores -> 1
spark.eventlog.enabled -> true
spark.master.ui.port -> 8080
spark.akka.threads -> 1
Classpath elements:
file:/home/testuser/spark/lib/spark-examples-1.3.1-hadoop2.6.0.jar


15/05/19 19:52:37 INFO spark.SparkContext: Running Spark version 1.3.1
15/05/19 19:52:37 INFO spark.SparkContext: Spark configuration:
spark.akka.heartbeat.interval=100
spark.akka.threads=1
spark.app.name=Spark Pi
spark.authenticate=true
spark.authenticate.secret=fubar
spark.blockManager.port=8007
spark.broadcast.port=8004
spark.driver.extraJavaOptions=-Djava.net.preferIPv4Stack=true
spark.driver.port=8001
spark.eventlog.dir=hdfs://10.10.10.10:8020/user/testuser/spark/eventlog
spark.eventlog.enabled=true
spark.executor.cores=1
spark.executor.extraJavaOptions=-XX:+PrintGCDetails -Dkey=value -Dnumbers="one two three"
spark.executor.instances=1
spark.executor.port=8002
spark.fileserver.port=8003
spark.history.ui.port=18080
spark.jars=file:/home/testuser/spark/lib/spark-examples-1.3.1-hadoop2.6.0.jar
spark.logConf=true
spark.master=yarn-client
spark.master.ui.port=8080
spark.replClassServer.port=8006
spark.ui.port=4040
spark.worker.ui.port=8081
spark.yarn.access.namenodes=hdfs://10.10.10.10:8020
spark.yarn.am.extraJavaOptions=-Djava.net.preferIPv4Stack=true
spark.yarn.am.port=8008
spark.yarn.am.waitTime=200000
spark.yarn.preserve.staging.files=true
15/05/19 19:52:38 WARN util.NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
15/05/19 19:52:39 INFO spark.SecurityManager: Changing view acls to: testuser
15/05/19 19:52:39 INFO spark.SecurityManager: Changing modify acls to: testuser
15/05/19 19:52:39 INFO spark.SecurityManager: adding secret to credentials in yarn mode
15/05/19 19:52:39 INFO spark.SecurityManager: SecurityManager: authentication enabled; ui acls disabled; users with view permissions: Set(testuser); users with modify permissions: Set(testuser)
15/05/19 19:52:40 INFO slf4j.Slf4jLogger: Slf4jLogger started
15/05/19 19:52:41 INFO Remoting: Starting remoting
15/05/19 19:52:41 INFO Remoting: Remoting started; listening on addresses :[akka.tcp://sparkDriver@ip-10-10-127-10.ec2.internal:8001]
15/05/19 19:52:41 INFO util.Utils: Successfully started service 'sparkDriver' on port 8001.
15/05/19 19:52:41 INFO spark.SparkEnv: Registering MapOutputTracker
15/05/19 19:52:41 INFO spark.SparkEnv: Registering BlockManagerMaster
15/05/19 19:52:41 INFO storage.DiskBlockManager: Created local directory at /scratch/spark-17902ac9-b400-4698-97c3-069d804a29e3/blockmgr-d95b3bdf-9c4d-4b48-97a5-4983dd2ab66d
15/05/19 19:52:41 INFO storage.MemoryStore: MemoryStore started with capacity 267.3 MB
15/05/19 19:52:42 INFO spark.HttpFileServer: HTTP File server directory is /scratch/spark-d2fb7948-5ab3-4f22-804c-7485d209bd3e/httpd-91938877-b371-4c0c-ba3d-bb7ee8ec4e09
15/05/19 19:52:42 INFO spark.HttpServer: Starting HTTP Server
15/05/19 19:52:42 INFO server.Server: jetty-8.y.z-SNAPSHOT
15/05/19 19:52:42 INFO server.AbstractConnector: Started SocketConnector@0.0.0.0:8003
15/05/19 19:52:42 INFO util.Utils: Successfully started service 'HTTP file server' on port 8003.
15/05/19 19:52:42 INFO spark.SparkEnv: Registering OutputCommitCoordinator
15/05/19 19:52:42 INFO server.Server: jetty-8.y.z-SNAPSHOT
15/05/19 19:52:42 INFO server.AbstractConnector: Started SelectChannelConnector@0.0.0.0:4040
15/05/19 19:52:42 INFO util.Utils: Successfully started service 'SparkUI' on port 4040.
15/05/19 19:52:42 INFO ui.SparkUI: Started SparkUI at http://ip-10-10-127-10.ec2.internal:4040
15/05/19 19:52:43 INFO spark.SparkContext: Added JAR file:/home/testuser/spark/lib/spark-examples-1.3.1-hadoop2.6.0.jar at http://10.10.127.10:8003/jars/spark-examples-1.3.1-hadoop2.6.0.jar with timestamp 1432065163724
15/05/19 19:52:44 INFO client.RMProxy: Connecting to ResourceManager at ip-10-10-127-10.ec2.internal/10.10.127.10:8032
15/05/19 19:52:45 INFO yarn.Client: Requesting a new application from cluster with 1 NodeManagers
15/05/19 19:52:45 INFO yarn.Client: Verifying our application has not requested more than the maximum memory capability of the cluster (8192 MB per container)
15/05/19 19:52:45 INFO yarn.Client: Will allocate AM container, with 896 MB memory including 384 MB overhead
15/05/19 19:52:45 INFO yarn.Client: Setting up container launch context for our AM
15/05/19 19:52:45 INFO yarn.Client: Preparing resources for our AM container
15/05/19 19:52:46 INFO hdfs.DFSClient: Created HDFS_DELEGATION_TOKEN token 5 for testuser on 10.10.10.10:8020
15/05/19 19:52:46 INFO yarn.Client: Uploading resource file:/home/testuser/spark/lib/spark-assembly-1.3.1-hadoop2.6.0.jar -> hdfs://10.10.10.10:8020/user/testuser/.sparkStaging/application_1432064564266_0003/spark-assembly-1.3.1-hadoop2.6.0.jar
15/05/19 19:52:52 INFO yarn.Client: Setting up the launch environment for our AM container
15/05/19 19:52:52 INFO spark.SecurityManager: Changing view acls to: testuser
15/05/19 19:52:52 INFO spark.SecurityManager: Changing modify acls to: testuser
15/05/19 19:52:52 INFO spark.SecurityManager: SecurityManager: authentication enabled; ui acls disabled; users with view permissions: Set(testuser); users with modify permissions: Set(testuser)
15/05/19 19:52:52 INFO yarn.Client: Submitting application 3 to ResourceManager
15/05/19 19:52:53 INFO impl.YarnClientImpl: Submitted application application_1432064564266_0003
15/05/19 19:52:54 INFO yarn.Client: Application report for application_1432064564266_0003 (state: ACCEPTED)
15/05/19 19:52:54 INFO yarn.Client:
         client token: Token { kind: YARN_CLIENT_TOKEN, service:  }
         diagnostics: N/A
         ApplicationMaster host: N/A
         ApplicationMaster RPC port: -1
         queue: default
         start time: 1432065172758
         final status: UNDEFINED
         tracking URL: https://ip-10-10-127-10.ec2.internal:8090/proxy/application_1432064564266_0003/
         user: testuser
15/05/19 19:52:55 INFO yarn.Client: Application report for application_1432064564266_0003 (state: ACCEPTED)
15/05/19 19:52:56 INFO yarn.Client: Application report for application_1432064564266_0003 (state: ACCEPTED)
15/05/19 19:52:57 INFO yarn.Client: Application report for application_1432064564266_0003 (state: ACCEPTED)
15/05/19 19:52:58 INFO yarn.Client: Application report for application_1432064564266_0003 (state: ACCEPTED)
15/05/19 19:52:59 INFO yarn.Client: Application report for application_1432064564266_0003 (state: ACCEPTED)
15/05/19 19:53:00 INFO yarn.Client: Application report for application_1432064564266_0003 (state: ACCEPTED)
15/05/19 19:53:01 INFO yarn.Client: Application report for application_1432064564266_0003 (state: ACCEPTED)
15/05/19 19:53:02 INFO yarn.Client: Application report for application_1432064564266_0003 (state: ACCEPTED)
15/05/19 19:53:03 INFO yarn.Client: Application report for application_1432064564266_0003 (state: ACCEPTED)
15/05/19 19:53:04 INFO yarn.Client: Application report for application_1432064564266_0003 (state: ACCEPTED)
15/05/19 19:53:05 INFO yarn.Client: Application report for application_1432064564266_0003 (state: ACCEPTED)
15/05/19 19:53:06 INFO yarn.Client: Application report for application_1432064564266_0003 (state: ACCEPTED)
15/05/19 19:53:07 INFO yarn.Client: Application report for application_1432064564266_0003 (state: ACCEPTED)
15/05/19 19:53:08 INFO yarn.Client: Application report for application_1432064564266_0003 (state: ACCEPTED)
15/05/19 19:53:09 INFO yarn.Client: Application report for application_1432064564266_0003 (state: ACCEPTED)
15/05/19 19:53:10 INFO yarn.Client: Application report for application_1432064564266_0003 (state: ACCEPTED)
15/05/19 19:53:11 INFO yarn.Client: Application report for application_1432064564266_0003 (state: ACCEPTED)
15/05/19 19:53:12 INFO yarn.Client: Application report for application_1432064564266_0003 (state: ACCEPTED)
15/05/19 19:53:13 INFO yarn.Client: Application report for application_1432064564266_0003 (state: ACCEPTED)
15/05/19 19:53:14 INFO yarn.Client: Application report for application_1432064564266_0003 (state: ACCEPTED)
15/05/19 19:53:15 INFO yarn.Client: Application report for application_1432064564266_0003 (state: ACCEPTED)
15/05/19 19:53:16 INFO yarn.Client: Application report for application_1432064564266_0003 (state: ACCEPTED)
15/05/19 19:53:17 INFO cluster.YarnClientSchedulerBackend: ApplicationMaster registered as Actor[akka.tcp://sparkYarnAM@ip-10-10-128-10.ec2.internal:56977/user/YarnAM#-1453228800]
15/05/19 19:53:17 INFO cluster.YarnClientSchedulerBackend: Add WebUI Filter. org.apache.hadoop.yarn.server.webproxy.amfilter.AmIpFilter, Map(PROXY_HOSTS -> ip-10-10-127-10.ec2.internal, PROXY_URI_BASES -> https://ip-10-10-127-10.ec2.internal:8090/proxy/application_1432064564266_0003), /proxy/application_1432064564266_0003
15/05/19 19:53:17 INFO ui.JettyUtils: Adding filter: org.apache.hadoop.yarn.server.webproxy.amfilter.AmIpFilter
15/05/19 19:53:17 INFO yarn.Client: Application report for application_1432064564266_0003 (state: RUNNING)
15/05/19 19:53:17 INFO yarn.Client:
         client token: Token { kind: YARN_CLIENT_TOKEN, service:  }
         diagnostics: N/A
         ApplicationMaster host: ip-10-10-128-10.ec2.internal
         ApplicationMaster RPC port: 0
         queue: default
         start time: 1432065172758
         final status: UNDEFINED
         tracking URL: https://ip-10-10-127-10.ec2.internal:8090/proxy/application_1432064564266_0003/
         user: testuser
15/05/19 19:53:17 INFO cluster.YarnClientSchedulerBackend: Application application_1432064564266_0003 has started running.
15/05/19 19:53:17 INFO netty.NettyBlockTransferService: Server created on 8007
15/05/19 19:53:17 INFO storage.BlockManagerMaster: Trying to register BlockManager
15/05/19 19:53:17 INFO storage.BlockManagerMasterActor: Registering block manager ip-10-10-127-10.ec2.internal:8007 with 267.3 MB RAM, BlockManagerId(<driver>, ip-10-10-127-10.ec2.internal, 8007)
15/05/19 19:53:17 INFO storage.BlockManagerMaster: Registered BlockManager
15/05/19 19:53:18 INFO cluster.YarnClientSchedulerBackend: SchedulerBackend is ready for scheduling beginning after waiting maxRegisteredResourcesWaitingTime: 30000(ms)
15/05/19 19:53:18 INFO spark.SparkContext: Starting job: reduce at SparkPi.scala:35
15/05/19 19:53:18 INFO scheduler.DAGScheduler: Got job 0 (reduce at SparkPi.scala:35) with 10 output partitions (allowLocal=false)
15/05/19 19:53:18 INFO scheduler.DAGScheduler: Final stage: Stage 0(reduce at SparkPi.scala:35)
15/05/19 19:53:18 INFO scheduler.DAGScheduler: Parents of final stage: List()
15/05/19 19:53:18 INFO scheduler.DAGScheduler: Missing parents: List()
15/05/19 19:53:18 INFO scheduler.DAGScheduler: Submitting Stage 0 (MapPartitionsRDD[1] at map at SparkPi.scala:31), which has no missing parents
15/05/19 19:53:18 INFO cluster.YarnScheduler: Cancelling stage 0
15/05/19 19:53:18 INFO scheduler.DAGScheduler: Stage 0 (reduce at SparkPi.scala:35) failed in Unknown s
15/05/19 19:53:18 INFO scheduler.DAGScheduler: Job 0 failed: reduce at SparkPi.scala:35, took 0.258029 s
Exception in thread "main" org.apache.spark.SparkException: Job aborted due to stage failure: Task serialization failed: java.lang.reflect.InvocationTargetException
sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:57)
sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
java.lang.reflect.Constructor.newInstance(Constructor.java:526)
org.apache.spark.io.CompressionCodec$.createCodec(CompressionCodec.scala:68)
org.apache.spark.io.CompressionCodec$.createCodec(CompressionCodec.scala:60)
org.apache.spark.broadcast.TorrentBroadcast.org$apache$spark$broadcast$TorrentBroadcast$$setConf(TorrentBroadcast.scala:73)
org.apache.spark.broadcast.TorrentBroadcast.<init>(TorrentBroadcast.scala:79)
org.apache.spark.broadcast.TorrentBroadcastFactory.newBroadcast(TorrentBroadcastFactory.scala:34)
org.apache.spark.broadcast.TorrentBroadcastFactory.newBroadcast(TorrentBroadcastFactory.scala:29)
org.apache.spark.broadcast.BroadcastManager.newBroadcast(BroadcastManager.scala:62)
org.apache.spark.SparkContext.broadcast(SparkContext.scala:1051)
org.apache.spark.scheduler.DAGScheduler.org$apache$spark$scheduler$DAGScheduler$$submitMissingTasks(DAGScheduler.scala:839)
org.apache.spark.scheduler.DAGScheduler.org$apache$spark$scheduler$DAGScheduler$$submitStage(DAGScheduler.scala:778)
org.apache.spark.scheduler.DAGScheduler.handleJobSubmitted(DAGScheduler.scala:762)
org.apache.spark.scheduler.DAGSchedulerEventProcessLoop.onReceive(DAGScheduler.scala:1362)
org.apache.spark.scheduler.DAGSchedulerEventProcessLoop.onReceive(DAGScheduler.scala:1354)
org.apache.spark.util.EventLoop$$anon$1.run(EventLoop.scala:48)

        at org.apache.spark.scheduler.DAGScheduler.org$apache$spark$scheduler$DAGScheduler$$failJobAndIndependentStages(DAGScheduler.scala:1204)
        at org.apache.spark.scheduler.DAGScheduler$$anonfun$abortStage$1.apply(DAGScheduler.scala:1193)
        at org.apache.spark.scheduler.DAGScheduler$$anonfun$abortStage$1.apply(DAGScheduler.scala:1192)
        at scala.collection.mutable.ResizableArray$class.foreach(ResizableArray.scala:59)
        at scala.collection.mutable.ArrayBuffer.foreach(ArrayBuffer.scala:47)
        at org.apache.spark.scheduler.DAGScheduler.abortStage(DAGScheduler.scala:1192)
        at org.apache.spark.scheduler.DAGScheduler.org$apache$spark$scheduler$DAGScheduler$$submitMissingTasks(DAGScheduler.scala:847)
        at org.apache.spark.scheduler.DAGScheduler.org$apache$spark$scheduler$DAGScheduler$$submitStage(DAGScheduler.scala:778)
        at org.apache.spark.scheduler.DAGScheduler.handleJobSubmitted(DAGScheduler.scala:762)
        at org.apache.spark.scheduler.DAGSchedulerEventProcessLoop.onReceive(DAGScheduler.scala:1362)
        at org.apache.spark.scheduler.DAGSchedulerEventProcessLoop.onReceive(DAGScheduler.scala:1354)
        at org.apache.spark.util.EventLoop$$anon$1.run(EventLoop.scala:48)
[testuser@ip-10-10-127-10 spark]$



###: SPARK JOB USERLOG RESULTS :###
###################################
[yarn@ip-10-10-128-10 container_1432064564266_0003_01_000001]$ cat stderr
Picked up _JAVA_OPTIONS: -Djava.net.preferIPv4Stack=true
SLF4J: Class path contains multiple SLF4J bindings.
SLF4J: Found binding in [jar:file:/var/hadoop/tmp/yarn/nm-local-dir/usercache/testuser/filecache/13/spark-assembly-1.3.1-hadoop2.6.0.jar!/org/slf4j/impl/StaticLoggerBinder.class]
SLF4J: Found binding in [jar:file:/var/hadoop/share/hadoop/common/lib/slf4j-log4j12-1.7.5.jar!/org/slf4j/impl/StaticLoggerBinder.class]
SLF4J: See http://www.slf4j.org/codes.html#multiple_bindings for an explanation.
SLF4J: Actual binding is of type [org.slf4j.impl.Log4jLoggerFactory]
15/05/19 19:53:09 INFO yarn.ApplicationMaster: Registered signal handlers for [TERM, HUP, INT]
15/05/19 19:53:11 WARN util.NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
15/05/19 19:53:12 INFO yarn.ApplicationMaster: ApplicationAttemptId: appattempt_1432064564266_0003_000001
15/05/19 19:53:14 INFO spark.SecurityManager: Changing view acls to: testuser
15/05/19 19:53:14 INFO spark.SecurityManager: Changing modify acls to: testuser
15/05/19 19:53:14 INFO spark.SecurityManager: SecurityManager: authentication enabled; ui acls disabled; users with view permissions: Set(testuser); users with modify permissions: Set(testuser)
15/05/19 19:53:15 INFO slf4j.Slf4jLogger: Slf4jLogger started
15/05/19 19:53:15 INFO Remoting: Starting remoting
15/05/19 19:53:16 INFO Remoting: Remoting started; listening on addresses :[akka.tcp://sparkYarnAM@ip-10-10-128-10.ec2.internal:56977]
15/05/19 19:53:16 INFO util.Utils: Successfully started service 'sparkYarnAM' on port 56977.
15/05/19 19:53:16 INFO yarn.ApplicationMaster: Waiting for Spark driver to be reachable.
15/05/19 19:53:16 INFO yarn.ApplicationMaster: Driver now available: ip-10-10-127-10.ec2.internal:8001
15/05/19 19:53:16 INFO yarn.ApplicationMaster: Listen to driver: akka.tcp://sparkDriver@ip-10-10-127-10.ec2.internal:8001/user/YarnScheduler
15/05/19 19:53:16 INFO yarn.ApplicationMaster: Add WebUI Filter. AddWebUIFilter(org.apache.hadoop.yarn.server.webproxy.amfilter.AmIpFilter,Map(PROXY_HOSTS -> ip-10-10-127-10.ec2.internal, PROXY_URI_BASES -> https://ip-10-10-127-10.ec2.internal:8090/proxy/application_1432064564266_0003),/proxy/application_1432064564266_0003)
15/05/19 19:53:16 INFO client.RMProxy: Connecting to ResourceManager at /10.10.127.10:8030
15/05/19 19:53:16 INFO yarn.YarnRMClient: Registering the ApplicationMaster
15/05/19 19:53:17 INFO yarn.YarnAllocator: Will request 1 executor containers, each with 1 cores and 1408 MB memory including 384 MB overhead
15/05/19 19:53:17 INFO yarn.YarnAllocator: Container request (host: Any, capability: <memory:1408, vCores:1>)
15/05/19 19:53:17 INFO yarn.ApplicationMaster: Started progress reporter thread - sleep time : 5000
15/05/19 19:53:18 INFO yarn.ApplicationMaster: Driver terminated or disconnected! Shutting down. Disassociated [akka.tcp://sparkYarnAM@ip-10-10-128-10.ec2.internal:56977] -> [akka.tcp://sparkDriver@ip-10-10-127-10.ec2.internal:8001]
15/05/19 19:53:18 INFO yarn.ApplicationMaster: Final app status: SUCCEEDED, exitCode: 0
15/05/19 19:53:18 WARN remote.ReliableDeliverySupervisor: Association with remote system [akka.tcp://sparkDriver@ip-10-10-127-10.ec2.internal:8001] has failed, address is now gated for [5000] ms. Reason is: [Disassociated].
15/05/19 19:53:18 INFO yarn.ApplicationMaster: Driver terminated or disconnected! Shutting down. Disassociated [akka.tcp://sparkYarnAM@ip-10-10-128-10.ec2.internal:56977] -> [akka.tcp://sparkDriver@ip-10-10-127-10.ec2.internal:8001]
15/05/19 19:53:18 INFO yarn.ApplicationMaster: Unregistering ApplicationMaster with SUCCEEDED
15/05/19 19:53:18 INFO impl.AMRMClientImpl: Waiting for application to be successfully unregistered.
[yarn@ip-10-10-128-10 container_1432064564266_0003_01_000001]$



###: YARN NODEMANAGER LOG RESULTS :###
######################################
2015-05-19 19:52:53,729 INFO SecurityLogger.org.apache.hadoop.ipc.Server: Auth successful for appattempt_1432064564266_0003_000001 (auth:SIMPLE)
2015-05-19 19:52:53,746 WARN org.apache.hadoop.security.ShellBasedUnixGroupsMapping: got exception trying to get groups for user appattempt_1432064564266_0003_000001: id: appattempt_1432064564266_0003_000001: no such user

2015-05-19 19:52:53,746 WARN org.apache.hadoop.security.UserGroupInformation: No groups available for user appattempt_1432064564266_0003_000001
2015-05-19 19:52:53,746 INFO SecurityLogger.org.apache.hadoop.security.authorize.ServiceAuthorizationManager: Authorization successful for appattempt_1432064564266_0003_000001 (auth:TOKEN) for protocol=interface org.apache.hadoop.yarn.api.ContainerManagementProtocolPB
2015-05-19 19:52:53,747 INFO org.apache.hadoop.yarn.server.nodemanager.containermanager.ContainerManagerImpl: Start request for container_1432064564266_0003_01_000001 by user testuser
2015-05-19 19:52:53,747 INFO org.apache.hadoop.yarn.server.nodemanager.containermanager.ContainerManagerImpl: Creating a new application reference for app application_1432064564266_0003
2015-05-19 19:52:53,747 INFO org.apache.hadoop.yarn.server.nodemanager.NMAuditLogger: USER=testuser     IP=10.10.127.10 OPERATION=Start Container Request       TARGET=ContainerManageImplRESULT=SUCCESS   APPID=application_1432064564266_0003    CONTAINERID=container_1432064564266_0003_01_000001
2015-05-19 19:52:53,748 INFO org.apache.hadoop.yarn.server.nodemanager.containermanager.application.Application: Application application_1432064564266_0003 transitioned from NEW to INITING
2015-05-19 19:52:53,748 INFO org.apache.hadoop.yarn.server.nodemanager.containermanager.application.Application: Adding container_1432064564266_0003_01_000001 to application application_1432064564266_0003
2015-05-19 19:52:53,748 INFO org.apache.hadoop.yarn.server.nodemanager.containermanager.application.Application: Application application_1432064564266_0003 transitioned from INITING to RUNNING
2015-05-19 19:52:53,748 INFO org.apache.hadoop.yarn.server.nodemanager.containermanager.container.Container: Container container_1432064564266_0003_01_000001 transitioned from NEW to LOCALIZING
2015-05-19 19:52:53,748 INFO org.apache.hadoop.yarn.server.nodemanager.containermanager.AuxServices: Got event CONTAINER_INIT for appId application_1432064564266_0003
2015-05-19 19:52:53,748 INFO org.apache.hadoop.yarn.server.nodemanager.containermanager.localizer.LocalizedResource: Resource hdfs://10.10.10.10:8020/user/testuser/.sparkStaging/application_1432064564266_0003/spark-assembly-1.3.1-hadoop2.6.0.jar transitioned from INIT to DOWNLOADING
2015-05-19 19:52:53,748 INFO org.apache.hadoop.yarn.server.nodemanager.containermanager.localizer.ResourceLocalizationService: Created localizer for container_1432064564266_0003_01_000001
2015-05-19 19:52:53,763 INFO org.apache.hadoop.yarn.server.nodemanager.containermanager.localizer.ResourceLocalizationService: Writing credentials to the nmPrivate file /var/hadoop/tmp/yarn/nm-local-dir/nmPrivate/container_1432064564266_0003_01_000001.tokens. Credentials list:
2015-05-19 19:52:58,614 INFO SecurityLogger.org.apache.hadoop.ipc.Server: Auth successful for testing (auth:SIMPLE)
2015-05-19 19:52:58,685 INFO SecurityLogger.org.apache.hadoop.security.authorize.ServiceAuthorizationManager: Authorization successful for testing (auth:TOKEN) for protocol=interface org.apache.hadoop.yarn.server.nodemanager.api.LocalizationProtocolPB
2015-05-19 19:53:08,133 INFO org.apache.hadoop.yarn.server.nodemanager.containermanager.localizer.LocalizedResource: Resource hdfs://10.10.10.10:8020/user/testuser/.sparkStaging/application_1432064564266_0003/spark-assembly-1.3.1-hadoop2.6.0.jar(->/var/hadoop/tmp/yarn/nm-local-dir/usercache/testuser/filecache/13/spark-assembly-1.3.1-hadoop2.6.0.jar) transitioned from DOWNLOADING to LOCALIZED
2015-05-19 19:53:08,134 INFO org.apache.hadoop.yarn.server.nodemanager.containermanager.container.Container: Container container_1432064564266_0003_01_000001 transitioned from LOCALIZING to LOCALIZED
2015-05-19 19:53:08,217 INFO org.apache.hadoop.yarn.server.nodemanager.containermanager.container.Container: Container container_1432064564266_0003_01_000001 transitioned from LOCALIZED to RUNNING
2015-05-19 19:53:10,044 INFO org.apache.hadoop.yarn.server.nodemanager.containermanager.monitor.ContainersMonitorImpl: Starting resource-monitoring for container_1432064564266_0003_01_000001
2015-05-19 19:53:10,092 INFO org.apache.hadoop.yarn.server.nodemanager.containermanager.monitor.ContainersMonitorImpl: Memory usage of ProcessTree 20620 for container-id container_1432064564266_0003_01_000001: 79.5 MB of 1 GB physical memory used; 1.1 GB of 2.1 GB virtual memory used
2015-05-19 19:53:13,105 INFO org.apache.hadoop.yarn.server.nodemanager.containermanager.monitor.ContainersMonitorImpl: Memory usage of ProcessTree 20620 for container-id container_1432064564266_0003_01_000001: 99.8 MB of 1 GB physical memory used; 1.1 GB of 2.1 GB virtual memory used
2015-05-19 19:53:16,122 INFO org.apache.hadoop.yarn.server.nodemanager.containermanager.monitor.ContainersMonitorImpl: Memory usage of ProcessTree 20620 for container-id container_1432064564266_0003_01_000001: 121.4 MB of 1 GB physical memory used; 1.1 GB of 2.1 GB virtual memory used
2015-05-19 19:53:19,129 INFO org.apache.hadoop.yarn.server.nodemanager.containermanager.monitor.ContainersMonitorImpl: Memory usage of ProcessTree 20620 for container-id container_1432064564266_0003_01_000001: 125.8 MB of 1 GB physical memory used; 1.1 GB of 2.1 GB virtual memory used
2015-05-19 19:53:19,402 INFO org.apache.hadoop.yarn.server.nodemanager.containermanager.launcher.ContainerLaunch: Container container_1432064564266_0003_01_000001 succeeded
2015-05-19 19:53:19,402 INFO org.apache.hadoop.yarn.server.nodemanager.containermanager.container.Container: Container container_1432064564266_0003_01_000001 transitioned from RUNNING to EXITED_WITH_SUCCESS
2015-05-19 19:53:19,402 INFO org.apache.hadoop.yarn.server.nodemanager.containermanager.launcher.ContainerLaunch: Cleaning up container container_1432064564266_0003_01_000001
2015-05-19 19:53:19,457 INFO org.apache.hadoop.yarn.server.nodemanager.NMAuditLogger: USER=testuser     OPERATION=Container Finished - Succeeded        TARGET=ContainerImpl    RESULT=SUCCESS     APPID=application_1432064564266_0003    CONTAINERID=container_1432064564266_0003_01_000001
2015-05-19 19:53:19,457 INFO org.apache.hadoop.yarn.server.nodemanager.containermanager.container.Container: Container container_1432064564266_0003_01_000001 transitioned from EXITED_WITH_SUCCESS to DONE
2015-05-19 19:53:19,457 INFO org.apache.hadoop.yarn.server.nodemanager.containermanager.application.Application: Removing container_1432064564266_0003_01_000001 from application application_1432064564266_0003
2015-05-19 19:53:19,458 INFO org.apache.hadoop.yarn.server.nodemanager.containermanager.AuxServices: Got event CONTAINER_STOP for appId application_1432064564266_0003
2015-05-19 19:53:19,458 INFO org.apache.hadoop.yarn.server.nodemanager.LinuxContainerExecutor: Deleting absolute path : /var/hadoop/tmp/yarn/nm-local-dir/usercache/testuser/appcache/application_1432064564266_0003/container_1432064564266_0003_01_000001
2015-05-19 19:53:20,428 INFO SecurityLogger.org.apache.hadoop.ipc.Server: Auth successful for appattempt_1432064564266_0003_000001 (auth:SIMPLE)
2015-05-19 19:53:20,432 WARN org.apache.hadoop.security.UserGroupInformation: No groups available for user appattempt_1432064564266_0003_000001
2015-05-19 19:53:20,432 INFO SecurityLogger.org.apache.hadoop.security.authorize.ServiceAuthorizationManager: Authorization successful for appattempt_1432064564266_0003_000001 (auth:TOKEN) for protocol=interface org.apache.hadoop.yarn.api.ContainerManagementProtocolPB
2015-05-19 19:53:20,432 INFO org.apache.hadoop.yarn.server.nodemanager.containermanager.ContainerManagerImpl: Stopping container with container Id: container_1432064564266_0003_01_000001
2015-05-19 19:53:20,432 INFO org.apache.hadoop.yarn.server.nodemanager.NMAuditLogger: USER=testuser     IP=10.10.127.10 OPERATION=Stop Container Request        TARGET=ContainerManageImplRESULT=SUCCESS   APPID=application_1432064564266_0003    CONTAINERID=container_1432064564266_0003_01_000001
2015-05-19 19:53:20,442 INFO org.apache.hadoop.yarn.server.nodemanager.NodeStatusUpdaterImpl: Removed completed containers from NM context: [container_1432064564266_0003_01_000001]
2015-05-19 19:53:20,442 INFO org.apache.hadoop.yarn.server.nodemanager.containermanager.application.Application: Application application_1432064564266_0003 transitioned from RUNNING to APPLICATION_RESOURCES_CLEANINGUP
2015-05-19 19:53:20,442 INFO org.apache.hadoop.yarn.server.nodemanager.containermanager.AuxServices: Got event APPLICATION_STOP for appId application_1432064564266_0003
2015-05-19 19:53:20,442 INFO org.apache.hadoop.yarn.server.nodemanager.containermanager.application.Application: Application application_1432064564266_0003 transitioned from APPLICATION_RESOURCES_CLEANINGUP to FINISHED
2015-05-19 19:53:20,443 INFO org.apache.hadoop.yarn.server.nodemanager.containermanager.loghandler.NonAggregatingLogHandler: Scheduling Log Deletion for application: application_1432064564266_0003, with delay of 10800 seconds
2015-05-19 19:53:20,443 INFO org.apache.hadoop.yarn.server.nodemanager.LinuxContainerExecutor: Deleting absolute path : /var/hadoop/tmp/yarn/nm-local-dir/usercache/testuser/appcache/application_1432064564266_0003
2015-05-19 19:53:22,130 INFO org.apache.hadoop.yarn.server.nodemanager.containermanager.monitor.ContainersMonitorImpl: Stopping resource-monitoring for container_1432064564266_0003_01_000001
root@ip-10-10-128-10:/var/log/hadoop>


###: YARN RESOURCE MANAGER LOGS :###
####################################
2015-05-19 19:52:45,408 INFO SecurityLogger.org.apache.hadoop.ipc.Server: Auth successful for testuser@MALARD.LOCAL (auth:KERBEROS)
2015-05-19 19:52:45,447 INFO SecurityLogger.org.apache.hadoop.security.authorize.ServiceAuthorizationManager: Authorization successful for testuser@MALARD.LOCAL (auth:KERBEROS) for protocol=interface org.apache.hadoop.yarn.api.ApplicationClientProtocolPB
2015-05-19 19:52:45,494 INFO org.apache.hadoop.yarn.server.resourcemanager.ClientRMService: Allocated new applicationId: 3
2015-05-19 19:52:52,758 WARN org.apache.hadoop.yarn.server.resourcemanager.rmapp.RMAppImpl: The specific max attempts: 0 for application: 3 is invalid, because it is out of the range [1, 2]. Use the global max attempts instead.
2015-05-19 19:52:52,758 INFO org.apache.hadoop.yarn.server.resourcemanager.ClientRMService: Application with id 3 submitted by user testuser
2015-05-19 19:52:52,758 INFO org.apache.hadoop.yarn.server.resourcemanager.RMAuditLogger: USER=testuser IP=10.10.127.10 OPERATION=Submit Application Request    TARGET=ClientRMService  RESULT=SUCCESS     APPID=application_1432064564266_0003
2015-05-19 19:52:52,803 INFO org.apache.hadoop.yarn.server.resourcemanager.security.DelegationTokenRenewer: application_1432064564266_0003 found existing hdfs token Kind: HDFS_DELEGATION_TOKEN, Service: 10.10.10.10:8020, Ident: (HDFS_DELEGATION_TOKEN token 5 for testuser)
2015-05-19 19:52:52,928 INFO org.apache.hadoop.yarn.server.resourcemanager.security.DelegationTokenRenewer: Renewed delegation-token= [Kind: HDFS_DELEGATION_TOKEN, Service: 10.10.10.10:8020, Ident: (HDFS_DELEGATION_TOKEN token 5 for testuser);exp=1432151572908], for application_1432064564266_0003
2015-05-19 19:52:52,928 INFO org.apache.hadoop.yarn.server.resourcemanager.security.DelegationTokenRenewer: Renew Kind: HDFS_DELEGATION_TOKEN, Service: 10.10.10.10:8020, Ident: (HDFS_DELEGATION_TOKEN token 5 for testuser);exp=1432151572908 in 86399980 ms, appId = application_1432064564266_0003
2015-05-19 19:52:52,928 INFO org.apache.hadoop.yarn.server.resourcemanager.rmapp.RMAppImpl: Storing application with id application_1432064564266_0003
2015-05-19 19:52:52,928 INFO org.apache.hadoop.yarn.server.resourcemanager.rmapp.RMAppImpl: application_1432064564266_0003 State change from NEW to NEW_SAVING
2015-05-19 19:52:52,928 INFO org.apache.hadoop.yarn.server.resourcemanager.recovery.RMStateStore: Storing info for app: application_1432064564266_0003
2015-05-19 19:52:52,928 INFO org.apache.hadoop.yarn.server.resourcemanager.rmapp.RMAppImpl: application_1432064564266_0003 State change from NEW_SAVING to SUBMITTED
2015-05-19 19:52:52,929 INFO org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.ParentQueue: Application added - appId: application_1432064564266_0003 user: testuser leaf-queue of parent: root #applications: 1
2015-05-19 19:52:52,929 INFO org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.CapacityScheduler: Accepted application application_1432064564266_0003 from user: testuser, in queue: default
2015-05-19 19:52:52,929 INFO org.apache.hadoop.yarn.server.resourcemanager.rmapp.RMAppImpl: application_1432064564266_0003 State change from SUBMITTED to ACCEPTED
2015-05-19 19:52:52,929 INFO org.apache.hadoop.yarn.server.resourcemanager.ApplicationMasterService: Registering app attempt : appattempt_1432064564266_0003_000001
2015-05-19 19:52:52,929 INFO org.apache.hadoop.yarn.server.resourcemanager.rmapp.attempt.RMAppAttemptImpl: appattempt_1432064564266_0003_000001 State change from NEW to SUBMITTED
2015-05-19 19:52:52,929 INFO org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.LeafQueue: Application application_1432064564266_0003 from user: testuser activated in queue: default
2015-05-19 19:52:52,929 INFO org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.LeafQueue: Application added - appId: application_1432064564266_0003 user: org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.LeafQueue$User@72fa0d32, leaf-queue: default #user-pending-applications: 0 #user-active-applications: 1 #queue-pending-applications: 0 #queue-active-applications: 1
2015-05-19 19:52:52,929 INFO org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.CapacityScheduler: Added Application Attempt appattempt_1432064564266_0003_000001 to scheduler from user testuser in queue default
2015-05-19 19:52:52,930 INFO org.apache.hadoop.yarn.server.resourcemanager.rmapp.attempt.RMAppAttemptImpl: appattempt_1432064564266_0003_000001 State change from SUBMITTED to SCHEDULED
2015-05-19 19:52:53,718 INFO org.apache.hadoop.yarn.server.resourcemanager.rmcontainer.RMContainerImpl: container_1432064564266_0003_01_000001 Container Transitioned from NEW to ALLOCATED
2015-05-19 19:52:53,718 INFO org.apache.hadoop.yarn.server.resourcemanager.RMAuditLogger: USER=testuser OPERATION=AM Allocated Container        TARGET=SchedulerApp     RESULT=SUCCESS  APPID=application_1432064564266_0003       CONTAINERID=container_1432064564266_0003_01_000001
2015-05-19 19:52:53,718 INFO org.apache.hadoop.yarn.server.resourcemanager.scheduler.SchedulerNode: Assigned container container_1432064564266_0003_01_000001 of capacity <memory:1024, vCores:1> on host ip-10-10-128-10.ec2.internal:9032, which has 1 containers, <memory:1024, vCores:1> used and <memory:7168, vCores:7> available after allocation
2015-05-19 19:52:53,718 INFO org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.LeafQueue: assignedContainer application attempt=appattempt_1432064564266_0003_000001 container=Container: [ContainerId: container_1432064564266_0003_01_000001, NodeId: ip-10-10-128-10.ec2.internal:9032, NodeHttpAddress: ip-10-10-128-10.ec2.internal:8090, Resource: <memory:1024, vCores:1>, Priority: 0, Token: null, ] queue=default: capacity=1.0, absoluteCapacity=1.0, usedResources=<memory:0, vCores:0>, usedCapacity=0.0, absoluteUsedCapacity=0.0, numApps=1, numContainers=0 clusterResource=<memory:8192, vCores:8>
2015-05-19 19:52:53,718 INFO org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.ParentQueue: Re-sorting assigned queue: root.default stats: default: capacity=1.0, absoluteCapacity=1.0, usedResources=<memory:1024, vCores:1>, usedCapacity=0.125, absoluteUsedCapacity=0.125, numApps=1, numContainers=1
2015-05-19 19:52:53,718 INFO org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.ParentQueue: assignedContainer queue=root usedCapacity=0.125 absoluteUsedCapacity=0.125 used=<memory:1024, vCores:1> cluster=<memory:8192, vCores:8>
2015-05-19 19:52:53,719 INFO org.apache.hadoop.yarn.server.resourcemanager.security.NMTokenSecretManagerInRM: Sending NMToken for nodeId : ip-10-10-128-10.ec2.internal:9032 for container : container_1432064564266_0003_01_000001
2015-05-19 19:52:53,720 INFO org.apache.hadoop.yarn.server.resourcemanager.rmcontainer.RMContainerImpl: container_1432064564266_0003_01_000001 Container Transitioned from ALLOCATED to ACQUIRED
2015-05-19 19:52:53,720 INFO org.apache.hadoop.yarn.server.resourcemanager.security.NMTokenSecretManagerInRM: Clear node set for appattempt_1432064564266_0003_000001
2015-05-19 19:52:53,721 INFO org.apache.hadoop.yarn.server.resourcemanager.rmapp.attempt.RMAppAttemptImpl: Storing attempt: AppId: application_1432064564266_0003 AttemptId: appattempt_1432064564266_0003_000001 MasterContainer: Container: [ContainerId: container_1432064564266_0003_01_000001, NodeId: ip-10-10-128-10.ec2.internal:9032, NodeHttpAddress: ip-10-10-128-10.ec2.internal:8090, Resource: <memory:1024, vCores:1>, Priority: 0, Token: Token { kind: ContainerToken, service: 10.10.128.10:9032 }, ]
2015-05-19 19:52:53,721 INFO org.apache.hadoop.yarn.server.resourcemanager.rmapp.attempt.RMAppAttemptImpl: appattempt_1432064564266_0003_000001 State change from SCHEDULED to ALLOCATED_SAVING
2015-05-19 19:52:53,721 INFO org.apache.hadoop.yarn.server.resourcemanager.rmapp.attempt.RMAppAttemptImpl: appattempt_1432064564266_0003_000001 State change from ALLOCATED_SAVING to ALLOCATED
2015-05-19 19:52:53,723 INFO org.apache.hadoop.yarn.server.resourcemanager.amlauncher.AMLauncher: Launching masterappattempt_1432064564266_0003_000001
2015-05-19 19:52:53,725 INFO org.apache.hadoop.yarn.server.resourcemanager.amlauncher.AMLauncher: Setting up container Container: [ContainerId: container_1432064564266_0003_01_000001, NodeId: ip-10-10-128-10.ec2.internal:9032, NodeHttpAddress: ip-10-10-128-10.ec2.internal:8090, Resource: <memory:1024, vCores:1>, Priority: 0, Token: Token { kind: ContainerToken, service: 10.10.128.10:9032 }, ] for AM appattempt_1432064564266_0003_000001
2015-05-19 19:52:53,725 INFO org.apache.hadoop.yarn.server.resourcemanager.amlauncher.AMLauncher: Command to launch container container_1432064564266_0003_01_000001 : {{JAVA_HOME}}/bin/java,-server,-Xmx512m,-Djava.io.tmpdir={{PWD}}/tmp,'-Dspark.fileserver.uri=http://10.10.127.10:8003','-Dspark.broadcast.port=8004','-Dspark.executor.port=8002','-Dspark.driver.appUIAddress=http://ip-10-10-127-10.ec2.internal:4040','-Dspark.fileserver.port=8003','-Dspark.ui.port=4040','-Dspark.tachyonStore.folderName=spark-ea0a49a2-1643-4410-892d-690c62cb6857','-Dspark.driver.extraJavaOptions=-Djava.net.preferIPv4Stack=true','-Dspark.jars=file:/home/testuser/spark/lib/spark-examples-1.3.1-hadoop2.6.0.jar','-Dspark.app.name=Spark Pi','-Dspark.akka.threads=1','-Dspark.authenticate.secret=fubar','-Dspark.eventlog.enabled=true','-Dspark.driver.host=ip-10-10-127-10.ec2.internal','-Dspark.blockManager.port=8007','-Dspark.history.ui.port=18080','-Dspark.replClassServer.port=8006','-Dspark.worker.ui.port=8081','-Dspark.master=yarn-client','-Dspark.yarn.preserve.staging.files=true','-Dspark.executor.extraJavaOptions=-XX:+PrintGCDetails -Dkey=value -Dnumbers=\"one two three\"','-Dspark.yarn.access.namenodes=hdfs://10.10.10.10:8020','-Dspark.logConf=true','-Dspark.driver.port=8001','-Dspark.executor.id=<driver>','-Dspark.akka.heartbeat.interval=100','-Dspark.master.ui.port=8080','-Dspark.yarn.am.waitTime=200000','-Dspark.yarn.am.extraJavaOptions=-Djava.net.preferIPv4Stack=true','-Dspark.executor.instances=1','-Dspark.yarn.am.port=8008','-Dspark.eventlog.dir=hdfs://10.10.10.10:8020/user/testuser/spark/eventlog','-Dspark.executor.cores=1','-Dspark.authenticate=true','-Djava.net.preferIPv4Stack=true',-Dspark.yarn.app.container.log.dir=<LOG_DIR>,org.apache.spark.deploy.yarn.ExecutorLauncher,--arg,'ip-10-10-127-10.ec2.internal:8001',--executor-memory,1024m,--executor-cores,1,--num-executors ,1,1>,<LOG_DIR>/stdout,2>,<LOG_DIR>/stderr
2015-05-19 19:52:53,725 INFO org.apache.hadoop.yarn.server.resourcemanager.security.AMRMTokenSecretManager: Create AMRMToken for ApplicationAttempt: appattempt_1432064564266_0003_000001
2015-05-19 19:52:53,725 INFO org.apache.hadoop.yarn.server.resourcemanager.security.AMRMTokenSecretManager: Creating password for appattempt_1432064564266_0003_000001
2015-05-19 19:52:53,750 INFO org.apache.hadoop.yarn.server.resourcemanager.amlauncher.AMLauncher: Done launching container Container: [ContainerId: container_1432064564266_0003_01_000001, NodeId: ip-10-10-128-10.ec2.internal:9032, NodeHttpAddress: ip-10-10-128-10.ec2.internal:8090, Resource: <memory:1024, vCores:1>, Priority: 0, Token: Token { kind: ContainerToken, service: 10.10.128.10:9032 }, ] for AM appattempt_1432064564266_0003_000001
2015-05-19 19:52:53,750 INFO org.apache.hadoop.yarn.server.resourcemanager.rmapp.attempt.RMAppAttemptImpl: appattempt_1432064564266_0003_000001 State change from ALLOCATED to LAUNCHED
2015-05-19 19:52:54,733 INFO org.apache.hadoop.yarn.server.resourcemanager.rmcontainer.RMContainerImpl: container_1432064564266_0003_01_000001 Container Transitioned from ACQUIRED to RUNNING
2015-05-19 19:53:17,318 INFO SecurityLogger.org.apache.hadoop.ipc.Server: Auth successful for appattempt_1432064564266_0003_000001 (auth:SIMPLE)
2015-05-19 19:53:17,376 WARN org.apache.hadoop.security.ShellBasedUnixGroupsMapping: got exception trying to get groups for user appattempt_1432064564266_0003_000001: id: appattempt_1432064564266_0003_000001: no such user

2015-05-19 19:53:17,376 WARN org.apache.hadoop.security.UserGroupInformation: No groups available for user appattempt_1432064564266_0003_000001
2015-05-19 19:53:17,376 INFO SecurityLogger.org.apache.hadoop.security.authorize.ServiceAuthorizationManager: Authorization successful for appattempt_1432064564266_0003_000001 (auth:TOKEN) for protocol=interface org.apache.hadoop.yarn.api.ApplicationMasterProtocolPB
2015-05-19 19:53:17,377 INFO org.apache.hadoop.yarn.server.resourcemanager.ApplicationMasterService: AM registration appattempt_1432064564266_0003_000001
2015-05-19 19:53:17,377 INFO org.apache.hadoop.yarn.server.resourcemanager.RMAuditLogger: USER=testuser IP=10.10.128.10 OPERATION=Register App Master   TARGET=ApplicationMasterService RESULT=SUCCESS     APPID=application_1432064564266_0003    APPATTEMPTID=appattempt_1432064564266_0003_000001
2015-05-19 19:53:17,377 INFO org.apache.hadoop.yarn.server.resourcemanager.ApplicationMasterService: Setting client token master key
2015-05-19 19:53:17,378 INFO org.apache.hadoop.yarn.server.resourcemanager.rmapp.attempt.RMAppAttemptImpl: appattempt_1432064564266_0003_000001 State change from LAUNCHED to RUNNING
2015-05-19 19:53:17,378 INFO org.apache.hadoop.yarn.server.resourcemanager.rmapp.RMAppImpl: application_1432064564266_0003 State change from ACCEPTED to RUNNING
2015-05-19 19:53:18,377 INFO org.apache.hadoop.yarn.server.resourcemanager.rmcontainer.RMContainerImpl: container_1432064564266_0003_01_000002 Container Transitioned from NEW to ALLOCATED
2015-05-19 19:53:18,377 INFO org.apache.hadoop.yarn.server.resourcemanager.RMAuditLogger: USER=testuser OPERATION=AM Allocated Container        TARGET=SchedulerApp     RESULT=SUCCESS  APPID=application_1432064564266_0003       CONTAINERID=container_1432064564266_0003_01_000002
2015-05-19 19:53:18,377 INFO org.apache.hadoop.yarn.server.resourcemanager.scheduler.SchedulerNode: Assigned container container_1432064564266_0003_01_000002 of capacity <memory:2048, vCores:1> on host ip-10-10-128-10.ec2.internal:9032, which has 2 containers, <memory:3072, vCores:2> used and <memory:5120, vCores:6> available after allocation
2015-05-19 19:53:18,377 INFO org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.LeafQueue: assignedContainer application attempt=appattempt_1432064564266_0003_000001 container=Container: [ContainerId: container_1432064564266_0003_01_000002, NodeId: ip-10-10-128-10.ec2.internal:9032, NodeHttpAddress: ip-10-10-128-10.ec2.internal:8090, Resource: <memory:2048, vCores:1>, Priority: 1, Token: null, ] queue=default: capacity=1.0, absoluteCapacity=1.0, usedResources=<memory:1024, vCores:1>, usedCapacity=0.125, absoluteUsedCapacity=0.125, numApps=1, numContainers=1 clusterResource=<memory:8192, vCores:8>
2015-05-19 19:53:18,377 INFO org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.ParentQueue: Re-sorting assigned queue: root.default stats: default: capacity=1.0, absoluteCapacity=1.0, usedResources=<memory:3072, vCores:2>, usedCapacity=0.375, absoluteUsedCapacity=0.375, numApps=1, numContainers=2
2015-05-19 19:53:18,377 INFO org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.ParentQueue: assignedContainer queue=root usedCapacity=0.375 absoluteUsedCapacity=0.375 used=<memory:3072, vCores:2> cluster=<memory:8192, vCores:8>
2015-05-19 19:53:18,967 INFO org.apache.hadoop.yarn.server.resourcemanager.rmapp.attempt.RMAppAttemptImpl: Updating application attempt appattempt_1432064564266_0003_000001 with final state: FINISHING, and exit status: -1000
2015-05-19 19:53:18,967 INFO org.apache.hadoop.yarn.server.resourcemanager.rmapp.attempt.RMAppAttemptImpl: appattempt_1432064564266_0003_000001 State change from RUNNING to FINAL_SAVING
2015-05-19 19:53:18,967 INFO org.apache.hadoop.yarn.server.resourcemanager.rmapp.RMAppImpl: Updating application application_1432064564266_0003 with final state: FINISHING
2015-05-19 19:53:18,967 INFO org.apache.hadoop.yarn.server.resourcemanager.rmapp.RMAppImpl: application_1432064564266_0003 State change from RUNNING to FINAL_SAVING
2015-05-19 19:53:18,967 INFO org.apache.hadoop.yarn.server.resourcemanager.recovery.RMStateStore: Updating info for app: application_1432064564266_0003
2015-05-19 19:53:18,968 INFO org.apache.hadoop.yarn.server.resourcemanager.rmapp.attempt.RMAppAttemptImpl: appattempt_1432064564266_0003_000001 State change from FINAL_SAVING to FINISHING
2015-05-19 19:53:18,968 INFO org.apache.hadoop.yarn.server.resourcemanager.rmapp.RMAppImpl: application_1432064564266_0003 State change from FINAL_SAVING to FINISHING
2015-05-19 19:53:19,076 INFO org.apache.hadoop.yarn.server.resourcemanager.ApplicationMasterService: application_1432064564266_0003 unregistered successfully.
2015-05-19 19:53:20,383 INFO org.apache.hadoop.yarn.server.resourcemanager.rmcontainer.RMContainerImpl: container_1432064564266_0003_01_000001 Container Transitioned from RUNNING to COMPLETED
2015-05-19 19:53:20,384 INFO org.apache.hadoop.yarn.server.resourcemanager.scheduler.common.fica.FiCaSchedulerApp: Completed container: container_1432064564266_0003_01_000001 in state: COMPLETED event:FINISHED
2015-05-19 19:53:20,384 INFO org.apache.hadoop.yarn.server.resourcemanager.RMAuditLogger: USER=testuser OPERATION=AM Released Container TARGET=SchedulerApp     RESULT=SUCCESS  APPID=application_1432064564266_0003       CONTAINERID=container_1432064564266_0003_01_000001
2015-05-19 19:53:20,384 INFO org.apache.hadoop.yarn.server.resourcemanager.scheduler.SchedulerNode: Released container container_1432064564266_0003_01_000001 of capacity <memory:1024, vCores:1> on host ip-10-10-128-10.ec2.internal:9032, which currently has 1 containers, <memory:2048, vCores:1> used and <memory:6144, vCores:7> available, release resources=true
2015-05-19 19:53:20,384 INFO org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.LeafQueue: default used=<memory:2048, vCores:1> numContainers=1 user=testuser user-resources=<memory:2048, vCores:1>
2015-05-19 19:53:20,384 INFO org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.LeafQueue: completedContainer container=Container: [ContainerId: container_1432064564266_0003_01_000001, NodeId: ip-10-10-128-10.ec2.internal:9032, NodeHttpAddress: ip-10-10-128-10.ec2.internal:8090, Resource: <memory:1024, vCores:1>, Priority: 0, Token: Token { kind: ContainerToken, service: 10.10.128.10:9032 }, ] queue=default: capacity=1.0, absoluteCapacity=1.0, usedResources=<memory:2048, vCores:1>, usedCapacity=0.25, absoluteUsedCapacity=0.25, numApps=1, numContainers=1 cluster=<memory:8192, vCores:8>
2015-05-19 19:53:20,384 INFO org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.ParentQueue: completedContainer queue=root usedCapacity=0.25 absoluteUsedCapacity=0.25 used=<memory:2048, vCores:1> cluster=<memory:8192, vCores:8>
2015-05-19 19:53:20,384 INFO org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.ParentQueue: Re-sorting completed queue: root.default stats: default: capacity=1.0, absoluteCapacity=1.0, usedResources=<memory:2048, vCores:1>, usedCapacity=0.25, absoluteUsedCapacity=0.25, numApps=1, numContainers=1
2015-05-19 19:53:20,384 INFO org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.CapacityScheduler: Application attempt appattempt_1432064564266_0003_000001 released container container_1432064564266_0003_01_000001 on node: host: ip-10-10-128-10.ec2.internal:9032 #containers=1 available=6144 used=2048 with event: FINISHED
2015-05-19 19:53:20,384 INFO org.apache.hadoop.yarn.server.resourcemanager.ApplicationMasterService: Unregistering app attempt : appattempt_1432064564266_0003_000001
2015-05-19 19:53:20,385 INFO org.apache.hadoop.yarn.server.resourcemanager.security.AMRMTokenSecretManager: Application finished, removing password for appattempt_1432064564266_0003_000001
2015-05-19 19:53:20,385 INFO org.apache.hadoop.yarn.server.resourcemanager.rmapp.attempt.RMAppAttemptImpl: appattempt_1432064564266_0003_000001 State change from FINISHING to FINISHED
2015-05-19 19:53:20,385 INFO org.apache.hadoop.yarn.server.resourcemanager.rmapp.RMAppImpl: application_1432064564266_0003 State change from FINISHING to FINISHED
2015-05-19 19:53:20,385 INFO org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.CapacityScheduler: Application Attempt appattempt_1432064564266_0003_000001 is done. finalState=FINISHED
2015-05-19 19:53:20,387 INFO org.apache.hadoop.yarn.server.resourcemanager.rmcontainer.RMContainerImpl: container_1432064564266_0003_01_000002 Container Transitioned from ALLOCATED to KILLED
2015-05-19 19:53:20,387 INFO org.apache.hadoop.yarn.server.resourcemanager.scheduler.common.fica.FiCaSchedulerApp: Completed container: container_1432064564266_0003_01_000002 in state: KILLED event:KILL
2015-05-19 19:53:20,387 INFO org.apache.hadoop.yarn.server.resourcemanager.RMAuditLogger: USER=testuser OPERATION=AM Released Container TARGET=SchedulerApp     RESULT=SUCCESS  APPID=application_1432064564266_0003       CONTAINERID=container_1432064564266_0003_01_000002
2015-05-19 19:53:20,387 INFO org.apache.hadoop.yarn.server.resourcemanager.scheduler.SchedulerNode: Released container container_1432064564266_0003_01_000002 of capacity <memory:2048, vCores:1> on host ip-10-10-128-10.ec2.internal:9032, which currently has 0 containers, <memory:0, vCores:0> used and <memory:8192, vCores:8> available, release resources=true
2015-05-19 19:53:20,387 INFO org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.LeafQueue: default used=<memory:0, vCores:0> numContainers=0 user=testuser user-resources=<memory:0, vCores:0>
2015-05-19 19:53:20,387 INFO org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.LeafQueue: completedContainer container=Container: [ContainerId: container_1432064564266_0003_01_000002, NodeId: ip-10-10-128-10.ec2.internal:9032, NodeHttpAddress: ip-10-10-128-10.ec2.internal:8090, Resource: <memory:2048, vCores:1>, Priority: 1, Token: null, ] queue=default: capacity=1.0, absoluteCapacity=1.0, usedResources=<memory:0, vCores:0>, usedCapacity=0.0, absoluteUsedCapacity=0.0, numApps=1, numContainers=0 cluster=<memory:8192, vCores:8>
2015-05-19 19:53:20,387 INFO org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.ParentQueue: completedContainer queue=root usedCapacity=0.0 absoluteUsedCapacity=0.0 used=<memory:0, vCores:0> cluster=<memory:8192, vCores:8>
2015-05-19 19:53:20,387 INFO org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.ParentQueue: Re-sorting completed queue: root.default stats: default: capacity=1.0, absoluteCapacity=1.0, usedResources=<memory:0, vCores:0>, usedCapacity=0.0, absoluteUsedCapacity=0.0, numApps=1, numContainers=0
2015-05-19 19:53:20,387 INFO org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.CapacityScheduler: Application attempt appattempt_1432064564266_0003_000001 released container container_1432064564266_0003_01_000002 on node: host: ip-10-10-128-10.ec2.internal:9032 #containers=0 available=8192 used=0 with event: KILL
2015-05-19 19:53:20,387 INFO org.apache.hadoop.yarn.server.resourcemanager.scheduler.AppSchedulingInfo: Application application_1432064564266_0003 requests cleared
2015-05-19 19:53:20,388 INFO org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.LeafQueue: Application removed - appId: application_1432064564266_0003 user: testuser queue: default #user-pending-applications: 0 #user-active-applications: 0 #queue-pending-applications: 0 #queue-active-applications: 0
2015-05-19 19:53:20,388 INFO org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.ParentQueue: Application removed - appId: application_1432064564266_0003 user: testuser leaf-queue of parent: root #applications: 0
2015-05-19 19:53:20,388 INFO org.apache.hadoop.yarn.server.resourcemanager.RMAuditLogger: USER=testuser OPERATION=Application Finished - Succeeded      TARGET=RMAppManager     RESULT=SUCCESS     APPID=application_1432064564266_0003
2015-05-19 19:53:20,389 INFO org.apache.hadoop.yarn.server.resourcemanager.RMAppManager$ApplicationSummary: appId=application_1432064564266_0003,name=Spark Pi,user=testuser,queue=default,state=FINISHED,trackingUrl=https://ip-10-10-127-10.ec2.internal:8090/proxy/application_1432064564266_0003/A,appMasterHost=ip-10-10-128-10.ec2.internal,startTime=1432065172758,finishTime=1432065198967,finalStatus=SUCCEEDED
2015-05-19 19:53:20,389 INFO org.apache.hadoop.hdfs.DFSClient: Cancelling HDFS_DELEGATION_TOKEN token 5 for testuser on 10.10.10.10:8020
2015-05-19 19:53:20,390 INFO org.apache.hadoop.yarn.server.resourcemanager.amlauncher.AMLauncher: Cleaning master appattempt_1432064564266_0003_000001
2015-05-19 19:53:20,442 INFO org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.CapacityScheduler: Null container completed...
root@ip-10-10-127-10:/var/log/hadoop>

Keith Nance
Sr. Software Engineer
Email: knance@smartronix.com
Cell: 808-343-0071
www.smartronix.com


RE: SparkPi fails with Job aborted due to stage failure: Task serialization failed:

Posted by Rohith Sharma K S <ro...@huawei.com>.
Hi,

From the ResourceManager log, it is very clear that the job has succeeded. There is no problem running Spark applications.

2015-05-19 19:53:20,388 INFO org.apache.hadoop.yarn.server.resourcemanager.RMAuditLogger: USER=testuser OPERATION=Application Finished - Succeeded      TARGET=RMAppManager     RESULT=SUCCESS     APPID=application_1432064564266_0003
2015-05-19 19:53:20,389 INFO org.apache.hadoop.yarn.server.resourcemanager.RMAppManager$ApplicationSummary: appId=application_1432064564266_0003,name=Spark Pi,user=testuser,queue=default,state=FINISHED,trackingUrl=https://ip-10-10-127-10.ec2.internal:8090/proxy/application_1432064564266_0003/A,appMasterHost=ip-10-10-128-10.ec2.internal,startTime=1432065172758,finishTime=1432065198967,finalStatus=SUCCEEDED

But I do not have an idea of what Spark is doing internally. Would you mind sending a mail to the Spark user mailing list?
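For reference, the same outcome can also be cross-checked from the YARN side with the CLI, for example:

    yarn application -status application_1432064564266_0003
    yarn logs -applicationId application_1432064564266_0003

(The second command only helps when YARN log aggregation is enabled; the NodeManager log below shows the NonAggregatingLogHandler, so in this setup the container logs stay under the NodeManager's local log directory, as in the userlog section.)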


Thanks & Regards
Rohith Sharma K S

From: Nance, Keith [mailto:knance@smartronix.com]
Sent: 20 May 2015 02:05
To: user@hadoop.apache.org
Subject: SparkPi fails with Job aborted due to stage failure: Task serialization failed:

All, unable to find any reference to my issue with spark.  Any ideas?  Thanks for any and all help
Attached are logs from the Spark job (SparkPi) results, Userlog, Nodemanager, and Resourcemanager.
###: SPARK JOB RESULTS :###
###########################
[testuser@ip-10-10-127-10 spark]$ ./bin/spark-submit --verbose --class org.apache.spark.examples.SparkPi --master yarn-client --num-executors 1 --executor-cores 1 lib/spark-examples*.jar 10
/home/testuser/spark/conf/spark-env.sh: line 55: -Dspark.history.kerberos.principal=spark/ip-10-10-127-10.ec2.internal@MALARD.LOCAL: No such file or directory
Using properties file: /home/testuser/spark/conf/spark-defaults.conf
Adding default property: spark.broadcast.port=8004
Adding default property: spark.executor.extraJavaOptions=-XX:+PrintGCDetails -Dkey=value -Dnumbers="one two three"
Adding default property: spark.akka.threads=1
Adding default property: spark.ui.port=4040
Adding default property: spark.driver.port=8001
Adding default property: spark.akka.heartbeat.interval=100
Adding default property: spark.yarn.access.namenodes=hdfs://10.10.10.10:8020
Adding default property: spark.executor.port=8002
Adding default property: spark.logConf=true
Adding default property: spark.yarn.am.extraJavaOptions=-Djava.net.preferIPv4Stack=true
Adding default property: spark.worker.ui.port=8081
Adding default property: spark.replClassServer.port=8006
Adding default property: spark.eventlog.dir=hdfs://10.10.10.10:8020/user/testuser/spark/eventlog
Adding default property: spark.driver.extraJavaOptions=-Djava.net.preferIPv4Stack=true
Adding default property: spark.blockManager.port=8007
Adding default property: spark.yarn.am.waitTime=200000
Adding default property: spark.master=yarn-client
Adding default property: spark.yarn.preserve.staging.files=true
Adding default property: spark.fileserver.port=8003
Adding default property: spark.authenticate=true
Adding default property: spark.yarn.am.port=8008
Adding default property: spark.authenticate.secret=fubar
Adding default property: spark.master.ui.port=8080
Adding default property: spark.history.ui.port=18080
Adding default property: spark.eventlog.enabled=true
Parsed arguments:
  master                  yarn-client
  deployMode              null
  executorMemory          null
  executorCores           1
  totalExecutorCores      null
  propertiesFile          /home/testuser/spark/conf/spark-defaults.conf
  driverMemory            null
  driverCores             null
  driverExtraClassPath    null
  driverExtraLibraryPath  null
 driverExtraJavaOptions  -Djava.net.preferIPv4Stack=true
  supervise               false
  queue                   null
  numExecutors            1
  files                   null
  pyFiles                 null
  archives                null
  mainClass               org.apache.spark.examples.SparkPi
  primaryResource         file:/home/testuser/spark/lib/spark-examples-1.3.1-hadoop2.6.0.jar
  name                    org.apache.spark.examples.SparkPi
  childArgs               [10]
  jars                    null
  packages                null
  repositories            null
  verbose                 true

Spark properties used, including those specified through
--conf and those from the properties file /home/testuser/spark/conf/spark-defaults.conf:
  spark.blockManager.port -> 8007
  spark.yarn.access.namenodes -> hdfs://10.10.10.10:8020
  spark.broadcast.port -> 8004
  spark.authenticate.secret -> fubar
  spark.authenticate -> true
  spark.eventlog.dir -> hdfs://10.10.10.10:8020/user/testuser/spark/eventlog
  spark.logConf -> true
  spark.replClassServer.port -> 8006
  spark.history.ui.port -> 18080
  spark.fileserver.port -> 8003
  spark.ui.port -> 4040
  spark.executor.extraJavaOptions -> -XX:+PrintGCDetails -Dkey=value -Dnumbers="one two three"
  spark.driver.extraJavaOptions -> -Djava.net.preferIPv4Stack=true
  spark.yarn.am.waitTime -> 200000
  spark.yarn.am.extraJavaOptions -> -Djava.net.preferIPv4Stack=true
  spark.worker.ui.port -> 8081
  spark.driver.port -> 8001
  spark.master -> yarn-client
  spark.yarn.preserve.staging.files -> true
  spark.yarn.am.port -> 8008
  spark.akka.heartbeat.interval -> 100
  spark.executor.port -> 8002
  spark.master.ui.port -> 8080
  spark.eventlog.enabled -> true
  spark.akka.threads -> 1


Main class:
org.apache.spark.examples.SparkPi
Arguments:
10
System properties:
spark.blockManager.port -> 8007
spark.yarn.access.namenodes -> hdfs://10.10.10.10:8020
spark.broadcast.port -> 8004
spark.authenticate.secret -> fubar
spark.eventlog.dir -> hdfs://10.10.10.10:8020/user/testuser/spark/eventlog
spark.authenticate -> true
spark.executor.instances -> 1
spark.logConf -> true
spark.replClassServer.port -> 8006
spark.history.ui.port -> 18080
spark.fileserver.port -> 8003
SPARK_SUBMIT -> true
spark.ui.port -> 4040
spark.executor.extraJavaOptions -> -XX:+PrintGCDetails -Dkey=value -Dnumbers="one two three"
spark.app.name -> org.apache.spark.examples.SparkPi
spark.driver.extraJavaOptions -> -Djava.net.preferIPv4Stack=true
spark.jars -> file:/home/testuser/spark/lib/spark-examples-1.3.1-hadoop2.6.0.jar
spark.yarn.am.waitTime -> 200000
spark.yarn.am.extraJavaOptions -> -Djava.net.preferIPv4Stack=true
spark.master -> yarn-client
spark.worker.ui.port -> 8081
spark.driver.port -> 8001
spark.yarn.preserve.staging.files -> true
spark.yarn.am.port -> 8008
spark.akka.heartbeat.interval -> 100
spark.executor.port -> 8002
spark.executor.cores -> 1
spark.eventlog.enabled -> true
spark.master.ui.port -> 8080
spark.akka.threads -> 1
Classpath elements:
file:/home/testuser/spark/lib/spark-examples-1.3.1-hadoop2.6.0.jar


15/05/19 19:52:37 INFO spark.SparkContext: Running Spark version 1.3.1
15/05/19 19:52:37 INFO spark.SparkContext: Spark configuration:
spark.akka.heartbeat.interval=100
spark.akka.threads=1
spark.app.name=Spark Pi
spark.authenticate=true
spark.authenticate.secret=fubar
spark.blockManager.port=8007
spark.broadcast.port=8004
spark.driver.extraJavaOptions=-Djava.net.preferIPv4Stack=true
spark.driver.port=8001
spark.eventlog.dir=hdfs://10.10.10.10:8020/user/testuser/spark/eventlog
spark.eventlog.enabled=true
spark.executor.cores=1
spark.executor.extraJavaOptions=-XX:+PrintGCDetails -Dkey=value -Dnumbers="one two three"
spark.executor.instances=1
spark.executor.port=8002
spark.fileserver.port=8003
spark.history.ui.port=18080
spark.jars=file:/home/testuser/spark/lib/spark-examples-1.3.1-hadoop2.6.0.jar
spark.logConf=true
spark.master=yarn-client
spark.master.ui.port=8080
spark.replClassServer.port=8006
spark.ui.port=4040
spark.worker.ui.port=8081
spark.yarn.access.namenodes=hdfs://10.10.10.10:8020
spark.yarn.am.extraJavaOptions=-Djava.net.preferIPv4Stack=true
spark.yarn.am.port=8008
spark.yarn.am.waitTime=200000
spark.yarn.preserve.staging.files=true
15/05/19 19:52:38 WARN util.NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
15/05/19 19:52:39 INFO spark.SecurityManager: Changing view acls to: testuser
15/05/19 19:52:39 INFO spark.SecurityManager: Changing modify acls to: testuser
15/05/19 19:52:39 INFO spark.SecurityManager: adding secret to credentials in yarn mode
15/05/19 19:52:39 INFO spark.SecurityManager: SecurityManager: authentication enabled; ui acls disabled; users with view permissions: Set(testuser); users with modify permissions: Set(testuser)
15/05/19 19:52:40 INFO slf4j.Slf4jLogger: Slf4jLogger started
15/05/19 19:52:41 INFO Remoting: Starting remoting
15/05/19 19:52:41 INFO Remoting: Remoting started; listening on addresses :[akka.tcp://sparkDriver@ip-10-10-127-10.ec2.internal:8001]
15/05/19 19:52:41 INFO util.Utils: Successfully started service 'sparkDriver' on port 8001.
15/05/19 19:52:41 INFO spark.SparkEnv: Registering MapOutputTracker
15/05/19 19:52:41 INFO spark.SparkEnv: Registering BlockManagerMaster
15/05/19 19:52:41 INFO storage.DiskBlockManager: Created local directory at /scratch/spark-17902ac9-b400-4698-97c3-069d804a29e3/blockmgr-d95b3bdf-9c4d-4b48-97a5-4983dd2ab66d
15/05/19 19:52:41 INFO storage.MemoryStore: MemoryStore started with capacity 267.3 MB
15/05/19 19:52:42 INFO spark.HttpFileServer: HTTP File server directory is /scratch/spark-d2fb7948-5ab3-4f22-804c-7485d209bd3e/httpd-91938877-b371-4c0c-ba3d-bb7ee8ec4e09
15/05/19 19:52:42 INFO spark.HttpServer: Starting HTTP Server
15/05/19 19:52:42 INFO server.Server: jetty-8.y.z-SNAPSHOT
15/05/19 19:52:42 INFO server.AbstractConnector: Started SocketConnector@0.0.0.0:8003
15/05/19 19:52:42 INFO util.Utils: Successfully started service 'HTTP file server' on port 8003.
15/05/19 19:52:42 INFO spark.SparkEnv: Registering OutputCommitCoordinator
15/05/19 19:52:42 INFO server.Server: jetty-8.y.z-SNAPSHOT
15/05/19 19:52:42 INFO server.AbstractConnector: Started SelectChannelConnector@0.0.0.0:4040
15/05/19 19:52:42 INFO util.Utils: Successfully started service 'SparkUI' on port 4040.
15/05/19 19:52:42 INFO ui.SparkUI: Started SparkUI at http://ip-10-10-127-10.ec2.internal:4040
15/05/19 19:52:43 INFO spark.SparkContext: Added JAR file:/home/testuser/spark/lib/spark-examples-1.3.1-hadoop2.6.0.jar at http://10.10.127.10:8003/jars/spark-examples-1.3.1-hadoop2.6.0.jar with timestamp 1432065163724
15/05/19 19:52:44 INFO client.RMProxy: Connecting to ResourceManager at ip-10-10-127-10.ec2.internal/10.10.127.10:8032
15/05/19 19:52:45 INFO yarn.Client: Requesting a new application from cluster with 1 NodeManagers
15/05/19 19:52:45 INFO yarn.Client: Verifying our application has not requested more than the maximum memory capability of the cluster (8192 MB per container)
15/05/19 19:52:45 INFO yarn.Client: Will allocate AM container, with 896 MB memory including 384 MB overhead
15/05/19 19:52:45 INFO yarn.Client: Setting up container launch context for our AM
15/05/19 19:52:45 INFO yarn.Client: Preparing resources for our AM container
15/05/19 19:52:46 INFO hdfs.DFSClient: Created HDFS_DELEGATION_TOKEN token 5 for testuser on 10.10.10.10:8020
15/05/19 19:52:46 INFO yarn.Client: Uploading resource file:/home/testuser/spark/lib/spark-assembly-1.3.1-hadoop2.6.0.jar -> hdfs://10.10.10.10:8020/user/testuser/.sparkStaging/application_1432064564266_0003/spark-assembly-1.3.1-hadoop2.6.0.jar
15/05/19 19:52:52 INFO yarn.Client: Setting up the launch environment for our AM container
15/05/19 19:52:52 INFO spark.SecurityManager: Changing view acls to: testuser
15/05/19 19:52:52 INFO spark.SecurityManager: Changing modify acls to: testuser
15/05/19 19:52:52 INFO spark.SecurityManager: SecurityManager: authentication enabled; ui acls disabled; users with view permissions: Set(testuser); users with modify permissions: Set(testuser)
15/05/19 19:52:52 INFO yarn.Client: Submitting application 3 to ResourceManager
15/05/19 19:52:53 INFO impl.YarnClientImpl: Submitted application application_1432064564266_0003
15/05/19 19:52:54 INFO yarn.Client: Application report for application_1432064564266_0003 (state: ACCEPTED)
15/05/19 19:52:54 INFO yarn.Client:
         client token: Token { kind: YARN_CLIENT_TOKEN, service:  }
         diagnostics: N/A
         ApplicationMaster host: N/A
         ApplicationMaster RPC port: -1
         queue: default
         start time: 1432065172758
         final status: UNDEFINED
         tracking URL: https://ip-10-10-127-10.ec2.internal:8090/proxy/application_1432064564266_0003/
         user: testuser
15/05/19 19:52:55 INFO yarn.Client: Application report for application_1432064564266_0003 (state: ACCEPTED)
15/05/19 19:52:56 INFO yarn.Client: Application report for application_1432064564266_0003 (state: ACCEPTED)
15/05/19 19:52:57 INFO yarn.Client: Application report for application_1432064564266_0003 (state: ACCEPTED)
15/05/19 19:52:58 INFO yarn.Client: Application report for application_1432064564266_0003 (state: ACCEPTED)
15/05/19 19:52:59 INFO yarn.Client: Application report for application_1432064564266_0003 (state: ACCEPTED)
15/05/19 19:53:00 INFO yarn.Client: Application report for application_1432064564266_0003 (state: ACCEPTED)
15/05/19 19:53:01 INFO yarn.Client: Application report for application_1432064564266_0003 (state: ACCEPTED)
15/05/19 19:53:02 INFO yarn.Client: Application report for application_1432064564266_0003 (state: ACCEPTED)
15/05/19 19:53:03 INFO yarn.Client: Application report for application_1432064564266_0003 (state: ACCEPTED)
15/05/19 19:53:04 INFO yarn.Client: Application report for application_1432064564266_0003 (state: ACCEPTED)
15/05/19 19:53:05 INFO yarn.Client: Application report for application_1432064564266_0003 (state: ACCEPTED)
15/05/19 19:53:06 INFO yarn.Client: Application report for application_1432064564266_0003 (state: ACCEPTED)
15/05/19 19:53:07 INFO yarn.Client: Application report for application_1432064564266_0003 (state: ACCEPTED)
15/05/19 19:53:08 INFO yarn.Client: Application report for application_1432064564266_0003 (state: ACCEPTED)
15/05/19 19:53:09 INFO yarn.Client: Application report for application_1432064564266_0003 (state: ACCEPTED)
15/05/19 19:53:10 INFO yarn.Client: Application report for application_1432064564266_0003 (state: ACCEPTED)
15/05/19 19:53:11 INFO yarn.Client: Application report for application_1432064564266_0003 (state: ACCEPTED)
15/05/19 19:53:12 INFO yarn.Client: Application report for application_1432064564266_0003 (state: ACCEPTED)
15/05/19 19:53:13 INFO yarn.Client: Application report for application_1432064564266_0003 (state: ACCEPTED)
15/05/19 19:53:14 INFO yarn.Client: Application report for application_1432064564266_0003 (state: ACCEPTED)
15/05/19 19:53:15 INFO yarn.Client: Application report for application_1432064564266_0003 (state: ACCEPTED)
15/05/19 19:53:16 INFO yarn.Client: Application report for application_1432064564266_0003 (state: ACCEPTED)
15/05/19 19:53:17 INFO cluster.YarnClientSchedulerBackend: ApplicationMaster registered as Actor[akka.tcp://sparkYarnAM@ip-10-10-128-10.ec2.internal:56977/user/YarnAM#-1453228800]
15/05/19 19:53:17 INFO cluster.YarnClientSchedulerBackend: Add WebUI Filter. org.apache.hadoop.yarn.server.webproxy.amfilter.AmIpFilter, Map(PROXY_HOSTS -> ip-10-10-127-10.ec2.internal, PROXY_URI_BASES -> https://ip-10-10-127-10.ec2.internal:8090/proxy/application_1432064564266_0003), /proxy/application_1432064564266_0003
15/05/19 19:53:17 INFO ui.JettyUtils: Adding filter: org.apache.hadoop.yarn.server.webproxy.amfilter.AmIpFilter
15/05/19 19:53:17 INFO yarn.Client: Application report for application_1432064564266_0003 (state: RUNNING)
15/05/19 19:53:17 INFO yarn.Client:
         client token: Token { kind: YARN_CLIENT_TOKEN, service:  }
         diagnostics: N/A
         ApplicationMaster host: ip-10-10-128-10.ec2.internal
         ApplicationMaster RPC port: 0
         queue: default
         start time: 1432065172758
         final status: UNDEFINED
         tracking URL: https://ip-10-10-127-10.ec2.internal:8090/proxy/application_1432064564266_0003/
         user: testuser
15/05/19 19:53:17 INFO cluster.YarnClientSchedulerBackend: Application application_1432064564266_0003 has started running.
15/05/19 19:53:17 INFO netty.NettyBlockTransferService: Server created on 8007
15/05/19 19:53:17 INFO storage.BlockManagerMaster: Trying to register BlockManager
15/05/19 19:53:17 INFO storage.BlockManagerMasterActor: Registering block manager ip-10-10-127-10.ec2.internal:8007 with 267.3 MB RAM, BlockManagerId(<driver>, ip-10-10-127-10.ec2.internal, 8007)
15/05/19 19:53:17 INFO storage.BlockManagerMaster: Registered BlockManager
15/05/19 19:53:18 INFO cluster.YarnClientSchedulerBackend: SchedulerBackend is ready for scheduling beginning after waiting maxRegisteredResourcesWaitingTime: 30000(ms)
15/05/19 19:53:18 INFO spark.SparkContext: Starting job: reduce at SparkPi.scala:35
15/05/19 19:53:18 INFO scheduler.DAGScheduler: Got job 0 (reduce at SparkPi.scala:35) with 10 output partitions (allowLocal=false)
15/05/19 19:53:18 INFO scheduler.DAGScheduler: Final stage: Stage 0(reduce at SparkPi.scala:35)
15/05/19 19:53:18 INFO scheduler.DAGScheduler: Parents of final stage: List()
15/05/19 19:53:18 INFO scheduler.DAGScheduler: Missing parents: List()
15/05/19 19:53:18 INFO scheduler.DAGScheduler: Submitting Stage 0 (MapPartitionsRDD[1] at map at SparkPi.scala:31), which has no missing parents
15/05/19 19:53:18 INFO cluster.YarnScheduler: Cancelling stage 0
15/05/19 19:53:18 INFO scheduler.DAGScheduler: Stage 0 (reduce at SparkPi.scala:35) failed in Unknown s
15/05/19 19:53:18 INFO scheduler.DAGScheduler: Job 0 failed: reduce at SparkPi.scala:35, took 0.258029 s
Exception in thread "main" org.apache.spark.SparkException: Job aborted due to stage failure: Task serialization failed: java.lang.reflect.InvocationTargetException
sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:57)
sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
java.lang.reflect.Constructor.newInstance(Constructor.java:526)
org.apache.spark.io.CompressionCodec$.createCodec(CompressionCodec.scala:68)
org.apache.spark.io.CompressionCodec$.createCodec(CompressionCodec.scala:60)
org.apache.spark.broadcast.TorrentBroadcast.org$apache$spark$broadcast$TorrentBroadcast$$setConf(TorrentBroadcast.scala:73)
org.apache.spark.broadcast.TorrentBroadcast.<init>(TorrentBroadcast.scala:79)
org.apache.spark.broadcast.TorrentBroadcastFactory.newBroadcast(TorrentBroadcastFactory.scala:34)
org.apache.spark.broadcast.TorrentBroadcastFactory.newBroadcast(TorrentBroadcastFactory.scala:29)
org.apache.spark.broadcast.BroadcastManager.newBroadcast(BroadcastManager.scala:62)
org.apache.spark.SparkContext.broadcast(SparkContext.scala:1051)
org.apache.spark.scheduler.DAGScheduler.org$apache$spark$scheduler$DAGScheduler$$submitMissingTasks(DAGScheduler.scala:839)
org.apache.spark.scheduler.DAGScheduler.org$apache$spark$scheduler$DAGScheduler$$submitStage(DAGScheduler.scala:778)
org.apache.spark.scheduler.DAGScheduler.handleJobSubmitted(DAGScheduler.scala:762)
org.apache.spark.scheduler.DAGSchedulerEventProcessLoop.onReceive(DAGScheduler.scala:1362)
org.apache.spark.scheduler.DAGSchedulerEventProcessLoop.onReceive(DAGScheduler.scala:1354)
org.apache.spark.util.EventLoop$$anon$1.run(EventLoop.scala:48)

        at org.apache.spark.scheduler.DAGScheduler.org$apache$spark$scheduler$DAGScheduler$$failJobAndIndependentStages(DAGScheduler.scala:1204)
        at org.apache.spark.scheduler.DAGScheduler$$anonfun$abortStage$1.apply(DAGScheduler.scala:1193)
        at org.apache.spark.scheduler.DAGScheduler$$anonfun$abortStage$1.apply(DAGScheduler.scala:1192)
        at scala.collection.mutable.ResizableArray$class.foreach(ResizableArray.scala:59)
        at scala.collection.mutable.ArrayBuffer.foreach(ArrayBuffer.scala:47)
        at org.apache.spark.scheduler.DAGScheduler.abortStage(DAGScheduler.scala:1192)
        at org.apache.spark.scheduler.DAGScheduler.org$apache$spark$scheduler$DAGScheduler$$submitMissingTasks(DAGScheduler.scala:847)
        at org.apache.spark.scheduler.DAGScheduler.org$apache$spark$scheduler$DAGScheduler$$submitStage(DAGScheduler.scala:778)
        at org.apache.spark.scheduler.DAGScheduler.handleJobSubmitted(DAGScheduler.scala:762)
        at org.apache.spark.scheduler.DAGSchedulerEventProcessLoop.onReceive(DAGScheduler.scala:1362)
        at org.apache.spark.scheduler.DAGSchedulerEventProcessLoop.onReceive(DAGScheduler.scala:1354)
        at org.apache.spark.util.EventLoop$$anon$1.run(EventLoop.scala:48)
[testuser@ip-10-10-127-10 spark]$



###: SPARK JOB USERLOG RESULTS :###
###################################
[yarn@ip-10-10-128-10 container_1432064564266_0003_01_000001]$ cat stderr
Picked up _JAVA_OPTIONS: -Djava.net.preferIPv4Stack=true
SLF4J: Class path contains multiple SLF4J bindings.
SLF4J: Found binding in [jar:file:/var/hadoop/tmp/yarn/nm-local-dir/usercache/testuser/filecache/13/spark-assembly-1.3.1-hadoop2.6.0.jar!/org/slf4j/impl/StaticLoggerBinder.class]
SLF4J: Found binding in [jar:file:/var/hadoop/share/hadoop/common/lib/slf4j-log4j12-1.7.5.jar!/org/slf4j/impl/StaticLoggerBinder.class]
SLF4J: See http://www.slf4j.org/codes.html#multiple_bindings for an explanation.
SLF4J: Actual binding is of type [org.slf4j.impl.Log4jLoggerFactory]
15/05/19 19:53:09 INFO yarn.ApplicationMaster: Registered signal handlers for [TERM, HUP, INT]
15/05/19 19:53:11 WARN util.NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
15/05/19 19:53:12 INFO yarn.ApplicationMaster: ApplicationAttemptId: appattempt_1432064564266_0003_000001
15/05/19 19:53:14 INFO spark.SecurityManager: Changing view acls to: testuser
15/05/19 19:53:14 INFO spark.SecurityManager: Changing modify acls to: testuser
15/05/19 19:53:14 INFO spark.SecurityManager: SecurityManager: authentication enabled; ui acls disabled; users with view permissions: Set(testuser); users with modify permissions: Set(testuser)
15/05/19 19:53:15 INFO slf4j.Slf4jLogger: Slf4jLogger started
15/05/19 19:53:15 INFO Remoting: Starting remoting
15/05/19 19:53:16 INFO Remoting: Remoting started; listening on addresses :[akka.tcp://sparkYarnAM@ip-10-10-128-10.ec2.internal:56977]
15/05/19 19:53:16 INFO util.Utils: Successfully started service 'sparkYarnAM' on port 56977.
15/05/19 19:53:16 INFO yarn.ApplicationMaster: Waiting for Spark driver to be reachable.
15/05/19 19:53:16 INFO yarn.ApplicationMaster: Driver now available: ip-10-10-127-10.ec2.internal:8001
15/05/19 19:53:16 INFO yarn.ApplicationMaster: Listen to driver: akka.tcp://sparkDriver@ip-10-10-127-10.ec2.internal:8001/user/YarnScheduler
15/05/19 19:53:16 INFO yarn.ApplicationMaster: Add WebUI Filter. AddWebUIFilter(org.apache.hadoop.yarn.server.webproxy.amfilter.AmIpFilter,Map(PROXY_HOSTS -> ip-10-10-127-10.ec2.internal, PROXY_URI_BASES -> https://ip-10-10-127-10.ec2.internal:8090/proxy/application_1432064564266_0003),/proxy/application_1432064564266_0003)
15/05/19 19:53:16 INFO client.RMProxy: Connecting to ResourceManager at /10.10.127.10:8030
15/05/19 19:53:16 INFO yarn.YarnRMClient: Registering the ApplicationMaster
15/05/19 19:53:17 INFO yarn.YarnAllocator: Will request 1 executor containers, each with 1 cores and 1408 MB memory including 384 MB overhead
15/05/19 19:53:17 INFO yarn.YarnAllocator: Container request (host: Any, capability: <memory:1408, vCores:1>)
15/05/19 19:53:17 INFO yarn.ApplicationMaster: Started progress reporter thread - sleep time : 5000
15/05/19 19:53:18 INFO yarn.ApplicationMaster: Driver terminated or disconnected! Shutting down. Disassociated [akka.tcp://sparkYarnAM@ip-10-10-128-10.ec2.internal:56977] -> [akka.tcp://sparkDriver@ip-10-10-127-10.ec2.internal:8001]
15/05/19 19:53:18 INFO yarn.ApplicationMaster: Final app status: SUCCEEDED, exitCode: 0
15/05/19 19:53:18 WARN remote.ReliableDeliverySupervisor: Association with remote system [akka.tcp://sparkDriver@ip-10-10-127-10.ec2.internal:8001] has failed, address is now gated for [5000] ms. Reason is: [Disassociated].
15/05/19 19:53:18 INFO yarn.ApplicationMaster: Driver terminated or disconnected! Shutting down. Disassociated [akka.tcp://sparkYarnAM@ip-10-10-128-10.ec2.internal:56977] -> [akka.tcp://sparkDriver@ip-10-10-127-10.ec2.internal:8001]
15/05/19 19:53:18 INFO yarn.ApplicationMaster: Unregistering ApplicationMaster with SUCCEEDED
15/05/19 19:53:18 INFO impl.AMRMClientImpl: Waiting for application to be successfully unregistered.
[yarn@ip-10-10-128-10 container_1432064564266_0003_01_000001]$



###: YARN NODEMANAGER LOG RESULTS :###
######################################
2015-05-19 19:52:53,729 INFO SecurityLogger.org.apache.hadoop.ipc.Server: Auth successful for appattempt_1432064564266_0003_000001 (auth:SIMPLE)
2015-05-19 19:52:53,746 WARN org.apache.hadoop.security.ShellBasedUnixGroupsMapping: got exception trying to get groups for user appattempt_1432064564266_0003_000001: id: appattempt_1432064564266_0003_000001: no such user

2015-05-19 19:52:53,746 WARN org.apache.hadoop.security.UserGroupInformation: No groups available for user appattempt_1432064564266_0003_000001
2015-05-19 19:52:53,746 INFO SecurityLogger.org.apache.hadoop.security.authorize.ServiceAuthorizationManager: Authorization successful for appattempt_1432064564266_0003_000001 (auth:TOKEN) for protocol=interface org.apache.hadoop.yarn.api.ContainerManagementProtocolPB
2015-05-19 19:52:53,747 INFO org.apache.hadoop.yarn.server.nodemanager.containermanager.ContainerManagerImpl: Start request for container_1432064564266_0003_01_000001 by user testuser
2015-05-19 19:52:53,747 INFO org.apache.hadoop.yarn.server.nodemanager.containermanager.ContainerManagerImpl: Creating a new application reference for app application_1432064564266_0003
2015-05-19 19:52:53,747 INFO org.apache.hadoop.yarn.server.nodemanager.NMAuditLogger: USER=testuser     IP=10.10.127.10 OPERATION=Start Container Request       TARGET=ContainerManageImpl   RESULT=SUCCESS   APPID=application_1432064564266_0003    CONTAINERID=container_1432064564266_0003_01_000001
2015-05-19 19:52:53,748 INFO org.apache.hadoop.yarn.server.nodemanager.containermanager.application.Application: Application application_1432064564266_0003 transitioned from NEW to INITING
2015-05-19 19:52:53,748 INFO org.apache.hadoop.yarn.server.nodemanager.containermanager.application.Application: Adding container_1432064564266_0003_01_000001 to application application_1432064564266_0003
2015-05-19 19:52:53,748 INFO org.apache.hadoop.yarn.server.nodemanager.containermanager.application.Application: Application application_1432064564266_0003 transitioned from INITING to RUNNING
2015-05-19 19:52:53,748 INFO org.apache.hadoop.yarn.server.nodemanager.containermanager.container.Container: Container container_1432064564266_0003_01_000001 transitioned from NEW to LOCALIZING
2015-05-19 19:52:53,748 INFO org.apache.hadoop.yarn.server.nodemanager.containermanager.AuxServices: Got event CONTAINER_INIT for appId application_1432064564266_0003
2015-05-19 19:52:53,748 INFO org.apache.hadoop.yarn.server.nodemanager.containermanager.localizer.LocalizedResource: Resource hdfs://10.10.10.10:8020/user/testuser/.sparkStaging/application_1432064564266_0003/spark-assembly-1.3.1-hadoop2.6.0.jar transitioned from INIT to DOWNLOADING
2015-05-19 19:52:53,748 INFO org.apache.hadoop.yarn.server.nodemanager.containermanager.localizer.ResourceLocalizationService: Created localizer for container_1432064564266_0003_01_000001
2015-05-19 19:52:53,763 INFO org.apache.hadoop.yarn.server.nodemanager.containermanager.localizer.ResourceLocalizationService: Writing credentials to the nmPrivate file /var/hadoop/tmp/yarn/nm-local-dir/nmPrivate/container_1432064564266_0003_01_000001.tokens. Credentials list:
2015-05-19 19:52:58,614 INFO SecurityLogger.org.apache.hadoop.ipc.Server: Auth successful for testing (auth:SIMPLE)
2015-05-19 19:52:58,685 INFO SecurityLogger.org.apache.hadoop.security.authorize.ServiceAuthorizationManager: Authorization successful for testing (auth:TOKEN) for protocol=interface org.apache.hadoop.yarn.server.nodemanager.api.LocalizationProtocolPB
2015-05-19 19:53:08,133 INFO org.apache.hadoop.yarn.server.nodemanager.containermanager.localizer.LocalizedResource: Resource hdfs://10.10.10.10:8020/user/testuser/.sparkStaging/application_1432064564266_0003/spark-assembly-1.3.1-hadoop2.6.0.jar(->/var/hadoop/tmp/yarn/nm-local-dir/usercache/testuser/filecache/13/spark-assembly-1.3.1-hadoop2.6.0.jar) transitioned from DOWNLOADING to LOCALIZED
2015-05-19 19:53:08,134 INFO org.apache.hadoop.yarn.server.nodemanager.containermanager.container.Container: Container container_1432064564266_0003_01_000001 transitioned from LOCALIZING to LOCALIZED
2015-05-19 19:53:08,217 INFO org.apache.hadoop.yarn.server.nodemanager.containermanager.container.Container: Container container_1432064564266_0003_01_000001 transitioned from LOCALIZED to RUNNING
2015-05-19 19:53:10,044 INFO org.apache.hadoop.yarn.server.nodemanager.containermanager.monitor.ContainersMonitorImpl: Starting resource-monitoring for container_1432064564266_0003_01_000001
2015-05-19 19:53:10,092 INFO org.apache.hadoop.yarn.server.nodemanager.containermanager.monitor.ContainersMonitorImpl: Memory usage of ProcessTree 20620 for container-id container_1432064564266_0003_01_000001: 79.5 MB of 1 GB physical memory used; 1.1 GB of 2.1 GB virtual memory used
2015-05-19 19:53:13,105 INFO org.apache.hadoop.yarn.server.nodemanager.containermanager.monitor.ContainersMonitorImpl: Memory usage of ProcessTree 20620 for container-id container_1432064564266_0003_01_000001: 99.8 MB of 1 GB physical memory used; 1.1 GB of 2.1 GB virtual memory used
2015-05-19 19:53:16,122 INFO org.apache.hadoop.yarn.server.nodemanager.containermanager.monitor.ContainersMonitorImpl: Memory usage of ProcessTree 20620 for container-id container_1432064564266_0003_01_000001: 121.4 MB of 1 GB physical memory used; 1.1 GB of 2.1 GB virtual memory used
2015-05-19 19:53:19,129 INFO org.apache.hadoop.yarn.server.nodemanager.containermanager.monitor.ContainersMonitorImpl: Memory usage of ProcessTree 20620 for container-id container_1432064564266_0003_01_000001: 125.8 MB of 1 GB physical memory used; 1.1 GB of 2.1 GB virtual memory used
2015-05-19 19:53:19,402 INFO org.apache.hadoop.yarn.server.nodemanager.containermanager.launcher.ContainerLaunch: Container container_1432064564266_0003_01_000001 succeeded
2015-05-19 19:53:19,402 INFO org.apache.hadoop.yarn.server.nodemanager.containermanager.container.Container: Container container_1432064564266_0003_01_000001 transitioned from RUNNING to EXITED_WITH_SUCCESS
2015-05-19 19:53:19,402 INFO org.apache.hadoop.yarn.server.nodemanager.containermanager.launcher.ContainerLaunch: Cleaning up container container_1432064564266_0003_01_000001
2015-05-19 19:53:19,457 INFO org.apache.hadoop.yarn.server.nodemanager.NMAuditLogger: USER=testuser     OPERATION=Container Finished - Succeeded        TARGET=ContainerImpl    RESULT=SUCCESS     APPID=application_1432064564266_0003    CONTAINERID=container_1432064564266_0003_01_000001
2015-05-19 19:53:19,457 INFO org.apache.hadoop.yarn.server.nodemanager.containermanager.container.Container: Container container_1432064564266_0003_01_000001 transitioned from EXITED_WITH_SUCCESS to DONE
2015-05-19 19:53:19,457 INFO org.apache.hadoop.yarn.server.nodemanager.containermanager.application.Application: Removing container_1432064564266_0003_01_000001 from application application_1432064564266_0003
2015-05-19 19:53:19,458 INFO org.apache.hadoop.yarn.server.nodemanager.containermanager.AuxServices: Got event CONTAINER_STOP for appId application_1432064564266_0003
2015-05-19 19:53:19,458 INFO org.apache.hadoop.yarn.server.nodemanager.LinuxContainerExecutor: Deleting absolute path : /var/hadoop/tmp/yarn/nm-local-dir/usercache/testuser/appcache/application_1432064564266_0003/container_1432064564266_0003_01_000001
2015-05-19 19:53:20,428 INFO SecurityLogger.org.apache.hadoop.ipc.Server: Auth successful for appattempt_1432064564266_0003_000001 (auth:SIMPLE)
2015-05-19 19:53:20,432 WARN org.apache.hadoop.security.UserGroupInformation: No groups available for user appattempt_1432064564266_0003_000001
2015-05-19 19:53:20,432 INFO SecurityLogger.org.apache.hadoop.security.authorize.ServiceAuthorizationManager: Authorization successful for appattempt_1432064564266_0003_000001 (auth:TOKEN) for protocol=interface org.apache.hadoop.yarn.api.ContainerManagementProtocolPB
2015-05-19 19:53:20,432 INFO org.apache.hadoop.yarn.server.nodemanager.containermanager.ContainerManagerImpl: Stopping container with container Id: container_1432064564266_0003_01_000001
2015-05-19 19:53:20,432 INFO org.apache.hadoop.yarn.server.nodemanager.NMAuditLogger: USER=testuser     IP=10.10.127.10 OPERATION=Stop Container Request        TARGET=ContainerManageImpl   RESULT=SUCCESS   APPID=application_1432064564266_0003    CONTAINERID=container_1432064564266_0003_01_000001
2015-05-19 19:53:20,442 INFO org.apache.hadoop.yarn.server.nodemanager.NodeStatusUpdaterImpl: Removed completed containers from NM context: [container_1432064564266_0003_01_000001]
2015-05-19 19:53:20,442 INFO org.apache.hadoop.yarn.server.nodemanager.containermanager.application.Application: Application application_1432064564266_0003 transitioned from RUNNING to APPLICATION_RESOURCES_CLEANINGUP
2015-05-19 19:53:20,442 INFO org.apache.hadoop.yarn.server.nodemanager.containermanager.AuxServices: Got event APPLICATION_STOP for appId application_1432064564266_0003
2015-05-19 19:53:20,442 INFO org.apache.hadoop.yarn.server.nodemanager.containermanager.application.Application: Application application_1432064564266_0003 transitioned from APPLICATION_RESOURCES_CLEANINGUP to FINISHED
2015-05-19 19:53:20,443 INFO org.apache.hadoop.yarn.server.nodemanager.containermanager.loghandler.NonAggregatingLogHandler: Scheduling Log Deletion for application: application_1432064564266_0003, with delay of 10800 seconds
2015-05-19 19:53:20,443 INFO org.apache.hadoop.yarn.server.nodemanager.LinuxContainerExecutor: Deleting absolute path : /var/hadoop/tmp/yarn/nm-local-dir/usercache/testuser/appcache/application_1432064564266_0003
2015-05-19 19:53:22,130 INFO org.apache.hadoop.yarn.server.nodemanager.containermanager.monitor.ContainersMonitorImpl: Stopping resource-monitoring for container_1432064564266_0003_01_000001
root@ip-10-10-128-10:/var/log/hadoop>


###: YARN RESOURCE MANAGER LOGS :###
####################################
2015-05-19 19:52:45,408 INFO SecurityLogger.org.apache.hadoop.ipc.Server: Auth successful for testuser@MALARD.LOCAL (auth:KERBEROS)
2015-05-19 19:52:45,447 INFO SecurityLogger.org.apache.hadoop.security.authorize.ServiceAuthorizationManager: Authorization successful for testuser@MALARD.LOCAL (auth:KERBEROS) for protocol=interface org.apache.hadoop.yarn.api.ApplicationClientProtocolPB
2015-05-19 19:52:45,494 INFO org.apache.hadoop.yarn.server.resourcemanager.ClientRMService: Allocated new applicationId: 3
2015-05-19 19:52:52,758 WARN org.apache.hadoop.yarn.server.resourcemanager.rmapp.RMAppImpl: The specific max attempts: 0 for application: 3 is invalid, because it is out of the range [1, 2]. Use the global max attempts instead.
2015-05-19 19:52:52,758 INFO org.apache.hadoop.yarn.server.resourcemanager.ClientRMService: Application with id 3 submitted by user testuser
2015-05-19 19:52:52,758 INFO org.apache.hadoop.yarn.server.resourcemanager.RMAuditLogger: USER=testuser IP=10.10.127.10 OPERATION=Submit Application Request    TARGET=ClientRMService  RESULT=SUCCESS     APPID=application_1432064564266_0003
2015-05-19 19:52:52,803 INFO org.apache.hadoop.yarn.server.resourcemanager.security.DelegationTokenRenewer: application_1432064564266_0003 found existing hdfs token Kind: HDFS_DELEGATION_TOKEN, Service: 10.10.10.10:8020, Ident: (HDFS_DELEGATION_TOKEN token 5 for testuser)
2015-05-19 19:52:52,928 INFO org.apache.hadoop.yarn.server.resourcemanager.security.DelegationTokenRenewer: Renewed delegation-token= [Kind: HDFS_DELEGATION_TOKEN, Service: 10.10.10.10:8020, Ident: (HDFS_DELEGATION_TOKEN token 5 for testuser);exp=1432151572908], for application_1432064564266_0003
2015-05-19 19:52:52,928 INFO org.apache.hadoop.yarn.server.resourcemanager.security.DelegationTokenRenewer: Renew Kind: HDFS_DELEGATION_TOKEN, Service: 10.10.10.10:8020, Ident: (HDFS_DELEGATION_TOKEN token 5 for testuser);exp=1432151572908 in 86399980 ms, appId = application_1432064564266_0003
2015-05-19 19:52:52,928 INFO org.apache.hadoop.yarn.server.resourcemanager.rmapp.RMAppImpl: Storing application with id application_1432064564266_0003
2015-05-19 19:52:52,928 INFO org.apache.hadoop.yarn.server.resourcemanager.rmapp.RMAppImpl: application_1432064564266_0003 State change from NEW to NEW_SAVING
2015-05-19 19:52:52,928 INFO org.apache.hadoop.yarn.server.resourcemanager.recovery.RMStateStore: Storing info for app: application_1432064564266_0003
2015-05-19 19:52:52,928 INFO org.apache.hadoop.yarn.server.resourcemanager.rmapp.RMAppImpl: application_1432064564266_0003 State change from NEW_SAVING to SUBMITTED
2015-05-19 19:52:52,929 INFO org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.ParentQueue: Application added - appId: application_1432064564266_0003 user: testuser leaf-queue of parent: root #applications: 1
2015-05-19 19:52:52,929 INFO org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.CapacityScheduler: Accepted application application_1432064564266_0003 from user: testuser, in queue: default
2015-05-19 19:52:52,929 INFO org.apache.hadoop.yarn.server.resourcemanager.rmapp.RMAppImpl: application_1432064564266_0003 State change from SUBMITTED to ACCEPTED
2015-05-19 19:52:52,929 INFO org.apache.hadoop.yarn.server.resourcemanager.ApplicationMasterService: Registering app attempt : appattempt_1432064564266_0003_000001
2015-05-19 19:52:52,929 INFO org.apache.hadoop.yarn.server.resourcemanager.rmapp.attempt.RMAppAttemptImpl: appattempt_1432064564266_0003_000001 State change from NEW to SUBMITTED
2015-05-19 19:52:52,929 INFO org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.LeafQueue: Application application_1432064564266_0003 from user: testuser activated in queue: default
2015-05-19 19:52:52,929 INFO org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.LeafQueue: Application added - appId: application_1432064564266_0003 user: org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.LeafQueue$User@72fa0d32, leaf-queue: default #user-pending-applications: 0 #user-active-applications: 1 #queue-pending-applications: 0 #queue-active-applications: 1
2015-05-19 19:52:52,929 INFO org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.CapacityScheduler: Added Application Attempt appattempt_1432064564266_0003_000001 to scheduler from user testuser in queue default
2015-05-19 19:52:52,930 INFO org.apache.hadoop.yarn.server.resourcemanager.rmapp.attempt.RMAppAttemptImpl: appattempt_1432064564266_0003_000001 State change from SUBMITTED to SCHEDULED
2015-05-19 19:52:53,718 INFO org.apache.hadoop.yarn.server.resourcemanager.rmcontainer.RMContainerImpl: container_1432064564266_0003_01_000001 Container Transitioned from NEW to ALLOCATED
2015-05-19 19:52:53,718 INFO org.apache.hadoop.yarn.server.resourcemanager.RMAuditLogger: USER=testuser OPERATION=AM Allocated Container        TARGET=SchedulerApp     RESULT=SUCCESS  APPID=application_1432064564266_0003       CONTAINERID=container_1432064564266_0003_01_000001
2015-05-19 19:52:53,718 INFO org.apache.hadoop.yarn.server.resourcemanager.scheduler.SchedulerNode: Assigned container container_1432064564266_0003_01_000001 of capacity <memory:1024, vCores:1> on host ip-10-10-128-10.ec2.internal:9032, which has 1 containers, <memory:1024, vCores:1> used and <memory:7168, vCores:7> available after allocation
2015-05-19 19:52:53,718 INFO org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.LeafQueue: assignedContainer application attempt=appattempt_1432064564266_0003_000001 container=Container: [ContainerId: container_1432064564266_0003_01_000001, NodeId: ip-10-10-128-10.ec2.internal:9032, NodeHttpAddress: ip-10-10-128-10.ec2.internal:8090, Resource: <memory:1024, vCores:1>, Priority: 0, Token: null, ] queue=default: capacity=1.0, absoluteCapacity=1.0, usedResources=<memory:0, vCores:0>, usedCapacity=0.0, absoluteUsedCapacity=0.0, numApps=1, numContainers=0 clusterResource=<memory:8192, vCores:8>
2015-05-19 19:52:53,718 INFO org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.ParentQueue: Re-sorting assigned queue: root.default stats: default: capacity=1.0, absoluteCapacity=1.0, usedResources=<memory:1024, vCores:1>, usedCapacity=0.125, absoluteUsedCapacity=0.125, numApps=1, numContainers=1
2015-05-19 19:52:53,718 INFO org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.ParentQueue: assignedContainer queue=root usedCapacity=0.125 absoluteUsedCapacity=0.125 used=<memory:1024, vCores:1> cluster=<memory:8192, vCores:8>
2015-05-19 19:52:53,719 INFO org.apache.hadoop.yarn.server.resourcemanager.security.NMTokenSecretManagerInRM: Sending NMToken for nodeId : ip-10-10-128-10.ec2.internal:9032 for container : container_1432064564266_0003_01_000001
2015-05-19 19:52:53,720 INFO org.apache.hadoop.yarn.server.resourcemanager.rmcontainer.RMContainerImpl: container_1432064564266_0003_01_000001 Container Transitioned from ALLOCATED to ACQUIRED
2015-05-19 19:52:53,720 INFO org.apache.hadoop.yarn.server.resourcemanager.security.NMTokenSecretManagerInRM: Clear node set for appattempt_1432064564266_0003_000001
2015-05-19 19:52:53,721 INFO org.apache.hadoop.yarn.server.resourcemanager.rmapp.attempt.RMAppAttemptImpl: Storing attempt: AppId: application_1432064564266_0003 AttemptId: appattempt_1432064564266_0003_000001 MasterContainer: Container: [ContainerId: container_1432064564266_0003_01_000001, NodeId: ip-10-10-128-10.ec2.internal:9032, NodeHttpAddress: ip-10-10-128-10.ec2.internal:8090, Resource: <memory:1024, vCores:1>, Priority: 0, Token: Token { kind: ContainerToken, service: 10.10.128.10:9032 }, ]
2015-05-19 19:52:53,721 INFO org.apache.hadoop.yarn.server.resourcemanager.rmapp.attempt.RMAppAttemptImpl: appattempt_1432064564266_0003_000001 State change from SCHEDULED to ALLOCATED_SAVING
2015-05-19 19:52:53,721 INFO org.apache.hadoop.yarn.server.resourcemanager.rmapp.attempt.RMAppAttemptImpl: appattempt_1432064564266_0003_000001 State change from ALLOCATED_SAVING to ALLOCATED
2015-05-19 19:52:53,723 INFO org.apache.hadoop.yarn.server.resourcemanager.amlauncher.AMLauncher: Launching masterappattempt_1432064564266_0003_000001
2015-05-19 19:52:53,725 INFO org.apache.hadoop.yarn.server.resourcemanager.amlauncher.AMLauncher: Setting up container Container: [ContainerId: container_1432064564266_0003_01_000001, NodeId: ip-10-10-128-10.ec2.internal:9032, NodeHttpAddress: ip-10-10-128-10.ec2.internal:8090, Resource: <memory:1024, vCores:1>, Priority: 0, Token: Token { kind: ContainerToken, service: 10.10.128.10:9032 }, ] for AM appattempt_1432064564266_0003_000001
2015-05-19 19:52:53,725 INFO org.apache.hadoop.yarn.server.resourcemanager.amlauncher.AMLauncher: Command to launch container container_1432064564266_0003_01_000001 : {{JAVA_HOME}}/bin/java,-server,-Xmx512m,-Djava.io.tmpdir={{PWD}}/tmp,'-Dspark.fileserver.uri=http://10.10.127.10:8003','-Dspark.broadcast.port=8004','-Dspark.executor.port=8002','-Dspark.driver.appUIAddress=http://ip-10-10-127-10.ec2.internal:4040','-Dspark.fileserver.port=8003','-Dspark.ui.port=4040','-Dspark.tachyonStore.folderName=spark-ea0a49a2-1643-4410-892d-690c62cb6857','-Dspark.driver.extraJavaOptions=-Djava.net.preferIPv4Stack=true','-Dspark.jars=file:/home/testuser/spark/lib/spark-examples-1.3.1-hadoop2.6.0.jar','-Dspark.app.name=Spark Pi','-Dspark.akka.threads=1','-Dspark.authenticate.secret=fubar','-Dspark.eventlog.enabled=true','-Dspark.driver.host=ip-10-10-127-10.ec2.internal','-Dspark.blockManager.port=8007','-Dspark.history.ui.port=18080','-Dspark.replClassServer.port=8006','-Dspark.worker.ui.port=8081','-Dspark.master=yarn-client','-Dspark.yarn.preserve.staging.files=true','-Dspark.executor.extraJavaOptions=-XX:+PrintGCDetails -Dkey=value -Dnumbers=\"one two three\"','-Dspark.yarn.access.namenodes=hdfs://10.10.10.10:8020','-Dspark.logConf=true','-Dspark.driver.port=8001','-Dspark.executor.id=<driver>','-Dspark.akka.heartbeat.interval=100','-Dspark.master.ui.port=8080','-Dspark.yarn.am.waitTime=200000','-Dspark.yarn.am.extraJavaOptions=-Djava.net.preferIPv4Stack=true','-Dspark.executor.instances=1','-Dspark.yarn.am.port=8008','-Dspark.eventlog.dir=hdfs://10.10.10.10:8020/user/testuser/spark/eventlog','-Dspark.executor.cores=1','-Dspark.authenticate=true','-Djava.net.preferIPv4Stack=true',-Dspark.yarn.app.container.log.dir=<LOG_DIR>,org.apache.spark.deploy.yarn.ExecutorLauncher,--arg,'ip-10-10-127-10.ec2.internal:8001',--executor-memory,1024m,--executor-cores,1,--num-executors ,1,1>,<LOG_DIR>/stdout,2>,<LOG_DIR>/stderr
2015-05-19 19:52:53,725 INFO org.apache.hadoop.yarn.server.resourcemanager.security.AMRMTokenSecretManager: Create AMRMToken for ApplicationAttempt: appattempt_1432064564266_0003_000001
2015-05-19 19:52:53,725 INFO org.apache.hadoop.yarn.server.resourcemanager.security.AMRMTokenSecretManager: Creating password for appattempt_1432064564266_0003_000001
2015-05-19 19:52:53,750 INFO org.apache.hadoop.yarn.server.resourcemanager.amlauncher.AMLauncher: Done launching container Container: [ContainerId: container_1432064564266_0003_01_000001, NodeId: ip-10-10-128-10.ec2.internal:9032, NodeHttpAddress: ip-10-10-128-10.ec2.internal:8090, Resource: <memory:1024, vCores:1>, Priority: 0, Token: Token { kind: ContainerToken, service: 10.10.128.10:9032 }, ] for AM appattempt_1432064564266_0003_000001
2015-05-19 19:52:53,750 INFO org.apache.hadoop.yarn.server.resourcemanager.rmapp.attempt.RMAppAttemptImpl: appattempt_1432064564266_0003_000001 State change from ALLOCATED to LAUNCHED
2015-05-19 19:52:54,733 INFO org.apache.hadoop.yarn.server.resourcemanager.rmcontainer.RMContainerImpl: container_1432064564266_0003_01_000001 Container Transitioned from ACQUIRED to RUNNING
2015-05-19 19:53:17,318 INFO SecurityLogger.org.apache.hadoop.ipc.Server: Auth successful for appattempt_1432064564266_0003_000001 (auth:SIMPLE)
2015-05-19 19:53:17,376 WARN org.apache.hadoop.security.ShellBasedUnixGroupsMapping: got exception trying to get groups for user appattempt_1432064564266_0003_000001: id: appattempt_1432064564266_0003_000001: no such user

2015-05-19 19:53:17,376 WARN org.apache.hadoop.security.UserGroupInformation: No groups available for user appattempt_1432064564266_0003_000001
2015-05-19 19:53:17,376 INFO SecurityLogger.org.apache.hadoop.security.authorize.ServiceAuthorizationManager: Authorization successful for appattempt_1432064564266_0003_000001 (auth:TOKEN) for protocol=interface org.apache.hadoop.yarn.api.ApplicationMasterProtocolPB
2015-05-19 19:53:17,377 INFO org.apache.hadoop.yarn.server.resourcemanager.ApplicationMasterService: AM registration appattempt_1432064564266_0003_000001
2015-05-19 19:53:17,377 INFO org.apache.hadoop.yarn.server.resourcemanager.RMAuditLogger: USER=testuser IP=10.10.128.10 OPERATION=Register App Master   TARGET=ApplicationMasterService RESULT=SUCCESS     APPID=application_1432064564266_0003    APPATTEMPTID=appattempt_1432064564266_0003_000001
2015-05-19 19:53:17,377 INFO org.apache.hadoop.yarn.server.resourcemanager.ApplicationMasterService: Setting client token master key
2015-05-19 19:53:17,378 INFO org.apache.hadoop.yarn.server.resourcemanager.rmapp.attempt.RMAppAttemptImpl: appattempt_1432064564266_0003_000001 State change from LAUNCHED to RUNNING
2015-05-19 19:53:17,378 INFO org.apache.hadoop.yarn.server.resourcemanager.rmapp.RMAppImpl: application_1432064564266_0003 State change from ACCEPTED to RUNNING
2015-05-19 19:53:18,377 INFO org.apache.hadoop.yarn.server.resourcemanager.rmcontainer.RMContainerImpl: container_1432064564266_0003_01_000002 Container Transitioned from NEW to ALLOCATED
2015-05-19 19:53:18,377 INFO org.apache.hadoop.yarn.server.resourcemanager.RMAuditLogger: USER=testuser OPERATION=AM Allocated Container        TARGET=SchedulerApp     RESULT=SUCCESS  APPID=application_1432064564266_0003       CONTAINERID=container_1432064564266_0003_01_000002
2015-05-19 19:53:18,377 INFO org.apache.hadoop.yarn.server.resourcemanager.scheduler.SchedulerNode: Assigned container container_1432064564266_0003_01_000002 of capacity <memory:2048, vCores:1> on host ip-10-10-128-10.ec2.internal:9032, which has 2 containers, <memory:3072, vCores:2> used and <memory:5120, vCores:6> available after allocation
2015-05-19 19:53:18,377 INFO org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.LeafQueue: assignedContainer application attempt=appattempt_1432064564266_0003_000001 container=Container: [ContainerId: container_1432064564266_0003_01_000002, NodeId: ip-10-10-128-10.ec2.internal:9032, NodeHttpAddress: ip-10-10-128-10.ec2.internal:8090, Resource: <memory:2048, vCores:1>, Priority: 1, Token: null, ] queue=default: capacity=1.0, absoluteCapacity=1.0, usedResources=<memory:1024, vCores:1>, usedCapacity=0.125, absoluteUsedCapacity=0.125, numApps=1, numContainers=1 clusterResource=<memory:8192, vCores:8>
2015-05-19 19:53:18,377 INFO org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.ParentQueue: Re-sorting assigned queue: root.default stats: default: capacity=1.0, absoluteCapacity=1.0, usedResources=<memory:3072, vCores:2>, usedCapacity=0.375, absoluteUsedCapacity=0.375, numApps=1, numContainers=2
2015-05-19 19:53:18,377 INFO org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.ParentQueue: assignedContainer queue=root usedCapacity=0.375 absoluteUsedCapacity=0.375 used=<memory:3072, vCores:2> cluster=<memory:8192, vCores:8>
2015-05-19 19:53:18,967 INFO org.apache.hadoop.yarn.server.resourcemanager.rmapp.attempt.RMAppAttemptImpl: Updating application attempt appattempt_1432064564266_0003_000001 with final state: FINISHING, and exit status: -1000
2015-05-19 19:53:18,967 INFO org.apache.hadoop.yarn.server.resourcemanager.rmapp.attempt.RMAppAttemptImpl: appattempt_1432064564266_0003_000001 State change from RUNNING to FINAL_SAVING
2015-05-19 19:53:18,967 INFO org.apache.hadoop.yarn.server.resourcemanager.rmapp.RMAppImpl: Updating application application_1432064564266_0003 with final state: FINISHING
2015-05-19 19:53:18,967 INFO org.apache.hadoop.yarn.server.resourcemanager.rmapp.RMAppImpl: application_1432064564266_0003 State change from RUNNING to FINAL_SAVING
2015-05-19 19:53:18,967 INFO org.apache.hadoop.yarn.server.resourcemanager.recovery.RMStateStore: Updating info for app: application_1432064564266_0003
2015-05-19 19:53:18,968 INFO org.apache.hadoop.yarn.server.resourcemanager.rmapp.attempt.RMAppAttemptImpl: appattempt_1432064564266_0003_000001 State change from FINAL_SAVING to FINISHING
2015-05-19 19:53:18,968 INFO org.apache.hadoop.yarn.server.resourcemanager.rmapp.RMAppImpl: application_1432064564266_0003 State change from FINAL_SAVING to FINISHING
2015-05-19 19:53:19,076 INFO org.apache.hadoop.yarn.server.resourcemanager.ApplicationMasterService: application_1432064564266_0003 unregistered successfully.
2015-05-19 19:53:20,383 INFO org.apache.hadoop.yarn.server.resourcemanager.rmcontainer.RMContainerImpl: container_1432064564266_0003_01_000001 Container Transitioned from RUNNING to COMPLETED
2015-05-19 19:53:20,384 INFO org.apache.hadoop.yarn.server.resourcemanager.scheduler.common.fica.FiCaSchedulerApp: Completed container: container_1432064564266_0003_01_000001 in state: COMPLETED event:FINISHED
2015-05-19 19:53:20,384 INFO org.apache.hadoop.yarn.server.resourcemanager.RMAuditLogger: USER=testuser OPERATION=AM Released Container TARGET=SchedulerApp     RESULT=SUCCESS  APPID=application_1432064564266_0003       CONTAINERID=container_1432064564266_0003_01_000001
2015-05-19 19:53:20,384 INFO org.apache.hadoop.yarn.server.resourcemanager.scheduler.SchedulerNode: Released container container_1432064564266_0003_01_000001 of capacity <memory:1024, vCores:1> on host ip-10-10-128-10.ec2.internal:9032, which currently has 1 containers, <memory:2048, vCores:1> used and <memory:6144, vCores:7> available, release resources=true
2015-05-19 19:53:20,384 INFO org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.LeafQueue: default used=<memory:2048, vCores:1> numContainers=1 user=testuser user-resources=<memory:2048, vCores:1>
2015-05-19 19:53:20,384 INFO org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.LeafQueue: completedContainer container=Container: [ContainerId: container_1432064564266_0003_01_000001, NodeId: ip-10-10-128-10.ec2.internal:9032, NodeHttpAddress: ip-10-10-128-10.ec2.internal:8090, Resource: <memory:1024, vCores:1>, Priority: 0, Token: Token { kind: ContainerToken, service: 10.10.128.10:9032 }, ] queue=default: capacity=1.0, absoluteCapacity=1.0, usedResources=<memory:2048, vCores:1>, usedCapacity=0.25, absoluteUsedCapacity=0.25, numApps=1, numContainers=1 cluster=<memory:8192, vCores:8>
2015-05-19 19:53:20,384 INFO org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.ParentQueue: completedContainer queue=root usedCapacity=0.25 absoluteUsedCapacity=0.25 used=<memory:2048, vCores:1> cluster=<memory:8192, vCores:8>
2015-05-19 19:53:20,384 INFO org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.ParentQueue: Re-sorting completed queue: root.default stats: default: capacity=1.0, absoluteCapacity=1.0, usedResources=<memory:2048, vCores:1>, usedCapacity=0.25, absoluteUsedCapacity=0.25, numApps=1, numContainers=1
2015-05-19 19:53:20,384 INFO org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.CapacityScheduler: Application attempt appattempt_1432064564266_0003_000001 released container container_1432064564266_0003_01_000001 on node: host: ip-10-10-128-10.ec2.internal:9032 #containers=1 available=6144 used=2048 with event: FINISHED
2015-05-19 19:53:20,384 INFO org.apache.hadoop.yarn.server.resourcemanager.ApplicationMasterService: Unregistering app attempt : appattempt_1432064564266_0003_000001
2015-05-19 19:53:20,385 INFO org.apache.hadoop.yarn.server.resourcemanager.security.AMRMTokenSecretManager: Application finished, removing password for appattempt_1432064564266_0003_000001
2015-05-19 19:53:20,385 INFO org.apache.hadoop.yarn.server.resourcemanager.rmapp.attempt.RMAppAttemptImpl: appattempt_1432064564266_0003_000001 State change from FINISHING to FINISHED
2015-05-19 19:53:20,385 INFO org.apache.hadoop.yarn.server.resourcemanager.rmapp.RMAppImpl: application_1432064564266_0003 State change from FINISHING to FINISHED
2015-05-19 19:53:20,385 INFO org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.CapacityScheduler: Application Attempt appattempt_1432064564266_0003_000001 is done. finalState=FINISHED
2015-05-19 19:53:20,387 INFO org.apache.hadoop.yarn.server.resourcemanager.rmcontainer.RMContainerImpl: container_1432064564266_0003_01_000002 Container Transitioned from ALLOCATED to KILLED
2015-05-19 19:53:20,387 INFO org.apache.hadoop.yarn.server.resourcemanager.scheduler.common.fica.FiCaSchedulerApp: Completed container: container_1432064564266_0003_01_000002 in state: KILLED event:KILL
2015-05-19 19:53:20,387 INFO org.apache.hadoop.yarn.server.resourcemanager.RMAuditLogger: USER=testuser OPERATION=AM Released Container TARGET=SchedulerApp     RESULT=SUCCESS  APPID=application_1432064564266_0003       CONTAINERID=container_1432064564266_0003_01_000002
2015-05-19 19:53:20,387 INFO org.apache.hadoop.yarn.server.resourcemanager.scheduler.SchedulerNode: Released container container_1432064564266_0003_01_000002 of capacity <memory:2048, vCores:1> on host ip-10-10-128-10.ec2.internal:9032, which currently has 0 containers, <memory:0, vCores:0> used and <memory:8192, vCores:8> available, release resources=true
2015-05-19 19:53:20,387 INFO org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.LeafQueue: default used=<memory:0, vCores:0> numContainers=0 user=testuser user-resources=<memory:0, vCores:0>
2015-05-19 19:53:20,387 INFO org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.LeafQueue: completedContainer container=Container: [ContainerId: container_1432064564266_0003_01_000002, NodeId: ip-10-10-128-10.ec2.internal:9032, NodeHttpAddress: ip-10-10-128-10.ec2.internal:8090, Resource: <memory:2048, vCores:1>, Priority: 1, Token: null, ] queue=default: capacity=1.0, absoluteCapacity=1.0, usedResources=<memory:0, vCores:0>, usedCapacity=0.0, absoluteUsedCapacity=0.0, numApps=1, numContainers=0 cluster=<memory:8192, vCores:8>
2015-05-19 19:53:20,387 INFO org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.ParentQueue: completedContainer queue=root usedCapacity=0.0 absoluteUsedCapacity=0.0 used=<memory:0, vCores:0> cluster=<memory:8192, vCores:8>
2015-05-19 19:53:20,387 INFO org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.ParentQueue: Re-sorting completed queue: root.default stats: default: capacity=1.0, absoluteCapacity=1.0, usedResources=<memory:0, vCores:0>, usedCapacity=0.0, absoluteUsedCapacity=0.0, numApps=1, numContainers=0
2015-05-19 19:53:20,387 INFO org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.CapacityScheduler: Application attempt appattempt_1432064564266_0003_000001 released container container_1432064564266_0003_01_000002 on node: host: ip-10-10-128-10.ec2.internal:9032 #containers=0 available=8192 used=0 with event: KILL
2015-05-19 19:53:20,387 INFO org.apache.hadoop.yarn.server.resourcemanager.scheduler.AppSchedulingInfo: Application application_1432064564266_0003 requests cleared
2015-05-19 19:53:20,388 INFO org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.LeafQueue: Application removed - appId: application_1432064564266_0003 user: testuser queue: default #user-pending-applications: 0 #user-active-applications: 0 #queue-pending-applications: 0 #queue-active-applications: 0
2015-05-19 19:53:20,388 INFO org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.ParentQueue: Application removed - appId: application_1432064564266_0003 user: testuser leaf-queue of parent: root #applications: 0
2015-05-19 19:53:20,388 INFO org.apache.hadoop.yarn.server.resourcemanager.RMAuditLogger: USER=testuser OPERATION=Application Finished - Succeeded      TARGET=RMAppManager     RESULT=SUCCESS     APPID=application_1432064564266_0003
2015-05-19 19:53:20,389 INFO org.apache.hadoop.yarn.server.resourcemanager.RMAppManager$ApplicationSummary: appId=application_1432064564266_0003,name=Spark Pi,user=testuser,queue=default,state=FINISHED,trackingUrl=https://ip-10-10-127-10.ec2.internal:8090/proxy/application_1432064564266_0003/A,appMasterHost=ip-10-10-128-10.ec2.internal,startTime=1432065172758,finishTime=1432065198967,finalStatus=SUCCEEDED
2015-05-19 19:53:20,389 INFO org.apache.hadoop.hdfs.DFSClient: Cancelling HDFS_DELEGATION_TOKEN token 5 for testuser on 10.10.10.10:8020
2015-05-19 19:53:20,390 INFO org.apache.hadoop.yarn.server.resourcemanager.amlauncher.AMLauncher: Cleaning master appattempt_1432064564266_0003_000001
2015-05-19 19:53:20,442 INFO org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.CapacityScheduler: Null container completed...
root@ip-10-10-127-10:/var/log/hadoop>

Keith Nance
Sr. Software Engineer
*Email: knance@smartronix.com
*    Cell: 808-343-0071
www.smartronix.com


RE: SparkPi fails with Job aborted due to stage failure: Task serialization failed:

Posted by Rohith Sharma K S <ro...@huawei.com>.
Hi,

From the ResourceManager log, it is very clear that the job has succeeded; as far as YARN is concerned, there is no problem running Spark applications.

2015-05-19 19:53:20,388 INFO org.apache.hadoop.yarn.server.resourcemanager.RMAuditLogger: USER=testuser OPERATION=Application Finished - Succeeded      TARGET=RMAppManager     RESULT=SUCCESS     APPID=application_1432064564266_0003
2015-05-19 19:53:20,389 INFO org.apache.hadoop.yarn.server.resourcemanager.RMAppManager$ApplicationSummary: appId=application_1432064564266_0003,name=Spark Pi,user=testuser,queue=default,state=FINISHED,trackingUrl=https://ip-10-10-127-10.ec2.internal:8090/proxy/application_1432064564266_0003/A,appMasterHost=ip-10-10-128-10.ec2.internal,startTime=1432065172758,finishTime=1432065198967,finalStatus=SUCCEEDED

But I have no idea what Spark is doing internally. Would you mind sending a mail to the Spark user mailing list?
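
The same YARN-side result can be cross-checked from the command line; a minimal sketch, assuming the application id from the log above is still known to the ResourceManager:

yarn application -status application_1432064564266_0003

This prints the application report (state, final status, tracking URL) straight from the ResourceManager, so for this run it should show FINISHED / SUCCEEDED even though the Spark driver reported the stage failure.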


Thanks & Regards
Rohith Sharma K S

From: Nance, Keith [mailto:knance@smartronix.com]
Sent: 20 May 2015 02:05
To: user@hadoop.apache.org
Subject: SparkPi fails with Job aborted due to stage failure: Task serialization failed:

All, unable to find any reference to my issue with spark.  Any ideas?  Thanks for any and all help
Attached are logs from the Spark job (SparkPi) results, Userlog, Nodemanager, and Resourcemanager.
###: SPARK JOB RESULTS :###
###########################
[testuser@ip-10-10-127-10 spark]$ ./bin/spark-submit --verbose --class org.apache.spark.examples.SparkPi --master yarn-client --num-executors 1 --executor-cores 1 lib/spark-examples*.jar 10
/home/testuser/spark/conf/spark-env.sh: line 55: -Dspark.history.kerberos.principal=spark/ip-10-10-127-10.ec2.internal@MALARD.LOCAL: No such file or directory
Using properties file: /home/testuser/spark/conf/spark-defaults.conf
Adding default property: spark.broadcast.port=8004
Adding default property: spark.executor.extraJavaOptions=-XX:+PrintGCDetails -Dkey=value -Dnumbers="one two three"
Adding default property: spark.akka.threads=1
Adding default property: spark.ui.port=4040
Adding default property: spark.driver.port=8001
Adding default property: spark.akka.heartbeat.interval=100
Adding default property: spark.yarn.access.namenodes=hdfs://10.10.10.10:8020
Adding default property: spark.executor.port=8002
Adding default property: spark.logConf=true
Adding default property: spark.yarn.am.extraJavaOptions=-Djava.net.preferIPv4Stack=true
Adding default property: spark.worker.ui.port=8081
Adding default property: spark.replClassServer.port=8006
Adding default property: spark.eventlog.dir=hdfs://10.10.10.10:8020/user/testuser/spark/eventlog
Adding default property: spark.driver.extraJavaOptions=-Djava.net.preferIPv4Stack=true
Adding default property: spark.blockManager.port=8007
Adding default property: spark.yarn.am.waitTime=200000
Adding default property: spark.master=yarn-client
Adding default property: spark.yarn.preserve.staging.files=true
Adding default property: spark.fileserver.port=8003
Adding default property: spark.authenticate=true
Adding default property: spark.yarn.am.port=8008
Adding default property: spark.authenticate.secret=fubar
Adding default property: spark.master.ui.port=8080
Adding default property: spark.history.ui.port=18080
Adding default property: spark.eventlog.enabled=true
Parsed arguments:
  master                  yarn-client
  deployMode              null
  executorMemory          null
  executorCores           1
  totalExecutorCores      null
  propertiesFile          /home/testuser/spark/conf/spark-defaults.conf
  driverMemory            null
  driverCores             null
  driverExtraClassPath    null
  driverExtraLibraryPath  null
 driverExtraJavaOptions  -Djava.net.preferIPv4Stack=true
  supervise               false
  queue                   null
  numExecutors            1
  files                   null
  pyFiles                 null
  archives                null
  mainClass               org.apache.spark.examples.SparkPi
  primaryResource         file:/home/testuser/spark/lib/spark-examples-1.3.1-hadoop2.6.0.jar
  name                    org.apache.spark.examples.SparkPi
  childArgs               [10]
  jars                    null
  packages                null
  repositories            null
  verbose                 true

Spark properties used, including those specified through
--conf and those from the properties file /home/testuser/spark/conf/spark-defaults.conf:
  spark.blockManager.port -> 8007
  spark.yarn.access.namenodes -> hdfs://10.10.10.10:8020
  spark.broadcast.port -> 8004
  spark.authenticate.secret -> fubar
  spark.authenticate -> true
  spark.eventlog.dir -> hdfs://10.10.10.10:8020/user/testuser/spark/eventlog
  spark.logConf -> true
  spark.replClassServer.port -> 8006
  spark.history.ui.port -> 18080
  spark.fileserver.port -> 8003
  spark.ui.port -> 4040
  spark.executor.extraJavaOptions -> -XX:+PrintGCDetails -Dkey=value -Dnumbers="one two three"
  spark.driver.extraJavaOptions -> -Djava.net.preferIPv4Stack=true
  spark.yarn.am.waitTime -> 200000
  spark.yarn.am.extraJavaOptions -> -Djava.net.preferIPv4Stack=true
  spark.worker.ui.port -> 8081
  spark.driver.port -> 8001
  spark.master -> yarn-client
  spark.yarn.preserve.staging.files -> true
  spark.yarn.am.port -> 8008
  spark.akka.heartbeat.interval -> 100
  spark.executor.port -> 8002
  spark.master.ui.port -> 8080
  spark.eventlog.enabled -> true
  spark.akka.threads -> 1


Main class:
org.apache.spark.examples.SparkPi
Arguments:
10
System properties:
spark.blockManager.port -> 8007
spark.yarn.access.namenodes -> hdfs://10.10.10.10:8020
spark.broadcast.port -> 8004
spark.authenticate.secret -> fubar
spark.eventlog.dir -> hdfs://10.10.10.10:8020/user/testuser/spark/eventlog
spark.authenticate -> true
spark.executor.instances -> 1
spark.logConf -> true
spark.replClassServer.port -> 8006
spark.history.ui.port -> 18080
spark.fileserver.port -> 8003
SPARK_SUBMIT -> true
spark.ui.port -> 4040
spark.executor.extraJavaOptions -> -XX:+PrintGCDetails -Dkey=value -Dnumbers="one two three"
spark.app.name -> org.apache.spark.examples.SparkPi
spark.driver.extraJavaOptions -> -Djava.net.preferIPv4Stack=true
spark.jars -> file:/home/testuser/spark/lib/spark-examples-1.3.1-hadoop2.6.0.jar
spark.yarn.am.waitTime -> 200000
spark.yarn.am.extraJavaOptions -> -Djava.net.preferIPv4Stack=true
spark.master -> yarn-client
spark.worker.ui.port -> 8081
spark.driver.port -> 8001
spark.yarn.preserve.staging.files -> true
spark.yarn.am.port -> 8008
spark.akka.heartbeat.interval -> 100
spark.executor.port -> 8002
spark.executor.cores -> 1
spark.eventlog.enabled -> true
spark.master.ui.port -> 8080
spark.akka.threads -> 1
Classpath elements:
file:/home/testuser/spark/lib/spark-examples-1.3.1-hadoop2.6.0.jar


15/05/19 19:52:37 INFO spark.SparkContext: Running Spark version 1.3.1
15/05/19 19:52:37 INFO spark.SparkContext: Spark configuration:
spark.akka.heartbeat.interval=100
spark.akka.threads=1
spark.app.name=Spark Pi
spark.authenticate=true
spark.authenticate.secret=fubar
spark.blockManager.port=8007
spark.broadcast.port=8004
spark.driver.extraJavaOptions=-Djava.net.preferIPv4Stack=true
spark.driver.port=8001
spark.eventlog.dir=hdfs://10.10.10.10:8020/user/testuser/spark/eventlog
spark.eventlog.enabled=true
spark.executor.cores=1
spark.executor.extraJavaOptions=-XX:+PrintGCDetails -Dkey=value -Dnumbers="one two three"
spark.executor.instances=1
spark.executor.port=8002
spark.fileserver.port=8003
spark.history.ui.port=18080
spark.jars=file:/home/testuser/spark/lib/spark-examples-1.3.1-hadoop2.6.0.jar
spark.logConf=true
spark.master=yarn-client
spark.master.ui.port=8080
spark.replClassServer.port=8006
spark.ui.port=4040
spark.worker.ui.port=8081
spark.yarn.access.namenodes=hdfs://10.10.10.10:8020
spark.yarn.am.extraJavaOptions=-Djava.net.preferIPv4Stack=true
spark.yarn.am.port=8008
spark.yarn.am.waitTime=200000
spark.yarn.preserve.staging.files=true
15/05/19 19:52:38 WARN util.NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
15/05/19 19:52:39 INFO spark.SecurityManager: Changing view acls to: testuser
15/05/19 19:52:39 INFO spark.SecurityManager: Changing modify acls to: testuser
15/05/19 19:52:39 INFO spark.SecurityManager: adding secret to credentials in yarn mode
15/05/19 19:52:39 INFO spark.SecurityManager: SecurityManager: authentication enabled; ui acls disabled; users with view permissions: Set(testuser); users with modify permissions: Set(testuser)
15/05/19 19:52:40 INFO slf4j.Slf4jLogger: Slf4jLogger started
15/05/19 19:52:41 INFO Remoting: Starting remoting
15/05/19 19:52:41 INFO Remoting: Remoting started; listening on addresses :[akka.tcp://sparkDriver@ip-10-10-127-10.ec2.internal:8001]
15/05/19 19:52:41 INFO util.Utils: Successfully started service 'sparkDriver' on port 8001.
15/05/19 19:52:41 INFO spark.SparkEnv: Registering MapOutputTracker
15/05/19 19:52:41 INFO spark.SparkEnv: Registering BlockManagerMaster
15/05/19 19:52:41 INFO storage.DiskBlockManager: Created local directory at /scratch/spark-17902ac9-b400-4698-97c3-069d804a29e3/blockmgr-d95b3bdf-9c4d-4b48-97a5-4983dd2ab66d
15/05/19 19:52:41 INFO storage.MemoryStore: MemoryStore started with capacity 267.3 MB
15/05/19 19:52:42 INFO spark.HttpFileServer: HTTP File server directory is /scratch/spark-d2fb7948-5ab3-4f22-804c-7485d209bd3e/httpd-91938877-b371-4c0c-ba3d-bb7ee8ec4e09
15/05/19 19:52:42 INFO spark.HttpServer: Starting HTTP Server
15/05/19 19:52:42 INFO server.Server: jetty-8.y.z-SNAPSHOT
15/05/19 19:52:42 INFO server.AbstractConnector: Started SocketConnector@0.0.0.0:8003
15/05/19 19:52:42 INFO util.Utils: Successfully started service 'HTTP file server' on port 8003.
15/05/19 19:52:42 INFO spark.SparkEnv: Registering OutputCommitCoordinator
15/05/19 19:52:42 INFO server.Server: jetty-8.y.z-SNAPSHOT
15/05/19 19:52:42 INFO server.AbstractConnector: Started SelectChannelConnector@0.0.0.0:4040
15/05/19 19:52:42 INFO util.Utils: Successfully started service 'SparkUI' on port 4040.
15/05/19 19:52:42 INFO ui.SparkUI: Started SparkUI at http://ip-10-10-127-10.ec2.internal:4040
15/05/19 19:52:43 INFO spark.SparkContext: Added JAR file:/home/testuser/spark/lib/spark-examples-1.3.1-hadoop2.6.0.jar at http://10.10.127.10:8003/jars/spark-examples-1.3.1-hadoop2.6.0.jar with timestamp 1432065163724
15/05/19 19:52:44 INFO client.RMProxy: Connecting to ResourceManager at ip-10-10-127-10.ec2.internal/10.10.127.10:8032
15/05/19 19:52:45 INFO yarn.Client: Requesting a new application from cluster with 1 NodeManagers
15/05/19 19:52:45 INFO yarn.Client: Verifying our application has not requested more than the maximum memory capability of the cluster (8192 MB per container)
15/05/19 19:52:45 INFO yarn.Client: Will allocate AM container, with 896 MB memory including 384 MB overhead
15/05/19 19:52:45 INFO yarn.Client: Setting up container launch context for our AM
15/05/19 19:52:45 INFO yarn.Client: Preparing resources for our AM container
15/05/19 19:52:46 INFO hdfs.DFSClient: Created HDFS_DELEGATION_TOKEN token 5 for testuser on 10.10.10.10:8020
15/05/19 19:52:46 INFO yarn.Client: Uploading resource file:/home/testuser/spark/lib/spark-assembly-1.3.1-hadoop2.6.0.jar -> hdfs://10.10.10.10:8020/user/testuser/.sparkStaging/application_1432064564266_0003/spark-assembly-1.3.1-hadoop2.6.0.jar
15/05/19 19:52:52 INFO yarn.Client: Setting up the launch environment for our AM container
15/05/19 19:52:52 INFO spark.SecurityManager: Changing view acls to: testuser
15/05/19 19:52:52 INFO spark.SecurityManager: Changing modify acls to: testuser
15/05/19 19:52:52 INFO spark.SecurityManager: SecurityManager: authentication enabled; ui acls disabled; users with view permissions: Set(testuser); users with modify permissions: Set(testuser)
15/05/19 19:52:52 INFO yarn.Client: Submitting application 3 to ResourceManager
15/05/19 19:52:53 INFO impl.YarnClientImpl: Submitted application application_1432064564266_0003
15/05/19 19:52:54 INFO yarn.Client: Application report for application_1432064564266_0003 (state: ACCEPTED)
15/05/19 19:52:54 INFO yarn.Client:
         client token: Token { kind: YARN_CLIENT_TOKEN, service:  }
         diagnostics: N/A
         ApplicationMaster host: N/A
         ApplicationMaster RPC port: -1
         queue: default
         start time: 1432065172758
         final status: UNDEFINED
         tracking URL: https://ip-10-10-127-10.ec2.internal:8090/proxy/application_1432064564266_0003/
         user: testuser
15/05/19 19:52:55 INFO yarn.Client: Application report for application_1432064564266_0003 (state: ACCEPTED)
15/05/19 19:52:56 INFO yarn.Client: Application report for application_1432064564266_0003 (state: ACCEPTED)
15/05/19 19:52:57 INFO yarn.Client: Application report for application_1432064564266_0003 (state: ACCEPTED)
15/05/19 19:52:58 INFO yarn.Client: Application report for application_1432064564266_0003 (state: ACCEPTED)
15/05/19 19:52:59 INFO yarn.Client: Application report for application_1432064564266_0003 (state: ACCEPTED)
15/05/19 19:53:00 INFO yarn.Client: Application report for application_1432064564266_0003 (state: ACCEPTED)
15/05/19 19:53:01 INFO yarn.Client: Application report for application_1432064564266_0003 (state: ACCEPTED)
15/05/19 19:53:02 INFO yarn.Client: Application report for application_1432064564266_0003 (state: ACCEPTED)
15/05/19 19:53:03 INFO yarn.Client: Application report for application_1432064564266_0003 (state: ACCEPTED)
15/05/19 19:53:04 INFO yarn.Client: Application report for application_1432064564266_0003 (state: ACCEPTED)
15/05/19 19:53:05 INFO yarn.Client: Application report for application_1432064564266_0003 (state: ACCEPTED)
15/05/19 19:53:06 INFO yarn.Client: Application report for application_1432064564266_0003 (state: ACCEPTED)
15/05/19 19:53:07 INFO yarn.Client: Application report for application_1432064564266_0003 (state: ACCEPTED)
15/05/19 19:53:08 INFO yarn.Client: Application report for application_1432064564266_0003 (state: ACCEPTED)
15/05/19 19:53:09 INFO yarn.Client: Application report for application_1432064564266_0003 (state: ACCEPTED)
15/05/19 19:53:10 INFO yarn.Client: Application report for application_1432064564266_0003 (state: ACCEPTED)
15/05/19 19:53:11 INFO yarn.Client: Application report for application_1432064564266_0003 (state: ACCEPTED)
15/05/19 19:53:12 INFO yarn.Client: Application report for application_1432064564266_0003 (state: ACCEPTED)
15/05/19 19:53:13 INFO yarn.Client: Application report for application_1432064564266_0003 (state: ACCEPTED)
15/05/19 19:53:14 INFO yarn.Client: Application report for application_1432064564266_0003 (state: ACCEPTED)
15/05/19 19:53:15 INFO yarn.Client: Application report for application_1432064564266_0003 (state: ACCEPTED)
15/05/19 19:53:16 INFO yarn.Client: Application report for application_1432064564266_0003 (state: ACCEPTED)
15/05/19 19:53:17 INFO cluster.YarnClientSchedulerBackend: ApplicationMaster registered as Actor[akka.tcp://sparkYarnAM@ip-10-10-128-10.ec2.internal:56977/user/YarnAM#-1453228800]
15/05/19 19:53:17 INFO cluster.YarnClientSchedulerBackend: Add WebUI Filter. org.apache.hadoop.yarn.server.webproxy.amfilter.AmIpFilter, Map(PROXY_HOSTS -> ip-10-10-127-10.ec2.internal, PROXY_URI_BASES -> https://ip-10-10-127-10.ec2.internal:8090/proxy/application_1432064564266_0003), /proxy/application_1432064564266_0003
15/05/19 19:53:17 INFO ui.JettyUtils: Adding filter: org.apache.hadoop.yarn.server.webproxy.amfilter.AmIpFilter
15/05/19 19:53:17 INFO yarn.Client: Application report for application_1432064564266_0003 (state: RUNNING)
15/05/19 19:53:17 INFO yarn.Client:
         client token: Token { kind: YARN_CLIENT_TOKEN, service:  }
         diagnostics: N/A
         ApplicationMaster host: ip-10-10-128-10.ec2.internal
         ApplicationMaster RPC port: 0
         queue: default
         start time: 1432065172758
         final status: UNDEFINED
         tracking URL: https://ip-10-10-127-10.ec2.internal:8090/proxy/application_1432064564266_0003/
         user: testuser
15/05/19 19:53:17 INFO cluster.YarnClientSchedulerBackend: Application application_1432064564266_0003 has started running.
15/05/19 19:53:17 INFO netty.NettyBlockTransferService: Server created on 8007
15/05/19 19:53:17 INFO storage.BlockManagerMaster: Trying to register BlockManager
15/05/19 19:53:17 INFO storage.BlockManagerMasterActor: Registering block manager ip-10-10-127-10.ec2.internal:8007 with 267.3 MB RAM, BlockManagerId(<driver>, ip-10-10-127-10.ec2.internal, 8007)
15/05/19 19:53:17 INFO storage.BlockManagerMaster: Registered BlockManager
15/05/19 19:53:18 INFO cluster.YarnClientSchedulerBackend: SchedulerBackend is ready for scheduling beginning after waiting maxRegisteredResourcesWaitingTime: 30000(ms)
15/05/19 19:53:18 INFO spark.SparkContext: Starting job: reduce at SparkPi.scala:35
15/05/19 19:53:18 INFO scheduler.DAGScheduler: Got job 0 (reduce at SparkPi.scala:35) with 10 output partitions (allowLocal=false)
15/05/19 19:53:18 INFO scheduler.DAGScheduler: Final stage: Stage 0(reduce at SparkPi.scala:35)
15/05/19 19:53:18 INFO scheduler.DAGScheduler: Parents of final stage: List()
15/05/19 19:53:18 INFO scheduler.DAGScheduler: Missing parents: List()
15/05/19 19:53:18 INFO scheduler.DAGScheduler: Submitting Stage 0 (MapPartitionsRDD[1] at map at SparkPi.scala:31), which has no missing parents
15/05/19 19:53:18 INFO cluster.YarnScheduler: Cancelling stage 0
15/05/19 19:53:18 INFO scheduler.DAGScheduler: Stage 0 (reduce at SparkPi.scala:35) failed in Unknown s
15/05/19 19:53:18 INFO scheduler.DAGScheduler: Job 0 failed: reduce at SparkPi.scala:35, took 0.258029 s
Exception in thread "main" org.apache.spark.SparkException: Job aborted due to stage failure: Task serialization failed: java.lang.reflect.InvocationTargetException
sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:57)
sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
java.lang.reflect.Constructor.newInstance(Constructor.java:526)
org.apache.spark.io.CompressionCodec$.createCodec(CompressionCodec.scala:68)
org.apache.spark.io.CompressionCodec$.createCodec(CompressionCodec.scala:60)
org.apache.spark.broadcast.TorrentBroadcast.org$apache$spark$broadcast$TorrentBroadcast$$setConf(TorrentBroadcast.scala:73)
org.apache.spark.broadcast.TorrentBroadcast.<init>(TorrentBroadcast.scala:79)
org.apache.spark.broadcast.TorrentBroadcastFactory.newBroadcast(TorrentBroadcastFactory.scala:34)
org.apache.spark.broadcast.TorrentBroadcastFactory.newBroadcast(TorrentBroadcastFactory.scala:29)
org.apache.spark.broadcast.BroadcastManager.newBroadcast(BroadcastManager.scala:62)
org.apache.spark.SparkContext.broadcast(SparkContext.scala:1051)
org.apache.spark.scheduler.DAGScheduler.org$apache$spark$scheduler$DAGScheduler$$submitMissingTasks(DAGScheduler.scala:839)
org.apache.spark.scheduler.DAGScheduler.org$apache$spark$scheduler$DAGScheduler$$submitStage(DAGScheduler.scala:778)
org.apache.spark.scheduler.DAGScheduler.handleJobSubmitted(DAGScheduler.scala:762)
org.apache.spark.scheduler.DAGSchedulerEventProcessLoop.onReceive(DAGScheduler.scala:1362)
org.apache.spark.scheduler.DAGSchedulerEventProcessLoop.onReceive(DAGScheduler.scala:1354)
org.apache.spark.util.EventLoop$$anon$1.run(EventLoop.scala:48)

        at org.apache.spark.scheduler.DAGScheduler.org$apache$spark$scheduler$DAGScheduler$$failJobAndIndependentStages(DAGScheduler.scala:1204)
        at org.apache.spark.scheduler.DAGScheduler$$anonfun$abortStage$1.apply(DAGScheduler.scala:1193)
        at org.apache.spark.scheduler.DAGScheduler$$anonfun$abortStage$1.apply(DAGScheduler.scala:1192)
        at scala.collection.mutable.ResizableArray$class.foreach(ResizableArray.scala:59)
        at scala.collection.mutable.ArrayBuffer.foreach(ArrayBuffer.scala:47)
        at org.apache.spark.scheduler.DAGScheduler.abortStage(DAGScheduler.scala:1192)
        at org.apache.spark.scheduler.DAGScheduler.org$apache$spark$scheduler$DAGScheduler$$submitMissingTasks(DAGScheduler.scala:847)
        at org.apache.spark.scheduler.DAGScheduler.org$apache$spark$scheduler$DAGScheduler$$submitStage(DAGScheduler.scala:778)
        at org.apache.spark.scheduler.DAGScheduler.handleJobSubmitted(DAGScheduler.scala:762)
        at org.apache.spark.scheduler.DAGSchedulerEventProcessLoop.onReceive(DAGScheduler.scala:1362)
        at org.apache.spark.scheduler.DAGSchedulerEventProcessLoop.onReceive(DAGScheduler.scala:1354)
        at org.apache.spark.util.EventLoop$$anon$1.run(EventLoop.scala:48)
[testuser@ip-10-10-127-10 spark]$
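
The trace above fails while TorrentBroadcast sets up its compression codec: CompressionCodec.createCodec constructs the configured codec class reflectively (snappy by default in Spark 1.3), so the InvocationTargetException wraps whatever the codec constructor threw. One way to narrow that down, offered only as a sketch and not a confirmed fix, is to rerun the same job with a different codec:

./bin/spark-submit --verbose --class org.apache.spark.examples.SparkPi \
  --master yarn-client --num-executors 1 --executor-cores 1 \
  --conf spark.io.compression.codec=lzf \
  lib/spark-examples*.jar 10

If this run gets past task serialization, the problem is local to the default codec on the driver (for example the snappy native library failing to load); if it fails the same way, the codec setting is not the culprit.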



###: SPARK JOB USERLOG RESULTS :###
###################################
[yarn@ip-10-10-128-10 container_1432064564266_0003_01_000001]$ cat stderr
Picked up _JAVA_OPTIONS: -Djava.net.preferIPv4Stack=true
SLF4J: Class path contains multiple SLF4J bindings.
SLF4J: Found binding in [jar:file:/var/hadoop/tmp/yarn/nm-local-dir/usercache/testuser/filecache/13/spark-assembly-1.3.1-hadoop2.6.0.jar!/org/slf4j/impl/StaticLoggerBinder.class]
SLF4J: Found binding in [jar:file:/var/hadoop/share/hadoop/common/lib/slf4j-log4j12-1.7.5.jar!/org/slf4j/impl/StaticLoggerBinder.class]
SLF4J: See http://www.slf4j.org/codes.html#multiple_bindings for an explanation.
SLF4J: Actual binding is of type [org.slf4j.impl.Log4jLoggerFactory]
15/05/19 19:53:09 INFO yarn.ApplicationMaster: Registered signal handlers for [TERM, HUP, INT]
15/05/19 19:53:11 WARN util.NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
15/05/19 19:53:12 INFO yarn.ApplicationMaster: ApplicationAttemptId: appattempt_1432064564266_0003_000001
15/05/19 19:53:14 INFO spark.SecurityManager: Changing view acls to: testuser
15/05/19 19:53:14 INFO spark.SecurityManager: Changing modify acls to: testuser
15/05/19 19:53:14 INFO spark.SecurityManager: SecurityManager: authentication enabled; ui acls disabled; users with view permissions: Set(testuser); users with modify permissions: Set(testuser)
15/05/19 19:53:15 INFO slf4j.Slf4jLogger: Slf4jLogger started
15/05/19 19:53:15 INFO Remoting: Starting remoting
15/05/19 19:53:16 INFO Remoting: Remoting started; listening on addresses :[akka.tcp://sparkYarnAM@ip-10-10-128-10.ec2.internal:56977]
15/05/19 19:53:16 INFO util.Utils: Successfully started service 'sparkYarnAM' on port 56977.
15/05/19 19:53:16 INFO yarn.ApplicationMaster: Waiting for Spark driver to be reachable.
15/05/19 19:53:16 INFO yarn.ApplicationMaster: Driver now available: ip-10-10-127-10.ec2.internal:8001
15/05/19 19:53:16 INFO yarn.ApplicationMaster: Listen to driver: akka.tcp://sparkDriver@ip-10-10-127-10.ec2.internal:8001/user/YarnScheduler
15/05/19 19:53:16 INFO yarn.ApplicationMaster: Add WebUI Filter. AddWebUIFilter(org.apache.hadoop.yarn.server.webproxy.amfilter.AmIpFilter,Map(PROXY_HOSTS -> ip-10-10-127-10.ec2.internal, PROXY_URI_BASES -> https://ip-10-10-127-10.ec2.internal:8090/proxy/application_1432064564266_0003),/proxy/application_1432064564266_0003)
15/05/19 19:53:16 INFO client.RMProxy: Connecting to ResourceManager at /10.10.127.10:8030
15/05/19 19:53:16 INFO yarn.YarnRMClient: Registering the ApplicationMaster
15/05/19 19:53:17 INFO yarn.YarnAllocator: Will request 1 executor containers, each with 1 cores and 1408 MB memory including 384 MB overhead
15/05/19 19:53:17 INFO yarn.YarnAllocator: Container request (host: Any, capability: <memory:1408, vCores:1>)
15/05/19 19:53:17 INFO yarn.ApplicationMaster: Started progress reporter thread - sleep time : 5000
15/05/19 19:53:18 INFO yarn.ApplicationMaster: Driver terminated or disconnected! Shutting down. Disassociated [akka.tcp://sparkYarnAM@ip-10-10-128-10.ec2.internal:56977] -> [akka.tcp://sparkDriver@ip-10-10-127-10.ec2.internal:8001]
15/05/19 19:53:18 INFO yarn.ApplicationMaster: Final app status: SUCCEEDED, exitCode: 0
15/05/19 19:53:18 WARN remote.ReliableDeliverySupervisor: Association with remote system [akka.tcp://sparkDriver@ip-10-10-127-10.ec2.internal:8001] has failed, address is now gated for [5000] ms. Reason is: [Disassociated].
15/05/19 19:53:18 INFO yarn.ApplicationMaster: Driver terminated or disconnected! Shutting down. Disassociated [akka.tcp://sparkYarnAM@ip-10-10-128-10.ec2.internal:56977] -> [akka.tcp://sparkDriver@ip-10-10-127-10.ec2.internal:8001]
15/05/19 19:53:18 INFO yarn.ApplicationMaster: Unregistering ApplicationMaster with SUCCEEDED
15/05/19 19:53:18 INFO impl.AMRMClientImpl: Waiting for application to be successfully unregistered.
[yarn@ip-10-10-128-10 container_1432064564266_0003_01_000001]$



###: YARN NODEMANAGER LOG RESULTS :###
######################################
2015-05-19 19:52:53,729 INFO SecurityLogger.org.apache.hadoop.ipc.Server: Auth successful for appattempt_1432064564266_0003_000001 (auth:SIMPLE)
2015-05-19 19:52:53,746 WARN org.apache.hadoop.security.ShellBasedUnixGroupsMapping: got exception trying to get groups for user appattempt_1432064564266_0003_000001: id: appattempt_1432064564266_0003_000001: no such user

2015-05-19 19:52:53,746 WARN org.apache.hadoop.security.UserGroupInformation: No groups available for user appattempt_1432064564266_0003_000001
2015-05-19 19:52:53,746 INFO SecurityLogger.org.apache.hadoop.security.authorize.ServiceAuthorizationManager: Authorization successful for appattempt_1432064564266_0003_000001 (auth:TOKEN) for protocol=interface org.apache.hadoop.yarn.api.ContainerManagementProtocolPB
2015-05-19 19:52:53,747 INFO org.apache.hadoop.yarn.server.nodemanager.containermanager.ContainerManagerImpl: Start request for container_1432064564266_0003_01_000001 by user testuser
2015-05-19 19:52:53,747 INFO org.apache.hadoop.yarn.server.nodemanager.containermanager.ContainerManagerImpl: Creating a new application reference for app application_1432064564266_0003
2015-05-19 19:52:53,747 INFO org.apache.hadoop.yarn.server.nodemanager.NMAuditLogger: USER=testuser     IP=10.10.127.10 OPERATION=Start Container Request       TARGET=ContainerManageImpl RESULT=SUCCESS   APPID=application_1432064564266_0003    CONTAINERID=container_1432064564266_0003_01_000001
2015-05-19 19:52:53,748 INFO org.apache.hadoop.yarn.server.nodemanager.containermanager.application.Application: Application application_1432064564266_0003 transitioned from NEW to INITING
2015-05-19 19:52:53,748 INFO org.apache.hadoop.yarn.server.nodemanager.containermanager.application.Application: Adding container_1432064564266_0003_01_000001 to application application_1432064564266_0003
2015-05-19 19:52:53,748 INFO org.apache.hadoop.yarn.server.nodemanager.containermanager.application.Application: Application application_1432064564266_0003 transitioned from INITING to RUNNING
2015-05-19 19:52:53,748 INFO org.apache.hadoop.yarn.server.nodemanager.containermanager.container.Container: Container container_1432064564266_0003_01_000001 transitioned from NEW to LOCALIZING
2015-05-19 19:52:53,748 INFO org.apache.hadoop.yarn.server.nodemanager.containermanager.AuxServices: Got event CONTAINER_INIT for appId application_1432064564266_0003
2015-05-19 19:52:53,748 INFO org.apache.hadoop.yarn.server.nodemanager.containermanager.localizer.LocalizedResource: Resource hdfs://10.10.10.10:8020/user/testuser/.sparkStaging/application_1432064564266_0003/spark-assembly-1.3.1-hadoop2.6.0.jar transitioned from INIT to DOWNLOADING
2015-05-19 19:52:53,748 INFO org.apache.hadoop.yarn.server.nodemanager.containermanager.localizer.ResourceLocalizationService: Created localizer for container_1432064564266_0003_01_000001
2015-05-19 19:52:53,763 INFO org.apache.hadoop.yarn.server.nodemanager.containermanager.localizer.ResourceLocalizationService: Writing credentials to the nmPrivate file /var/hadoop/tmp/yarn/nm-local-dir/nmPrivate/container_1432064564266_0003_01_000001.tokens. Credentials list:
2015-05-19 19:52:58,614 INFO SecurityLogger.org.apache.hadoop.ipc.Server: Auth successful for testing (auth:SIMPLE)
2015-05-19 19:52:58,685 INFO SecurityLogger.org.apache.hadoop.security.authorize.ServiceAuthorizationManager: Authorization successful for testing (auth:TOKEN) for protocol=interface org.apache.hadoop.yarn.server.nodemanager.api.LocalizationProtocolPB
2015-05-19 19:53:08,133 INFO org.apache.hadoop.yarn.server.nodemanager.containermanager.localizer.LocalizedResource: Resource hdfs://10.10.10.10:8020/user/testuser/.sparkStaging/application_1432064564266_0003/spark-assembly-1.3.1-hadoop2.6.0.jar(->/var/hadoop/tmp/yarn/nm-local-dir/usercache/testuser/filecache/13/spark-assembly-1.3.1-hadoop2.6.0.jar) transitioned from DOWNLOADING to LOCALIZED
2015-05-19 19:53:08,134 INFO org.apache.hadoop.yarn.server.nodemanager.containermanager.container.Container: Container container_1432064564266_0003_01_000001 transitioned from LOCALIZING to LOCALIZED
2015-05-19 19:53:08,217 INFO org.apache.hadoop.yarn.server.nodemanager.containermanager.container.Container: Container container_1432064564266_0003_01_000001 transitioned from LOCALIZED to RUNNING
2015-05-19 19:53:10,044 INFO org.apache.hadoop.yarn.server.nodemanager.containermanager.monitor.ContainersMonitorImpl: Starting resource-monitoring for container_1432064564266_0003_01_000001
2015-05-19 19:53:10,092 INFO org.apache.hadoop.yarn.server.nodemanager.containermanager.monitor.ContainersMonitorImpl: Memory usage of ProcessTree 20620 for container-id container_1432064564266_0003_01_000001: 79.5 MB of 1 GB physical memory used; 1.1 GB of 2.1 GB virtual memory used
2015-05-19 19:53:13,105 INFO org.apache.hadoop.yarn.server.nodemanager.containermanager.monitor.ContainersMonitorImpl: Memory usage of ProcessTree 20620 for container-id container_1432064564266_0003_01_000001: 99.8 MB of 1 GB physical memory used; 1.1 GB of 2.1 GB virtual memory used
2015-05-19 19:53:16,122 INFO org.apache.hadoop.yarn.server.nodemanager.containermanager.monitor.ContainersMonitorImpl: Memory usage of ProcessTree 20620 for container-id container_1432064564266_0003_01_000001: 121.4 MB of 1 GB physical memory used; 1.1 GB of 2.1 GB virtual memory used
2015-05-19 19:53:19,129 INFO org.apache.hadoop.yarn.server.nodemanager.containermanager.monitor.ContainersMonitorImpl: Memory usage of ProcessTree 20620 for container-id container_1432064564266_0003_01_000001: 125.8 MB of 1 GB physical memory used; 1.1 GB of 2.1 GB virtual memory used
2015-05-19 19:53:19,402 INFO org.apache.hadoop.yarn.server.nodemanager.containermanager.launcher.ContainerLaunch: Container container_1432064564266_0003_01_000001 succeeded
2015-05-19 19:53:19,402 INFO org.apache.hadoop.yarn.server.nodemanager.containermanager.container.Container: Container container_1432064564266_0003_01_000001 transitioned from RUNNING to EXITED_WITH_SUCCESS
2015-05-19 19:53:19,402 INFO org.apache.hadoop.yarn.server.nodemanager.containermanager.launcher.ContainerLaunch: Cleaning up container container_1432064564266_0003_01_000001
2015-05-19 19:53:19,457 INFO org.apache.hadoop.yarn.server.nodemanager.NMAuditLogger: USER=testuser     OPERATION=Container Finished - Succeeded        TARGET=ContainerImpl    RESULT=SUCCESS     APPID=application_1432064564266_0003    CONTAINERID=container_1432064564266_0003_01_000001
2015-05-19 19:53:19,457 INFO org.apache.hadoop.yarn.server.nodemanager.containermanager.container.Container: Container container_1432064564266_0003_01_000001 transitioned from EXITED_WITH_SUCCESS to DONE
2015-05-19 19:53:19,457 INFO org.apache.hadoop.yarn.server.nodemanager.containermanager.application.Application: Removing container_1432064564266_0003_01_000001 from application application_1432064564266_0003
2015-05-19 19:53:19,458 INFO org.apache.hadoop.yarn.server.nodemanager.containermanager.AuxServices: Got event CONTAINER_STOP for appId application_1432064564266_0003
2015-05-19 19:53:19,458 INFO org.apache.hadoop.yarn.server.nodemanager.LinuxContainerExecutor: Deleting absolute path : /var/hadoop/tmp/yarn/nm-local-dir/usercache/testuser/appcache/application_1432064564266_0003/container_1432064564266_0003_01_000001
2015-05-19 19:53:20,428 INFO SecurityLogger.org.apache.hadoop.ipc.Server: Auth successful for appattempt_1432064564266_0003_000001 (auth:SIMPLE)
2015-05-19 19:53:20,432 WARN org.apache.hadoop.security.UserGroupInformation: No groups available for user appattempt_1432064564266_0003_000001
2015-05-19 19:53:20,432 INFO SecurityLogger.org.apache.hadoop.security.authorize.ServiceAuthorizationManager: Authorization successful for appattempt_1432064564266_0003_000001 (auth:TOKEN) for protocol=interface org.apache.hadoop.yarn.api.ContainerManagementProtocolPB
2015-05-19 19:53:20,432 INFO org.apache.hadoop.yarn.server.nodemanager.containermanager.ContainerManagerImpl: Stopping container with container Id: container_1432064564266_0003_01_000001
2015-05-19 19:53:20,432 INFO org.apache.hadoop.yarn.server.nodemanager.NMAuditLogger: USER=testuser     IP=10.10.127.10 OPERATION=Stop Container Request        TARGET=ContainerManageImpl RESULT=SUCCESS   APPID=application_1432064564266_0003    CONTAINERID=container_1432064564266_0003_01_000001
2015-05-19 19:53:20,442 INFO org.apache.hadoop.yarn.server.nodemanager.NodeStatusUpdaterImpl: Removed completed containers from NM context: [container_1432064564266_0003_01_000001]
2015-05-19 19:53:20,442 INFO org.apache.hadoop.yarn.server.nodemanager.containermanager.application.Application: Application application_1432064564266_0003 transitioned from RUNNING to APPLICATION_RESOURCES_CLEANINGUP
2015-05-19 19:53:20,442 INFO org.apache.hadoop.yarn.server.nodemanager.containermanager.AuxServices: Got event APPLICATION_STOP for appId application_1432064564266_0003
2015-05-19 19:53:20,442 INFO org.apache.hadoop.yarn.server.nodemanager.containermanager.application.Application: Application application_1432064564266_0003 transitioned from APPLICATION_RESOURCES_CLEANINGUP to FINISHED
2015-05-19 19:53:20,443 INFO org.apache.hadoop.yarn.server.nodemanager.containermanager.loghandler.NonAggregatingLogHandler: Scheduling Log Deletion for application: application_1432064564266_0003, with delay of 10800 seconds
2015-05-19 19:53:20,443 INFO org.apache.hadoop.yarn.server.nodemanager.LinuxContainerExecutor: Deleting absolute path : /var/hadoop/tmp/yarn/nm-local-dir/usercache/testuser/appcache/application_1432064564266_0003
2015-05-19 19:53:22,130 INFO org.apache.hadoop.yarn.server.nodemanager.containermanager.monitor.ContainersMonitorImpl: Stopping resource-monitoring for container_1432064564266_0003_01_000001
root@ip-10-10-128-10:/var/log/hadoop>


###: YARN RESOURCE MANAGER LOGS :###
####################################
2015-05-19 19:52:45,408 INFO SecurityLogger.org.apache.hadoop.ipc.Server: Auth successful for testuser@MALARD.LOCAL (auth:KERBEROS)
2015-05-19 19:52:45,447 INFO SecurityLogger.org.apache.hadoop.security.authorize.ServiceAuthorizationManager: Authorization successful for testuser@MALARD.LOCAL (auth:KERBEROS) for protocol=interface org.apache.hadoop.yarn.api.ApplicationClientProtocolPB
2015-05-19 19:52:45,494 INFO org.apache.hadoop.yarn.server.resourcemanager.ClientRMService: Allocated new applicationId: 3
2015-05-19 19:52:52,758 WARN org.apache.hadoop.yarn.server.resourcemanager.rmapp.RMAppImpl: The specific max attempts: 0 for application: 3 is invalid, because it is out of the range [1, 2]. Use the global max attempts instead.
2015-05-19 19:52:52,758 INFO org.apache.hadoop.yarn.server.resourcemanager.ClientRMService: Application with id 3 submitted by user testuser
2015-05-19 19:52:52,758 INFO org.apache.hadoop.yarn.server.resourcemanager.RMAuditLogger: USER=testuser IP=10.10.127.10 OPERATION=Submit Application Request    TARGET=ClientRMService  RESULT=SUCCESS     APPID=application_1432064564266_0003
2015-05-19 19:52:52,803 INFO org.apache.hadoop.yarn.server.resourcemanager.security.DelegationTokenRenewer: application_1432064564266_0003 found existing hdfs token Kind: HDFS_DELEGATION_TOKEN, Service: 10.10.10.10:8020, Ident: (HDFS_DELEGATION_TOKEN token 5 for testuser)
2015-05-19 19:52:52,928 INFO org.apache.hadoop.yarn.server.resourcemanager.security.DelegationTokenRenewer: Renewed delegation-token= [Kind: HDFS_DELEGATION_TOKEN, Service: 10.10.10.10:8020, Ident: (HDFS_DELEGATION_TOKEN token 5 for testuser);exp=1432151572908], for application_1432064564266_0003
2015-05-19 19:52:52,928 INFO org.apache.hadoop.yarn.server.resourcemanager.security.DelegationTokenRenewer: Renew Kind: HDFS_DELEGATION_TOKEN, Service: 10.10.10.10:8020, Ident: (HDFS_DELEGATION_TOKEN token 5 for testuser);exp=1432151572908 in 86399980 ms, appId = application_1432064564266_0003
2015-05-19 19:52:52,928 INFO org.apache.hadoop.yarn.server.resourcemanager.rmapp.RMAppImpl: Storing application with id application_1432064564266_0003
2015-05-19 19:52:52,928 INFO org.apache.hadoop.yarn.server.resourcemanager.rmapp.RMAppImpl: application_1432064564266_0003 State change from NEW to NEW_SAVING
2015-05-19 19:52:52,928 INFO org.apache.hadoop.yarn.server.resourcemanager.recovery.RMStateStore: Storing info for app: application_1432064564266_0003
2015-05-19 19:52:52,928 INFO org.apache.hadoop.yarn.server.resourcemanager.rmapp.RMAppImpl: application_1432064564266_0003 State change from NEW_SAVING to SUBMITTED
2015-05-19 19:52:52,929 INFO org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.ParentQueue: Application added - appId: application_1432064564266_0003 user: testuser leaf-queue of parent: root #applications: 1
2015-05-19 19:52:52,929 INFO org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.CapacityScheduler: Accepted application application_1432064564266_0003 from user: testuser, in queue: default
2015-05-19 19:52:52,929 INFO org.apache.hadoop.yarn.server.resourcemanager.rmapp.RMAppImpl: application_1432064564266_0003 State change from SUBMITTED to ACCEPTED
2015-05-19 19:52:52,929 INFO org.apache.hadoop.yarn.server.resourcemanager.ApplicationMasterService: Registering app attempt : appattempt_1432064564266_0003_000001
2015-05-19 19:52:52,929 INFO org.apache.hadoop.yarn.server.resourcemanager.rmapp.attempt.RMAppAttemptImpl: appattempt_1432064564266_0003_000001 State change from NEW to SUBMITTED
2015-05-19 19:52:52,929 INFO org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.LeafQueue: Application application_1432064564266_0003 from user: testuser activated in queue: default
2015-05-19 19:52:52,929 INFO org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.LeafQueue: Application added - appId: application_1432064564266_0003 user: org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.LeafQueue$User@72fa0d32, leaf-queue: default #user-pending-applications: 0 #user-active-applications: 1 #queue-pending-applications: 0 #queue-active-applications: 1
2015-05-19 19:52:52,929 INFO org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.CapacityScheduler: Added Application Attempt appattempt_1432064564266_0003_000001 to scheduler from user testuser in queue default
2015-05-19 19:52:52,930 INFO org.apache.hadoop.yarn.server.resourcemanager.rmapp.attempt.RMAppAttemptImpl: appattempt_1432064564266_0003_000001 State change from SUBMITTED to SCHEDULED
2015-05-19 19:52:53,718 INFO org.apache.hadoop.yarn.server.resourcemanager.rmcontainer.RMContainerImpl: container_1432064564266_0003_01_000001 Container Transitioned from NEW to ALLOCATED
2015-05-19 19:52:53,718 INFO org.apache.hadoop.yarn.server.resourcemanager.RMAuditLogger: USER=testuser OPERATION=AM Allocated Container        TARGET=SchedulerApp     RESULT=SUCCESS  APPID=application_1432064564266_0003       CONTAINERID=container_1432064564266_0003_01_000001
2015-05-19 19:52:53,718 INFO org.apache.hadoop.yarn.server.resourcemanager.scheduler.SchedulerNode: Assigned container container_1432064564266_0003_01_000001 of capacity <memory:1024, vCores:1> on host ip-10-10-128-10.ec2.internal:9032, which has 1 containers, <memory:1024, vCores:1> used and <memory:7168, vCores:7> available after allocation
2015-05-19 19:52:53,718 INFO org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.LeafQueue: assignedContainer application attempt=appattempt_1432064564266_0003_000001 container=Container: [ContainerId: container_1432064564266_0003_01_000001, NodeId: ip-10-10-128-10.ec2.internal:9032, NodeHttpAddress: ip-10-10-128-10.ec2.internal:8090, Resource: <memory:1024, vCores:1>, Priority: 0, Token: null, ] queue=default: capacity=1.0, absoluteCapacity=1.0, usedResources=<memory:0, vCores:0>, usedCapacity=0.0, absoluteUsedCapacity=0.0, numApps=1, numContainers=0 clusterResource=<memory:8192, vCores:8>
2015-05-19 19:52:53,718 INFO org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.ParentQueue: Re-sorting assigned queue: root.default stats: default: capacity=1.0, absoluteCapacity=1.0, usedResources=<memory:1024, vCores:1>, usedCapacity=0.125, absoluteUsedCapacity=0.125, numApps=1, numContainers=1
2015-05-19 19:52:53,718 INFO org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.ParentQueue: assignedContainer queue=root usedCapacity=0.125 absoluteUsedCapacity=0.125 used=<memory:1024, vCores:1> cluster=<memory:8192, vCores:8>
2015-05-19 19:52:53,719 INFO org.apache.hadoop.yarn.server.resourcemanager.security.NMTokenSecretManagerInRM: Sending NMToken for nodeId : ip-10-10-128-10.ec2.internal:9032 for container : container_1432064564266_0003_01_000001
2015-05-19 19:52:53,720 INFO org.apache.hadoop.yarn.server.resourcemanager.rmcontainer.RMContainerImpl: container_1432064564266_0003_01_000001 Container Transitioned from ALLOCATED to ACQUIRED
2015-05-19 19:52:53,720 INFO org.apache.hadoop.yarn.server.resourcemanager.security.NMTokenSecretManagerInRM: Clear node set for appattempt_1432064564266_0003_000001
2015-05-19 19:52:53,721 INFO org.apache.hadoop.yarn.server.resourcemanager.rmapp.attempt.RMAppAttemptImpl: Storing attempt: AppId: application_1432064564266_0003 AttemptId: appattempt_1432064564266_0003_000001 MasterContainer: Container: [ContainerId: container_1432064564266_0003_01_000001, NodeId: ip-10-10-128-10.ec2.internal:9032, NodeHttpAddress: ip-10-10-128-10.ec2.internal:8090, Resource: <memory:1024, vCores:1>, Priority: 0, Token: Token { kind: ContainerToken, service: 10.10.128.10:9032 }, ]
2015-05-19 19:52:53,721 INFO org.apache.hadoop.yarn.server.resourcemanager.rmapp.attempt.RMAppAttemptImpl: appattempt_1432064564266_0003_000001 State change from SCHEDULED to ALLOCATED_SAVING
2015-05-19 19:52:53,721 INFO org.apache.hadoop.yarn.server.resourcemanager.rmapp.attempt.RMAppAttemptImpl: appattempt_1432064564266_0003_000001 State change from ALLOCATED_SAVING to ALLOCATED
2015-05-19 19:52:53,723 INFO org.apache.hadoop.yarn.server.resourcemanager.amlauncher.AMLauncher: Launching masterappattempt_1432064564266_0003_000001
2015-05-19 19:52:53,725 INFO org.apache.hadoop.yarn.server.resourcemanager.amlauncher.AMLauncher: Setting up container Container: [ContainerId: container_1432064564266_0003_01_000001, NodeId: ip-10-10-128-10.ec2.internal:9032, NodeHttpAddress: ip-10-10-128-10.ec2.internal:8090, Resource: <memory:1024, vCores:1>, Priority: 0, Token: Token { kind: ContainerToken, service: 10.10.128.10:9032 }, ] for AM appattempt_1432064564266_0003_000001
2015-05-19 19:52:53,725 INFO org.apache.hadoop.yarn.server.resourcemanager.amlauncher.AMLauncher: Command to launch container container_1432064564266_0003_01_000001 : {{JAVA_HOME}}/bin/java,-server,-Xmx512m,-Djava.io.tmpdir={{PWD}}/tmp,'-Dspark.fileserver.uri=http://10.10.127.10:8003','-Dspark.broadcast.port=8004','-Dspark.executor.port=8002','-Dspark.driver.appUIAddress=http://ip-10-10-127-10.ec2.internal:4040','-Dspark.fileserver.port=8003','-Dspark.ui.port=4040','-Dspark.tachyonStore.folderName=spark-ea0a49a2-1643-4410-892d-690c62cb6857','-Dspark.driver.extraJavaOptions=-Djava.net.preferIPv4Stack=true','-Dspark.jars=file:/home/testuser/spark/lib/spark-examples-1.3.1-hadoop2.6.0.jar','-Dspark.app.name=Spark Pi','-Dspark.akka.threads=1','-Dspark.authenticate.secret=fubar','-Dspark.eventlog.enabled=true','-Dspark.driver.host=ip-10-10-127-10.ec2.internal','-Dspark.blockManager.port=8007','-Dspark.history.ui.port=18080','-Dspark.replClassServer.port=8006','-Dspark.worker.ui.port=8081','-Dspark.master=yarn-client','-Dspark.yarn.preserve.staging.files=true','-Dspark.executor.extraJavaOptions=-XX:+PrintGCDetails -Dkey=value -Dnumbers=\"one two three\"','-Dspark.yarn.access.namenodes=hdfs://10.10.10.10:8020','-Dspark.logConf=true','-Dspark.driver.port=8001','-Dspark.executor.id=<driver>','-Dspark.akka.heartbeat.interval=100','-Dspark.master.ui.port=8080','-Dspark.yarn.am.waitTime=200000','-Dspark.yarn.am.extraJavaOptions=-Djava.net.preferIPv4Stack=true','-Dspark.executor.instances=1','-Dspark.yarn.am.port=8008','-Dspark.eventlog.dir=hdfs://10.10.10.10:8020/user/testuser/spark/eventlog','-Dspark.executor.cores=1','-Dspark.authenticate=true','-Djava.net.preferIPv4Stack=true',-Dspark.yarn.app.container.log.dir=<LOG_DIR>,org.apache.spark.deploy.yarn.ExecutorLauncher,--arg,'ip-10-10-127-10.ec2.internal:8001',--executor-memory,1024m,--executor-cores,1,--num-executors ,1,1>,<LOG_DIR>/stdout,2>,<LOG_DIR>/stderr
2015-05-19 19:52:53,725 INFO org.apache.hadoop.yarn.server.resourcemanager.security.AMRMTokenSecretManager: Create AMRMToken for ApplicationAttempt: appattempt_1432064564266_0003_000001
2015-05-19 19:52:53,725 INFO org.apache.hadoop.yarn.server.resourcemanager.security.AMRMTokenSecretManager: Creating password for appattempt_1432064564266_0003_000001
2015-05-19 19:52:53,750 INFO org.apache.hadoop.yarn.server.resourcemanager.amlauncher.AMLauncher: Done launching container Container: [ContainerId: container_1432064564266_0003_01_000001, NodeId: ip-10-10-128-10.ec2.internal:9032, NodeHttpAddress: ip-10-10-128-10.ec2.internal:8090, Resource: <memory:1024, vCores:1>, Priority: 0, Token: Token { kind: ContainerToken, service: 10.10.128.10:9032 }, ] for AM appattempt_1432064564266_0003_000001
2015-05-19 19:52:53,750 INFO org.apache.hadoop.yarn.server.resourcemanager.rmapp.attempt.RMAppAttemptImpl: appattempt_1432064564266_0003_000001 State change from ALLOCATED to LAUNCHED
2015-05-19 19:52:54,733 INFO org.apache.hadoop.yarn.server.resourcemanager.rmcontainer.RMContainerImpl: container_1432064564266_0003_01_000001 Container Transitioned from ACQUIRED to RUNNING
2015-05-19 19:53:17,318 INFO SecurityLogger.org.apache.hadoop.ipc.Server: Auth successful for appattempt_1432064564266_0003_000001 (auth:SIMPLE)
2015-05-19 19:53:17,376 WARN org.apache.hadoop.security.ShellBasedUnixGroupsMapping: got exception trying to get groups for user appattempt_1432064564266_0003_000001: id: appattempt_1432064564266_0003_000001: no such user

2015-05-19 19:53:17,376 WARN org.apache.hadoop.security.UserGroupInformation: No groups available for user appattempt_1432064564266_0003_000001
2015-05-19 19:53:17,376 INFO SecurityLogger.org.apache.hadoop.security.authorize.ServiceAuthorizationManager: Authorization successful for appattempt_1432064564266_0003_000001 (auth:TOKEN) for protocol=interface org.apache.hadoop.yarn.api.ApplicationMasterProtocolPB
2015-05-19 19:53:17,377 INFO org.apache.hadoop.yarn.server.resourcemanager.ApplicationMasterService: AM registration appattempt_1432064564266_0003_000001
2015-05-19 19:53:17,377 INFO org.apache.hadoop.yarn.server.resourcemanager.RMAuditLogger: USER=testuser IP=10.10.128.10 OPERATION=Register App Master   TARGET=ApplicationMasterService RESULT=SUCCESS     APPID=application_1432064564266_0003    APPATTEMPTID=appattempt_1432064564266_0003_000001
2015-05-19 19:53:17,377 INFO org.apache.hadoop.yarn.server.resourcemanager.ApplicationMasterService: Setting client token master key
2015-05-19 19:53:17,378 INFO org.apache.hadoop.yarn.server.resourcemanager.rmapp.attempt.RMAppAttemptImpl: appattempt_1432064564266_0003_000001 State change from LAUNCHED to RUNNING
2015-05-19 19:53:17,378 INFO org.apache.hadoop.yarn.server.resourcemanager.rmapp.RMAppImpl: application_1432064564266_0003 State change from ACCEPTED to RUNNING
2015-05-19 19:53:18,377 INFO org.apache.hadoop.yarn.server.resourcemanager.rmcontainer.RMContainerImpl: container_1432064564266_0003_01_000002 Container Transitioned from NEW to ALLOCATED
2015-05-19 19:53:18,377 INFO org.apache.hadoop.yarn.server.resourcemanager.RMAuditLogger: USER=testuser OPERATION=AM Allocated Container        TARGET=SchedulerApp     RESULT=SUCCESS  APPID=application_1432064564266_0003       CONTAINERID=container_1432064564266_0003_01_000002
2015-05-19 19:53:18,377 INFO org.apache.hadoop.yarn.server.resourcemanager.scheduler.SchedulerNode: Assigned container container_1432064564266_0003_01_000002 of capacity <memory:2048, vCores:1> on host ip-10-10-128-10.ec2.internal:9032, which has 2 containers, <memory:3072, vCores:2> used and <memory:5120, vCores:6> available after allocation
2015-05-19 19:53:18,377 INFO org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.LeafQueue: assignedContainer application attempt=appattempt_1432064564266_0003_000001 container=Container: [ContainerId: container_1432064564266_0003_01_000002, NodeId: ip-10-10-128-10.ec2.internal:9032, NodeHttpAddress: ip-10-10-128-10.ec2.internal:8090, Resource: <memory:2048, vCores:1>, Priority: 1, Token: null, ] queue=default: capacity=1.0, absoluteCapacity=1.0, usedResources=<memory:1024, vCores:1>, usedCapacity=0.125, absoluteUsedCapacity=0.125, numApps=1, numContainers=1 clusterResource=<memory:8192, vCores:8>
2015-05-19 19:53:18,377 INFO org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.ParentQueue: Re-sorting assigned queue: root.default stats: default: capacity=1.0, absoluteCapacity=1.0, usedResources=<memory:3072, vCores:2>, usedCapacity=0.375, absoluteUsedCapacity=0.375, numApps=1, numContainers=2
2015-05-19 19:53:18,377 INFO org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.ParentQueue: assignedContainer queue=root usedCapacity=0.375 absoluteUsedCapacity=0.375 used=<memory:3072, vCores:2> cluster=<memory:8192, vCores:8>
2015-05-19 19:53:18,967 INFO org.apache.hadoop.yarn.server.resourcemanager.rmapp.attempt.RMAppAttemptImpl: Updating application attempt appattempt_1432064564266_0003_000001 with final state: FINISHING, and exit status: -1000
2015-05-19 19:53:18,967 INFO org.apache.hadoop.yarn.server.resourcemanager.rmapp.attempt.RMAppAttemptImpl: appattempt_1432064564266_0003_000001 State change from RUNNING to FINAL_SAVING
2015-05-19 19:53:18,967 INFO org.apache.hadoop.yarn.server.resourcemanager.rmapp.RMAppImpl: Updating application application_1432064564266_0003 with final state: FINISHING
2015-05-19 19:53:18,967 INFO org.apache.hadoop.yarn.server.resourcemanager.rmapp.RMAppImpl: application_1432064564266_0003 State change from RUNNING to FINAL_SAVING
2015-05-19 19:53:18,967 INFO org.apache.hadoop.yarn.server.resourcemanager.recovery.RMStateStore: Updating info for app: application_1432064564266_0003
2015-05-19 19:53:18,968 INFO org.apache.hadoop.yarn.server.resourcemanager.rmapp.attempt.RMAppAttemptImpl: appattempt_1432064564266_0003_000001 State change from FINAL_SAVING to FINISHING
2015-05-19 19:53:18,968 INFO org.apache.hadoop.yarn.server.resourcemanager.rmapp.RMAppImpl: application_1432064564266_0003 State change from FINAL_SAVING to FINISHING
2015-05-19 19:53:19,076 INFO org.apache.hadoop.yarn.server.resourcemanager.ApplicationMasterService: application_1432064564266_0003 unregistered successfully.
2015-05-19 19:53:20,383 INFO org.apache.hadoop.yarn.server.resourcemanager.rmcontainer.RMContainerImpl: container_1432064564266_0003_01_000001 Container Transitioned from RUNNING to COMPLETED
2015-05-19 19:53:20,384 INFO org.apache.hadoop.yarn.server.resourcemanager.scheduler.common.fica.FiCaSchedulerApp: Completed container: container_1432064564266_0003_01_000001 in state: COMPLETED event:FINISHED
2015-05-19 19:53:20,384 INFO org.apache.hadoop.yarn.server.resourcemanager.RMAuditLogger: USER=testuser OPERATION=AM Released Container TARGET=SchedulerApp     RESULT=SUCCESS  APPID=application_1432064564266_0003       CONTAINERID=container_1432064564266_0003_01_000001
2015-05-19 19:53:20,384 INFO org.apache.hadoop.yarn.server.resourcemanager.scheduler.SchedulerNode: Released container container_1432064564266_0003_01_000001 of capacity <memory:1024, vCores:1> on host ip-10-10-128-10.ec2.internal:9032, which currently has 1 containers, <memory:2048, vCores:1> used and <memory:6144, vCores:7> available, release resources=true
2015-05-19 19:53:20,384 INFO org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.LeafQueue: default used=<memory:2048, vCores:1> numContainers=1 user=testuser user-resources=<memory:2048, vCores:1>
2015-05-19 19:53:20,384 INFO org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.LeafQueue: completedContainer container=Container: [ContainerId: container_1432064564266_0003_01_000001, NodeId: ip-10-10-128-10.ec2.internal:9032, NodeHttpAddress: ip-10-10-128-10.ec2.internal:8090, Resource: <memory:1024, vCores:1>, Priority: 0, Token: Token { kind: ContainerToken, service: 10.10.128.10:9032 }, ] queue=default: capacity=1.0, absoluteCapacity=1.0, usedResources=<memory:2048, vCores:1>, usedCapacity=0.25, absoluteUsedCapacity=0.25, numApps=1, numContainers=1 cluster=<memory:8192, vCores:8>
2015-05-19 19:53:20,384 INFO org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.ParentQueue: completedContainer queue=root usedCapacity=0.25 absoluteUsedCapacity=0.25 used=<memory:2048, vCores:1> cluster=<memory:8192, vCores:8>
2015-05-19 19:53:20,384 INFO org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.ParentQueue: Re-sorting completed queue: root.default stats: default: capacity=1.0, absoluteCapacity=1.0, usedResources=<memory:2048, vCores:1>, usedCapacity=0.25, absoluteUsedCapacity=0.25, numApps=1, numContainers=1
2015-05-19 19:53:20,384 INFO org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.CapacityScheduler: Application attempt appattempt_1432064564266_0003_000001 released container container_1432064564266_0003_01_000001 on node: host: ip-10-10-128-10.ec2.internal:9032 #containers=1 available=6144 used=2048 with event: FINISHED
2015-05-19 19:53:20,384 INFO org.apache.hadoop.yarn.server.resourcemanager.ApplicationMasterService: Unregistering app attempt : appattempt_1432064564266_0003_000001
2015-05-19 19:53:20,385 INFO org.apache.hadoop.yarn.server.resourcemanager.security.AMRMTokenSecretManager: Application finished, removing password for appattempt_1432064564266_0003_000001
2015-05-19 19:53:20,385 INFO org.apache.hadoop.yarn.server.resourcemanager.rmapp.attempt.RMAppAttemptImpl: appattempt_1432064564266_0003_000001 State change from FINISHING to FINISHED
2015-05-19 19:53:20,385 INFO org.apache.hadoop.yarn.server.resourcemanager.rmapp.RMAppImpl: application_1432064564266_0003 State change from FINISHING to FINISHED
2015-05-19 19:53:20,385 INFO org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.CapacityScheduler: Application Attempt appattempt_1432064564266_0003_000001 is done. finalState=FINISHED
2015-05-19 19:53:20,387 INFO org.apache.hadoop.yarn.server.resourcemanager.rmcontainer.RMContainerImpl: container_1432064564266_0003_01_000002 Container Transitioned from ALLOCATED to KILLED
2015-05-19 19:53:20,387 INFO org.apache.hadoop.yarn.server.resourcemanager.scheduler.common.fica.FiCaSchedulerApp: Completed container: container_1432064564266_0003_01_000002 in state: KILLED event:KILL
2015-05-19 19:53:20,387 INFO org.apache.hadoop.yarn.server.resourcemanager.RMAuditLogger: USER=testuser OPERATION=AM Released Container TARGET=SchedulerApp     RESULT=SUCCESS  APPID=application_1432064564266_0003       CONTAINERID=container_1432064564266_0003_01_000002
2015-05-19 19:53:20,387 INFO org.apache.hadoop.yarn.server.resourcemanager.scheduler.SchedulerNode: Released container container_1432064564266_0003_01_000002 of capacity <memory:2048, vCores:1> on host ip-10-10-128-10.ec2.internal:9032, which currently has 0 containers, <memory:0, vCores:0> used and <memory:8192, vCores:8> available, release resources=true
2015-05-19 19:53:20,387 INFO org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.LeafQueue: default used=<memory:0, vCores:0> numContainers=0 user=testuser user-resources=<memory:0, vCores:0>
2015-05-19 19:53:20,387 INFO org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.LeafQueue: completedContainer container=Container: [ContainerId: container_1432064564266_0003_01_000002, NodeId: ip-10-10-128-10.ec2.internal:9032, NodeHttpAddress: ip-10-10-128-10.ec2.internal:8090, Resource: <memory:2048, vCores:1>, Priority: 1, Token: null, ] queue=default: capacity=1.0, absoluteCapacity=1.0, usedResources=<memory:0, vCores:0>, usedCapacity=0.0, absoluteUsedCapacity=0.0, numApps=1, numContainers=0 cluster=<memory:8192, vCores:8>
2015-05-19 19:53:20,387 INFO org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.ParentQueue: completedContainer queue=root usedCapacity=0.0 absoluteUsedCapacity=0.0 used=<memory:0, vCores:0> cluster=<memory:8192, vCores:8>
2015-05-19 19:53:20,387 INFO org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.ParentQueue: Re-sorting completed queue: root.default stats: default: capacity=1.0, absoluteCapacity=1.0, usedResources=<memory:0, vCores:0>, usedCapacity=0.0, absoluteUsedCapacity=0.0, numApps=1, numContainers=0
2015-05-19 19:53:20,387 INFO org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.CapacityScheduler: Application attempt appattempt_1432064564266_0003_000001 released container container_1432064564266_0003_01_000002 on node: host: ip-10-10-128-10.ec2.internal:9032 #containers=0 available=8192 used=0 with event: KILL
2015-05-19 19:53:20,387 INFO org.apache.hadoop.yarn.server.resourcemanager.scheduler.AppSchedulingInfo: Application application_1432064564266_0003 requests cleared
2015-05-19 19:53:20,388 INFO org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.LeafQueue: Application removed - appId: application_1432064564266_0003 user: testuser queue: default #user-pending-applications: 0 #user-active-applications: 0 #queue-pending-applications: 0 #queue-active-applications: 0
2015-05-19 19:53:20,388 INFO org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.ParentQueue: Application removed - appId: application_1432064564266_0003 user: testuser leaf-queue of parent: root #applications: 0
2015-05-19 19:53:20,388 INFO org.apache.hadoop.yarn.server.resourcemanager.RMAuditLogger: USER=testuser OPERATION=Application Finished - Succeeded      TARGET=RMAppManager     RESULT=SUCCESS     APPID=application_1432064564266_0003
2015-05-19 19:53:20,389 INFO org.apache.hadoop.yarn.server.resourcemanager.RMAppManager$ApplicationSummary: appId=application_1432064564266_0003,name=Spark Pi,user=testuser,queue=default,state=FINISHED,trackingUrl=https://ip-10-10-127-10.ec2.internal:8090/proxy/application_1432064564266_0003/A,appMasterHost=ip-10-10-128-10.ec2.internal,startTime=1432065172758,finishTime=1432065198967,finalStatus=SUCCEEDED
2015-05-19 19:53:20,389 INFO org.apache.hadoop.hdfs.DFSClient: Cancelling HDFS_DELEGATION_TOKEN token 5 for testuser on 10.10.10.10:8020
2015-05-19 19:53:20,390 INFO org.apache.hadoop.yarn.server.resourcemanager.amlauncher.AMLauncher: Cleaning master appattempt_1432064564266_0003_000001
2015-05-19 19:53:20,442 INFO org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.CapacityScheduler: Null container completed...
root@ip-10-10-127-10:/var/log/hadoop>

Keith Nance
Sr. Software Engineer
Email: knance@smartronix.com
Cell: 808-343-0071
www.smartronix.com


RE: SparkPi fails with Job aborted due to stage failure: Task serialization failed:

Posted by Rohith Sharma K S <ro...@huawei.com>.
Hi,

From the ResourceManager log, it is very clear that the job has succeeded. There is no problem running Spark applications.

2015-05-19 19:53:20,388 INFO org.apache.hadoop.yarn.server.resourcemanager.RMAuditLogger: USER=testuser OPERATION=Application Finished - Succeeded      TARGET=RMAppManager     RESULT=SUCCESS     APPID=application_1432064564266_0003
2015-05-19 19:53:20,389 INFO org.apache.hadoop.yarn.server.resourcemanager.RMAppManager$ApplicationSummary: appId=application_1432064564266_0003,name=Spark Pi,user=testuser,queue=default,state=FINISHED,trackingUrl=https://ip-10-10-127-10.ec2.internal:8090/proxy/application_1432064564266_0003/A,appMasterHost=ip-10-10-128-10.ec2.internal,startTime=1432065172758,finishTime=1432065198967,finalStatus=SUCCEEDED

But I have no idea what Spark is doing internally. Would you mind sending a mail to the Spark user mailing list?
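
For reference, the YARN-side result can also be checked directly with the YARN CLI; this is just a sketch (it assumes the yarn client is on the PATH of the submitting user, and it reuses the application id from the logs above):

[testuser@ip-10-10-127-10 spark]$ yarn application -status application_1432064564266_0003
# For this run the report should show State : FINISHED and Final-State : SUCCEEDED,
# matching the RMAppManager$ApplicationSummary line quoted above.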


Thanks & Regards
Rohith Sharma K S

From: Nance, Keith [mailto:knance@smartronix.com]
Sent: 20 May 2015 02:05
To: user@hadoop.apache.org
Subject: SparkPi fails with Job aborted due to stage failure: Task serialization failed:

All, unable to find any reference to my issue with spark.  Any ideas?  Thanks for any and all help
Attached are logs from the Spark job (SparkPi) results, Userlog, Nodemanager, and Resourcemanager.
###: SPARK JOB RESULTS :###
###########################
[testuser@ip-10-10-127-10 spark]$ ./bin/spark-submit --verbose --class org.apache.spark.examples.SparkPi --master yarn-client --num-executors 1 --executor-cores 1 lib/spark-examples*.jar 10
/home/testuser/spark/conf/spark-env.sh: line 55: -Dspark.history.kerberos.principal=spark/ip-10-10-127-10.ec2.internal@MALARD.LOCAL<ma...@MALARD.LOCAL>: No such file or directory
Using properties file: /home/testuser/spark/conf/spark-defaults.conf
Adding default property: spark.broadcast.port=8004
Adding default property: spark.executor.extraJavaOptions=-XX:+PrintGCDetails -Dkey=value -Dnumbers="one two three"
Adding default property: spark.akka.threads=1
Adding default property: spark.ui.port=4040
Adding default property: spark.driver.port=8001
Adding default property: spark.akka.heartbeat.interval=100
Adding default property: spark.yarn.access.namenodes=hdfs://10.10.10.10:8020
Adding default property: spark.executor.port=8002
Adding default property: spark.logConf=true
Adding default property: spark.yarn.am.extraJavaOptions=-Djava.net.preferIPv4Stack=true
Adding default property: spark.worker.ui.port=8081
Adding default property: spark.replClassServer.port=8006
Adding default property: spark.eventlog.dir=hdfs://10.10.10.10:8020/user/testuser/spark/eventlog
Adding default property: spark.driver.extraJavaOptions=-Djava.net.preferIPv4Stack=true
Adding default property: spark.blockManager.port=8007
Adding default property: spark.yarn.am.waitTime=200000
Adding default property: spark.master=yarn-client
Adding default property: spark.yarn.preserve.staging.files=true
Adding default property: spark.fileserver.port=8003
Adding default property: spark.authenticate=true
Adding default property: spark.yarn.am.port=8008
Adding default property: spark.authenticate.secret=fubar
Adding default property: spark.master.ui.port=8080
Adding default property: spark.history.ui.port=18080
Adding default property: spark.eventlog.enabled=true
Parsed arguments:
  master                  yarn-client
  deployMode              null
  executorMemory          null
  executorCores           1
  totalExecutorCores      null
  propertiesFile          /home/testuser/spark/conf/spark-defaults.conf
  driverMemory            null
  driverCores             null
  driverExtraClassPath    null
  driverExtraLibraryPath  null
 driverExtraJavaOptions  -Djava.net.preferIPv4Stack=true
  supervise               false
  queue                   null
  numExecutors            1
  files                   null
  pyFiles                 null
  archives                null
  mainClass               org.apache.spark.examples.SparkPi
  primaryResource         file:/home/testuser/spark/lib/spark-examples-1.3.1-hadoop2.6.0.jar
  name                    org.apache.spark.examples.SparkPi
  childArgs               [10]
  jars                    null
  packages                null
  repositories            null
  verbose                 true

Spark properties used, including those specified through
--conf and those from the properties file /home/testuser/spark/conf/spark-defaults.conf:
  spark.blockManager.port -> 8007
  spark.yarn.access.namenodes -> hdfs://10.10.10.10:8020
  spark.broadcast.port -> 8004
  spark.authenticate.secret -> fubar
  spark.authenticate -> true
  spark.eventlog.dir -> hdfs://10.10.10.10:8020/user/testuser/spark/eventlog
  spark.logConf -> true
  spark.replClassServer.port -> 8006
  spark.history.ui.port -> 18080
  spark.fileserver.port -> 8003
  spark.ui.port -> 4040
  spark.executor.extraJavaOptions -> -XX:+PrintGCDetails -Dkey=value -Dnumbers="one two three"
  spark.driver.extraJavaOptions -> -Djava.net.preferIPv4Stack=true
  spark.yarn.am.waitTime -> 200000
  spark.yarn.am.extraJavaOptions -> -Djava.net.preferIPv4Stack=true
  spark.worker.ui.port -> 8081
  spark.driver.port -> 8001
  spark.master -> yarn-client
  spark.yarn.preserve.staging.files -> true
  spark.yarn.am.port -> 8008
  spark.akka.heartbeat.interval -> 100
  spark.executor.port -> 8002
  spark.master.ui.port -> 8080
  spark.eventlog.enabled -> true
  spark.akka.threads -> 1


Main class:
org.apache.spark.examples.SparkPi
Arguments:
10
System properties:
spark.blockManager.port -> 8007
spark.yarn.access.namenodes -> hdfs://10.10.10.10:8020
spark.broadcast.port -> 8004
spark.authenticate.secret -> fubar
spark.eventlog.dir -> hdfs://10.10.10.10:8020/user/testuser/spark/eventlog
spark.authenticate -> true
spark.executor.instances -> 1
spark.logConf -> true
spark.replClassServer.port -> 8006
spark.history.ui.port -> 18080
spark.fileserver.port -> 8003
SPARK_SUBMIT -> true
spark.ui.port -> 4040
spark.executor.extraJavaOptions -> -XX:+PrintGCDetails -Dkey=value -Dnumbers="one two three"
spark.app.name -> org.apache.spark.examples.SparkPi
spark.driver.extraJavaOptions -> -Djava.net.preferIPv4Stack=true
spark.jars -> file:/home/testuser/spark/lib/spark-examples-1.3.1-hadoop2.6.0.jar
spark.yarn.am.waitTime -> 200000
spark.yarn.am.extraJavaOptions -> -Djava.net.preferIPv4Stack=true
spark.master -> yarn-client
spark.worker.ui.port -> 8081
spark.driver.port -> 8001
spark.yarn.preserve.staging.files -> true
spark.yarn.am.port -> 8008
spark.akka.heartbeat.interval -> 100
spark.executor.port -> 8002
spark.executor.cores -> 1
spark.eventlog.enabled -> true
spark.master.ui.port -> 8080
spark.akka.threads -> 1
Classpath elements:
file:/home/testuser/spark/lib/spark-examples-1.3.1-hadoop2.6.0.jar


15/05/19 19:52:37 INFO spark.SparkContext: Running Spark version 1.3.1
15/05/19 19:52:37 INFO spark.SparkContext: Spark configuration:
spark.akka.heartbeat.interval=100
spark.akka.threads=1
spark.app.name=Spark Pi
spark.authenticate=true
spark.authenticate.secret=fubar
spark.blockManager.port=8007
spark.broadcast.port=8004
spark.driver.extraJavaOptions=-Djava.net.preferIPv4Stack=true
spark.driver.port=8001
spark.eventlog.dir=hdfs://10.10.10.10:8020/user/testuser/spark/eventlog
spark.eventlog.enabled=true
spark.executor.cores=1
spark.executor.extraJavaOptions=-XX:+PrintGCDetails -Dkey=value -Dnumbers="one two three"
spark.executor.instances=1
spark.executor.port=8002
spark.fileserver.port=8003
spark.history.ui.port=18080
spark.jars=file:/home/testuser/spark/lib/spark-examples-1.3.1-hadoop2.6.0.jar
spark.logConf=true
spark.master=yarn-client
spark.master.ui.port=8080
spark.replClassServer.port=8006
spark.ui.port=4040
spark.worker.ui.port=8081
spark.yarn.access.namenodes=hdfs://10.10.10.10:8020
spark.yarn.am.extraJavaOptions=-Djava.net.preferIPv4Stack=true
spark.yarn.am.port=8008
spark.yarn.am.waitTime=200000
spark.yarn.preserve.staging.files=true
15/05/19 19:52:38 WARN util.NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
15/05/19 19:52:39 INFO spark.SecurityManager: Changing view acls to: testuser
15/05/19 19:52:39 INFO spark.SecurityManager: Changing modify acls to: testuser
15/05/19 19:52:39 INFO spark.SecurityManager: adding secret to credentials in yarn mode
15/05/19 19:52:39 INFO spark.SecurityManager: SecurityManager: authentication enabled; ui acls disabled; users with view permissions: Set(testuser); users with modify permissions: Set(testuser)
15/05/19 19:52:40 INFO slf4j.Slf4jLogger: Slf4jLogger started
15/05/19 19:52:41 INFO Remoting: Starting remoting
15/05/19 19:52:41 INFO Remoting: Remoting started; listening on addresses :[akka.tcp://sparkDriver@ip-10-10-127-10.ec2.internal:8001]
15/05/19 19:52:41 INFO util.Utils: Successfully started service 'sparkDriver' on port 8001.
15/05/19 19:52:41 INFO spark.SparkEnv: Registering MapOutputTracker
15/05/19 19:52:41 INFO spark.SparkEnv: Registering BlockManagerMaster
15/05/19 19:52:41 INFO storage.DiskBlockManager: Created local directory at /scratch/spark-17902ac9-b400-4698-97c3-069d804a29e3/blockmgr-d95b3bdf-9c4d-4b48-97a5-4983dd2ab66d
15/05/19 19:52:41 INFO storage.MemoryStore: MemoryStore started with capacity 267.3 MB
15/05/19 19:52:42 INFO spark.HttpFileServer: HTTP File server directory is /scratch/spark-d2fb7948-5ab3-4f22-804c-7485d209bd3e/httpd-91938877-b371-4c0c-ba3d-bb7ee8ec4e09
15/05/19 19:52:42 INFO spark.HttpServer: Starting HTTP Server
15/05/19 19:52:42 INFO server.Server: jetty-8.y.z-SNAPSHOT
15/05/19 19:52:42 INFO server.AbstractConnector: Started SocketConnector@0.0.0.0:8003
15/05/19 19:52:42 INFO util.Utils: Successfully started service 'HTTP file server' on port 8003.
15/05/19 19:52:42 INFO spark.SparkEnv: Registering OutputCommitCoordinator
15/05/19 19:52:42 INFO server.Server: jetty-8.y.z-SNAPSHOT
15/05/19 19:52:42 INFO server.AbstractConnector: Started SelectChannelConnector@0.0.0.0:4040
15/05/19 19:52:42 INFO util.Utils: Successfully started service 'SparkUI' on port 4040.
15/05/19 19:52:42 INFO ui.SparkUI: Started SparkUI at http://ip-10-10-127-10.ec2.internal:4040
15/05/19 19:52:43 INFO spark.SparkContext: Added JAR file:/home/testuser/spark/lib/spark-examples-1.3.1-hadoop2.6.0.jar at http://10.10.127.10:8003/jars/spark-examples-1.3.1-hadoop2.6.0.jar with timestamp 1432065163724
15/05/19 19:52:44 INFO client.RMProxy: Connecting to ResourceManager at ip-10-10-127-10.ec2.internal/10.10.127.10:8032
15/05/19 19:52:45 INFO yarn.Client: Requesting a new application from cluster with 1 NodeManagers
15/05/19 19:52:45 INFO yarn.Client: Verifying our application has not requested more than the maximum memory capability of the cluster (8192 MB per container)
15/05/19 19:52:45 INFO yarn.Client: Will allocate AM container, with 896 MB memory including 384 MB overhead
15/05/19 19:52:45 INFO yarn.Client: Setting up container launch context for our AM
15/05/19 19:52:45 INFO yarn.Client: Preparing resources for our AM container
15/05/19 19:52:46 INFO hdfs.DFSClient: Created HDFS_DELEGATION_TOKEN token 5 for testuser on 10.10.10.10:8020
15/05/19 19:52:46 INFO yarn.Client: Uploading resource file:/home/testuser/spark/lib/spark-assembly-1.3.1-hadoop2.6.0.jar -> hdfs://10.10.10.10:8020/user/testuser/.sparkStaging/application_1432064564266_0003/spark-assembly-1.3.1-hadoop2.6.0.jar
15/05/19 19:52:52 INFO yarn.Client: Setting up the launch environment for our AM container
15/05/19 19:52:52 INFO spark.SecurityManager: Changing view acls to: testuser
15/05/19 19:52:52 INFO spark.SecurityManager: Changing modify acls to: testuser
15/05/19 19:52:52 INFO spark.SecurityManager: SecurityManager: authentication enabled; ui acls disabled; users with view permissions: Set(testuser); users with modify permissions: Set(testuser)
15/05/19 19:52:52 INFO yarn.Client: Submitting application 3 to ResourceManager
15/05/19 19:52:53 INFO impl.YarnClientImpl: Submitted application application_1432064564266_0003
15/05/19 19:52:54 INFO yarn.Client: Application report for application_1432064564266_0003 (state: ACCEPTED)
15/05/19 19:52:54 INFO yarn.Client:
         client token: Token { kind: YARN_CLIENT_TOKEN, service:  }
         diagnostics: N/A
         ApplicationMaster host: N/A
         ApplicationMaster RPC port: -1
         queue: default
         start time: 1432065172758
         final status: UNDEFINED
         tracking URL: https://ip-10-10-127-10.ec2.internal:8090/proxy/application_1432064564266_0003/
         user: testuser
15/05/19 19:52:55 INFO yarn.Client: Application report for application_1432064564266_0003 (state: ACCEPTED)
15/05/19 19:52:56 INFO yarn.Client: Application report for application_1432064564266_0003 (state: ACCEPTED)
15/05/19 19:52:57 INFO yarn.Client: Application report for application_1432064564266_0003 (state: ACCEPTED)
15/05/19 19:52:58 INFO yarn.Client: Application report for application_1432064564266_0003 (state: ACCEPTED)
15/05/19 19:52:59 INFO yarn.Client: Application report for application_1432064564266_0003 (state: ACCEPTED)
15/05/19 19:53:00 INFO yarn.Client: Application report for application_1432064564266_0003 (state: ACCEPTED)
15/05/19 19:53:01 INFO yarn.Client: Application report for application_1432064564266_0003 (state: ACCEPTED)
15/05/19 19:53:02 INFO yarn.Client: Application report for application_1432064564266_0003 (state: ACCEPTED)
15/05/19 19:53:03 INFO yarn.Client: Application report for application_1432064564266_0003 (state: ACCEPTED)
15/05/19 19:53:04 INFO yarn.Client: Application report for application_1432064564266_0003 (state: ACCEPTED)
15/05/19 19:53:05 INFO yarn.Client: Application report for application_1432064564266_0003 (state: ACCEPTED)
15/05/19 19:53:06 INFO yarn.Client: Application report for application_1432064564266_0003 (state: ACCEPTED)
15/05/19 19:53:07 INFO yarn.Client: Application report for application_1432064564266_0003 (state: ACCEPTED)
15/05/19 19:53:08 INFO yarn.Client: Application report for application_1432064564266_0003 (state: ACCEPTED)
15/05/19 19:53:09 INFO yarn.Client: Application report for application_1432064564266_0003 (state: ACCEPTED)
15/05/19 19:53:10 INFO yarn.Client: Application report for application_1432064564266_0003 (state: ACCEPTED)
15/05/19 19:53:11 INFO yarn.Client: Application report for application_1432064564266_0003 (state: ACCEPTED)
15/05/19 19:53:12 INFO yarn.Client: Application report for application_1432064564266_0003 (state: ACCEPTED)
15/05/19 19:53:13 INFO yarn.Client: Application report for application_1432064564266_0003 (state: ACCEPTED)
15/05/19 19:53:14 INFO yarn.Client: Application report for application_1432064564266_0003 (state: ACCEPTED)
15/05/19 19:53:15 INFO yarn.Client: Application report for application_1432064564266_0003 (state: ACCEPTED)
15/05/19 19:53:16 INFO yarn.Client: Application report for application_1432064564266_0003 (state: ACCEPTED)
15/05/19 19:53:17 INFO cluster.YarnClientSchedulerBackend: ApplicationMaster registered as Actor[akka.tcp://sparkYarnAM@ip-10-10-128-10.ec2.internal:56977/user/YarnAM#-1453228800]
15/05/19 19:53:17 INFO cluster.YarnClientSchedulerBackend: Add WebUI Filter. org.apache.hadoop.yarn.server.webproxy.amfilter.AmIpFilter, Map(PROXY_HOSTS -> ip-10-10-127-10.ec2.internal, PROXY_URI_BASES -> https://ip-10-10-127-10.ec2.internal:8090/proxy/application_1432064564266_0003), /proxy/application_1432064564266_0003
15/05/19 19:53:17 INFO ui.JettyUtils: Adding filter: org.apache.hadoop.yarn.server.webproxy.amfilter.AmIpFilter
15/05/19 19:53:17 INFO yarn.Client: Application report for application_1432064564266_0003 (state: RUNNING)
15/05/19 19:53:17 INFO yarn.Client:
         client token: Token { kind: YARN_CLIENT_TOKEN, service:  }
         diagnostics: N/A
         ApplicationMaster host: ip-10-10-128-10.ec2.internal
         ApplicationMaster RPC port: 0
         queue: default
         start time: 1432065172758
         final status: UNDEFINED
         tracking URL: https://ip-10-10-127-10.ec2.internal:8090/proxy/application_1432064564266_0003/
         user: testuser
15/05/19 19:53:17 INFO cluster.YarnClientSchedulerBackend: Application application_1432064564266_0003 has started running.
15/05/19 19:53:17 INFO netty.NettyBlockTransferService: Server created on 8007
15/05/19 19:53:17 INFO storage.BlockManagerMaster: Trying to register BlockManager
15/05/19 19:53:17 INFO storage.BlockManagerMasterActor: Registering block manager ip-10-10-127-10.ec2.internal:8007 with 267.3 MB RAM, BlockManagerId(<driver>, ip-10-10-127-10.ec2.internal, 8007)
15/05/19 19:53:17 INFO storage.BlockManagerMaster: Registered BlockManager
15/05/19 19:53:18 INFO cluster.YarnClientSchedulerBackend: SchedulerBackend is ready for scheduling beginning after waiting maxRegisteredResourcesWaitingTime: 30000(ms)
15/05/19 19:53:18 INFO spark.SparkContext: Starting job: reduce at SparkPi.scala:35
15/05/19 19:53:18 INFO scheduler.DAGScheduler: Got job 0 (reduce at SparkPi.scala:35) with 10 output partitions (allowLocal=false)
15/05/19 19:53:18 INFO scheduler.DAGScheduler: Final stage: Stage 0(reduce at SparkPi.scala:35)
15/05/19 19:53:18 INFO scheduler.DAGScheduler: Parents of final stage: List()
15/05/19 19:53:18 INFO scheduler.DAGScheduler: Missing parents: List()
15/05/19 19:53:18 INFO scheduler.DAGScheduler: Submitting Stage 0 (MapPartitionsRDD[1] at map at SparkPi.scala:31), which has no missing parents
15/05/19 19:53:18 INFO cluster.YarnScheduler: Cancelling stage 0
15/05/19 19:53:18 INFO scheduler.DAGScheduler: Stage 0 (reduce at SparkPi.scala:35) failed in Unknown s
15/05/19 19:53:18 INFO scheduler.DAGScheduler: Job 0 failed: reduce at SparkPi.scala:35, took 0.258029 s
Exception in thread "main" org.apache.spark.SparkException: Job aborted due to stage failure: Task serialization failed: java.lang.reflect.InvocationTargetException
sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:57)
sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
java.lang.reflect.Constructor.newInstance(Constructor.java:526)
org.apache.spark.io.CompressionCodec$.createCodec(CompressionCodec.scala:68)
org.apache.spark.io.CompressionCodec$.createCodec(CompressionCodec.scala:60)
org.apache.spark.broadcast.TorrentBroadcast.org$apache$spark$broadcast$TorrentBroadcast$$setConf(TorrentBroadcast.scala:73)
org.apache.spark.broadcast.TorrentBroadcast.<init>(TorrentBroadcast.scala:79)
org.apache.spark.broadcast.TorrentBroadcastFactory.newBroadcast(TorrentBroadcastFactory.scala:34)
org.apache.spark.broadcast.TorrentBroadcastFactory.newBroadcast(TorrentBroadcastFactory.scala:29)
org.apache.spark.broadcast.BroadcastManager.newBroadcast(BroadcastManager.scala:62)
org.apache.spark.SparkContext.broadcast(SparkContext.scala:1051)
org.apache.spark.scheduler.DAGScheduler.org$apache$spark$scheduler$DAGScheduler$$submitMissingTasks(DAGScheduler.scala:839)
org.apache.spark.scheduler.DAGScheduler.org$apache$spark$scheduler$DAGScheduler$$submitStage(DAGScheduler.scala:778)
org.apache.spark.scheduler.DAGScheduler.handleJobSubmitted(DAGScheduler.scala:762)
org.apache.spark.scheduler.DAGSchedulerEventProcessLoop.onReceive(DAGScheduler.scala:1362)
org.apache.spark.scheduler.DAGSchedulerEventProcessLoop.onReceive(DAGScheduler.scala:1354)
org.apache.spark.util.EventLoop$$anon$1.run(EventLoop.scala:48)

        at org.apache.spark.scheduler.DAGScheduler.org$apache$spark$scheduler$DAGScheduler$$failJobAndIndependentStages(DAGScheduler.scala:1204)
        at org.apache.spark.scheduler.DAGScheduler$$anonfun$abortStage$1.apply(DAGScheduler.scala:1193)
        at org.apache.spark.scheduler.DAGScheduler$$anonfun$abortStage$1.apply(DAGScheduler.scala:1192)
        at scala.collection.mutable.ResizableArray$class.foreach(ResizableArray.scala:59)
        at scala.collection.mutable.ArrayBuffer.foreach(ArrayBuffer.scala:47)
        at org.apache.spark.scheduler.DAGScheduler.abortStage(DAGScheduler.scala:1192)
        at org.apache.spark.scheduler.DAGScheduler.org$apache$spark$scheduler$DAGScheduler$$submitMissingTasks(DAGScheduler.scala:847)
        at org.apache.spark.scheduler.DAGScheduler.org$apache$spark$scheduler$DAGScheduler$$submitStage(DAGScheduler.scala:778)
        at org.apache.spark.scheduler.DAGScheduler.handleJobSubmitted(DAGScheduler.scala:762)
        at org.apache.spark.scheduler.DAGSchedulerEventProcessLoop.onReceive(DAGScheduler.scala:1362)
        at org.apache.spark.scheduler.DAGSchedulerEventProcessLoop.onReceive(DAGScheduler.scala:1354)
        at org.apache.spark.util.EventLoop$$anon$1.run(EventLoop.scala:48)
[testuser@ip-10-10-127-10 spark]$



###: SPARK JOB USERLOG RESULTS :###
###################################
[yarn@ip-10-10-128-10 container_1432064564266_0003_01_000001]$ cat stderr
Picked up _JAVA_OPTIONS: -Djava.net.preferIPv4Stack=true
SLF4J: Class path contains multiple SLF4J bindings.
SLF4J: Found binding in [jar:file:/var/hadoop/tmp/yarn/nm-local-dir/usercache/testuser/filecache/13/spark-assembly-1.3.1-hadoop2.6.0.jar!/org/slf4j/impl/StaticLoggerBinder.class]
SLF4J: Found binding in [jar:file:/var/hadoop/share/hadoop/common/lib/slf4j-log4j12-1.7.5.jar!/org/slf4j/impl/StaticLoggerBinder.class]
SLF4J: See http://www.slf4j.org/codes.html#multiple_bindings for an explanation.
SLF4J: Actual binding is of type [org.slf4j.impl.Log4jLoggerFactory]
15/05/19 19:53:09 INFO yarn.ApplicationMaster: Registered signal handlers for [TERM, HUP, INT]
15/05/19 19:53:11 WARN util.NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
15/05/19 19:53:12 INFO yarn.ApplicationMaster: ApplicationAttemptId: appattempt_1432064564266_0003_000001
15/05/19 19:53:14 INFO spark.SecurityManager: Changing view acls to: testuser
15/05/19 19:53:14 INFO spark.SecurityManager: Changing modify acls to: testuser
15/05/19 19:53:14 INFO spark.SecurityManager: SecurityManager: authentication enabled; ui acls disabled; users with view permissions: Set(testuser); users with modify permissions: Set(testuser)
15/05/19 19:53:15 INFO slf4j.Slf4jLogger: Slf4jLogger started
15/05/19 19:53:15 INFO Remoting: Starting remoting
15/05/19 19:53:16 INFO Remoting: Remoting started; listening on addresses :[akka.tcp://sparkYarnAM@ip-10-10-128-10.ec2.internal:56977]
15/05/19 19:53:16 INFO util.Utils: Successfully started service 'sparkYarnAM' on port 56977.
15/05/19 19:53:16 INFO yarn.ApplicationMaster: Waiting for Spark driver to be reachable.
15/05/19 19:53:16 INFO yarn.ApplicationMaster: Driver now available: ip-10-10-127-10.ec2.internal:8001
15/05/19 19:53:16 INFO yarn.ApplicationMaster: Listen to driver: akka.tcp://sparkDriver@ip-10-10-127-10.ec2.internal:8001/user/YarnScheduler
15/05/19 19:53:16 INFO yarn.ApplicationMaster: Add WebUI Filter. AddWebUIFilter(org.apache.hadoop.yarn.server.webproxy.amfilter.AmIpFilter,Map(PROXY_HOSTS -> ip-10-10-127-10.ec2.internal, PROXY_URI_BASES -> https://ip-10-10-127-10.ec2.internal:8090/proxy/application_1432064564266_0003),/proxy/application_1432064564266_0003)
15/05/19 19:53:16 INFO client.RMProxy: Connecting to ResourceManager at /10.10.127.10:8030
15/05/19 19:53:16 INFO yarn.YarnRMClient: Registering the ApplicationMaster
15/05/19 19:53:17 INFO yarn.YarnAllocator: Will request 1 executor containers, each with 1 cores and 1408 MB memory including 384 MB overhead
15/05/19 19:53:17 INFO yarn.YarnAllocator: Container request (host: Any, capability: <memory:1408, vCores:1>)
15/05/19 19:53:17 INFO yarn.ApplicationMaster: Started progress reporter thread - sleep time : 5000
15/05/19 19:53:18 INFO yarn.ApplicationMaster: Driver terminated or disconnected! Shutting down. Disassociated [akka.tcp://sparkYarnAM@ip-10-10-128-10.ec2.internal:56977] -> [akka.tcp://sparkDriver@ip-10-10-127-10.ec2.internal:8001]
15/05/19 19:53:18 INFO yarn.ApplicationMaster: Final app status: SUCCEEDED, exitCode: 0
15/05/19 19:53:18 WARN remote.ReliableDeliverySupervisor: Association with remote system [akka.tcp://sparkDriver@ip-10-10-127-10.ec2.internal:8001] has failed, address is now gated for [5000] ms. Reason is: [Disassociated].
15/05/19 19:53:18 INFO yarn.ApplicationMaster: Driver terminated or disconnected! Shutting down. Disassociated [akka.tcp://sparkYarnAM@ip-10-10-128-10.ec2.internal:56977] -> [akka.tcp://sparkDriver@ip-10-10-127-10.ec2.internal:8001]
15/05/19 19:53:18 INFO yarn.ApplicationMaster: Unregistering ApplicationMaster with SUCCEEDED
15/05/19 19:53:18 INFO impl.AMRMClientImpl: Waiting for application to be successfully unregistered.
[yarn@ip-10-10-128-10 container_1432064564266_0003_01_000001]$



###: YARN NODEMANAGER LOG RESULTS :###
######################################
2015-05-19 19:52:53,729 INFO SecurityLogger.org.apache.hadoop.ipc.Server: Auth successful for appattempt_1432064564266_0003_000001 (auth:SIMPLE)
2015-05-19 19:52:53,746 WARN org.apache.hadoop.security.ShellBasedUnixGroupsMapping: got exception trying to get groups for user appattempt_1432064564266_0003_000001: id: appattempt_1432064564266_0003_000001: no such user

2015-05-19 19:52:53,746 WARN org.apache.hadoop.security.UserGroupInformation: No groups available for user appattempt_1432064564266_0003_000001
2015-05-19 19:52:53,746 INFO SecurityLogger.org.apache.hadoop.security.authorize.ServiceAuthorizationManager: Authorization successful for appattempt_1432064564266_0003_000001 (auth:TOKEN) for protocol=interface org.apache.hadoop.yarn.api.ContainerManagementProtocolPB
2015-05-19 19:52:53,747 INFO org.apache.hadoop.yarn.server.nodemanager.containermanager.ContainerManagerImpl: Start request for container_1432064564266_0003_01_000001 by user testuser
2015-05-19 19:52:53,747 INFO org.apache.hadoop.yarn.server.nodemanager.containermanager.ContainerManagerImpl: Creating a new application reference for app application_1432064564266_0003
2015-05-19 19:52:53,747 INFO org.apache.hadoop.yarn.server.nodemanager.NMAuditLogger: USER=testuser     IP=10.10.127.10 OPERATION=Start Container Request       TARGET=ContainerManageImpl   RESULT=SUCCESS   APPID=application_1432064564266_0003    CONTAINERID=container_1432064564266_0003_01_000001
2015-05-19 19:52:53,748 INFO org.apache.hadoop.yarn.server.nodemanager.containermanager.application.Application: Application application_1432064564266_0003 transitioned from NEW to INITING
2015-05-19 19:52:53,748 INFO org.apache.hadoop.yarn.server.nodemanager.containermanager.application.Application: Adding container_1432064564266_0003_01_000001 to application application_1432064564266_0003
2015-05-19 19:52:53,748 INFO org.apache.hadoop.yarn.server.nodemanager.containermanager.application.Application: Application application_1432064564266_0003 transitioned from INITING to RUNNING
2015-05-19 19:52:53,748 INFO org.apache.hadoop.yarn.server.nodemanager.containermanager.container.Container: Container container_1432064564266_0003_01_000001 transitioned from NEW to LOCALIZING
2015-05-19 19:52:53,748 INFO org.apache.hadoop.yarn.server.nodemanager.containermanager.AuxServices: Got event CONTAINER_INIT for appId application_1432064564266_0003
2015-05-19 19:52:53,748 INFO org.apache.hadoop.yarn.server.nodemanager.containermanager.localizer.LocalizedResource: Resource hdfs://10.10.10.10:8020/user/testuser/.sparkStaging/application_1432064564266_0003/spark-assembly-1.3.1-hadoop2.6.0.jar transitioned from INIT to DOWNLOADING
2015-05-19 19:52:53,748 INFO org.apache.hadoop.yarn.server.nodemanager.containermanager.localizer.ResourceLocalizationService: Created localizer for container_1432064564266_0003_01_000001
2015-05-19 19:52:53,763 INFO org.apache.hadoop.yarn.server.nodemanager.containermanager.localizer.ResourceLocalizationService: Writing credentials to the nmPrivate file /var/hadoop/tmp/yarn/nm-local-dir/nmPrivate/container_1432064564266_0003_01_000001.tokens. Credentials list:
2015-05-19 19:52:58,614 INFO SecurityLogger.org.apache.hadoop.ipc.Server: Auth successful for testing (auth:SIMPLE)
2015-05-19 19:52:58,685 INFO SecurityLogger.org.apache.hadoop.security.authorize.ServiceAuthorizationManager: Authorization successful for testing (auth:TOKEN) for protocol=interface org.apache.hadoop.yarn.server.nodemanager.api.LocalizationProtocolPB
2015-05-19 19:53:08,133 INFO org.apache.hadoop.yarn.server.nodemanager.containermanager.localizer.LocalizedResource: Resource hdfs://10.10.10.10:8020/user/testuser/.sparkStaging/application_1432064564266_0003/spark-assembly-1.3.1-hadoop2.6.0.jar(->/var/hadoop/tmp/yarn/nm-local-dir/usercache/testuser/filecache/13/spark-assembly-1.3.1-hadoop2.6.0.jar) transitioned from DOWNLOADING to LOCALIZED
2015-05-19 19:53:08,134 INFO org.apache.hadoop.yarn.server.nodemanager.containermanager.container.Container: Container container_1432064564266_0003_01_000001 transitioned from LOCALIZING to LOCALIZED
2015-05-19 19:53:08,217 INFO org.apache.hadoop.yarn.server.nodemanager.containermanager.container.Container: Container container_1432064564266_0003_01_000001 transitioned from LOCALIZED to RUNNING
2015-05-19 19:53:10,044 INFO org.apache.hadoop.yarn.server.nodemanager.containermanager.monitor.ContainersMonitorImpl: Starting resource-monitoring for container_1432064564266_0003_01_000001
2015-05-19 19:53:10,092 INFO org.apache.hadoop.yarn.server.nodemanager.containermanager.monitor.ContainersMonitorImpl: Memory usage of ProcessTree 20620 for container-id container_1432064564266_0003_01_000001: 79.5 MB of 1 GB physical memory used; 1.1 GB of 2.1 GB virtual memory used
2015-05-19 19:53:13,105 INFO org.apache.hadoop.yarn.server.nodemanager.containermanager.monitor.ContainersMonitorImpl: Memory usage of ProcessTree 20620 for container-id container_1432064564266_0003_01_000001: 99.8 MB of 1 GB physical memory used; 1.1 GB of 2.1 GB virtual memory used
2015-05-19 19:53:16,122 INFO org.apache.hadoop.yarn.server.nodemanager.containermanager.monitor.ContainersMonitorImpl: Memory usage of ProcessTree 20620 for container-id container_1432064564266_0003_01_000001: 121.4 MB of 1 GB physical memory used; 1.1 GB of 2.1 GB virtual memory used
2015-05-19 19:53:19,129 INFO org.apache.hadoop.yarn.server.nodemanager.containermanager.monitor.ContainersMonitorImpl: Memory usage of ProcessTree 20620 for container-id container_1432064564266_0003_01_000001: 125.8 MB of 1 GB physical memory used; 1.1 GB of 2.1 GB virtual memory used
2015-05-19 19:53:19,402 INFO org.apache.hadoop.yarn.server.nodemanager.containermanager.launcher.ContainerLaunch: Container container_1432064564266_0003_01_000001 succeeded
2015-05-19 19:53:19,402 INFO org.apache.hadoop.yarn.server.nodemanager.containermanager.container.Container: Container container_1432064564266_0003_01_000001 transitioned from RUNNING to EXITED_WITH_SUCCESS
2015-05-19 19:53:19,402 INFO org.apache.hadoop.yarn.server.nodemanager.containermanager.launcher.ContainerLaunch: Cleaning up container container_1432064564266_0003_01_000001
2015-05-19 19:53:19,457 INFO org.apache.hadoop.yarn.server.nodemanager.NMAuditLogger: USER=testuser     OPERATION=Container Finished - Succeeded        TARGET=ContainerImpl    RESULT=SUCCESS     APPID=application_1432064564266_0003    CONTAINERID=container_1432064564266_0003_01_000001
2015-05-19 19:53:19,457 INFO org.apache.hadoop.yarn.server.nodemanager.containermanager.container.Container: Container container_1432064564266_0003_01_000001 transitioned from EXITED_WITH_SUCCESS to DONE
2015-05-19 19:53:19,457 INFO org.apache.hadoop.yarn.server.nodemanager.containermanager.application.Application: Removing container_1432064564266_0003_01_000001 from application application_1432064564266_0003
2015-05-19 19:53:19,458 INFO org.apache.hadoop.yarn.server.nodemanager.containermanager.AuxServices: Got event CONTAINER_STOP for appId application_1432064564266_0003
2015-05-19 19:53:19,458 INFO org.apache.hadoop.yarn.server.nodemanager.LinuxContainerExecutor: Deleting absolute path : /var/hadoop/tmp/yarn/nm-local-dir/usercache/testuser/appcache/application_1432064564266_0003/container_1432064564266_0003_01_000001
2015-05-19 19:53:20,428 INFO SecurityLogger.org.apache.hadoop.ipc.Server: Auth successful for appattempt_1432064564266_0003_000001 (auth:SIMPLE)
2015-05-19 19:53:20,432 WARN org.apache.hadoop.security.UserGroupInformation: No groups available for user appattempt_1432064564266_0003_000001
2015-05-19 19:53:20,432 INFO SecurityLogger.org.apache.hadoop.security.authorize.ServiceAuthorizationManager: Authorization successful for appattempt_1432064564266_0003_000001 (auth:TOKEN) for protocol=interface org.apache.hadoop.yarn.api.ContainerManagementProtocolPB
2015-05-19 19:53:20,432 INFO org.apache.hadoop.yarn.server.nodemanager.containermanager.ContainerManagerImpl: Stopping container with container Id: container_1432064564266_0003_01_000001
2015-05-19 19:53:20,432 INFO org.apache.hadoop.yarn.server.nodemanager.NMAuditLogger: USER=testuser     IP=10.10.127.10 OPERATION=Stop Container Request        TARGET=ContainerManageImpl   RESULT=SUCCESS   APPID=application_1432064564266_0003    CONTAINERID=container_1432064564266_0003_01_000001
2015-05-19 19:53:20,442 INFO org.apache.hadoop.yarn.server.nodemanager.NodeStatusUpdaterImpl: Removed completed containers from NM context: [container_1432064564266_0003_01_000001]
2015-05-19 19:53:20,442 INFO org.apache.hadoop.yarn.server.nodemanager.containermanager.application.Application: Application application_1432064564266_0003 transitioned from RUNNING to APPLICATION_RESOURCES_CLEANINGUP
2015-05-19 19:53:20,442 INFO org.apache.hadoop.yarn.server.nodemanager.containermanager.AuxServices: Got event APPLICATION_STOP for appId application_1432064564266_0003
2015-05-19 19:53:20,442 INFO org.apache.hadoop.yarn.server.nodemanager.containermanager.application.Application: Application application_1432064564266_0003 transitioned from APPLICATION_RESOURCES_CLEANINGUP to FINISHED
2015-05-19 19:53:20,443 INFO org.apache.hadoop.yarn.server.nodemanager.containermanager.loghandler.NonAggregatingLogHandler: Scheduling Log Deletion for application: application_1432064564266_0003, with delay of 10800 seconds
2015-05-19 19:53:20,443 INFO org.apache.hadoop.yarn.server.nodemanager.LinuxContainerExecutor: Deleting absolute path : /var/hadoop/tmp/yarn/nm-local-dir/usercache/testuser/appcache/application_1432064564266_0003
2015-05-19 19:53:22,130 INFO org.apache.hadoop.yarn.server.nodemanager.containermanager.monitor.ContainersMonitorImpl: Stopping resource-monitoring for container_1432064564266_0003_01_000001
root@ip-10-10-128-10:/var/log/hadoop>


###: YARN RESOURCE MANAGER LOGS :###
####################################
2015-05-19 19:52:45,408 INFO SecurityLogger.org.apache.hadoop.ipc.Server: Auth successful for testuser@MALARD.LOCAL (auth:KERBEROS)
2015-05-19 19:52:45,447 INFO SecurityLogger.org.apache.hadoop.security.authorize.ServiceAuthorizationManager: Authorization successful for testuser@MALARD.LOCAL (auth:KERBEROS) for protocol=interface org.apache.hadoop.yarn.api.ApplicationClientProtocolPB
2015-05-19 19:52:45,494 INFO org.apache.hadoop.yarn.server.resourcemanager.ClientRMService: Allocated new applicationId: 3
2015-05-19 19:52:52,758 WARN org.apache.hadoop.yarn.server.resourcemanager.rmapp.RMAppImpl: The specific max attempts: 0 for application: 3 is invalid, because it is out of the range [1, 2]. Use the global max attempts instead.
2015-05-19 19:52:52,758 INFO org.apache.hadoop.yarn.server.resourcemanager.ClientRMService: Application with id 3 submitted by user testuser
2015-05-19 19:52:52,758 INFO org.apache.hadoop.yarn.server.resourcemanager.RMAuditLogger: USER=testuser IP=10.10.127.10 OPERATION=Submit Application Request    TARGET=ClientRMService  RESULT=SUCCESS     APPID=application_1432064564266_0003
2015-05-19 19:52:52,803 INFO org.apache.hadoop.yarn.server.resourcemanager.security.DelegationTokenRenewer: application_1432064564266_0003 found existing hdfs token Kind: HDFS_DELEGATION_TOKEN, Service: 10.10.10.10:8020, Ident: (HDFS_DELEGATION_TOKEN token 5 for testuser)
2015-05-19 19:52:52,928 INFO org.apache.hadoop.yarn.server.resourcemanager.security.DelegationTokenRenewer: Renewed delegation-token= [Kind: HDFS_DELEGATION_TOKEN, Service: 10.10.10.10:8020, Ident: (HDFS_DELEGATION_TOKEN token 5 for testuser);exp=1432151572908], for application_1432064564266_0003
2015-05-19 19:52:52,928 INFO org.apache.hadoop.yarn.server.resourcemanager.security.DelegationTokenRenewer: Renew Kind: HDFS_DELEGATION_TOKEN, Service: 10.10.10.10:8020, Ident: (HDFS_DELEGATION_TOKEN token 5 for testuser);exp=1432151572908 in 86399980 ms, appId = application_1432064564266_0003
2015-05-19 19:52:52,928 INFO org.apache.hadoop.yarn.server.resourcemanager.rmapp.RMAppImpl: Storing application with id application_1432064564266_0003
2015-05-19 19:52:52,928 INFO org.apache.hadoop.yarn.server.resourcemanager.rmapp.RMAppImpl: application_1432064564266_0003 State change from NEW to NEW_SAVING
2015-05-19 19:52:52,928 INFO org.apache.hadoop.yarn.server.resourcemanager.recovery.RMStateStore: Storing info for app: application_1432064564266_0003
2015-05-19 19:52:52,928 INFO org.apache.hadoop.yarn.server.resourcemanager.rmapp.RMAppImpl: application_1432064564266_0003 State change from NEW_SAVING to SUBMITTED
2015-05-19 19:52:52,929 INFO org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.ParentQueue: Application added - appId: application_1432064564266_0003 user: testuser leaf-queue of parent: root #applications: 1
2015-05-19 19:52:52,929 INFO org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.CapacityScheduler: Accepted application application_1432064564266_0003 from user: testuser, in queue: default
2015-05-19 19:52:52,929 INFO org.apache.hadoop.yarn.server.resourcemanager.rmapp.RMAppImpl: application_1432064564266_0003 State change from SUBMITTED to ACCEPTED
2015-05-19 19:52:52,929 INFO org.apache.hadoop.yarn.server.resourcemanager.ApplicationMasterService: Registering app attempt : appattempt_1432064564266_0003_000001
2015-05-19 19:52:52,929 INFO org.apache.hadoop.yarn.server.resourcemanager.rmapp.attempt.RMAppAttemptImpl: appattempt_1432064564266_0003_000001 State change from NEW to SUBMITTED
2015-05-19 19:52:52,929 INFO org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.LeafQueue: Application application_1432064564266_0003 from user: testuser activated in queue: default
2015-05-19 19:52:52,929 INFO org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.LeafQueue: Application added - appId: application_1432064564266_0003 user: org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.LeafQueue$User@72fa0d32, leaf-queue: default #user-pending-applications: 0 #user-active-applications: 1 #queue-pending-applications: 0 #queue-active-applications: 1
2015-05-19 19:52:52,929 INFO org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.CapacityScheduler: Added Application Attempt appattempt_1432064564266_0003_000001 to scheduler from user testuser in queue default
2015-05-19 19:52:52,930 INFO org.apache.hadoop.yarn.server.resourcemanager.rmapp.attempt.RMAppAttemptImpl: appattempt_1432064564266_0003_000001 State change from SUBMITTED to SCHEDULED
2015-05-19 19:52:53,718 INFO org.apache.hadoop.yarn.server.resourcemanager.rmcontainer.RMContainerImpl: container_1432064564266_0003_01_000001 Container Transitioned from NEW to ALLOCATED
2015-05-19 19:52:53,718 INFO org.apache.hadoop.yarn.server.resourcemanager.RMAuditLogger: USER=testuser OPERATION=AM Allocated Container        TARGET=SchedulerApp     RESULT=SUCCESS  APPID=application_1432064564266_0003       CONTAINERID=container_1432064564266_0003_01_000001
2015-05-19 19:52:53,718 INFO org.apache.hadoop.yarn.server.resourcemanager.scheduler.SchedulerNode: Assigned container container_1432064564266_0003_01_000001 of capacity <memory:1024, vCores:1> on host ip-10-10-128-10.ec2.internal:9032, which has 1 containers, <memory:1024, vCores:1> used and <memory:7168, vCores:7> available after allocation
2015-05-19 19:52:53,718 INFO org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.LeafQueue: assignedContainer application attempt=appattempt_1432064564266_0003_000001 container=Container: [ContainerId: container_1432064564266_0003_01_000001, NodeId: ip-10-10-128-10.ec2.internal:9032, NodeHttpAddress: ip-10-10-128-10.ec2.internal:8090, Resource: <memory:1024, vCores:1>, Priority: 0, Token: null, ] queue=default: capacity=1.0, absoluteCapacity=1.0, usedResources=<memory:0, vCores:0>, usedCapacity=0.0, absoluteUsedCapacity=0.0, numApps=1, numContainers=0 clusterResource=<memory:8192, vCores:8>
2015-05-19 19:52:53,718 INFO org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.ParentQueue: Re-sorting assigned queue: root.default stats: default: capacity=1.0, absoluteCapacity=1.0, usedResources=<memory:1024, vCores:1>, usedCapacity=0.125, absoluteUsedCapacity=0.125, numApps=1, numContainers=1
2015-05-19 19:52:53,718 INFO org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.ParentQueue: assignedContainer queue=root usedCapacity=0.125 absoluteUsedCapacity=0.125 used=<memory:1024, vCores:1> cluster=<memory:8192, vCores:8>
2015-05-19 19:52:53,719 INFO org.apache.hadoop.yarn.server.resourcemanager.security.NMTokenSecretManagerInRM: Sending NMToken for nodeId : ip-10-10-128-10.ec2.internal:9032 for container : container_1432064564266_0003_01_000001
2015-05-19 19:52:53,720 INFO org.apache.hadoop.yarn.server.resourcemanager.rmcontainer.RMContainerImpl: container_1432064564266_0003_01_000001 Container Transitioned from ALLOCATED to ACQUIRED
2015-05-19 19:52:53,720 INFO org.apache.hadoop.yarn.server.resourcemanager.security.NMTokenSecretManagerInRM: Clear node set for appattempt_1432064564266_0003_000001
2015-05-19 19:52:53,721 INFO org.apache.hadoop.yarn.server.resourcemanager.rmapp.attempt.RMAppAttemptImpl: Storing attempt: AppId: application_1432064564266_0003 AttemptId: appattempt_1432064564266_0003_000001 MasterContainer: Container: [ContainerId: container_1432064564266_0003_01_000001, NodeId: ip-10-10-128-10.ec2.internal:9032, NodeHttpAddress: ip-10-10-128-10.ec2.internal:8090, Resource: <memory:1024, vCores:1>, Priority: 0, Token: Token { kind: ContainerToken, service: 10.10.128.10:9032 }, ]
2015-05-19 19:52:53,721 INFO org.apache.hadoop.yarn.server.resourcemanager.rmapp.attempt.RMAppAttemptImpl: appattempt_1432064564266_0003_000001 State change from SCHEDULED to ALLOCATED_SAVING
2015-05-19 19:52:53,721 INFO org.apache.hadoop.yarn.server.resourcemanager.rmapp.attempt.RMAppAttemptImpl: appattempt_1432064564266_0003_000001 State change from ALLOCATED_SAVING to ALLOCATED
2015-05-19 19:52:53,723 INFO org.apache.hadoop.yarn.server.resourcemanager.amlauncher.AMLauncher: Launching masterappattempt_1432064564266_0003_000001
2015-05-19 19:52:53,725 INFO org.apache.hadoop.yarn.server.resourcemanager.amlauncher.AMLauncher: Setting up container Container: [ContainerId: container_1432064564266_0003_01_000001, NodeId: ip-10-10-128-10.ec2.internal:9032, NodeHttpAddress: ip-10-10-128-10.ec2.internal:8090, Resource: <memory:1024, vCores:1>, Priority: 0, Token: Token { kind: ContainerToken, service: 10.10.128.10:9032 }, ] for AM appattempt_1432064564266_0003_000001
2015-05-19 19:52:53,725 INFO org.apache.hadoop.yarn.server.resourcemanager.amlauncher.AMLauncher: Command to launch container container_1432064564266_0003_01_000001 : {{JAVA_HOME}}/bin/java,-server,-Xmx512m,-Djava.io.tmpdir={{PWD}}/tmp,'-Dspark.fileserver.uri=http://10.10.127.10:8003','-Dspark.broadcast.port=8004','-Dspark.executor.port=8002','-Dspark.driver.appUIAddress=http://ip-10-10-127-10.ec2.internal:4040','-Dspark.fileserver.port=8003','-Dspark.ui.port=4040','-Dspark.tachyonStore.folderName=spark-ea0a49a2-1643-4410-892d-690c62cb6857','-Dspark.driver.extraJavaOptions=-Djava.net.preferIPv4Stack=true','-Dspark.jars=file:/home/testuser/spark/lib/spark-examples-1.3.1-hadoop2.6.0.jar','-Dspark.app.name=Spark Pi','-Dspark.akka.threads=1','-Dspark.authenticate.secret=fubar','-Dspark.eventlog.enabled=true','-Dspark.driver.host=ip-10-10-127-10.ec2.internal','-Dspark.blockManager.port=8007','-Dspark.history.ui.port=18080','-Dspark.replClassServer.port=8006','-Dspark.worker.ui.port=8081','-Dspark.master=yarn-client','-Dspark.yarn.preserve.staging.files=true','-Dspark.executor.extraJavaOptions=-XX:+PrintGCDetails -Dkey=value -Dnumbers=\"one two three\"','-Dspark.yarn.access.namenodes=hdfs://10.10.10.10:8020','-Dspark.logConf=true','-Dspark.driver.port=8001','-Dspark.executor.id=<driver>','-Dspark.akka.heartbeat.interval=100','-Dspark.master.ui.port=8080','-Dspark.yarn.am.waitTime=200000','-Dspark.yarn.am.extraJavaOptions=-Djava.net.preferIPv4Stack=true','-Dspark.executor.instances=1','-Dspark.yarn.am.port=8008','-Dspark.eventlog.dir=hdfs://10.10.10.10:8020/user/testuser/spark/eventlog','-Dspark.executor.cores=1','-Dspark.authenticate=true','-Djava.net.preferIPv4Stack=true',-Dspark.yarn.app.container.log.dir=<LOG_DIR>,org.apache.spark.deploy.yarn.ExecutorLauncher,--arg,'ip-10-10-127-10.ec2.internal:8001',--executor-memory,1024m,--executor-cores,1,--num-executors ,1,1>,<LOG_DIR>/stdout,2>,<LOG_DIR>/stderr
2015-05-19 19:52:53,725 INFO org.apache.hadoop.yarn.server.resourcemanager.security.AMRMTokenSecretManager: Create AMRMToken for ApplicationAttempt: appattempt_1432064564266_0003_000001
2015-05-19 19:52:53,725 INFO org.apache.hadoop.yarn.server.resourcemanager.security.AMRMTokenSecretManager: Creating password for appattempt_1432064564266_0003_000001
2015-05-19 19:52:53,750 INFO org.apache.hadoop.yarn.server.resourcemanager.amlauncher.AMLauncher: Done launching container Container: [ContainerId: container_1432064564266_0003_01_000001, NodeId: ip-10-10-128-10.ec2.internal:9032, NodeHttpAddress: ip-10-10-128-10.ec2.internal:8090, Resource: <memory:1024, vCores:1>, Priority: 0, Token: Token { kind: ContainerToken, service: 10.10.128.10:9032 }, ] for AM appattempt_1432064564266_0003_000001
2015-05-19 19:52:53,750 INFO org.apache.hadoop.yarn.server.resourcemanager.rmapp.attempt.RMAppAttemptImpl: appattempt_1432064564266_0003_000001 State change from ALLOCATED to LAUNCHED
2015-05-19 19:52:54,733 INFO org.apache.hadoop.yarn.server.resourcemanager.rmcontainer.RMContainerImpl: container_1432064564266_0003_01_000001 Container Transitioned from ACQUIRED to RUNNING
2015-05-19 19:53:17,318 INFO SecurityLogger.org.apache.hadoop.ipc.Server: Auth successful for appattempt_1432064564266_0003_000001 (auth:SIMPLE)
2015-05-19 19:53:17,376 WARN org.apache.hadoop.security.ShellBasedUnixGroupsMapping: got exception trying to get groups for user appattempt_1432064564266_0003_000001: id: appattempt_1432064564266_0003_000001: no such user

2015-05-19 19:53:17,376 WARN org.apache.hadoop.security.UserGroupInformation: No groups available for user appattempt_1432064564266_0003_000001
2015-05-19 19:53:17,376 INFO SecurityLogger.org.apache.hadoop.security.authorize.ServiceAuthorizationManager: Authorization successful for appattempt_1432064564266_0003_000001 (auth:TOKEN) for protocol=interface org.apache.hadoop.yarn.api.ApplicationMasterProtocolPB
2015-05-19 19:53:17,377 INFO org.apache.hadoop.yarn.server.resourcemanager.ApplicationMasterService: AM registration appattempt_1432064564266_0003_000001
2015-05-19 19:53:17,377 INFO org.apache.hadoop.yarn.server.resourcemanager.RMAuditLogger: USER=testuser IP=10.10.128.10 OPERATION=Register App Master   TARGET=ApplicationMasterService RESULT=SUCCESS     APPID=application_1432064564266_0003    APPATTEMPTID=appattempt_1432064564266_0003_000001
2015-05-19 19:53:17,377 INFO org.apache.hadoop.yarn.server.resourcemanager.ApplicationMasterService: Setting client token master key
2015-05-19 19:53:17,378 INFO org.apache.hadoop.yarn.server.resourcemanager.rmapp.attempt.RMAppAttemptImpl: appattempt_1432064564266_0003_000001 State change from LAUNCHED to RUNNING
2015-05-19 19:53:17,378 INFO org.apache.hadoop.yarn.server.resourcemanager.rmapp.RMAppImpl: application_1432064564266_0003 State change from ACCEPTED to RUNNING
2015-05-19 19:53:18,377 INFO org.apache.hadoop.yarn.server.resourcemanager.rmcontainer.RMContainerImpl: container_1432064564266_0003_01_000002 Container Transitioned from NEW to ALLOCATED
2015-05-19 19:53:18,377 INFO org.apache.hadoop.yarn.server.resourcemanager.RMAuditLogger: USER=testuser OPERATION=AM Allocated Container        TARGET=SchedulerApp     RESULT=SUCCESS  APPID=application_1432064564266_0003       CONTAINERID=container_1432064564266_0003_01_000002
2015-05-19 19:53:18,377 INFO org.apache.hadoop.yarn.server.resourcemanager.scheduler.SchedulerNode: Assigned container container_1432064564266_0003_01_000002 of capacity <memory:2048, vCores:1> on host ip-10-10-128-10.ec2.internal:9032, which has 2 containers, <memory:3072, vCores:2> used and <memory:5120, vCores:6> available after allocation
2015-05-19 19:53:18,377 INFO org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.LeafQueue: assignedContainer application attempt=appattempt_1432064564266_0003_000001 container=Container: [ContainerId: container_1432064564266_0003_01_000002, NodeId: ip-10-10-128-10.ec2.internal:9032, NodeHttpAddress: ip-10-10-128-10.ec2.internal:8090, Resource: <memory:2048, vCores:1>, Priority: 1, Token: null, ] queue=default: capacity=1.0, absoluteCapacity=1.0, usedResources=<memory:1024, vCores:1>, usedCapacity=0.125, absoluteUsedCapacity=0.125, numApps=1, numContainers=1 clusterResource=<memory:8192, vCores:8>
2015-05-19 19:53:18,377 INFO org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.ParentQueue: Re-sorting assigned queue: root.default stats: default: capacity=1.0, absoluteCapacity=1.0, usedResources=<memory:3072, vCores:2>, usedCapacity=0.375, absoluteUsedCapacity=0.375, numApps=1, numContainers=2
2015-05-19 19:53:18,377 INFO org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.ParentQueue: assignedContainer queue=root usedCapacity=0.375 absoluteUsedCapacity=0.375 used=<memory:3072, vCores:2> cluster=<memory:8192, vCores:8>
2015-05-19 19:53:18,967 INFO org.apache.hadoop.yarn.server.resourcemanager.rmapp.attempt.RMAppAttemptImpl: Updating application attempt appattempt_1432064564266_0003_000001 with final state: FINISHING, and exit status: -1000
2015-05-19 19:53:18,967 INFO org.apache.hadoop.yarn.server.resourcemanager.rmapp.attempt.RMAppAttemptImpl: appattempt_1432064564266_0003_000001 State change from RUNNING to FINAL_SAVING
2015-05-19 19:53:18,967 INFO org.apache.hadoop.yarn.server.resourcemanager.rmapp.RMAppImpl: Updating application application_1432064564266_0003 with final state: FINISHING
2015-05-19 19:53:18,967 INFO org.apache.hadoop.yarn.server.resourcemanager.rmapp.RMAppImpl: application_1432064564266_0003 State change from RUNNING to FINAL_SAVING
2015-05-19 19:53:18,967 INFO org.apache.hadoop.yarn.server.resourcemanager.recovery.RMStateStore: Updating info for app: application_1432064564266_0003
2015-05-19 19:53:18,968 INFO org.apache.hadoop.yarn.server.resourcemanager.rmapp.attempt.RMAppAttemptImpl: appattempt_1432064564266_0003_000001 State change from FINAL_SAVING to FINISHING
2015-05-19 19:53:18,968 INFO org.apache.hadoop.yarn.server.resourcemanager.rmapp.RMAppImpl: application_1432064564266_0003 State change from FINAL_SAVING to FINISHING
2015-05-19 19:53:19,076 INFO org.apache.hadoop.yarn.server.resourcemanager.ApplicationMasterService: application_1432064564266_0003 unregistered successfully.
2015-05-19 19:53:20,383 INFO org.apache.hadoop.yarn.server.resourcemanager.rmcontainer.RMContainerImpl: container_1432064564266_0003_01_000001 Container Transitioned from RUNNING to COMPLETED
2015-05-19 19:53:20,384 INFO org.apache.hadoop.yarn.server.resourcemanager.scheduler.common.fica.FiCaSchedulerApp: Completed container: container_1432064564266_0003_01_000001 in state: COMPLETED event:FINISHED
2015-05-19 19:53:20,384 INFO org.apache.hadoop.yarn.server.resourcemanager.RMAuditLogger: USER=testuser OPERATION=AM Released Container TARGET=SchedulerApp     RESULT=SUCCESS  APPID=application_1432064564266_0003       CONTAINERID=container_1432064564266_0003_01_000001
2015-05-19 19:53:20,384 INFO org.apache.hadoop.yarn.server.resourcemanager.scheduler.SchedulerNode: Released container container_1432064564266_0003_01_000001 of capacity <memory:1024, vCores:1> on host ip-10-10-128-10.ec2.internal:9032, which currently has 1 containers, <memory:2048, vCores:1> used and <memory:6144, vCores:7> available, release resources=true
2015-05-19 19:53:20,384 INFO org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.LeafQueue: default used=<memory:2048, vCores:1> numContainers=1 user=testuser user-resources=<memory:2048, vCores:1>
2015-05-19 19:53:20,384 INFO org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.LeafQueue: completedContainer container=Container: [ContainerId: container_1432064564266_0003_01_000001, NodeId: ip-10-10-128-10.ec2.internal:9032, NodeHttpAddress: ip-10-10-128-10.ec2.internal:8090, Resource: <memory:1024, vCores:1>, Priority: 0, Token: Token { kind: ContainerToken, service: 10.10.128.10:9032 }, ] queue=default: capacity=1.0, absoluteCapacity=1.0, usedResources=<memory:2048, vCores:1>, usedCapacity=0.25, absoluteUsedCapacity=0.25, numApps=1, numContainers=1 cluster=<memory:8192, vCores:8>
2015-05-19 19:53:20,384 INFO org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.ParentQueue: completedContainer queue=root usedCapacity=0.25 absoluteUsedCapacity=0.25 used=<memory:2048, vCores:1> cluster=<memory:8192, vCores:8>
2015-05-19 19:53:20,384 INFO org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.ParentQueue: Re-sorting completed queue: root.default stats: default: capacity=1.0, absoluteCapacity=1.0, usedResources=<memory:2048, vCores:1>, usedCapacity=0.25, absoluteUsedCapacity=0.25, numApps=1, numContainers=1
2015-05-19 19:53:20,384 INFO org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.CapacityScheduler: Application attempt appattempt_1432064564266_0003_000001 released container container_1432064564266_0003_01_000001 on node: host: ip-10-10-128-10.ec2.internal:9032 #containers=1 available=6144 used=2048 with event: FINISHED
2015-05-19 19:53:20,384 INFO org.apache.hadoop.yarn.server.resourcemanager.ApplicationMasterService: Unregistering app attempt : appattempt_1432064564266_0003_000001
2015-05-19 19:53:20,385 INFO org.apache.hadoop.yarn.server.resourcemanager.security.AMRMTokenSecretManager: Application finished, removing password for appattempt_1432064564266_0003_000001
2015-05-19 19:53:20,385 INFO org.apache.hadoop.yarn.server.resourcemanager.rmapp.attempt.RMAppAttemptImpl: appattempt_1432064564266_0003_000001 State change from FINISHING to FINISHED
2015-05-19 19:53:20,385 INFO org.apache.hadoop.yarn.server.resourcemanager.rmapp.RMAppImpl: application_1432064564266_0003 State change from FINISHING to FINISHED
2015-05-19 19:53:20,385 INFO org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.CapacityScheduler: Application Attempt appattempt_1432064564266_0003_000001 is done. finalState=FINISHED
2015-05-19 19:53:20,387 INFO org.apache.hadoop.yarn.server.resourcemanager.rmcontainer.RMContainerImpl: container_1432064564266_0003_01_000002 Container Transitioned from ALLOCATED to KILLED
2015-05-19 19:53:20,387 INFO org.apache.hadoop.yarn.server.resourcemanager.scheduler.common.fica.FiCaSchedulerApp: Completed container: container_1432064564266_0003_01_000002 in state: KILLED event:KILL
2015-05-19 19:53:20,387 INFO org.apache.hadoop.yarn.server.resourcemanager.RMAuditLogger: USER=testuser OPERATION=AM Released Container TARGET=SchedulerApp     RESULT=SUCCESS  APPID=application_1432064564266_0003       CONTAINERID=container_1432064564266_0003_01_000002
2015-05-19 19:53:20,387 INFO org.apache.hadoop.yarn.server.resourcemanager.scheduler.SchedulerNode: Released container container_1432064564266_0003_01_000002 of capacity <memory:2048, vCores:1> on host ip-10-10-128-10.ec2.internal:9032, which currently has 0 containers, <memory:0, vCores:0> used and <memory:8192, vCores:8> available, release resources=true
2015-05-19 19:53:20,387 INFO org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.LeafQueue: default used=<memory:0, vCores:0> numContainers=0 user=testuser user-resources=<memory:0, vCores:0>
2015-05-19 19:53:20,387 INFO org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.LeafQueue: completedContainer container=Container: [ContainerId: container_1432064564266_0003_01_000002, NodeId: ip-10-10-128-10.ec2.internal:9032, NodeHttpAddress: ip-10-10-128-10.ec2.internal:8090, Resource: <memory:2048, vCores:1>, Priority: 1, Token: null, ] queue=default: capacity=1.0, absoluteCapacity=1.0, usedResources=<memory:0, vCores:0>, usedCapacity=0.0, absoluteUsedCapacity=0.0, numApps=1, numContainers=0 cluster=<memory:8192, vCores:8>
2015-05-19 19:53:20,387 INFO org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.ParentQueue: completedContainer queue=root usedCapacity=0.0 absoluteUsedCapacity=0.0 used=<memory:0, vCores:0> cluster=<memory:8192, vCores:8>
2015-05-19 19:53:20,387 INFO org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.ParentQueue: Re-sorting completed queue: root.default stats: default: capacity=1.0, absoluteCapacity=1.0, usedResources=<memory:0, vCores:0>, usedCapacity=0.0, absoluteUsedCapacity=0.0, numApps=1, numContainers=0
2015-05-19 19:53:20,387 INFO org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.CapacityScheduler: Application attempt appattempt_1432064564266_0003_000001 released container container_1432064564266_0003_01_000002 on node: host: ip-10-10-128-10.ec2.internal:9032 #containers=0 available=8192 used=0 with event: KILL
2015-05-19 19:53:20,387 INFO org.apache.hadoop.yarn.server.resourcemanager.scheduler.AppSchedulingInfo: Application application_1432064564266_0003 requests cleared
2015-05-19 19:53:20,388 INFO org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.LeafQueue: Application removed - appId: application_1432064564266_0003 user: testuser queue: default #user-pending-applications: 0 #user-active-applications: 0 #queue-pending-applications: 0 #queue-active-applications: 0
2015-05-19 19:53:20,388 INFO org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.ParentQueue: Application removed - appId: application_1432064564266_0003 user: testuser leaf-queue of parent: root #applications: 0
2015-05-19 19:53:20,388 INFO org.apache.hadoop.yarn.server.resourcemanager.RMAuditLogger: USER=testuser OPERATION=Application Finished - Succeeded      TARGET=RMAppManager     RESULT=SUCCESS     APPID=application_1432064564266_0003
2015-05-19 19:53:20,389 INFO org.apache.hadoop.yarn.server.resourcemanager.RMAppManager$ApplicationSummary: appId=application_1432064564266_0003,name=Spark Pi,user=testuser,queue=default,state=FINISHED,trackingUrl=https://ip-10-10-127-10.ec2.internal:8090/proxy/application_1432064564266_0003/A,appMasterHost=ip-10-10-128-10.ec2.internal,startTime=1432065172758,finishTime=1432065198967,finalStatus=SUCCEEDED
2015-05-19 19:53:20,389 INFO org.apache.hadoop.hdfs.DFSClient: Cancelling HDFS_DELEGATION_TOKEN token 5 for testuser on 10.10.10.10:8020
2015-05-19 19:53:20,390 INFO org.apache.hadoop.yarn.server.resourcemanager.amlauncher.AMLauncher: Cleaning master appattempt_1432064564266_0003_000001
2015-05-19 19:53:20,442 INFO org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.CapacityScheduler: Null container completed...
root@ip-10-10-127-10:/var/log/hadoop>

Keith Nance
Sr. Software Engineer
Email: knance@smartronix.com
Cell: 808-343-0071
www.smartronix.com