Posted to user@spark.apache.org by PashMic <pa...@gmail.com> on 2015/10/22 19:21:48 UTC

"java.io.IOException: Connection reset by peer" thrown on the resource manager when launching Spark on Yarn

Hi all,

I am trying to launch a Spark job in yarn-client mode on a cluster. I have
already tried spark-shell with YARN and I can launch the application. But I
would also like to be able to run the driver program from, say, Eclipse,
while using the cluster to run the tasks. I have also uploaded the
spark-assembly jar to HDFS and point to it via spark.yarn.jar, and I added
the HADOOP_CONF_DIR environment variable to Eclipse, although I'm not sure
that's the best way to go about this.
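
For reference, the driver side is set up roughly like this (a simplified
sketch rather than my exact code; the HADOOP_CONF_DIR path is a placeholder,
while the app name, master, executor memory and assembly jar location are
the ones that show up in the ResourceManager log below):

import org.apache.spark.{SparkConf, SparkContext}

// The Eclipse run configuration also sets HADOOP_CONF_DIR (placeholder path)
// so the YARN client can find the ResourceManager:
//   HADOOP_CONF_DIR=/etc/hadoop/conf
val conf = new SparkConf()
  .setAppName("Sparca Application")
  .setMaster("yarn-client")
  .set("spark.yarn.jar",
    "hdfs://vanpghdcn1.pgdev.sap.corp:8020/data/spark-assembly-1.4.0-hadoop2.6.0.jar")
  .set("spark.executor.memory", "1g")

val sc = new SparkContext(conf)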

My application does launch on the cluster (I can see it in the resource
manager's monitor) and it finishes "successfully", but no results come back
to the driver. I see the following exception in the Eclipse console:

WARN  10:11:08,375  Logging.scala:71 -- Lost task 0.0 in stage 1.0 (TID 1, vanpghdcn2.pgdev.sap.corp): java.lang.NullPointerException
	at org.apache.spark.sql.execution.RDDConversions$$anonfun$rowToRowRdd$1.apply(ExistingRDD.scala:56)
	at org.apache.spark.sql.execution.RDDConversions$$anonfun$rowToRowRdd$1.apply(ExistingRDD.scala:55)
	at org.apache.spark.rdd.RDD$$anonfun$mapPartitions$1$$anonfun$apply$17.apply(RDD.scala:686)
	at org.apache.spark.rdd.RDD$$anonfun$mapPartitions$1$$anonfun$apply$17.apply(RDD.scala:686)
	at org.apache.spark.rdd.MapPartitionsRDD.compute(MapPartitionsRDD.scala:35)
	at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:277)
	at org.apache.spark.rdd.RDD.iterator(RDD.scala:244)
	at org.apache.spark.rdd.MapPartitionsRDD.compute(MapPartitionsRDD.scala:35)
	at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:277)
	at org.apache.spark.rdd.RDD.iterator(RDD.scala:244)
	at org.apache.spark.rdd.MapPartitionsRDD.compute(MapPartitionsRDD.scala:35)
	at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:277)
	at org.apache.spark.rdd.RDD.iterator(RDD.scala:244)
	at org.apache.spark.rdd.MapPartitionsRDD.compute(MapPartitionsRDD.scala:35)
	at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:277)
	at org.apache.spark.rdd.RDD.iterator(RDD.scala:244)
	at org.apache.spark.scheduler.ShuffleMapTask.runTask(ShuffleMapTask.scala:70)
	at org.apache.spark.scheduler.ShuffleMapTask.runTask(ShuffleMapTask.scala:41)
	at org.apache.spark.scheduler.Task.run(Task.scala:70)
	at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:213)
	at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
	at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
	at java.lang.Thread.run(Thread.java:745)

ERROR 10:11:08,522  Logging.scala:75 -- Task 0 in stage 1.0 failed 4 times;
aborting job
INFO  10:11:08,538  SparkUtils.scala:67 --           SparkContext stopped


And I get the following in the ResourceManager log:

2015-10-22 10:08:57,126 WARN org.apache.hadoop.yarn.server.webapp.AppBlock:
Container with id 'container_1445462013958_0011_01_000001' doesn't exist in
RM.
2015-10-22 10:10:37,400 INFO
org.apache.hadoop.yarn.server.resourcemanager.ClientRMService: Allocated new
applicationId: 12
2015-10-22 10:10:42,429 WARN
org.apache.hadoop.yarn.server.resourcemanager.rmapp.RMAppImpl: The specific
max attempts: 0 for application: 12 is invalid, because it is out of the
range [1, 2]. Use the global max attempts instead.
2015-10-22 10:10:42,429 INFO
org.apache.hadoop.yarn.server.resourcemanager.ClientRMService: Application
with id 12 submitted by user hdfs
2015-10-22 10:10:42,429 INFO
org.apache.hadoop.yarn.server.resourcemanager.RMAuditLogger: USER=hdfs
IP=10.161.43.118	OPERATION=Submit Application Request	TARGET=ClientRMService
RESULT=SUCCESS	APPID=application_1445462013958_0012
2015-10-22 10:10:42,429 INFO
org.apache.hadoop.yarn.server.resourcemanager.rmapp.RMAppImpl: Storing
application with id application_1445462013958_0012
2015-10-22 10:10:42,430 INFO
org.apache.hadoop.yarn.server.resourcemanager.rmapp.RMAppImpl:
application_1445462013958_0012 State change from NEW to NEW_SAVING
2015-10-22 10:10:42,430 INFO
org.apache.hadoop.yarn.server.resourcemanager.recovery.RMStateStore: Storing
info for app: application_1445462013958_0012
2015-10-22 10:10:42,430 INFO
org.apache.hadoop.yarn.server.resourcemanager.rmapp.RMAppImpl:
application_1445462013958_0012 State change from NEW_SAVING to SUBMITTED
2015-10-22 10:10:42,431 INFO
org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.ParentQueue:
Application added - appId: application_1445462013958_0012 user: hdfs
leaf-queue of parent: root #applications: 1
2015-10-22 10:10:42,431 INFO
org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.CapacityScheduler:
Accepted application application_1445462013958_0012 from user: hdfs, in
queue: default
2015-10-22 10:10:42,432 INFO
org.apache.hadoop.yarn.server.resourcemanager.rmapp.RMAppImpl:
application_1445462013958_0012 State change from SUBMITTED to ACCEPTED
2015-10-22 10:10:42,432 INFO
org.apache.hadoop.yarn.server.resourcemanager.ApplicationMasterService:
Registering app attempt : appattempt_1445462013958_0012_000001
2015-10-22 10:10:42,432 INFO
org.apache.hadoop.yarn.server.resourcemanager.rmapp.attempt.RMAppAttemptImpl:
appattempt_1445462013958_0012_000001 State change from NEW to SUBMITTED
2015-10-22 10:10:42,432 INFO
org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.LeafQueue:
Application application_1445462013958_0012 from user: hdfs activated in
queue: default
2015-10-22 10:10:42,433 INFO
org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.LeafQueue:
Application added - appId: application_1445462013958_0012 user:
org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.LeafQueue$User@666ee19c,
leaf-queue: default #user-pending-applications: 0 #user-active-applications:
1 #queue-pending-applications: 0 #queue-active-applications: 1
2015-10-22 10:10:42,433 INFO
org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.CapacityScheduler:
Added Application Attempt appattempt_1445462013958_0012_000001 to scheduler
from user hdfs in queue default
2015-10-22 10:10:42,433 INFO
org.apache.hadoop.yarn.server.resourcemanager.rmapp.attempt.RMAppAttemptImpl:
appattempt_1445462013958_0012_000001 State change from SUBMITTED to
SCHEDULED
2015-10-22 10:10:42,602 INFO
org.apache.hadoop.yarn.server.resourcemanager.rmcontainer.RMContainerImpl:
container_1445462013958_0012_01_000001 Container Transitioned from NEW to
ALLOCATED
2015-10-22 10:10:42,603 INFO
org.apache.hadoop.yarn.server.resourcemanager.RMAuditLogger: USER=hdfs
OPERATION=AM Allocated Container	TARGET=SchedulerApp	RESULT=SUCCESS
APPID=application_1445462013958_0012
CONTAINERID=container_1445462013958_0012_01_000001
2015-10-22 10:10:42,603 INFO
org.apache.hadoop.yarn.server.resourcemanager.scheduler.SchedulerNode:
Assigned container container_1445462013958_0012_01_000001 of capacity
<memory:1024, vCores:1> on host vanpghdcn3.pgdev.sap.corp:41419, which has 1
containers, <memory:1024, vCores:1> used and <memory:7168, vCores:7>
available after allocation
2015-10-22 10:10:42,603 INFO
org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.LeafQueue:
assignedContainer application attempt=appattempt_1445462013958_0012_000001
container=Container: [ContainerId: container_1445462013958_0012_01_000001,
NodeId: vanpghdcn3.pgdev.sap.corp:41419, NodeHttpAddress:
vanpghdcn3.pgdev.sap.corp:8042, Resource: <memory:1024, vCores:1>, Priority:
0, Token: null, ] queue=default: capacity=1.0, absoluteCapacity=1.0,
usedResources=<memory:0, vCores:0>, usedCapacity=0.0,
absoluteUsedCapacity=0.0, numApps=1, numContainers=0
clusterResource=<memory:24576, vCores:24>
2015-10-22 10:10:42,603 INFO
org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.ParentQueue:
Re-sorting assigned queue: root.default stats: default: capacity=1.0,
absoluteCapacity=1.0, usedResources=<memory:1024, vCores:1>,
usedCapacity=0.041666668, absoluteUsedCapacity=0.041666668, numApps=1,
numContainers=1
2015-10-22 10:10:42,603 INFO
org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.ParentQueue:
assignedContainer queue=root usedCapacity=0.041666668
absoluteUsedCapacity=0.041666668 used=<memory:1024, vCores:1>
cluster=<memory:24576, vCores:24>
2015-10-22 10:10:42,604 INFO
org.apache.hadoop.yarn.server.resourcemanager.security.NMTokenSecretManagerInRM:
Sending NMToken for nodeId : vanpghdcn3.pgdev.sap.corp:41419 for container :
container_1445462013958_0012_01_000001
2015-10-22 10:10:42,606 INFO
org.apache.hadoop.yarn.server.resourcemanager.rmcontainer.RMContainerImpl:
container_1445462013958_0012_01_000001 Container Transitioned from ALLOCATED
to ACQUIRED
2015-10-22 10:10:42,606 INFO
org.apache.hadoop.yarn.server.resourcemanager.security.NMTokenSecretManagerInRM:
Clear node set for appattempt_1445462013958_0012_000001
2015-10-22 10:10:42,606 INFO
org.apache.hadoop.yarn.server.resourcemanager.rmapp.attempt.RMAppAttemptImpl:
Storing attempt: AppId: application_1445462013958_0012 AttemptId:
appattempt_1445462013958_0012_000001 MasterContainer: Container:
[ContainerId: container_1445462013958_0012_01_000001, NodeId:
vanpghdcn3.pgdev.sap.corp:41419, NodeHttpAddress:
vanpghdcn3.pgdev.sap.corp:8042, Resource: <memory:1024, vCores:1>, Priority:
0, Token: Token { kind: ContainerToken, service: 10.165.28.145:41419 }, ]
2015-10-22 10:10:42,606 INFO
org.apache.hadoop.yarn.server.resourcemanager.rmapp.attempt.RMAppAttemptImpl:
appattempt_1445462013958_0012_000001 State change from SCHEDULED to
ALLOCATED_SAVING
2015-10-22 10:10:42,606 INFO
org.apache.hadoop.yarn.server.resourcemanager.rmapp.attempt.RMAppAttemptImpl:
appattempt_1445462013958_0012_000001 State change from ALLOCATED_SAVING to
ALLOCATED
2015-10-22 10:10:42,606 INFO
org.apache.hadoop.yarn.server.resourcemanager.amlauncher.AMLauncher:
Launching masterappattempt_1445462013958_0012_000001
2015-10-22 10:10:42,608 INFO
org.apache.hadoop.yarn.server.resourcemanager.amlauncher.AMLauncher: Setting
up container Container: [ContainerId:
container_1445462013958_0012_01_000001, NodeId:
vanpghdcn3.pgdev.sap.corp:41419, NodeHttpAddress:
vanpghdcn3.pgdev.sap.corp:8042, Resource: <memory:1024, vCores:1>, Priority:
0, Token: Token { kind: ContainerToken, service: 10.165.28.145:41419 }, ]
for AM appattempt_1445462013958_0012_000001
2015-10-22 10:10:42,608 INFO
org.apache.hadoop.yarn.server.resourcemanager.amlauncher.AMLauncher: Command
to launch container container_1445462013958_0012_01_000001 :
{{JAVA_HOME}}/bin/java,-server,-Xmx512m,-Djava.io.tmpdir={{PWD}}/tmp,'-Dspark.eventLog.dir=','-Dspark.driver.port=57819','-Dspark.app.name=Sparca
Application','-Dspark.executor.memory=1g','-Dspark.master=yarn-client','-Dspark.executor.id=driver','-Dspark.externalBlockStore.folderName=spark-10391661-8d35-40d9-8242-fe79bdc19d2d','-Dspark.fileserver.uri=http://10.161.43.118:57820','-Dspark.driver.appUIAddress=http://10.161.43.118:4040','-Dspark.driver.host=10.161.43.118','-Dspark.eventLog.enabled=false','-Dspark.yarn.jar=hdfs://vanpghdcn1.pgdev.sap.corp:8020/data/spark-assembly-1.4.0-hadoop2.6.0.jar','-Dspark.cores.max=6',-Dspark.yarn.app.container.log.dir=<LOG_DIR>,org.apache.spark.deploy.yarn.ExecutorLauncher,--arg,'10.161.43.118:57819',--executor-memory,1024m,--executor-cores,1,--num-executors
,2,1>,<LOG_DIR>/stdout,2>,<LOG_DIR>/stderr
2015-10-22 10:10:42,608 INFO
org.apache.hadoop.yarn.server.resourcemanager.security.AMRMTokenSecretManager:
Create AMRMToken for ApplicationAttempt:
appattempt_1445462013958_0012_000001
2015-10-22 10:10:42,608 INFO
org.apache.hadoop.yarn.server.resourcemanager.security.AMRMTokenSecretManager:
Creating password for appattempt_1445462013958_0012_000001
2015-10-22 10:10:42,640 INFO
org.apache.hadoop.yarn.server.resourcemanager.amlauncher.AMLauncher: Done
launching container Container: [ContainerId:
container_1445462013958_0012_01_000001, NodeId:
vanpghdcn3.pgdev.sap.corp:41419, NodeHttpAddress:
vanpghdcn3.pgdev.sap.corp:8042, Resource: <memory:1024, vCores:1>, Priority:
0, Token: Token { kind: ContainerToken, service: 10.165.28.145:41419 }, ]
for AM appattempt_1445462013958_0012_000001
2015-10-22 10:10:42,640 INFO
org.apache.hadoop.yarn.server.resourcemanager.rmapp.attempt.RMAppAttemptImpl:
appattempt_1445462013958_0012_000001 State change from ALLOCATED to LAUNCHED
2015-10-22 10:10:43,613 INFO
org.apache.hadoop.yarn.server.resourcemanager.rmcontainer.RMContainerImpl:
container_1445462013958_0012_01_000001 Container Transitioned from ACQUIRED
to RUNNING
2015-10-22 10:10:48,176 INFO SecurityLogger.org.apache.hadoop.ipc.Server:
Auth successful for appattempt_1445462013958_0012_000001 (auth:SIMPLE)
2015-10-22 10:10:48,188 INFO
org.apache.hadoop.yarn.server.resourcemanager.ApplicationMasterService: AM
registration appattempt_1445462013958_0012_000001
2015-10-22 10:10:48,188 INFO
org.apache.hadoop.yarn.server.resourcemanager.RMAuditLogger: USER=hdfs
IP=10.165.28.145	OPERATION=Register App Master
TARGET=ApplicationMasterService	RESULT=SUCCESS
APPID=application_1445462013958_0012
APPATTEMPTID=appattempt_1445462013958_0012_000001
2015-10-22 10:10:48,188 INFO
org.apache.hadoop.yarn.server.resourcemanager.rmapp.attempt.RMAppAttemptImpl:
appattempt_1445462013958_0012_000001 State change from LAUNCHED to RUNNING
2015-10-22 10:10:48,188 INFO
org.apache.hadoop.yarn.server.resourcemanager.rmapp.RMAppImpl:
application_1445462013958_0012 State change from ACCEPTED to RUNNING
2015-10-22 10:10:48,632 INFO
org.apache.hadoop.yarn.server.resourcemanager.rmcontainer.RMContainerImpl:
container_1445462013958_0012_01_000002 Container Transitioned from NEW to
ALLOCATED
2015-10-22 10:10:48,632 INFO
org.apache.hadoop.yarn.server.resourcemanager.RMAuditLogger: USER=hdfs
OPERATION=AM Allocated Container	TARGET=SchedulerApp	RESULT=SUCCESS
APPID=application_1445462013958_0012
CONTAINERID=container_1445462013958_0012_01_000002
2015-10-22 10:10:48,632 INFO
org.apache.hadoop.yarn.server.resourcemanager.scheduler.SchedulerNode:
Assigned container container_1445462013958_0012_01_000002 of capacity
<memory:2048, vCores:1> on host vanpghdcn3.pgdev.sap.corp:41419, which has 2
containers, <memory:3072, vCores:2> used and <memory:5120, vCores:6>
available after allocation
2015-10-22 10:10:48,632 INFO
org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.LeafQueue:
assignedContainer application attempt=appattempt_1445462013958_0012_000001
container=Container: [ContainerId: container_1445462013958_0012_01_000002,
NodeId: vanpghdcn3.pgdev.sap.corp:41419, NodeHttpAddress:
vanpghdcn3.pgdev.sap.corp:8042, Resource: <memory:2048, vCores:1>, Priority:
1, Token: null, ] queue=default: capacity=1.0, absoluteCapacity=1.0,
usedResources=<memory:1024, vCores:1>, usedCapacity=0.041666668,
absoluteUsedCapacity=0.041666668, numApps=1, numContainers=1
clusterResource=<memory:24576, vCores:24>
2015-10-22 10:10:48,633 INFO
org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.ParentQueue:
Re-sorting assigned queue: root.default stats: default: capacity=1.0,
absoluteCapacity=1.0, usedResources=<memory:3072, vCores:2>,
usedCapacity=0.125, absoluteUsedCapacity=0.125, numApps=1, numContainers=2
2015-10-22 10:10:48,633 INFO
org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.ParentQueue:
assignedContainer queue=root usedCapacity=0.125 absoluteUsedCapacity=0.125
used=<memory:3072, vCores:2> cluster=<memory:24576, vCores:24>
2015-10-22 10:10:48,819 INFO
org.apache.hadoop.yarn.server.resourcemanager.rmcontainer.RMContainerImpl:
container_1445462013958_0012_01_000003 Container Transitioned from NEW to
ALLOCATED
2015-10-22 10:10:48,819 INFO
org.apache.hadoop.yarn.server.resourcemanager.RMAuditLogger: USER=hdfs
OPERATION=AM Allocated Container	TARGET=SchedulerApp	RESULT=SUCCESS
APPID=application_1445462013958_0012
CONTAINERID=container_1445462013958_0012_01_000003
2015-10-22 10:10:48,819 INFO
org.apache.hadoop.yarn.server.resourcemanager.scheduler.SchedulerNode:
Assigned container container_1445462013958_0012_01_000003 of capacity
<memory:2048, vCores:1> on host vanpghdcn2.pgdev.sap.corp:36064, which has 1
containers, <memory:2048, vCores:1> used and <memory:6144, vCores:7>
available after allocation
2015-10-22 10:10:48,819 INFO
org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.LeafQueue:
assignedContainer application attempt=appattempt_1445462013958_0012_000001
container=Container: [ContainerId: container_1445462013958_0012_01_000003,
NodeId: vanpghdcn2.pgdev.sap.corp:36064, NodeHttpAddress:
vanpghdcn2.pgdev.sap.corp:8042, Resource: <memory:2048, vCores:1>, Priority:
1, Token: null, ] queue=default: capacity=1.0, absoluteCapacity=1.0,
usedResources=<memory:3072, vCores:2>, usedCapacity=0.125,
absoluteUsedCapacity=0.125, numApps=1, numContainers=2
clusterResource=<memory:24576, vCores:24>
2015-10-22 10:10:48,819 INFO
org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.ParentQueue:
Re-sorting assigned queue: root.default stats: default: capacity=1.0,
absoluteCapacity=1.0, usedResources=<memory:5120, vCores:3>,
usedCapacity=0.20833333, absoluteUsedCapacity=0.20833333, numApps=1,
numContainers=3
2015-10-22 10:10:48,820 INFO
org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.ParentQueue:
assignedContainer queue=root usedCapacity=0.20833333
absoluteUsedCapacity=0.20833333 used=<memory:5120, vCores:3>
cluster=<memory:24576, vCores:24>
2015-10-22 10:10:53,253 INFO
org.apache.hadoop.yarn.server.resourcemanager.security.NMTokenSecretManagerInRM:
Sending NMToken for nodeId : vanpghdcn3.pgdev.sap.corp:41419 for container :
container_1445462013958_0012_01_000002
2015-10-22 10:10:53,255 INFO
org.apache.hadoop.yarn.server.resourcemanager.rmcontainer.RMContainerImpl:
container_1445462013958_0012_01_000002 Container Transitioned from ALLOCATED
to ACQUIRED
2015-10-22 10:10:53,256 INFO
org.apache.hadoop.yarn.server.resourcemanager.security.NMTokenSecretManagerInRM:
Sending NMToken for nodeId : vanpghdcn2.pgdev.sap.corp:36064 for container :
container_1445462013958_0012_01_000003
2015-10-22 10:10:53,257 INFO
org.apache.hadoop.yarn.server.resourcemanager.rmcontainer.RMContainerImpl:
container_1445462013958_0012_01_000003 Container Transitioned from ALLOCATED
to ACQUIRED
2015-10-22 10:10:53,643 INFO
org.apache.hadoop.yarn.server.resourcemanager.rmcontainer.RMContainerImpl:
container_1445462013958_0012_01_000002 Container Transitioned from ACQUIRED
to RUNNING
2015-10-22 10:10:53,830 INFO
org.apache.hadoop.yarn.server.resourcemanager.rmcontainer.RMContainerImpl:
container_1445462013958_0012_01_000003 Container Transitioned from ACQUIRED
to RUNNING
2015-10-22 10:10:58,282 INFO
org.apache.hadoop.yarn.server.resourcemanager.scheduler.AppSchedulingInfo:
checking for deactivate of application :application_1445462013958_0012
2015-10-22 10:11:08,349 INFO
org.apache.hadoop.yarn.server.resourcemanager.rmapp.attempt.RMAppAttemptImpl:
Updating application attempt appattempt_1445462013958_0012_000001 with final
state: FINISHING, and exit status: -1000
2015-10-22 10:11:08,349 INFO
org.apache.hadoop.yarn.server.resourcemanager.rmapp.attempt.RMAppAttemptImpl:
appattempt_1445462013958_0012_000001 State change from RUNNING to
FINAL_SAVING
2015-10-22 10:11:08,349 INFO
org.apache.hadoop.yarn.server.resourcemanager.rmapp.RMAppImpl: Updating
application application_1445462013958_0012 with final state: FINISHING
2015-10-22 10:11:08,349 INFO
org.apache.hadoop.yarn.server.resourcemanager.rmapp.RMAppImpl:
application_1445462013958_0012 State change from RUNNING to FINAL_SAVING
2015-10-22 10:11:08,350 INFO
org.apache.hadoop.yarn.server.resourcemanager.recovery.RMStateStore:
Updating info for app: application_1445462013958_0012
2015-10-22 10:11:08,350 INFO
org.apache.hadoop.yarn.server.resourcemanager.rmapp.attempt.RMAppAttemptImpl:
appattempt_1445462013958_0012_000001 State change from FINAL_SAVING to
FINISHING
2015-10-22 10:11:08,350 INFO
org.apache.hadoop.yarn.server.resourcemanager.rmapp.RMAppImpl:
application_1445462013958_0012 State change from FINAL_SAVING to FINISHING
2015-10-22 10:11:08,453 INFO
org.apache.hadoop.yarn.server.resourcemanager.ApplicationMasterService:
application_1445462013958_0012 unregistered successfully. 
2015-10-22 10:11:08,692 INFO
org.apache.hadoop.yarn.server.resourcemanager.rmcontainer.RMContainerImpl:
container_1445462013958_0012_01_000002 Container Transitioned from RUNNING
to COMPLETED
2015-10-22 10:11:08,692 INFO
org.apache.hadoop.yarn.server.resourcemanager.scheduler.common.fica.FiCaSchedulerApp:
Completed container: container_1445462013958_0012_01_000002 in state:
COMPLETED event:FINISHED
2015-10-22 10:11:08,692 INFO
org.apache.hadoop.yarn.server.resourcemanager.RMAuditLogger: USER=hdfs
OPERATION=AM Released Container	TARGET=SchedulerApp	RESULT=SUCCESS
APPID=application_1445462013958_0012
CONTAINERID=container_1445462013958_0012_01_000002
2015-10-22 10:11:08,693 INFO
org.apache.hadoop.yarn.server.resourcemanager.scheduler.SchedulerNode:
Released container container_1445462013958_0012_01_000002 of capacity
<memory:2048, vCores:1> on host vanpghdcn3.pgdev.sap.corp:41419, which
currently has 1 containers, <memory:1024, vCores:1> used and <memory:7168,
vCores:7> available, release resources=true
2015-10-22 10:11:08,693 INFO
org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.LeafQueue:
default used=<memory:3072, vCores:2> numContainers=2 user=hdfs
user-resources=<memory:3072, vCores:2>
2015-10-22 10:11:08,693 INFO
org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.LeafQueue:
completedContainer container=Container: [ContainerId:
container_1445462013958_0012_01_000002, NodeId:
vanpghdcn3.pgdev.sap.corp:41419, NodeHttpAddress:
vanpghdcn3.pgdev.sap.corp:8042, Resource: <memory:2048, vCores:1>, Priority:
1, Token: Token { kind: ContainerToken, service: 10.165.28.145:41419 }, ]
queue=default: capacity=1.0, absoluteCapacity=1.0,
usedResources=<memory:3072, vCores:2>, usedCapacity=0.125,
absoluteUsedCapacity=0.125, numApps=1, numContainers=2
cluster=<memory:24576, vCores:24>
2015-10-22 10:11:08,693 INFO
org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.ParentQueue:
completedContainer queue=root usedCapacity=0.125 absoluteUsedCapacity=0.125
used=<memory:3072, vCores:2> cluster=<memory:24576, vCores:24>
2015-10-22 10:11:08,693 INFO
org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.ParentQueue:
Re-sorting completed queue: root.default stats: default: capacity=1.0,
absoluteCapacity=1.0, usedResources=<memory:3072, vCores:2>,
usedCapacity=0.125, absoluteUsedCapacity=0.125, numApps=1, numContainers=2
2015-10-22 10:11:08,693 INFO
org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.CapacityScheduler:
Application attempt appattempt_1445462013958_0012_000001 released container
container_1445462013958_0012_01_000002 on node: host:
vanpghdcn3.pgdev.sap.corp:41419 #containers=1 available=<memory:7168,
vCores:7> used=<memory:1024, vCores:1> with event: FINISHED
2015-10-22 10:11:08,704 INFO org.apache.hadoop.ipc.Server: Socket Reader #1
for port 8050: readAndProcess from client 10.161.43.118 threw exception
[java.io.IOException: Connection reset by peer]
java.io.IOException: Connection reset by peer
	at sun.nio.ch.FileDispatcherImpl.read0(Native Method)
	at sun.nio.ch.SocketDispatcher.read(SocketDispatcher.java:39)
	at sun.nio.ch.IOUtil.readIntoNativeBuffer(IOUtil.java:223)
	at sun.nio.ch.IOUtil.read(IOUtil.java:197)
	at sun.nio.ch.SocketChannelImpl.read(SocketChannelImpl.java:379)
	at org.apache.hadoop.ipc.Server.channelRead(Server.java:2603)
	at org.apache.hadoop.ipc.Server.access$2800(Server.java:136)
	at org.apache.hadoop.ipc.Server$Connection.readAndProcess(Server.java:1481)
	at org.apache.hadoop.ipc.Server$Listener.doRead(Server.java:771)
	at org.apache.hadoop.ipc.Server$Listener$Reader.doRunLoop(Server.java:637)
	at org.apache.hadoop.ipc.Server$Listener$Reader.run(Server.java:608)
2015-10-22 10:11:08,920 INFO
org.apache.hadoop.yarn.server.resourcemanager.rmcontainer.RMContainerImpl:
container_1445462013958_0012_01_000003 Container Transitioned from RUNNING
to COMPLETED
2015-10-22 10:11:08,920 INFO
org.apache.hadoop.yarn.server.resourcemanager.scheduler.common.fica.FiCaSchedulerApp:
Completed container: container_1445462013958_0012_01_000003 in state:
COMPLETED event:FINISHED
2015-10-22 10:11:08,920 INFO
org.apache.hadoop.yarn.server.resourcemanager.RMAuditLogger: USER=hdfs
OPERATION=AM Released Container	TARGET=SchedulerApp	RESULT=SUCCESS
APPID=application_1445462013958_0012
CONTAINERID=container_1445462013958_0012_01_000003
2015-10-22 10:11:08,920 INFO
org.apache.hadoop.yarn.server.resourcemanager.scheduler.SchedulerNode:
Released container container_1445462013958_0012_01_000003 of capacity
<memory:2048, vCores:1> on host vanpghdcn2.pgdev.sap.corp:36064, which
currently has 0 containers, <memory:0, vCores:0> used and <memory:8192,
vCores:8> available, release resources=true
2015-10-22 10:11:08,920 INFO
org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.LeafQueue:
default used=<memory:1024, vCores:1> numContainers=1 user=hdfs
user-resources=<memory:1024, vCores:1>
2015-10-22 10:11:08,921 INFO
org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.LeafQueue:
completedContainer container=Container: [ContainerId:
container_1445462013958_0012_01_000003, NodeId:
vanpghdcn2.pgdev.sap.corp:36064, NodeHttpAddress:
vanpghdcn2.pgdev.sap.corp:8042, Resource: <memory:2048, vCores:1>, Priority:
1, Token: Token { kind: ContainerToken, service: 10.165.28.143:36064 }, ]
queue=default: capacity=1.0, absoluteCapacity=1.0,
usedResources=<memory:1024, vCores:1>, usedCapacity=0.041666668,
absoluteUsedCapacity=0.041666668, numApps=1, numContainers=1
cluster=<memory:24576, vCores:24>
2015-10-22 10:11:08,921 INFO
org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.ParentQueue:
completedContainer queue=root usedCapacity=0.041666668
absoluteUsedCapacity=0.041666668 used=<memory:1024, vCores:1>
cluster=<memory:24576, vCores:24>
2015-10-22 10:11:08,921 INFO
org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.ParentQueue:
Re-sorting completed queue: root.default stats: default: capacity=1.0,
absoluteCapacity=1.0, usedResources=<memory:1024, vCores:1>,
usedCapacity=0.041666668, absoluteUsedCapacity=0.041666668, numApps=1,
numContainers=1
2015-10-22 10:11:08,921 INFO
org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.CapacityScheduler:
Application attempt appattempt_1445462013958_0012_000001 released container
container_1445462013958_0012_01_000003 on node: host:
vanpghdcn2.pgdev.sap.corp:36064 #containers=0 available=<memory:8192,
vCores:8> used=<memory:0, vCores:0> with event: FINISHED
2015-10-22 10:11:09,694 INFO
org.apache.hadoop.yarn.server.resourcemanager.rmcontainer.RMContainerImpl:
container_1445462013958_0012_01_000001 Container Transitioned from RUNNING
to COMPLETED
2015-10-22 10:11:09,694 INFO
org.apache.hadoop.yarn.server.resourcemanager.ApplicationMasterService:
Unregistering app attempt : appattempt_1445462013958_0012_000001
2015-10-22 10:11:09,694 INFO
org.apache.hadoop.yarn.server.resourcemanager.scheduler.common.fica.FiCaSchedulerApp:
Completed container: container_1445462013958_0012_01_000001 in state:
COMPLETED event:FINISHED
2015-10-22 10:11:09,694 INFO
org.apache.hadoop.yarn.server.resourcemanager.RMAuditLogger: USER=hdfs
OPERATION=AM Released Container	TARGET=SchedulerApp	RESULT=SUCCESS
APPID=application_1445462013958_0012
CONTAINERID=container_1445462013958_0012_01_000001
2015-10-22 10:11:09,695 INFO
org.apache.hadoop.yarn.server.resourcemanager.scheduler.SchedulerNode:
Released container container_1445462013958_0012_01_000001 of capacity
<memory:1024, vCores:1> on host vanpghdcn3.pgdev.sap.corp:41419, which
currently has 0 containers, <memory:0, vCores:0> used and <memory:8192,
vCores:8> available, release resources=true
2015-10-22 10:11:09,695 INFO
org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.LeafQueue:
default used=<memory:0, vCores:0> numContainers=0 user=hdfs
user-resources=<memory:0, vCores:0>
2015-10-22 10:11:09,695 INFO
org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.LeafQueue:
completedContainer container=Container: [ContainerId:
container_1445462013958_0012_01_000001, NodeId:
vanpghdcn3.pgdev.sap.corp:41419, NodeHttpAddress:
vanpghdcn3.pgdev.sap.corp:8042, Resource: <memory:1024, vCores:1>, Priority:
0, Token: Token { kind: ContainerToken, service: 10.165.28.145:41419 }, ]
queue=default: capacity=1.0, absoluteCapacity=1.0, usedResources=<memory:0,
vCores:0>, usedCapacity=0.0, absoluteUsedCapacity=0.0, numApps=1,
numContainers=0 cluster=<memory:24576, vCores:24>
2015-10-22 10:11:09,695 INFO
org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.ParentQueue:
completedContainer queue=root usedCapacity=0.0 absoluteUsedCapacity=0.0
used=<memory:0, vCores:0> cluster=<memory:24576, vCores:24>
2015-10-22 10:11:09,695 INFO
org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.ParentQueue:
Re-sorting completed queue: root.default stats: default: capacity=1.0,
absoluteCapacity=1.0, usedResources=<memory:0, vCores:0>, usedCapacity=0.0,
absoluteUsedCapacity=0.0, numApps=1, numContainers=0
2015-10-22 10:11:09,695 INFO
org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.CapacityScheduler:
Application attempt appattempt_1445462013958_0012_000001 released container
container_1445462013958_0012_01_000001 on node: host:
vanpghdcn3.pgdev.sap.corp:41419 #containers=0 available=<memory:8192,
vCores:8> used=<memory:0, vCores:0> with event: FINISHED
2015-10-22 10:11:09,694 INFO
org.apache.hadoop.yarn.server.resourcemanager.security.AMRMTokenSecretManager:
Application finished, removing password for
appattempt_1445462013958_0012_000001
2015-10-22 10:11:09,696 INFO
org.apache.hadoop.yarn.server.resourcemanager.rmapp.attempt.RMAppAttemptImpl:
appattempt_1445462013958_0012_000001 State change from FINISHING to FINISHED
2015-10-22 10:11:09,696 INFO
org.apache.hadoop.yarn.server.resourcemanager.rmapp.RMAppImpl:
application_1445462013958_0012 State change from FINISHING to FINISHED
2015-10-22 10:11:09,696 INFO
org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.CapacityScheduler:
Application Attempt appattempt_1445462013958_0012_000001 is done.
finalState=FINISHED
2015-10-22 10:11:09,696 INFO
org.apache.hadoop.yarn.server.resourcemanager.RMAuditLogger: USER=hdfs
OPERATION=Application Finished - Succeeded	TARGET=RMAppManager
RESULT=SUCCESS	APPID=application_1445462013958_0012
2015-10-22 10:11:09,696 INFO
org.apache.hadoop.yarn.server.resourcemanager.scheduler.AppSchedulingInfo:
Application application_1445462013958_0012 requests cleared
2015-10-22 10:11:09,696 INFO
org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.LeafQueue:
Application removed - appId: application_1445462013958_0012 user: hdfs
queue: default #user-pending-applications: 0 #user-active-applications: 0
#queue-pending-applications: 0 #queue-active-applications: 0
2015-10-22 10:11:09,696 INFO
org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.ParentQueue:
Application removed - appId: application_1445462013958_0012 user: hdfs
leaf-queue of parent: root #applications: 0
2015-10-22 10:11:09,696 INFO
org.apache.hadoop.yarn.server.resourcemanager.RMAppManager$ApplicationSummary:
appId=application_1445462013958_0012,name=Sparca
Application,user=hdfs,queue=default,state=FINISHED,trackingUrl=http://vanpghdcn1:8088/proxy/application_1445462013958_0012/,appMasterHost=10.165.28.145,startTime=1445533842429,finishTime=1445533868349,finalStatus=SUCCEEDED,memorySeconds=109990,vcoreSeconds=67,preemptedAMContainers=0,preemptedNonAMContainers=0,preemptedResources=<memory:0\,
vCores:0>,applicationType=SPARK
2015-10-22 10:11:09,696 INFO
org.apache.hadoop.yarn.server.resourcemanager.amlauncher.AMLauncher:
Cleaning master appattempt_1445462013958_0012_000001
2015-10-22 10:11:10,719 INFO
org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.CapacityScheduler:
Null container completed...
2015-10-22 10:11:10,925 INFO
org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.CapacityScheduler:
Null container completed...



It's worth mentioning that 10.161.43.118 is the machine I'm running Eclipse
on. My test app just reads a CSV into a DataFrame and does a count.
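
For completeness, the app is essentially the following (a trimmed-down
sketch rather than my exact code; the CSV path, delimiter, column names and
schema are placeholders):

import org.apache.spark.sql.{Row, SQLContext}
import org.apache.spark.sql.types.{StringType, StructField, StructType}

// sc is the SparkContext created with the yarn-client conf shown earlier
val sqlContext = new SQLContext(sc)

// Placeholder path, delimiter and schema
val schema = StructType(Seq(
  StructField("col1", StringType),
  StructField("col2", StringType)))

val rowRdd = sc.textFile("hdfs:///path/to/test.csv")
  .map(_.split(","))
  .map(cols => Row(cols(0), cols(1)))

val df = sqlContext.createDataFrame(rowRdd, schema)
println(df.count())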

Thanks



