Posted to user@spark.apache.org by Li Li <fa...@gmail.com> on 2016/10/20 10:51:45 UTC

spark pi example fail on yarn

I am setting up a small YARN/Spark cluster. The Hadoop/YARN version is
2.7.3, and I can run the wordcount MapReduce job correctly on YARN.
I am using spark-2.0.1-bin-hadoop2.7, submitting with the following command:
~/spark-2.0.1-bin-hadoop2.7$ ./bin/spark-submit --class
org.apache.spark.examples.SparkPi --master yarn-client
examples/jars/spark-examples_2.11-2.0.1.jar 10000
It fails, and the first error is:
16/10/20 18:12:03 INFO storage.BlockManagerMaster: Registered
BlockManager BlockManagerId(driver, 10.161.219.189, 39161)
16/10/20 18:12:03 INFO handler.ContextHandler: Started
o.s.j.s.ServletContextHandler@76ad6715{/metrics/json,null,AVAILABLE}
16/10/20 18:12:12 INFO
cluster.YarnSchedulerBackend$YarnSchedulerEndpoint: ApplicationMaster
registered as NettyRpcEndpointRef(null)
16/10/20 18:12:12 INFO cluster.YarnClientSchedulerBackend: Add WebUI
Filter. org.apache.hadoop.yarn.server.webproxy.amfilter.AmIpFilter,
Map(PROXY_HOSTS -> ai-hz1-spark1, PROXY_URI_BASES ->
http://ai-hz1-spark1:8088/proxy/application_1476957324184_0002),
/proxy/application_1476957324184_0002
16/10/20 18:12:12 INFO ui.JettyUtils: Adding filter:
org.apache.hadoop.yarn.server.webproxy.amfilter.AmIpFilter
16/10/20 18:12:12 INFO cluster.YarnClientSchedulerBackend:
SchedulerBackend is ready for scheduling beginning after waiting
maxRegisteredResourcesWaitingTime: 30000(ms)
16/10/20 18:12:12 WARN spark.SparkContext: Use an existing
SparkContext, some configuration may not take effect.
16/10/20 18:12:12 INFO handler.ContextHandler: Started
o.s.j.s.ServletContextHandler@489091bd{/SQL,null,AVAILABLE}
16/10/20 18:12:12 INFO handler.ContextHandler: Started
o.s.j.s.ServletContextHandler@1de9b505{/SQL/json,null,AVAILABLE}
16/10/20 18:12:12 INFO handler.ContextHandler: Started
o.s.j.s.ServletContextHandler@378f002a{/SQL/execution,null,AVAILABLE}
16/10/20 18:12:12 INFO handler.ContextHandler: Started
o.s.j.s.ServletContextHandler@2cc75074{/SQL/execution/json,null,AVAILABLE}
16/10/20 18:12:12 INFO handler.ContextHandler: Started
o.s.j.s.ServletContextHandler@2d64160c{/static/sql,null,AVAILABLE}
16/10/20 18:12:12 INFO internal.SharedState: Warehouse path is
'/home/hadoop/spark-2.0.1-bin-hadoop2.7/spark-warehouse'.
16/10/20 18:12:13 INFO spark.SparkContext: Starting job: reduce at
SparkPi.scala:38
16/10/20 18:12:13 INFO scheduler.DAGScheduler: Got job 0 (reduce at
SparkPi.scala:38) with 10000 output partitions
16/10/20 18:12:13 INFO scheduler.DAGScheduler: Final stage:
ResultStage 0 (reduce at SparkPi.scala:38)
16/10/20 18:12:13 INFO scheduler.DAGScheduler: Parents of final stage: List()
16/10/20 18:12:13 INFO scheduler.DAGScheduler: Missing parents: List()
16/10/20 18:12:13 INFO scheduler.DAGScheduler: Submitting ResultStage
0 (MapPartitionsRDD[1] at map at SparkPi.scala:34), which has no
missing parents
16/10/20 18:12:13 INFO memory.MemoryStore: Block broadcast_0 stored as
values in memory (estimated size 1832.0 B, free 366.3 MB)
16/10/20 18:12:13 INFO memory.MemoryStore: Block broadcast_0_piece0
stored as bytes in memory (estimated size 1169.0 B, free 366.3 MB)
16/10/20 18:12:13 INFO storage.BlockManagerInfo: Added
broadcast_0_piece0 in memory on 10.161.219.189:39161 (size: 1169.0 B,
free: 366.3 MB)
16/10/20 18:12:13 INFO spark.SparkContext: Created broadcast 0 from
broadcast at DAGScheduler.scala:1012
16/10/20 18:12:13 INFO scheduler.DAGScheduler: Submitting 10000
missing tasks from ResultStage 0 (MapPartitionsRDD[1] at map at
SparkPi.scala:34)
16/10/20 18:12:13 INFO cluster.YarnScheduler: Adding task set 0.0 with
10000 tasks
16/10/20 18:12:14 ERROR cluster.YarnClientSchedulerBackend: Yarn
application has already exited with state FINISHED!
16/10/20 18:12:14 INFO server.ServerConnector: Stopped
ServerConnector@389adf1d{HTTP/1.1}{0.0.0.0:4040}
16/10/20 18:12:14 INFO handler.ContextHandler: Stopped
o.s.j.s.ServletContextHandler@841e575{/stages/stage/kill,null,UNAVAILABLE}
16/10/20 18:12:14 INFO handler.ContextHandler: Stopped
o.s.j.s.ServletContextHandler@66629f63{/api,null,UNAVAILABLE}
16/10/20 18:12:14 INFO handler.ContextHandler: Stopped
o.s.j.s.ServletContextHandler@2b62442c{/,null,UNAVAILABLE}


I also used the yarn logs command to get the logs from YARN (the full
log is very lengthy; see the attachment):
16/10/20 18:12:03 INFO yarn.ExecutorRunnable:
===============================================================================
YARN executor launch context:
  env:
    CLASSPATH ->
{{PWD}}<CPS>{{PWD}}/__spark_conf__<CPS>{{PWD}}/__spark_libs__/*<CPS>$HADOOP_CONF_DIR<CPS>$HADOOP_COMMON_HOME/share/hadoop/common/*<CPS>$HADOOP_COMMON_HOME/share/hadoop/common/lib/*<CPS>$HADOOP_HDFS_HOME/share/hadoop/hdfs/*<CPS>$HADOOP_HDFS_HOME/share/hadoop/hdfs/lib/*<CPS>$HADOOP_YARN_HOME/share/hadoop/yarn/*<CPS>$HADOOP_YARN_HOME/share/hadoop/yarn/lib/*<CPS>$HADOOP_MAPRED_HOME/share/hadoop/mapreduce/*<CPS>$HADOOP_MAPRED_HOME/share/hadoop/mapreduce/lib/*
    SPARK_LOG_URL_STDERR ->
http://ai-hz1-spark3:8042/node/containerlogs/container_1476957324184_0002_01_000003/hadoop/stderr?start=-4096
    SPARK_YARN_STAGING_DIR ->
hdfs://ai-hz1-spark1/user/hadoop/.sparkStaging/application_1476957324184_0002
    SPARK_USER -> hadoop
    SPARK_YARN_MODE -> true
    SPARK_LOG_URL_STDOUT ->
http://ai-hz1-spark3:8042/node/containerlogs/container_1476957324184_0002_01_000003/hadoop/stdout?start=-4096

  command:
    {{JAVA_HOME}}/bin/java -server -Xmx1024m
-Djava.io.tmpdir={{PWD}}/tmp '-Dspark.driver.port=60657'
-Dspark.yarn.app.container.log.dir=<LOG_DIR>
-XX:OnOutOfMemoryError='kill %p'
org.apache.spark.executor.CoarseGrainedExecutorBackend --driver-url
spark://CoarseGrainedScheduler@10.161.219.189:60657 --executor-id 2
--hostname ai-hz1-spark3 --cores 1 --app-id
application_1476957324184_0002 --user-class-path file:$PWD/__app__.jar
1> <LOG_DIR>/stdout 2> <LOG_DIR>/stderr
===============================================================================

16/10/20 18:12:03 INFO impl.ContainerManagementProtocolProxy: Opening
proxy : ai-hz1-spark5:55857
16/10/20 18:12:03 INFO impl.ContainerManagementProtocolProxy: Opening
proxy : ai-hz1-spark3:51061
16/10/20 18:12:04 ERROR yarn.ApplicationMaster: RECEIVED SIGNAL TERM
16/10/20 18:12:04 INFO yarn.ApplicationMaster: Final app status:
UNDEFINED, exitCode: 16, (reason: Shutdown hook called before final
status was reported.)
16/10/20 18:12:04 INFO util.ShutdownHookManager: Shutdown hook called
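
(A note on retrieving these logs: assuming YARN log aggregation is
enabled, the aggregated container logs quoted above can be pulled with
the YARN CLI using the application id shown in the driver output,
roughly:

  yarn logs -applicationId application_1476957324184_0002 > app.log

The NodeManager's own log files are separate and live under each
node's Hadoop log directory; they are not part of this output.)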

Re: spark pi example fail on yarn

Posted by Li Li <fa...@gmail.com>.
Yes, when I use yarn-cluster mode it works correctly. What's wrong with
yarn-client? The spark-shell also does not work, because it runs in
client mode. Is there any solution for this?

On Thu, Oct 20, 2016 at 11:32 PM, Amit Tank
<am...@gmail.com> wrote:
> I recently started learning Spark, so I may be completely wrong here, but I
> was facing a similar problem with SparkPi on YARN. After changing to
> yarn-cluster mode, it worked perfectly fine.
>
> Thank you,
> Amit
>
>
> On Thursday, October 20, 2016, Saisai Shao <sa...@gmail.com> wrote:
>>
>> Looks like ApplicationMaster is killed by SIGTERM.
>>
>> 16/10/20 18:12:04 ERROR yarn.ApplicationMaster: RECEIVED SIGNAL TERM
>> 16/10/20 18:12:04 INFO yarn.ApplicationMaster: Final app status:
>>
>> This container may have been killed by the YARN NodeManager or another
>> process; you'd better check the YARN logs to dig out more details.
>>
>> Thanks
>> Saisai
>>
>> On Thu, Oct 20, 2016 at 6:51 PM, Li Li <fa...@gmail.com> wrote:
>>> [...]


Re: spark pi example fail on yarn

Posted by Li Li <fa...@gmail.com>.
Which log file should I check?

On Thu, Oct 20, 2016 at 11:32 PM, Amit Tank
<am...@gmail.com> wrote:
> [...]


Re: spark pi example fail on yarn

Posted by Amit Tank <am...@gmail.com>.
I recently started learning Spark, so I may be completely wrong here, but I
was facing a similar problem with SparkPi on YARN. After changing to
yarn-cluster mode, it worked perfectly fine.

Thank you,
Amit
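
(For reference, with the spark-2.0.1-bin-hadoop2.7 distribution used in
this thread, a cluster-mode submission would look roughly like this,
reusing the jar and argument from the original post:

  ./bin/spark-submit --class org.apache.spark.examples.SparkPi \
    --master yarn --deploy-mode cluster \
    examples/jars/spark-examples_2.11-2.0.1.jar 10000

In Spark 2.x the yarn-client/yarn-cluster master URLs are deprecated in
favor of --master yarn plus --deploy-mode client|cluster; in cluster
mode the driver runs inside the ApplicationMaster container on the
cluster rather than in the local spark-submit process.)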

On Thursday, October 20, 2016, Saisai Shao <sa...@gmail.com> wrote:

> Looks like ApplicationMaster is killed by SIGTERM.
>
> 16/10/20 18:12:04 ERROR yarn.ApplicationMaster: RECEIVED SIGNAL TERM
> 16/10/20 18:12:04 INFO yarn.ApplicationMaster: Final app status:
>
> This container may have been killed by the YARN NodeManager or another
> process; you'd better check the YARN logs to dig out more details.
>
> Thanks
> Saisai
>
> On Thu, Oct 20, 2016 at 6:51 PM, Li Li <fancyerii@gmail.com> wrote:
>
>> [...]

Re: spark pi example fail on yarn

Posted by Xi Shen <da...@gmail.com>.
I see, I had this issue before. I think you are using Java 8, right?
I ask because the Java 8 JVM requires more bootstrap heap memory.

Turning off the memory check is an unsafe way to avoid this issue. I think
it is better to increase the memory ratio, like this:

  <property>
    <name>yarn.nodemanager.vmem-pmem-ratio</name>
    <value>3.15</value>
  </property>
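
(For context on the value: a container's virtual memory cap is its
physical allocation multiplied by this ratio, so with the 1 GB
ApplicationMaster container reported in the NodeManager warning later
in this thread, the default ratio of 2.1 caps it at about 2.1 GB,
while 3.15 allows roughly 3.15 GB, comfortably above the ~2.2 GB that
was actually in use. Pick the ratio to fit your own container sizes.)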


On Fri, Oct 21, 2016 at 11:15 AM Li Li <fa...@gmail.com> wrote:

I modified yarn-site.xml yarn.nodemanager.vmem-check-enabled to false
and it works for yarn-client and spark-shell

[...]

-- 


Thanks,
David S.

Re: spark pi example fail on yarn

Posted by Li Li <fa...@gmail.com>.
I set yarn.nodemanager.vmem-check-enabled to false in yarn-site.xml,
and now it works for both yarn-client and spark-shell.
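
(For anyone hitting the same warning, the change described here follows
the same yarn-site.xml pattern Xi Shen showed above; a sketch of the
property, set on each NodeManager and followed by a NodeManager
restart:

  <property>
    <name>yarn.nodemanager.vmem-check-enabled</name>
    <value>false</value>
  </property>

As Xi Shen notes, disabling the check is the blunter option; raising
yarn.nodemanager.vmem-pmem-ratio keeps the limit in place with more
headroom.)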

On Fri, Oct 21, 2016 at 10:59 AM, Li Li <fa...@gmail.com> wrote:
> [...]


Re: spark pi example fail on yarn

Posted by Li Li <fa...@gmail.com>.
I found a warning in the NodeManager log. Is the virtual memory limit
exceeded? How should I configure YARN to solve this problem?

2016-10-21 10:41:12,588 INFO
org.apache.hadoop.yarn.server.nodemanager.containermanager.monitor.ContainersMonitorImpl:
Memory usage of ProcessTree 20299 for container-id
container_1477017445921_0001_02_000001: 335.1 MB of 1 GB physical
memory used; 2.2 GB of 2.1 GB virtual memory used
2016-10-21 10:41:12,589 WARN
org.apache.hadoop.yarn.server.nodemanager.containermanager.monitor.ContainersMonitorImpl:
Process tree for container: container_1477017445921_0001_02_000001 has
processes older than 1 iteration running over the configured limit.
Limit=2254857728, current usage = 2338873344
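
(This warning is the virtual memory check firing. Limit=2254857728 is
about 2.1 GB, i.e. the 1 GB container allocation times the default
yarn.nodemanager.vmem-pmem-ratio of 2.1, and the reported usage of
2338873344 bytes (about 2.2 GB) is just over it, so the NodeManager
terminates the container; that is the SIGTERM the ApplicationMaster
logged. Assuming stock Hadoop 2.7 defaults (check enabled, ratio 2.1),
the two knobs involved are yarn.nodemanager.vmem-pmem-ratio and
yarn.nodemanager.vmem-check-enabled, which are the two settings
discussed elsewhere in this thread.)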

On Fri, Oct 21, 2016 at 8:49 AM, Saisai Shao <sa...@gmail.com> wrote:
> It is not that Spark has difficulty communicating with YARN; it simply
> means the AM exited with FINISHED state.
>
> I'm guessing it might be related to memory constraints for the container;
> please check the YARN RM and NM logs to find out more details.
>
> Thanks
> Saisai
>
> On Fri, Oct 21, 2016 at 8:14 AM, Xi Shen <da...@gmail.com> wrote:
>>
>> 16/10/20 18:12:14 ERROR cluster.YarnClientSchedulerBackend: Yarn
>> application has already exited with state FINISHED!
>>
>> From this, I think Spark is having difficulty communicating with YARN. You
>> should check your Spark log.
>>
>>
>> On Fri, Oct 21, 2016 at 8:06 AM Li Li <fa...@gmail.com> wrote:
>>>
>>> which log file should I
>>>
>>> On Thu, Oct 20, 2016 at 10:02 PM, Saisai Shao <sa...@gmail.com>
>>> wrote:
>>> > Looks like ApplicationMaster is killed by SIGTERM.
>>> >
>>> > 16/10/20 18:12:04 ERROR yarn.ApplicationMaster: RECEIVED SIGNAL TERM
>>> > 16/10/20 18:12:04 INFO yarn.ApplicationMaster: Final app status:
>>> >
>>> > This container may be killed by yarn NodeManager or other processes,
>>> > you'd
>>> > better check yarn log to dig out more details.
>>> >
>>> > Thanks
>>> > Saisai
>>> >
>>> > On Thu, Oct 20, 2016 at 6:51 PM, Li Li <fa...@gmail.com> wrote:
>>> >>
>>> >> I am setting up a small yarn/spark cluster. hadoop/yarn version is
>>> >> 2.7.3 and I can run wordcount map-reduce correctly in yarn.
>>> >> And I am using  spark-2.0.1-bin-hadoop2.7 using command:
>>> >> ~/spark-2.0.1-bin-hadoop2.7$ ./bin/spark-submit --class
>>> >> org.apache.spark.examples.SparkPi --master yarn-client
>>> >> examples/jars/spark-examples_2.11-2.0.1.jar 10000
>>> >> it fails and the first error is:
>>> >> 16/10/20 18:12:14 ERROR cluster.YarnClientSchedulerBackend: Yarn
>>> >> application has already exited with state FINISHED!
>>> >> [...]
>>> >> I also use yarn log to get logs from yarn(total log is very lengthy in
>>> >> attachement):
>>> >> [...]
>>> >> 16/10/20 18:12:04 ERROR yarn.ApplicationMaster: RECEIVED SIGNAL TERM
>>> >> 16/10/20 18:12:04 INFO yarn.ApplicationMaster: Final app status:
>>> >> UNDEFINED, exitCode: 16, (reason: Shutdown hook called before final
>>> >> status was reported.)
>>> >> 16/10/20 18:12:04 INFO util.ShutdownHookManager: Shutdown hook called
>>> >>
>>> >> ---------------------------------------------------------------------
>>> >> To unsubscribe e-mail: user-unsubscribe@spark.apache.org
>>> >
>>> >
>>>
>>> ---------------------------------------------------------------------
>>> To unsubscribe e-mail: user-unsubscribe@spark.apache.org
>>>
>> --
>>
>>
>> Thanks,
>> David S.
>
>

---------------------------------------------------------------------
To unsubscribe e-mail: user-unsubscribe@spark.apache.org


Re: spark pi example fail on yarn

Posted by Saisai Shao <sa...@gmail.com>.
It is not that Spark has difficulty communicating with YARN; it simply means
the AM exited with the FINISHED state.

I'm guessing it might be related to memory constraints for the container;
please check the YARN RM and NM logs to find out more details.
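
For example, on a stock Apache Hadoop install the RM/NM daemon logs live
under $HADOOP_HOME/logs (distribution packages often put them under
/var/log/hadoop-yarn instead), so a rough sketch like this can surface the
kill reason:

    # On the ResourceManager host: trace what happened to the application.
    grep -i application_1476957324184_0002 \
        $HADOOP_HOME/logs/yarn-*-resourcemanager-*.log | tail -n 50

    # On each NodeManager host: check whether a container was killed,
    # e.g. for exceeding its physical or virtual memory limit.
    grep -i -B2 -A5 'killing container' \
        $HADOOP_HOME/logs/yarn-*-nodemanager-*.log

If a container went over a memory limit, the NodeManager log says so
explicitly, including whether it was the physical or the virtual limit.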

Thanks
Saisai

On Fri, Oct 21, 2016 at 8:14 AM, Xi Shen <da...@gmail.com> wrote:

> 16/10/20 18:12:14 ERROR cluster.YarnClientSchedulerBackend: Yarn
> application has already exited with state FINISHED!
>
>  From this, I think it is spark has difficult communicating with YARN. You
> should check your Spark log.
>
>
> On Fri, Oct 21, 2016 at 8:06 AM Li Li <fa...@gmail.com> wrote:
>
> which log file should I
>
> On Thu, Oct 20, 2016 at 10:02 PM, Saisai Shao <sa...@gmail.com>
> wrote:
> > Looks like ApplicationMaster is killed by SIGTERM.
> >
> > 16/10/20 18:12:04 ERROR yarn.ApplicationMaster: RECEIVED SIGNAL TERM
> > 16/10/20 18:12:04 INFO yarn.ApplicationMaster: Final app status:
> >
> > This container may be killed by yarn NodeManager or other processes,
> you'd
> > better check yarn log to dig out more details.
> >
> > Thanks
> > Saisai
> >
> > On Thu, Oct 20, 2016 at 6:51 PM, Li Li <fa...@gmail.com> wrote:
> >>
> >> [...]
> >> 16/10/20 18:12:14 ERROR cluster.YarnClientSchedulerBackend: Yarn
> >> application has already exited with state FINISHED!
> >> [...]
> >> 16/10/20 18:12:04 ERROR yarn.ApplicationMaster: RECEIVED SIGNAL TERM
> >> 16/10/20 18:12:04 INFO yarn.ApplicationMaster: Final app status:
> >> UNDEFINED, exitCode: 16, (reason: Shutdown hook called before final
> >> status was reported.)
> >> 16/10/20 18:12:04 INFO util.ShutdownHookManager: Shutdown hook called
> >>
> >> ---------------------------------------------------------------------
> >> To unsubscribe e-mail: user-unsubscribe@spark.apache.org
> >
> >
>
> ---------------------------------------------------------------------
> To unsubscribe e-mail: user-unsubscribe@spark.apache.org
>
> --
>
>
> Thanks,
> David S.
>

Re: spark pi example fail on yarn

Posted by Xi Shen <da...@gmail.com>.
16/10/20 18:12:14 ERROR cluster.YarnClientSchedulerBackend: Yarn
application has already exited with state FINISHED!

 From this, I think Spark is having difficulty communicating with YARN. You
should check your Spark log.
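
In yarn-client mode the driver log is just the spark-submit console output
already pasted above; the executor logs are served by the NodeManagers at the
URLs printed in the YARN executor launch context of the original post. As a
sketch (this only works while the NodeManager still holds the container logs):

    # Tail of the executor stderr for container ..._000003 on ai-hz1-spark3
    curl 'http://ai-hz1-spark3:8042/node/containerlogs/container_1476957324184_0002_01_000003/hadoop/stderr?start=-4096'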


On Fri, Oct 21, 2016 at 8:06 AM Li Li <fa...@gmail.com> wrote:

which log file should I

On Thu, Oct 20, 2016 at 10:02 PM, Saisai Shao <sa...@gmail.com>
wrote:
> Looks like ApplicationMaster is killed by SIGTERM.
>
> 16/10/20 18:12:04 ERROR yarn.ApplicationMaster: RECEIVED SIGNAL TERM
> 16/10/20 18:12:04 INFO yarn.ApplicationMaster: Final app status:
>
> This container may be killed by yarn NodeManager or other processes, you'd
> better check yarn log to dig out more details.
>
> Thanks
> Saisai
>
> On Thu, Oct 20, 2016 at 6:51 PM, Li Li <fa...@gmail.com> wrote:
>>
>> [...]
>> 16/10/20 18:12:14 ERROR cluster.YarnClientSchedulerBackend: Yarn
>> application has already exited with state FINISHED!
>> [...]
>> 16/10/20 18:12:04 ERROR yarn.ApplicationMaster: RECEIVED SIGNAL TERM
>> 16/10/20 18:12:04 INFO yarn.ApplicationMaster: Final app status:
>> UNDEFINED, exitCode: 16, (reason: Shutdown hook called before final
>> status was reported.)
>> 16/10/20 18:12:04 INFO util.ShutdownHookManager: Shutdown hook called
>>
>> ---------------------------------------------------------------------
>> To unsubscribe e-mail: user-unsubscribe@spark.apache.org
>
>

---------------------------------------------------------------------
To unsubscribe e-mail: user-unsubscribe@spark.apache.org

-- 


Thanks,
David S.

Re: spark pi example fail on yarn

Posted by Li Li <fa...@gmail.com>.
which log file should I check?

On Thu, Oct 20, 2016 at 10:02 PM, Saisai Shao <sa...@gmail.com> wrote:
> Looks like ApplicationMaster is killed by SIGTERM.
>
> 16/10/20 18:12:04 ERROR yarn.ApplicationMaster: RECEIVED SIGNAL TERM
> 16/10/20 18:12:04 INFO yarn.ApplicationMaster: Final app status:
>
> This container may be killed by yarn NodeManager or other processes, you'd
> better check yarn log to dig out more details.
>
> Thanks
> Saisai
>
> On Thu, Oct 20, 2016 at 6:51 PM, Li Li <fa...@gmail.com> wrote:
>>
>> [...]
>> 16/10/20 18:12:14 ERROR cluster.YarnClientSchedulerBackend: Yarn
>> application has already exited with state FINISHED!
>> [...]
>> 16/10/20 18:12:04 ERROR yarn.ApplicationMaster: RECEIVED SIGNAL TERM
>> 16/10/20 18:12:04 INFO yarn.ApplicationMaster: Final app status:
>> UNDEFINED, exitCode: 16, (reason: Shutdown hook called before final
>> status was reported.)
>> 16/10/20 18:12:04 INFO util.ShutdownHookManager: Shutdown hook called
>>
>> ---------------------------------------------------------------------
>> To unsubscribe e-mail: user-unsubscribe@spark.apache.org
>
>

---------------------------------------------------------------------
To unsubscribe e-mail: user-unsubscribe@spark.apache.org


Re: spark pi example fail on yarn

Posted by "Elek, Marton" <hd...@anzix.net>.
Try to set the memory size limits. For example:

./bin/spark-submit --class org.apache.spark.examples.SparkPi \
    --master yarn --deploy-mode cluster \
    --driver-memory 4g --executor-memory 2g --executor-cores 1 \
    ./examples/jars/spark-examples_2.11-2.0.0.2.5.2.0-47.jar

By default, YARN kills containers that exceed not only the physical but also
the virtual memory limit.

You could also try to set

yarn.nodemanager.vmem-check-enabled

to false (yarn-site.xml)
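
As a minimal sketch, the corresponding yarn-site.xml fragment would look like
this (apply it on the NodeManager hosts and restart them; raising
yarn.nodemanager.vmem-pmem-ratio is a softer alternative to disabling the
check entirely):

    <property>
      <!-- Disable the NodeManager's virtual-memory check -->
      <name>yarn.nodemanager.vmem-check-enabled</name>
      <value>false</value>
    </property>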

Regards
Marton


On 10/20/16 4:02 PM, Saisai Shao wrote:
> Looks like ApplicationMaster is killed by SIGTERM.
>
> 16/10/20 18:12:04 ERROR yarn.ApplicationMaster: RECEIVED SIGNAL TERM
> 16/10/20 18:12:04 INFO yarn.ApplicationMaster: Final app status:
>
> This container may be killed by yarn NodeManager or other processes,
> you'd better check yarn log to dig out more details.
>
> Thanks
> Saisai
>
> On Thu, Oct 20, 2016 at 6:51 PM, Li Li <fancyerii@gmail.com> wrote:
>
>     [...]
>     16/10/20 18:12:14 ERROR cluster.YarnClientSchedulerBackend: Yarn
>     application has already exited with state FINISHED!
>     [...]
>     16/10/20 18:12:04 ERROR yarn.ApplicationMaster: RECEIVED SIGNAL TERM
>     16/10/20 18:12:04 INFO yarn.ApplicationMaster: Final app status:
>     UNDEFINED, exitCode: 16, (reason: Shutdown hook called before final
>     status was reported.)
>     16/10/20 18:12:04 INFO util.ShutdownHookManager: Shutdown hook called
>
>

---------------------------------------------------------------------
To unsubscribe e-mail: user-unsubscribe@spark.apache.org


Re: spark pi example fail on yarn

Posted by Saisai Shao <sa...@gmail.com>.
Looks like the ApplicationMaster was killed by SIGTERM.

16/10/20 18:12:04 ERROR yarn.ApplicationMaster: RECEIVED SIGNAL TERM
16/10/20 18:12:04 INFO yarn.ApplicationMaster: Final app status:

This container may have been killed by the YARN NodeManager or another
process; you'd better check the YARN logs to dig out more details.
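
For example, since the application id here is application_1476957324184_0002,
the aggregated container logs (AM and executors) can be pulled after the run,
assuming yarn.log-aggregation-enable is on; otherwise look in the
NodeManagers' local container-log directories:

    yarn logs -applicationId application_1476957324184_0002 | less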

Thanks
Saisai

On Thu, Oct 20, 2016 at 6:51 PM, Li Li <fa...@gmail.com> wrote:

> [...]
> 16/10/20 18:12:14 ERROR cluster.YarnClientSchedulerBackend: Yarn
> application has already exited with state FINISHED!
> [...]
> 16/10/20 18:12:04 ERROR yarn.ApplicationMaster: RECEIVED SIGNAL TERM
> 16/10/20 18:12:04 INFO yarn.ApplicationMaster: Final app status:
> UNDEFINED, exitCode: 16, (reason: Shutdown hook called before final
> status was reported.)
> 16/10/20 18:12:04 INFO util.ShutdownHookManager: Shutdown hook called
>
> ---------------------------------------------------------------------
> To unsubscribe e-mail: user-unsubscribe@spark.apache.org
>