Posted to issues@spark.apache.org by "JiYeon OH (JIRA)" <ji...@apache.org> on 2017/06/16 08:51:00 UTC

[jira] [Comment Edited] (SPARK-12009) Avoid re-allocate yarn container while driver want to stop all Executors

    [ https://issues.apache.org/jira/browse/SPARK-12009?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16051607#comment-16051607 ] 

JiYeon OH edited comment on SPARK-12009 at 6/16/17 8:50 AM:
------------------------------------------------------------

I'm having the same problem with Spark 2.1.0.
I have several jobs with exactly the same code, and a few of them failed.
In the jobs that finished successfully, this message appeared after the job finished:

17/06/15 00:26:02 INFO YarnAllocator: Driver requested a total number of 0 executor(s).

But in the jobs that failed, this message appeared instead:

17/06/16 14:31:14 ERROR LiveListenerBus: SparkListenerBus has already stopped! Dropping event SparkListenerExecutorMetricsUpdate(10,WrappedArray())

I'm guessing the YarnAllocator must have requested some executors after the Spark job finished, but I can't figure out why.
Why is YarnAllocator requesting executors after the job has finished? Does anyone know?
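For anyone else hitting this, here is a minimal sketch of the kind of race that could explain it (Scala; ReporterThreadSketch, targetNumExecutors, and requestContainers are made-up illustrative names, not the actual org.apache.spark.deploy.yarn internals). The AM's allocation loop runs on its own schedule with the last target it saw, so if the driver disconnects before lowering the target to 0, the loop can keep re-requesting executors for a job that is already done:

import java.util.concurrent.atomic.AtomicBoolean

object ReporterThreadSketch {
  // Last target set by the driver; never reset to 0 if the driver
  // disconnects before its final request arrives.
  @volatile var targetNumExecutors: Int = 13
  val stopped = new AtomicBoolean(false)

  // Runs periodically in the AM, independent of the driver's lifecycle.
  def allocationLoop(requestContainers: Int => Unit): Unit = {
    while (!stopped.get()) {
      // If the driver is already gone, this re-requests containers for
      // executors that were just told to shut down.
      requestContainers(targetNumExecutors)
      Thread.sleep(3000)
    }
  }

  // Flipping this before tearing down executors lets the loop exit
  // instead of re-allocating.
  def stop(): Unit = stopped.set(true)
}

Under that reading, the successful jobs are the ones where the "Driver requested a total number of 0 executor(s)" message got through before shutdown, and the failed ones are the ones where it didn't.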


> Avoid re-allocate yarn container while driver want to stop all Executors
> ------------------------------------------------------------------------
>
>                 Key: SPARK-12009
>                 URL: https://issues.apache.org/jira/browse/SPARK-12009
>             Project: Spark
>          Issue Type: Bug
>          Components: YARN
>    Affects Versions: 1.5.2
>            Reporter: SuYan
>            Assignee: SuYan
>            Priority: Minor
>             Fix For: 2.0.0
>
>
> Log based on Spark 1.4.0:
> 2015-11-26,03:05:16,176 WARN org.spark-project.jetty.util.thread.QueuedThreadPool: 8 threads could not be stopped
> 2015-11-26,03:05:16,177 INFO org.apache.spark.ui.SparkUI: Stopped Spark web UI at http://
> 2015-11-26,03:05:16,401 INFO org.apache.spark.scheduler.DAGScheduler: Stopping DAGScheduler
> 2015-11-26,03:05:16,450 INFO org.apache.spark.scheduler.cluster.YarnClusterSchedulerBackend: Shutting down all executors
> 2015-11-26,03:05:16,525 INFO org.apache.spark.scheduler.cluster.YarnClusterSchedulerBackend: Asking each executor to shut down
> 2015-11-26,03:05:16,791 INFO org.apache.spark.deploy.yarn.ApplicationMaster$AMEndpoint: Driver terminated or disconnected! Shutting down. XX.XX.XX.XX:38734
> 2015-11-26,03:05:16,847 ERROR org.apache.spark.scheduler.LiveListenerBus: SparkListenerBus has already stopped! Dropping event SparkListenerExecutorMetricsUpdate(164,WrappedArray())
> 2015-11-26,03:05:27,242 INFO org.apache.spark.deploy.yarn.YarnAllocator: Will request 13 executor containers, each with 1 cores and 4608 MB memory including 1024 MB overhead
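The last log line is exactly what the title describes: the allocator re-requests 13 containers about ten seconds after every executor was asked to shut down. Conceptually the guard is small; here is a hedged sketch (Scala; AllocatorGuard, onDriverStop, and allocate are illustrative names under my reading of the issue, not the actual fix that shipped in 2.0.0):

class AllocatorGuard(allocate: () => Unit) {
  @volatile private var driverRequestedStop = false

  // Flip this before executors are torn down, so a concurrently running
  // allocation cycle cannot re-request containers afterwards.
  def onDriverStop(): Unit = driverRequestedStop = true

  def maybeAllocate(): Unit =
    if (!driverRequestedStop) allocate()
    // else: skipping here avoids the "Will request 13 executor
    // containers" re-allocation seen at 03:05:27 in the log above.
}

The ordering matters: the stop flag has to be visible to the allocation thread before "Asking each executor to shut down", otherwise the 03:05:16 - 03:05:27 window in the log stays open.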


