Posted to issues@spark.apache.org by "Ethan Xu (JIRA)" <ji...@apache.org> on 2017/05/03 02:56:04 UTC
[jira] [Commented] (SPARK-12009) Avoid re-allocate yarn container while driver want to stop all Executors
[ https://issues.apache.org/jira/browse/SPARK-12009?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15994199#comment-15994199 ]
Ethan Xu commented on SPARK-12009:
----------------------------------
I'm getting a similar error message with Spark 2.1.0. I can't reproduce it reliably. The exact same code worked fine on a small RDD (a sample), but sometimes gave this error on a large RDD after hours of running. It's very frustrating.
> Avoid re-allocate yarn container while driver want to stop all Executors
> ------------------------------------------------------------------------
>
> Key: SPARK-12009
> URL: https://issues.apache.org/jira/browse/SPARK-12009
> Project: Spark
> Issue Type: Bug
> Components: YARN
> Affects Versions: 1.5.2
> Reporter: SuYan
> Assignee: SuYan
> Priority: Minor
> Fix For: 2.0.0
>
>
> Log based 1.4.0
> 2015-11-26,03:05:16,176 WARN org.spark-project.jetty.util.thread.QueuedThreadPool: 8 threads could not be stopped
> 2015-11-26,03:05:16,177 INFO org.apache.spark.ui.SparkUI: Stopped Spark web UI at http://
> 2015-11-26,03:05:16,401 INFO org.apache.spark.scheduler.DAGScheduler: Stopping DAGScheduler
> 2015-11-26,03:05:16,450 INFO org.apache.spark.scheduler.cluster.YarnClusterSchedulerBackend: Shutting down all executors
> 2015-11-26,03:05:16,525 INFO org.apache.spark.scheduler.cluster.YarnClusterSchedulerBackend: Asking each executor to shut down
> 2015-11-26,03:05:16,791 INFO org.apache.spark.deploy.yarn.ApplicationMaster$AMEndpoint: Driver terminated or disconnected! Shutting down. XX.XX.XX.XX:38734
> 2015-11-26,03:05:16,847 ERROR org.apache.spark.scheduler.LiveListenerBus: SparkListenerBus has already stopped! Dropping event SparkListenerExecutorMetricsUpdate(164,WrappedArray())
> 2015-11-26,03:05:27,242 INFO org.apache.spark.deploy.yarn.YarnAllocator: Will request 13 executor containers, each with 1 cores and 4608 MB memory including 1024 MB overhead
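The log above shows the race this issue describes: the driver has already asked every executor to shut down (03:05:16), yet ten seconds later the YarnAllocator still requests 13 fresh containers. The general remedy is to gate allocation behind a "stopped" flag that the shutdown path sets first. The sketch below is hypothetical (the class and method names are invented for illustration, not Spark's actual `YarnAllocator` API) and only demonstrates the guard pattern:

```java
import java.util.concurrent.atomic.AtomicBoolean;

// Hypothetical sketch of a stop-guard for a container allocator.
// Names (AllocatorSketch, allocateResources, ...) are illustrative,
// not the real Spark YarnAllocator interface.
class AllocatorSketch {
    private final AtomicBoolean stopped = new AtomicBoolean(false);
    private int requestedContainers = 0;

    // Called by the driver's shutdown path before tearing anything else down.
    void stop() {
        stopped.set(true);
    }

    // Called periodically from the allocation heartbeat. Once the driver is
    // stopping, skip re-requesting containers for executors it just killed.
    void allocateResources(int missing) {
        if (stopped.get()) {
            return; // shutting down: do not re-allocate YARN containers
        }
        requestedContainers += missing;
    }

    int requestedContainers() {
        return requestedContainers;
    }
}
```

With this guard, a heartbeat that fires after `stop()` becomes a no-op instead of asking the ResourceManager for replacement containers.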
--
This message was sent by Atlassian JIRA
(v6.3.15#6346)