Posted to issues@spark.apache.org by "Yin Huai (JIRA)" <ji...@apache.org> on 2015/06/25 22:54:04 UTC

[jira] [Created] (SPARK-8643) local-cluster may not shutdown SparkContext gracefully

Yin Huai created SPARK-8643:
-------------------------------

             Summary: local-cluster may not shutdown SparkContext gracefully
                 Key: SPARK-8643
                 URL: https://issues.apache.org/jira/browse/SPARK-8643
             Project: Spark
          Issue Type: Bug
          Components: Spark Core
            Reporter: Yin Huai


While debugging SPARK-8567, I found that when using local-cluster, executors were first killed and then launched again at the end of an application. From the log (attached), it seems the master/driver side does not know it is in the shutdown process, so it detects the executor loss and asks the worker to launch new executors.
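
For context, a minimal sketch of an application that exercises local-cluster mode (the master URL takes the form local-cluster[numWorkers,coresPerWorker,memoryPerWorkerMB]; the object name and the small job are just illustrative):

import org.apache.spark.{SparkConf, SparkContext}

// Sketch: run against a local-cluster master with 2 workers,
// 1 core and 512 MB of memory each.
object LocalClusterShutdownRepro {
  def main(args: Array[String]): Unit = {
    val conf = new SparkConf()
      .setMaster("local-cluster[2,1,512]")
      .setAppName("local-cluster-shutdown")
    val sc = new SparkContext(conf)
    try {
      // Any small job, just to make sure executors come up.
      sc.parallelize(1 to 100).count()
    } finally {
      // At stop(), per the report above, the master may treat the
      // exiting executors as lost and ask the worker to relaunch them.
      sc.stop()
    }
  }
}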



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

---------------------------------------------------------------------
To unsubscribe, e-mail: issues-unsubscribe@spark.apache.org
For additional commands, e-mail: issues-help@spark.apache.org