Posted to issues@spark.apache.org by "Noorul Islam K M (JIRA)" <ji...@apache.org> on 2016/07/14 04:11:20 UTC

[jira] [Created] (SPARK-16539) When worker is killed driver continues to run causing issues in supervise mode

Noorul Islam K M created SPARK-16539:
----------------------------------------

             Summary: When worker is killed driver continues to run causing issues in supervise mode
                 Key: SPARK-16539
                 URL: https://issues.apache.org/jira/browse/SPARK-16539
             Project: Spark
          Issue Type: Bug
          Components: Scheduler, Spark Core
    Affects Versions: 1.6.1
         Environment: Ubuntu 14.04
            Reporter: Noorul Islam K M


Spark version: 1.6.1
Cluster Manager: Standalone

I am experimenting with cluster-mode deployment combined with the
supervise flag, for high availability of streaming applications.

1. Submit a streaming job in cluster mode with supervise (a sketch of
   such a submission and job is included after these steps).
2. Say the driver is scheduled on worker1. The app starts
   successfully.
3. Kill the worker1 java process. This does not kill the driver
   process, so the application (context) stays alive.
4. Because of the supervise flag, the driver gets relaunched on a new
   worker, worker2, so a second context is created, making it a
   duplicate.

I think this is a bug: since the old driver survives the worker being
killed, the supervised relaunch leaves two copies of the same
application running.
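
For reference, here is a minimal sketch of the kind of submission and
streaming job involved in step 1. The master URL, class name, jar,
host, and port below are placeholders for illustration, not the actual
application from this report.

    // Submitted along the lines of:
    //   spark-submit --master spark://<master>:7077 --deploy-mode cluster \
    //     --supervise --class SuperviseRepro repro.jar
    import org.apache.spark.SparkConf
    import org.apache.spark.streaming.{Seconds, StreamingContext}

    object SuperviseRepro {
      def main(args: Array[String]): Unit = {
        val conf = new SparkConf().setAppName("supervise-repro")
        // Creating the StreamingContext is what gets duplicated when the
        // driver is relaunched while the old driver is still alive.
        val ssc = new StreamingContext(conf, Seconds(5))

        // Any long-running source keeps the context alive; a socket
        // stream is enough for the experiment.
        ssc.socketTextStream("localhost", 9999).count().print()

        ssc.start()
        ssc.awaitTermination()
      }
    }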



