Posted to user@spark.apache.org by Noorul Islam K M <no...@noorul.com> on 2016/07/13 12:08:31 UTC

When worker is killed driver continues to run causing issues in supervise mode

Spark version: 1.6.1
Cluster Manager: Standalone

I am experimenting with cluster mode deployment along with supervise for
high availability of streaming applications.

1. Submit a streaming job in cluster mode with supervise (see the
   example submit command below the list)
2. Say that driver is scheduled on worker1. The app started
   successfully.
3. Kill the worker1 Java process. This does not kill the driver process,
   and hence the application (context) is still alive.
4. Because of the supervise flag, the driver gets scheduled on a new
   worker (worker2) and hence a new context is created, making it a
   duplicate.
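
For reference, a minimal submit command for step 1 could look roughly
like the following; the master URL, application class, and jar path are
placeholders, not taken from the actual setup:

    # Step 1: submit in standalone cluster mode with supervision enabled
    $SPARK_HOME/bin/spark-submit \
      --master spark://<master-host>:7077 \
      --deploy-mode cluster \
      --supervise \
      --class com.example.StreamingApp \
      /path/to/streaming-app.jar

    # Step 3: on worker1, find the Worker JVM (e.g. with jps) and kill it
    kill -9 <worker-pid>

Killing only the Worker JVM leaves the driver JVM (launched as a separate
process on worker1) running, which is why a duplicate context appears once
the Master relaunches the driver on worker2.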

I think this is a bug.

Regards,
Noorul


Re: When worker is killed driver continues to run causing issues in supervise mode

Posted by Noorul Islam Kamal Malmiyoda <no...@noorul.com>.
Adding dev list
On Jul 13, 2016 5:38 PM, "Noorul Islam K M" <no...@noorul.com> wrote:

>
> Spark version: 1.6.1
> Cluster Manager: Standalone
>
> I am experimenting with cluster mode deployment along with supervise for
> high availability of streaming applications.
>
> 1. Submit a streaming job in cluster mode with supervise
> 2. Say that driver is scheduled on worker1. The app started
>    successfully.
> 3. Kill worker1 java process. This does not kill driver process and
>    hence the application (context) is still alive.
> 4. Because of supervise flag, driver gets scheduled to new worker
>    worker2 and hence a new context is created, making it a duplicate.
>
> I think this seems to be a bug.
>
> Regards,
> Noorul
>
