Posted to issues@spark.apache.org by "Ashwin Agate (JIRA)" <ji...@apache.org> on 2018/04/30 19:08:00 UTC

[jira] [Commented] (SPARK-23530) It's not appropriate to let the original master exit while the leader of zookeeper shutdown

    [ https://issues.apache.org/jira/browse/SPARK-23530?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16458895#comment-16458895 ] 

Ashwin Agate commented on SPARK-23530:
--------------------------------------

 Can we please increase the priority of this bug, since it still exists in the latest Spark 2.3.0? We have observed this during an upgrade scenario (with Spark 1.6.3), where we have to shut down ZooKeeper; this has the adverse side effect of shutting down the Spark masters on the other nodes, which is far from ideal.

BTW, https://issues.apache.org/jira/browse/SPARK-15544 is a similar issue that was filed for Spark 1.6.1.

> It's not appropriate to let the original master exit while the leader of zookeeper shutdown
> -------------------------------------------------------------------------------------------
>
>                 Key: SPARK-23530
>                 URL: https://issues.apache.org/jira/browse/SPARK-23530
>             Project: Spark
>          Issue Type: Improvement
>          Components: Spark Core
>    Affects Versions: 2.2.1, 2.3.0
>            Reporter: liuxianjiao
>            Priority: Critical
>
> When the ZooKeeper leader shuts down, Spark's current approach is to let the master exit in order to revoke its leadership. However, this sacrifices a master node. Following the approach taken by Hadoop and Storm, we should let the originally active master become a standby, hold a re-election for the Spark master, or find some other way to revoke leadership gracefully.
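
For illustration, here is a minimal, hypothetical sketch (this is not Spark's actual Master code; the names IllustrativeMaster, LeaderElectable, and RecoveryState are assumptions made only for this example) contrasting the exit-on-revocation behavior described above with a graceful fall-back to a standby state that keeps the process alive for a later re-election:

{code:scala}
// Hypothetical sketch -- not Spark's Master implementation. It only contrasts
// exiting on leadership revocation (the behavior described in this issue) with
// reverting to STANDBY so the same process can be re-elected later.

object RecoveryState extends Enumeration {
  val ALIVE, STANDBY = Value
}

trait LeaderElectable {
  def electedLeader(): Unit
  def revokedLeadership(): Unit
}

class IllustrativeMaster extends LeaderElectable {
  @volatile private var state = RecoveryState.STANDBY

  override def electedLeader(): Unit = {
    state = RecoveryState.ALIVE
    println("Elected leader: recover persisted state and start serving applications")
  }

  // Behavior described in the issue: the whole master process exits on revocation,
  // sacrificing that master node.
  def revokedLeadershipByExiting(): Unit = {
    println("Leadership revoked -- master shutting down")
    sys.exit(0)
  }

  // Graceful alternative suggested in the issue: drop back to STANDBY and keep
  // the process alive so it can win a later election.
  override def revokedLeadership(): Unit = {
    state = RecoveryState.STANDBY
    println("Leadership revoked -- reverting to STANDBY and awaiting re-election")
  }
}

object Demo extends App {
  val master = new IllustrativeMaster
  master.electedLeader()
  master.revokedLeadership()   // process stays alive, eligible for re-election
}
{code}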



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

---------------------------------------------------------------------
To unsubscribe, e-mail: issues-unsubscribe@spark.apache.org
For additional commands, e-mail: issues-help@spark.apache.org