Posted to issues@spark.apache.org by "Hyukjin Kwon (Jira)" <ji...@apache.org> on 2019/10/08 05:42:12 UTC
[jira] [Resolved] (SPARK-20869) Master should clear failed apps when worker down
[ https://issues.apache.org/jira/browse/SPARK-20869?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]
Hyukjin Kwon resolved SPARK-20869.
----------------------------------
Resolution: Incomplete
> Master should clear failed apps when worker down
> ------------------------------------------------
>
> Key: SPARK-20869
> URL: https://issues.apache.org/jira/browse/SPARK-20869
> Project: Spark
> Issue Type: Improvement
> Components: Spark Core
> Affects Versions: 2.3.0
> Reporter: Li Yichao
> Priority: Minor
> Labels: bulk-closed
> Original Estimate: 2h
> Remaining Estimate: 2h
>
> In `Master.removeWorker`, the master clears executor and driver state but does not clear app state. App state is cleared only on `UnregisterApplication` (sent when a driver shuts down gracefully) or in `onDisconnect`, which is invoked from Netty's `channelInactive` (called when the channel is closed). Neither path handles the case where there is a network partition between the master and a worker.
> Follow the steps in [SPARK-19900|https://issues.apache.org/jira/browse/SPARK-19900] and see the [screenshots|https://cloud.githubusercontent.com/assets/2576762/26398697/d50735a4-40ac-11e7-80d8-6e9e1cf0b62f.png]: when worker1 is partitioned from the master, the app `app-xxx-000` still shows as running instead of finished, even though worker1 is down.
> cc [~CodingCat]
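
The gap described above can be illustrated with a minimal standalone sketch. This is NOT the actual Spark `Master` implementation; all names (`MasterSketch`, `AppInfo`, `removeWorker`, `removeWorkerAndClearApps`) are hypothetical, and the state model is reduced to two maps, purely to show how clearing executor state while leaving app state untouched strands an app after a worker is lost:

```scala
// Hypothetical, simplified model of the reported behavior (not Spark internals).
object MasterSketch {
  // An app tracked by the master; `finished` stands in for the real app lifecycle state.
  case class AppInfo(id: String, workerId: String, var finished: Boolean = false)

  val apps = scala.collection.mutable.Map[String, AppInfo]()
  val executorsByWorker = scala.collection.mutable.Map[String, List[String]]()

  // Behavior per the report: removing a worker clears its executor state,
  // but apps whose resources lived on that worker are never marked finished.
  def removeWorker(workerId: String): Unit = {
    executorsByWorker.remove(workerId)
    // Missing step: app state for this worker is left as "running".
  }

  // Sketch of the proposed improvement: also finish apps stranded on the lost worker.
  def removeWorkerAndClearApps(workerId: String): Unit = {
    executorsByWorker.remove(workerId)
    apps.values.filter(_.workerId == workerId).foreach(_.finished = true)
  }
}
```

Under this toy model, calling `removeWorker("worker1")` leaves an app registered on worker1 permanently marked as running, which is the symptom seen in the linked screenshots; the second method shows where the extra cleanup would hook in.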
--
This message was sent by Atlassian Jira
(v8.3.4#803005)
---------------------------------------------------------------------
To unsubscribe, e-mail: issues-unsubscribe@spark.apache.org
For additional commands, e-mail: issues-help@spark.apache.org