Posted to issues@spark.apache.org by "Hyukjin Kwon (JIRA)" <ji...@apache.org> on 2019/05/21 05:37:28 UTC
[jira] [Resolved] (SPARK-6183) Skip bad workers when re-launching executors
[ https://issues.apache.org/jira/browse/SPARK-6183?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]
Hyukjin Kwon resolved SPARK-6183.
---------------------------------
Resolution: Incomplete
> Skip bad workers when re-launching executors
> --------------------------------------------
>
> Key: SPARK-6183
> URL: https://issues.apache.org/jira/browse/SPARK-6183
> Project: Spark
> Issue Type: Improvement
> Components: Deploy
> Reporter: Zhen Peng
> Priority: Major
> Labels: bulk-closed
>
> In a standalone cluster, when an executor launch fails, the master should avoid re-launching it on the same worker.
> Under the current scheduling logic, the failed executor is very likely to be re-launched on the same worker, which can eventually cause the application to be removed from the master.
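The improvement described above could be sketched roughly as follows. This is a minimal illustration of the idea (track workers where a launch already failed for an application, and skip them when picking a worker for the retry), not Spark's actual Master scheduling code; all names and data structures here are hypothetical.

```python
# Hypothetical sketch: skip workers that already failed to launch an
# executor for this application when choosing where to re-launch.
# Worker records and field names are illustrative, not Spark internals.

def choose_worker(workers, failed_worker_ids, cores_needed):
    """Return the first worker with enough free cores that is not on
    the application's failed-worker list, or None if none qualifies."""
    for worker in workers:
        if worker["id"] in failed_worker_ids:
            continue  # this worker already failed a launch for this app
        if worker["free_cores"] >= cores_needed:
            return worker
    return None  # no eligible worker; the app would have to wait

# Usage: record the worker after a launch failure, then retry.
workers = [
    {"id": "w1", "free_cores": 4},
    {"id": "w2", "free_cores": 8},
]
failed = set()
first = choose_worker(workers, failed, 4)   # picks "w1"
failed.add(first["id"])                     # suppose the launch on w1 failed
retry = choose_worker(workers, failed, 4)   # skips "w1", picks "w2"
```

Without the failed-worker filter, the retry loop would keep selecting the same worker (it is still first in the list and still advertises free cores), producing the repeated-failure pattern the issue describes.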
--
This message was sent by Atlassian JIRA
(v7.6.3#76005)
---------------------------------------------------------------------
To unsubscribe, e-mail: issues-unsubscribe@spark.apache.org
For additional commands, e-mail: issues-help@spark.apache.org