Posted to issues@spark.apache.org by "wuyi (Jira)" <ji...@apache.org> on 2020/08/29 15:11:00 UTC
[jira] [Created] (SPARK-32736) Avoid caching the removed decommissioned executors in TaskSchedulerImpl
wuyi created SPARK-32736:
----------------------------
Summary: Avoid caching the removed decommissioned executors in TaskSchedulerImpl
Key: SPARK-32736
URL: https://issues.apache.org/jira/browse/SPARK-32736
Project: Spark
Issue Type: Improvement
Components: Spark Core
Affects Versions: 3.1.0
Reporter: wuyi
We can save the host directly in the ExecutorDecommissionState. Then, when an executor is lost, we can immediately unregister the shuffle map statuses on its host. Thus, we no longer need to keep the removed executor cached while waiting for a FetchFailedException to trigger the unregister.
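
The idea can be sketched roughly as below. This is an illustrative sketch only, not the actual Spark patch: the field names and the helper `unregisterShuffleMapStatusesOnHost` are assumptions for illustration, and the real TaskSchedulerImpl delegates to MapOutputTracker rather than a local buffer.

```scala
import scala.collection.mutable

// Hypothetical sketch: record the host at decommission time so that
// executor loss can drop the host's shuffle output eagerly, instead of
// caching the removed executor and waiting for a FetchFailedException.
case class ExecutorDecommissionState(
    startTimeMs: Long,
    host: Option[String]) // host kept so its shuffle output can be unregistered

class TaskSchedulerSketch {
  private val decommissioned =
    mutable.Map.empty[String, ExecutorDecommissionState]

  // Recorded here only so the sketch is observable; the real code would
  // call into the shuffle-tracking machinery instead.
  val unregisteredHosts: mutable.Buffer[String] = mutable.Buffer.empty

  def decommissionExecutor(execId: String, host: String): Unit =
    decommissioned(execId) =
      ExecutorDecommissionState(System.currentTimeMillis(), Some(host))

  def onExecutorLost(execId: String): Unit =
    // Remove the state right away: no cache of removed executors is kept.
    decommissioned.remove(execId).flatMap(_.host).foreach { h =>
      unregisterShuffleMapStatusesOnHost(h)
    }

  private def unregisterShuffleMapStatusesOnHost(host: String): Unit =
    unregisteredHosts += host
}
```

Because the host travels with the decommission state, the loss event itself carries enough information to invalidate the map outputs, which is what removes the need to hold the stale entry until a fetch failure surfaces.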
--
This message was sent by Atlassian Jira
(v8.3.4#803005)
---------------------------------------------------------------------
To unsubscribe, e-mail: issues-unsubscribe@spark.apache.org
For additional commands, e-mail: issues-help@spark.apache.org