Posted to issues@spark.apache.org by "Apache Spark (Jira)" <ji...@apache.org> on 2020/08/29 15:47:00 UTC

[jira] [Assigned] (SPARK-32736) Avoid caching the removed decommissioned executors in TaskSchedulerImpl

     [ https://issues.apache.org/jira/browse/SPARK-32736?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Apache Spark reassigned SPARK-32736:
------------------------------------

    Assignee: Apache Spark

> Avoid caching the removed decommissioned executors in TaskSchedulerImpl
> -----------------------------------------------------------------------
>
>                 Key: SPARK-32736
>                 URL: https://issues.apache.org/jira/browse/SPARK-32736
>             Project: Spark
>          Issue Type: Improvement
>          Components: Spark Core
>    Affects Versions: 3.1.0
>            Reporter: wuyi
>            Assignee: Apache Spark
>            Priority: Major
>
> We can save the host directly in the ExecutorDecommissionState. Then, when the executor is lost, we can unregister the shuffle map statuses on that host immediately. Thus, we no longer need to cache the removed decommissioned executors while waiting for a FetchFailedException to trigger the unregistration.
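
The proposed change can be sketched as follows. This is a simplified, hypothetical model (the names Scheduler, decommissionExecutor, onExecutorLost, and unregisteredHosts are illustrative stand-ins, not Spark's actual TaskSchedulerImpl API): carrying the host inside ExecutorDecommissionState lets executor-loss handling unregister the host's shuffle outputs right away, so no separate cache of removed decommissioned executors is required.

```scala
// Hypothetical sketch of the idea in SPARK-32736; names are simplified
// and do not match Spark internals exactly.
case class ExecutorDecommissionState(startTime: Long, host: Option[String])

class Scheduler {
  // executorId -> decommission state, with the host recorded up front
  private val decommissioned =
    scala.collection.mutable.Map.empty[String, ExecutorDecommissionState]

  // Stand-in for mapOutputTracker.removeOutputsOnHost(host): records which
  // hosts had their shuffle map statuses unregistered.
  val unregisteredHosts = scala.collection.mutable.Set.empty[String]

  def decommissionExecutor(execId: String, host: String): Unit =
    decommissioned(execId) =
      ExecutorDecommissionState(System.currentTimeMillis(), Some(host))

  def onExecutorLost(execId: String): Unit =
    // The host is already known here, so the shuffle state can be
    // unregistered immediately and the entry dropped -- no cache kept
    // around waiting for a FetchFailedException.
    decommissioned.remove(execId).flatMap(_.host).foreach(unregisteredHosts += _)
}
```

Under this sketch, losing a decommissioned executor both removes its cached entry and unregisters its host in a single step.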



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

---------------------------------------------------------------------
To unsubscribe, e-mail: issues-unsubscribe@spark.apache.org
For additional commands, e-mail: issues-help@spark.apache.org