Posted to issues@spark.apache.org by "Wenchen Fan (Jira)" <ji...@apache.org> on 2020/09/08 04:41:00 UTC

[jira] [Resolved] (SPARK-32736) Avoid caching the removed decommissioned executors in TaskSchedulerImpl

     [ https://issues.apache.org/jira/browse/SPARK-32736?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Wenchen Fan resolved SPARK-32736.
---------------------------------
    Fix Version/s: 3.1.0
       Resolution: Fixed

Issue resolved by pull request 29579
[https://github.com/apache/spark/pull/29579]

> Avoid caching the removed decommissioned executors in TaskSchedulerImpl
> -----------------------------------------------------------------------
>
>                 Key: SPARK-32736
>                 URL: https://issues.apache.org/jira/browse/SPARK-32736
>             Project: Spark
>          Issue Type: Improvement
>          Components: Spark Core
>    Affects Versions: 3.1.0
>            Reporter: wuyi
>            Assignee: Apache Spark
>            Priority: Major
>             Fix For: 3.1.0
>
>
> We can save the host directly in the ExecutorDecommissionState. Then, when the executor is lost, we can unregister the shuffle map statuses on that host immediately, so we no longer need to cache the removed decommissioned executors and wait for a FetchFailedException to trigger the unregistration.
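
For illustration, here is a minimal Scala sketch of the idea. The names ShuffleOutputRegistry and SchedulerSketch are hypothetical stand-ins for the actual Spark internals, and the field layout of ExecutorDecommissionState is assumed here, not necessarily the exact shape merged in the PR:

    import scala.collection.mutable

    // Hypothetical stand-in for Spark's shuffle-output bookkeeping.
    trait ShuffleOutputRegistry {
      def removeOutputsOnHost(host: String): Unit
    }

    // Decommission state that records the executor's host up front.
    case class ExecutorDecommissionState(startTime: Long, host: Option[String])

    class SchedulerSketch(registry: ShuffleOutputRegistry) {
      private val decommissioned =
        mutable.HashMap.empty[String, ExecutorDecommissionState]

      def decommissionExecutor(execId: String, host: Option[String]): Unit =
        decommissioned(execId) =
          ExecutorDecommissionState(System.currentTimeMillis(), host)

      // Because the host was saved at decommission time, executor loss can
      // unregister the shuffle outputs on that host right away; no cache of
      // removed executors has to linger waiting for a FetchFailedException.
      def executorLost(execId: String): Unit =
        decommissioned.remove(execId).flatMap(_.host)
          .foreach(registry.removeOutputsOnHost)
    }

The key point of the sketch is that the entry for a lost executor can be removed eagerly, since everything needed for cleanup (the host) travels with the state itself.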



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

---------------------------------------------------------------------
To unsubscribe, e-mail: issues-unsubscribe@spark.apache.org
For additional commands, e-mail: issues-help@spark.apache.org