Posted to issues@spark.apache.org by "Yuming Wang (Jira)" <ji...@apache.org> on 2021/02/09 01:32:00 UTC

[jira] [Assigned] (SPARK-32736) Avoid caching the removed decommissioned executors in TaskSchedulerImpl

     [ https://issues.apache.org/jira/browse/SPARK-32736?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Yuming Wang reassigned SPARK-32736:
-----------------------------------

    Assignee: wuyi  (was: Apache Spark)

> Avoid caching the removed decommissioned executors in TaskSchedulerImpl
> -----------------------------------------------------------------------
>
>                 Key: SPARK-32736
>                 URL: https://issues.apache.org/jira/browse/SPARK-32736
>             Project: Spark
>          Issue Type: Improvement
>          Components: Spark Core
>    Affects Versions: 3.1.0
>            Reporter: wuyi
>            Assignee: wuyi
>            Priority: Major
>             Fix For: 3.1.0
>
>
> We can store the host directly in the ExecutorDecommissionState. Then, when the executor is lost, we can unregister the shuffle map output on that host immediately, so we no longer need to cache the removed decommissioned executors while waiting for a FetchFailedException to trigger the unregistration.
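
The idea above can be sketched as a small, self-contained model. This is a hypothetical simplification, not Spark's actual internals: the names SchedulerSketch, unregisterShuffleOnHost, and the field layout of ExecutorDecommissionState are illustrative, following only the ticket's wording.

```scala
import scala.collection.mutable

// Hypothetical model: the decommission state carries the executor's host,
// so executor loss can drive host-level shuffle cleanup directly.
case class ExecutorDecommissionState(startTimeMs: Long, host: Option[String])

class SchedulerSketch {
  private val decommissioning =
    mutable.Map.empty[String, ExecutorDecommissionState]

  // Stand-in for unregistering shuffle map output on a host.
  val unregisteredHosts = mutable.Set.empty[String]

  def decommission(execId: String, host: String): Unit =
    decommissioning(execId) =
      ExecutorDecommissionState(System.currentTimeMillis(), Some(host))

  // Because the host is stored in the state itself, the entry can be
  // removed and the host's shuffle output unregistered as soon as the
  // executor is lost -- no cache of removed executors is kept around
  // waiting for a FetchFailedException.
  def onExecutorLost(execId: String): Unit =
    decommissioning.remove(execId).flatMap(_.host).foreach(unregisterShuffleOnHost)

  private def unregisterShuffleOnHost(host: String): Unit =
    unregisteredHosts += host
}
```

For example, after `decommission("exec-1", "host-a")` followed by `onExecutorLost("exec-1")`, the sketch unregisters "host-a" right away and drops the executor's entry.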



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

---------------------------------------------------------------------
To unsubscribe, e-mail: issues-unsubscribe@spark.apache.org
For additional commands, e-mail: issues-help@spark.apache.org