Posted to issues@spark.apache.org by "Apache Spark (Jira)" <ji...@apache.org> on 2020/09/17 15:04:00 UTC

[jira] [Assigned] (SPARK-32913) Improve ExecutorDecommissionInfo and ExecutorDecommissionState for different use cases

     [ https://issues.apache.org/jira/browse/SPARK-32913?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Apache Spark reassigned SPARK-32913:
------------------------------------

    Assignee:     (was: Apache Spark)

> Improve ExecutorDecommissionInfo and ExecutorDecommissionState for different use cases
> --------------------------------------------------------------------------------------
>
>                 Key: SPARK-32913
>                 URL: https://issues.apache.org/jira/browse/SPARK-32913
>             Project: Spark
>          Issue Type: Sub-task
>          Components: Spark Core
>    Affects Versions: 3.1.0
>            Reporter: wuyi
>            Priority: Major
>
> Basically, there are three decommission use cases: Kubernetes, standalone, and dynamic allocation, and all of them use ExecutorDecommissionInfo to represent their own case. However, after SPARK-32850, ExecutorDecommissionInfo alone cannot tell whether the decommission was triggered at the executor or not. So it's time to improve both ExecutorDecommissionInfo and ExecutorDecommissionState.
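To illustrate the shape of the change the ticket asks for, here is a minimal, hypothetical sketch (not the actual Spark API): a `triggeredByExecutor` flag on the info message lets the driver distinguish an executor-initiated decommission from a driver-initiated one. All names, fields, and defaults below are assumptions for illustration only.

```scala
// Hypothetical sketch of the improved message, assuming a boolean flag
// is enough to cover the three use cases named in the ticket.
case class ExecutorDecommissionInfo(
    message: String,
    // Some(host) when the whole worker/host is going away (standalone/k8s)
    workerHost: Option[String] = None,
    // true when the executor itself initiated the decommission,
    // false when the driver did (e.g. dynamic allocation downscaling)
    triggeredByExecutor: Boolean = false)

// Hypothetical driver-side state recorded once the decommission
// request has been acknowledged.
case class ExecutorDecommissionState(
    startTimeMs: Long,
    workerHost: Option[String] = None)

object DecommissionSketch {
  def main(args: Array[String]): Unit = {
    val fromExecutor = ExecutorDecommissionInfo(
      message = "spot instance interruption",
      workerHost = Some("host-1"),
      triggeredByExecutor = true)
    val fromDriver = ExecutorDecommissionInfo(
      message = "dynamic allocation downscale")

    // The flag is what SPARK-32850 made necessary to distinguish.
    assert(fromExecutor.triggeredByExecutor)
    assert(!fromDriver.triggeredByExecutor)
  }
}
```

A single boolean keeps the message small; if more trigger kinds appeared later, it could grow into a sealed trait of trigger reasons instead.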



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

---------------------------------------------------------------------
To unsubscribe, e-mail: issues-unsubscribe@spark.apache.org
For additional commands, e-mail: issues-help@spark.apache.org