Posted to issues@spark.apache.org by "wuyi (Jira)" <ji...@apache.org> on 2020/09/17 14:23:00 UTC
[jira] [Created] (SPARK-32913) Improve DecommissionInfo and DecommissionState for different use cases
wuyi created SPARK-32913:
----------------------------
Summary: Improve DecommissionInfo and DecommissionState for different use cases
Key: SPARK-32913
URL: https://issues.apache.org/jira/browse/SPARK-32913
Project: Spark
Issue Type: Sub-task
Components: Spark Core
Affects Versions: 3.1.0
Reporter: wuyi
There are three decommission use cases: Kubernetes, standalone, and dynamic allocation, and all of them currently use the same DecommissionInfo to describe their case. However, after SPARK-32850, DecommissionInfo is no longer sufficient to tell whether the decommission was triggered at the executor or not. So it's time to improve both DecommissionInfo and DecommissionState.
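The gap described above could be closed by carrying the trigger origin in the info object itself. The following is a minimal, hypothetical sketch (not Spark's actual API; the class names, field names, and states below are assumptions for illustration only) of a DecommissionInfo-like case class extended with a flag recording whether the executor itself initiated the decommission, plus a simple state type:

```scala
// Hypothetical sketch only -- Spark's real DecommissionInfo and
// DecommissionState may look quite different. The point is that a
// single boolean field lets the driver distinguish executor-triggered
// from driver-triggered decommissions across all three use cases.
object DecommissionSketch {

  // Assumed fields: a human-readable reason and the trigger origin.
  case class DecommissionInfo(
      message: String,
      triggeredByExecutor: Boolean)

  // Assumed lifecycle states, modeled as a sealed ADT.
  sealed trait DecommissionState
  case object NotDecommissioned extends DecommissionState
  case object DecommissionStarted extends DecommissionState
  case object DecommissionFinished extends DecommissionState

  // With the flag present, callers can branch on the trigger origin.
  def describe(info: DecommissionInfo): String =
    if (info.triggeredByExecutor) s"executor-triggered: ${info.message}"
    else s"driver-triggered: ${info.message}"

  def main(args: Array[String]): Unit = {
    val info = DecommissionInfo("spot instance reclaimed", triggeredByExecutor = true)
    println(describe(info))
  }
}
```

A standalone-mode decommission initiated by the master would set triggeredByExecutor = false, while an executor reacting to a spot-instance reclamation notice would set it to true, giving one shape that covers all three use cases.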
--
This message was sent by Atlassian Jira
(v8.3.4#803005)