Posted to issues@spark.apache.org by "Apache Spark (Jira)" <ji...@apache.org> on 2020/08/18 14:09:00 UTC

[jira] [Commented] (SPARK-32651) decommission switch configuration should have the highest hierarchy

    [ https://issues.apache.org/jira/browse/SPARK-32651?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17179641#comment-17179641 ] 

Apache Spark commented on SPARK-32651:
--------------------------------------

User 'Ngone51' has created a pull request for this issue:
https://github.com/apache/spark/pull/29466

> decommission switch configuration should have the highest hierarchy
> -------------------------------------------------------------------
>
>                 Key: SPARK-32651
>                 URL: https://issues.apache.org/jira/browse/SPARK-32651
>             Project: Spark
>          Issue Type: Sub-task
>          Components: Spark Core
>    Affects Versions: 3.1.0
>            Reporter: wuyi
>            Assignee: Apache Spark
>            Priority: Major
>
> Decommissioning is already supported in Standalone and Kubernetes, and will be supported on YARN in the future. Therefore, the switch configuration should sit at the top level of the configuration hierarchy rather than belong to Standalone's Worker (spark.worker.decommission.enabled).
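
A minimal sketch of what a cluster-manager-agnostic switch could look like when declared through Spark's ConfigBuilder. The key name spark.decommission.enabled, the object name, and the doc text below are assumptions for illustration, not necessarily what the linked pull request implements:

    package org.apache.spark.internal.config

    // Hypothetical sketch: a top-level decommission switch declared with
    // Spark's ConfigBuilder, instead of a Worker-scoped key. The key name
    // "spark.decommission.enabled" is an assumption; the linked PR decides
    // the final name and location.
    private[spark] object Decommission {

      val DECOMMISSION_ENABLED =
        ConfigBuilder("spark.decommission.enabled")
          .doc("Whether to enable decommissioning, regardless of whether the " +
            "application runs on Standalone, Kubernetes, or (in the future) YARN.")
          .version("3.1.0")
          .booleanConf
          .createWithDefault(false)
    }

Declaring the switch this way keeps it independent of any one resource manager, which is the point of the issue: each cluster manager can then honor the same flag rather than defining its own.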



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

---------------------------------------------------------------------
To unsubscribe, e-mail: issues-unsubscribe@spark.apache.org
For additional commands, e-mail: issues-help@spark.apache.org