Posted to issues@spark.apache.org by "Holden Karau (Jira)" <ji...@apache.org> on 2022/05/02 19:08:00 UTC

[jira] [Resolved] (SPARK-34104) Allow users to specify a maximum decommissioning time

     [ https://issues.apache.org/jira/browse/SPARK-34104?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Holden Karau resolved SPARK-34104.
----------------------------------
    Fix Version/s: 3.2.0
       Resolution: Fixed

> Allow users to specify a maximum decommissioning time
> -----------------------------------------------------
>
>                 Key: SPARK-34104
>                 URL: https://issues.apache.org/jira/browse/SPARK-34104
>             Project: Spark
>          Issue Type: Improvement
>          Components: Spark Core
>    Affects Versions: 3.1.0, 3.1.1, 3.2.0
>            Reporter: Holden Karau
>            Assignee: Holden Karau
>            Priority: Major
>             Fix For: 3.2.0
>
>
> We currently let users set the predicted time at which the cluster manager or cloud provider will terminate a decommissioning executor, but for nodes where Spark itself triggers decommissioning we should also let users specify a maximum time we are willing to allow the executor to decommission.
>  
> This is especially important if we start triggering decommissioning in more places (for example, for excluded executors that are found to be flaky and may or may not be able to decommission successfully).
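
For context, a minimal sketch (not taken from this ticket) of how the related duration settings are typically supplied to a Spark application. spark.executor.decommission.killInterval is the pre-existing knob for the externally imposed termination time described above; the Spark-enforced maximum discussed in this issue is assumed here to be a similar duration config, and the forceKillTimeout key below is an assumption rather than something confirmed by this message.

    import org.apache.spark.sql.SparkSession

    val spark = SparkSession.builder()
      .appName("decommission-timeout-sketch")
      // Existing setting: predicted time before the cluster manager or cloud
      // provider terminates a decommissioning executor.
      .config("spark.executor.decommission.killInterval", "60s")
      // Assumed key for the new Spark-enforced upper bound on how long an
      // executor may spend decommissioning before being forced to exit.
      .config("spark.executor.decommission.forceKillTimeout", "120s")
      .getOrCreate()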



--
This message was sent by Atlassian Jira
(v8.20.7#820007)

---------------------------------------------------------------------
To unsubscribe, e-mail: issues-unsubscribe@spark.apache.org
For additional commands, e-mail: issues-help@spark.apache.org