Posted to issues@spark.apache.org by "zhuml (Jira)" <ji...@apache.org> on 2022/10/26 08:41:00 UTC

[jira] [Commented] (SPARK-39742) Request executor after kill executor, the number of executors is not as expected

    [ https://issues.apache.org/jira/browse/SPARK-39742?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17624302#comment-17624302 ] 

zhuml commented on SPARK-39742:
-------------------------------

To imitate the logic of dynamic allocation, first call sparkContext.requestTotalExecutors() to initialize or adjust requestedTotalExecutorsPerResourceProfile, and then call sparkContext.killExecutor() to actively adjust resources.
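
For example, a minimal sketch of that ordering (assumptions: sc is a running SparkContext on a standalone cluster; the target of 4 executors and the executor ids are illustrative):

{code:java}
import org.apache.spark.SparkContext

// Sketch only: seed the backend's target total before killing, mirroring the
// dynamic allocation ordering described above.
def resizeExecutors(sc: SparkContext): Unit = {
  // 1. Initialize requestedTotalExecutorsPerResourceProfile with the desired
  //    total (no locality-aware tasks, no host-to-task-count hints).
  sc.requestTotalExecutors(4, 0, Map.empty)
  // 2. Kill specific executors; the kill now adjusts a known total.
  //    Executor ids "0", "1", "2" are illustrative.
  sc.killExecutors(Seq("0", "1", "2"))
  // 3. Restore the desired total so the app converges back to 4 executors.
  sc.requestTotalExecutors(4, 0, Map.empty)
}
{code}

With the total seeded first, a subsequent request adjusts a consistent target instead of an uninitialized one.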

> Request executor after kill executor, the number of executors is not as expected
> --------------------------------------------------------------------------------
>
>                 Key: SPARK-39742
>                 URL: https://issues.apache.org/jira/browse/SPARK-39742
>             Project: Spark
>          Issue Type: Bug
>          Components: Scheduler
>    Affects Versions: 3.2.1
>            Reporter: zhuml
>            Priority: Major
>
> I used the killExecutors and requestExecutors functions of SparkContext to dynamically adjust resources, and found that calling requestExecutors after killExecutors does not produce the expected number of executors.
> Adding a unit test to StandaloneDynamicAllocationSuite.scala reproduces the problem:
> {code:java}
> test("kill executors first and then request") {
>     sc = new SparkContext(appConf
>       .set(config.EXECUTOR_CORES, 2)
>       .set(config.CORES_MAX, 8))
>     val appId = sc.applicationId
>     eventually(timeout(10.seconds), interval(10.millis)) {
>       val apps = getApplications()
>       assert(apps.size === 1)
>       assert(apps.head.id === appId)
>       assert(apps.head.executors.size === 4) // 8 cores total / 2 cores per executor
>       assert(apps.head.getExecutorLimit === Int.MaxValue)
>     }
>     // sync executors between the Master and the driver, needed because
>     // the driver refuses to kill executors it does not know about
>     syncExecutors(sc)
>     val executors = getExecutorIds(sc)
>     assert(executors.size === 4)
>     // kill 3 executors
>     assert(sc.killExecutors(executors.take(3)))
>     val apps = getApplications()
>     assert(apps.head.executors.size === 1)
>     // request 3 more executors (expected total: 4)
>     assert(sc.requestExecutors(3))
>     assert(apps.head.executors.size === 4)
>   } {code}
> The test fails:
> 3 did not equal 4
> Expected :4
> Actual   :3


