Posted to issues@spark.apache.org by "Shay Elbaz (Jira)" <ji...@apache.org> on 2022/12/08 11:50:00 UTC

[jira] [Updated] (SPARK-41449) Stage-level scheduling: allow changing the number of executors

     [ https://issues.apache.org/jira/browse/SPARK-41449?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Shay Elbaz updated SPARK-41449:
-------------------------------
    Description: 
Since the total/max number of executors is constant throughout the application - in both dynamic and static allocation - there is loose control over how many GPUs will be requested from the resource manager.

For example, if an application needs 500 executors for the ETL part (with N cores each), but needs - *or is allowed* - only 50 GPUs for the DL part, in practice it will request at least 500 GPUs from the RM, since `spark.executor.instances` is set to 500. This leads to resource management challenges in multi-tenant environments.
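To make this concrete, here is a minimal sketch of how the DL stage would request GPUs via stage-level scheduling today (the RDD, the resource amounts, and the app setup are hypothetical; the point is that nothing in the profile bounds the executor count):

{code:scala}
import org.apache.spark.sql.SparkSession
import org.apache.spark.resource.{ExecutorResourceRequests, ResourceProfileBuilder, TaskResourceRequests}

val spark = SparkSession.builder().appName("etl-then-dl").getOrCreate()

// Hypothetical stand-in for the output of the 500-executor ETL stages.
val etlOutput = spark.sparkContext.parallelize(1 to 1000000)

// Stage-level profile: 1 GPU per executor and per task (N cores shown as 4).
val execReqs = new ExecutorResourceRequests().cores(4).resource("gpu", 1)
val taskReqs = new TaskResourceRequests().resource("gpu", 1.0)
val gpuProfile = new ResourceProfileBuilder().require(execReqs).require(taskReqs).build()

// The profile only describes per-executor/per-task resources. With
// `spark.executor.instances` (or the dynamic allocation max) at 500,
// the RM can still be asked for ~500 GPU executors for this stage.
val dlStage = etlOutput.withResources(gpuProfile)
{code}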

A quick workaround is to repartition the RDD to 50 partitions just before switching resources, but it has obvious downsides (an extra shuffle, and much larger partitions for the DL tasks).
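A sketch of that workaround, continuing the hypothetical setup above:

{code:scala}
// Workaround: shrink to 50 partitions so the GPU stage has only 50 tasks,
// and under dynamic allocation roughly 50 executors are requested for it.
// Downsides: an extra shuffle, and 10x larger partitions for the DL tasks.
val dlStage = etlOutput.repartition(50).withResources(gpuProfile)
{code}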

It would be very helpful if the total/max number of executors could also be configured in the Resource Profile.
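For illustration only, the proposal could look something like this; `maxExecutors` does not exist on ExecutorResourceRequests today and is just a hypothetical name for the requested knob:

{code:scala}
// Hypothetical API - not in Spark as of 3.3.x:
val execReqs = new ExecutorResourceRequests()
  .cores(4)
  .resource("gpu", 1)
  .maxExecutors(50) // hypothetical: cap executors for stages that use this profile
{code}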

  was:
Since the (max) number of executors is constant throughout the application - in both dynamic and static allocation - there is loose control over how many GPUs will be requested from the resource manager.

For example, if an application needs 500 executors for the ETL part (with N cores each), but needs - *or is allowed* - only 50 GPUs for the DL part, in practice it will request at least 500 GPUs from the RM, since `spark.executor.instances` is set to 500. This leads to resource management challenges in multi-tenant environments.

A quick workaround is to repartition the RDD to 50 partitions, but it has obvious downsides.

It would be very helpful if the total/max number of executors could also be configured in the Resource Profile.


> Stage-level scheduling: allow changing the number of executors
> --------------------------------------------------------------
>
>                 Key: SPARK-41449
>                 URL: https://issues.apache.org/jira/browse/SPARK-41449
>             Project: Spark
>          Issue Type: Improvement
>          Components: Scheduler
>    Affects Versions: 3.3.0, 3.3.1
>            Reporter: Shay Elbaz
>            Priority: Major
>              Labels: scheduler
>
> Since the total/max number of executors is constant throughout the application - in both dynamic and static allocation - there is loose control over how many GPUs will be requested from the resource manager.
> For example, if an application needs 500 executors for the ETL part (with N cores each), but needs - *or is allowed* - only 50 GPUs for the DL part, in practice it will request at least 500 GPUs from the RM, since `spark.executor.instances` is set to 500. This leads to resource management challenges in multi-tenant environments.
> A quick workaround is to repartition the RDD to 50 partitions just before switching resources, but it has obvious downsides (an extra shuffle, and much larger partitions for the DL tasks).
> It would be very helpful if the total/max number of executors could also be configured in the Resource Profile.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)

---------------------------------------------------------------------
To unsubscribe, e-mail: issues-unsubscribe@spark.apache.org
For additional commands, e-mail: issues-help@spark.apache.org