Posted to issues@spark.apache.org by "t oo (JIRA)" <ji...@apache.org> on 2019/06/12 14:46:00 UTC

[jira] [Updated] (SPARK-27750) Standalone scheduler - ability to prioritize applications over drivers, many drivers act like Denial of Service

     [ https://issues.apache.org/jira/browse/SPARK-27750?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

t oo updated SPARK-27750:
-------------------------
    Description: 
If I submit 1,000 spark-submit drivers, they consume all the cores on my cluster (essentially acting like a Denial of Service) and no Spark 'application' gets to run, since the cores are all consumed by the 'drivers'. This feature request is for the ability to prioritize applications over drivers so that at least some 'applications' can start running. I guess the rule would be something like: if (driver.state = 'submitted' and there exists some app with app.state = 'submitted') then schedule that app first, i.e. set app.state = 'running';

and only once all apps have app.state = 'running' should the 'submitted' drivers be scheduled.
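
For illustration, here is a minimal, self-contained Scala sketch of such an "applications before drivers" policy. It is not the actual Master.schedule() implementation; WaitingApp, WaitingDriver, freeCores and the single-pool core accounting are hypothetical stand-ins for the standalone Master's internal bookkeeping.

object AppsBeforeDriversSketch {

  final case class WaitingDriver(id: String, coresNeeded: Int)
  final case class WaitingApp(id: String, coresNeeded: Int)

  /** Grant cores to waiting apps first; waiting drivers only receive
   *  whatever is left over, so a flood of queued drivers cannot
   *  starve every application of cores. */
  def schedule(
      freeCores: Int,
      waitingApps: List[WaitingApp],
      waitingDrivers: List[WaitingDriver]): (List[WaitingApp], List[WaitingDriver]) = {
    var remaining = freeCores

    // Pass 1: applications claim cores first.
    val launchedApps = waitingApps.filter { app =>
      val fits = app.coresNeeded <= remaining
      if (fits) remaining -= app.coresNeeded
      fits
    }

    // Pass 2: drivers get the leftovers only.
    val launchedDrivers = waitingDrivers.filter { drv =>
      val fits = drv.coresNeeded <= remaining
      if (fits) remaining -= drv.coresNeeded
      fits
    }

    (launchedApps, launchedDrivers)
  }

  def main(args: Array[String]): Unit = {
    val drivers = List.tabulate(1000)(i => WaitingDriver(s"driver-$i", 1))
    val apps    = List(WaitingApp("app-1", 4))
    val (launchedApps, launchedDrivers) = schedule(8, apps, drivers)
    println(s"apps launched:    ${launchedApps.map(_.id)}")  // List(app-1)
    println(s"drivers launched: ${launchedDrivers.size}")    // 4, not 8
  }
}

With a drivers-first order, as the scenario above describes, all 8 cores would go to the first 8 of the 1,000 queued drivers and app-1 would wait indefinitely; with apps scheduled first, the application gets its 4 cores and only the remaining 4 go to drivers.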

 

Secondary to this, why must a driver consume a minimum of 1 entire core?
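
On the one-core question: spark-submit's --driver-cores option takes a whole number (default: 1), and the standalone Master tracks worker cores as integers, so one core is the smallest unit it can reserve. The sketch below contrasts that whole-core grain with a hypothetical millicore grain (Kubernetes-style); millicore accounting is purely an assumption for illustration, not a standalone-mode feature.

object DriverCoreGrainSketch {

  // Today's model: whole-integer cores, so a driver reserves at least
  // one full core even if it is mostly idle waiting on its executors.
  def reserveWholeCores(freeCores: Int, requested: Int): Option[Int] =
    if (requested >= 1 && requested <= freeCores) Some(freeCores - requested)
    else None

  // Hypothetical finer grain: 1 core = 1000 millicores. Illustrative only.
  def reserveMillicores(freeMilli: Int, requestedMilli: Int): Option[Int] =
    if (requestedMilli >= 1 && requestedMilli <= freeMilli) Some(freeMilli - requestedMilli)
    else None

  def main(args: Array[String]): Unit = {
    // An 8-core worker fits at most 8 one-core drivers today.
    println(reserveWholeCores(freeCores = 8, requested = 1))               // Some(7)
    // At a hypothetical 100 millicores per driver, it could host 80.
    println(reserveMillicores(freeMilli = 8 * 1000, requestedMilli = 100)) // Some(7900)
  }
}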

> Standalone scheduler - ability to prioritize applications over drivers, many drivers act like Denial of Service
> ---------------------------------------------------------------------------------------------------------------
>
>                 Key: SPARK-27750
>                 URL: https://issues.apache.org/jira/browse/SPARK-27750
>             Project: Spark
>          Issue Type: New Feature
>          Components: Scheduler
>    Affects Versions: 2.3.3, 2.4.3
>            Reporter: t oo
>            Priority: Minor



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

---------------------------------------------------------------------
To unsubscribe, e-mail: issues-unsubscribe@spark.apache.org
For additional commands, e-mail: issues-help@spark.apache.org