Posted to issues@spark.apache.org by "Pascal GILLET (JIRA)" <ji...@apache.org> on 2018/02/28 16:55:00 UTC

[jira] [Comment Edited] (SPARK-23499) Mesos Cluster Dispatcher should support priority queues to submit drivers

    [ https://issues.apache.org/jira/browse/SPARK-23499?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16380652#comment-16380652 ] 

Pascal GILLET edited comment on SPARK-23499 at 2/28/18 4:54 PM:
----------------------------------------------------------------

Below is a screenshot of the MesosClusterDispatcher UI showing the Spark jobs along with the queue to which they were submitted:

 

!Screenshot from 2018-02-28 17-22-47.png!

> Mesos Cluster Dispatcher should support priority queues to submit drivers
> -------------------------------------------------------------------------
>
>                 Key: SPARK-23499
>                 URL: https://issues.apache.org/jira/browse/SPARK-23499
>             Project: Spark
>          Issue Type: Improvement
>          Components: Mesos
>    Affects Versions: 2.2.1, 2.2.2, 2.3.0, 2.3.1, 2.4.0
>            Reporter: Pascal GILLET
>            Priority: Major
>             Fix For: 2.4.0
>
>         Attachments: Screenshot from 2018-02-28 17-22-47.png
>
>
> As with YARN, Mesos users should be able to specify priority queues to define a workload management policy for queued drivers in the Mesos Cluster Dispatcher.
> Submitted drivers are *currently* kept in order of their submission: the first driver added to the queue will be the first one to be executed (FIFO).
> Each driver could have a "priority" associated with it. A driver with a high priority is served (i.e. offered Mesos resources) before a driver with a low priority. If two drivers have the same priority, they are served according to their submission date in the queue.
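> As an illustration only (not part of the proposal itself), the intended ordering could be sketched in Scala as follows, assuming a hypothetical QueuedDriver holder:
>  import java.util.Date
>  import scala.collection.mutable
>
>  // Illustrative only: a queued driver carrying the proposed "priority" attribute
>  case class QueuedDriver(id: String, priority: Float, submissionDate: Date)
>
>  // Highest priority first; equal priorities fall back to FIFO (earliest submission first)
>  val driverOrdering: Ordering[QueuedDriver] =
>    Ordering.by((d: QueuedDriver) => (d.priority, -d.submissionDate.getTime))
>
>  // mutable.PriorityQueue dequeues the greatest element under the given ordering
>  val queue = mutable.PriorityQueue.empty[QueuedDriver](driverOrdering)
>  queue.enqueue(QueuedDriver("driver-001", 0.0f, new Date()))
>  queue.enqueue(QueuedDriver("driver-002", 1.0f, new Date()))
>  assert(queue.dequeue().id == "driver-002")  // the higher-priority driver is served first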
> To set up such priority queues, the following changes are proposed:
>  * The Mesos Cluster Dispatcher can optionally be configured with one or more _spark.mesos.dispatcher.queue.[QueueName]_ properties. Each such property takes a float as its value and declares a new queue named _QueueName_ with the specified priority for submitted drivers.
>  Higher numbers indicate higher priority.
>  The user can thus declare multiple queues.
>  * A driver can be submitted to a specific queue with _spark.mesos.dispatcher.queue_. The value of this property must be the name of a queue previously declared in the dispatcher.
> By default, the dispatcher has a single "default" queue with 0.0 priority (cannot be overridden). If none of the properties above are specified, the behavior is the same as the current one (i.e. simple FIFO).
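> For illustration, a minimal Scala sketch (not the actual dispatcher code) of how the proposed properties could be collected from a SparkConf into a queue-to-priority table:
>  import org.apache.spark.SparkConf
>
>  val conf = new SparkConf()
>  val prefix = "spark.mesos.dispatcher.queue."
>  // Collect the proposed spark.mesos.dispatcher.queue.[QueueName] properties
>  val declaredQueues: Map[String, Float] = conf.getAll.collect {
>    case (key, value) if key.startsWith(prefix) => key.stripPrefix(prefix) -> value.toFloat
>  }.toMap
>  // The built-in "default" queue always has priority 0.0 and cannot be overridden
>  val queues = declaredQueues.updated("default", 0.0f)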
> Additionally, a consistent and overall workload management policy can be implemented throughout the lifecycle of drivers (i.e. from the QUEUED state in the dispatcher to the final states in the Mesos cluster) by mapping these priority queues to weighted Mesos roles, if any, and by specifying a _spark.mesos.role_ along with a _spark.mesos.dispatcher.queue_ when submitting an application.
> For example, with the URGENT Mesos role:
>  # Conf on the dispatcher side
>  spark.mesos.dispatcher.queue.URGENT=1.0
>  # Conf on the driver side
>  spark.mesos.dispatcher.queue=URGENT
>  spark.mesos.role=URGENT
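> The driver-side settings above could then be passed at submission time, e.g. (hypothetical invocation; the dispatcher host/port and application details are placeholders):
>  spark-submit \
>    --master mesos://dispatcher-host:7077 \
>    --deploy-mode cluster \
>    --conf spark.mesos.dispatcher.queue=URGENT \
>    --conf spark.mesos.role=URGENT \
>    <application-jar> [application-arguments]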



