Posted to issues@spark.apache.org by "Pascal GILLET (JIRA)" <ji...@apache.org> on 2018/02/23 17:48:00 UTC

[jira] [Created] (SPARK-23499) Mesos Cluster Dispatcher should support priority queues to submit drivers

Pascal GILLET created SPARK-23499:
-------------------------------------

             Summary: Mesos Cluster Dispatcher should support priority queues to submit drivers
                 Key: SPARK-23499
                 URL: https://issues.apache.org/jira/browse/SPARK-23499
             Project: Spark
          Issue Type: Improvement
          Components: Mesos
    Affects Versions: 2.2.1, 2.2.2, 2.3.0, 2.3.1, 2.4.0
            Reporter: Pascal GILLET
             Fix For: 2.4.0


As with YARN, Mesos users should be able to specify priority queues that define a workload management policy for drivers queued in the Mesos Cluster Dispatcher.

Submitted drivers are *currently* kept in order of their submission: the first driver added to the queue will be the first one to be executed (FIFO).

Each driver could have a "priority" associated with it. A driver with a higher priority is offered Mesos resources before a driver with a lower priority. If two drivers have the same priority, they are served in order of their submission date.
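
For illustration, the proposed ordering could look like the following Scala sketch (the QueuedDriver case class and its fields are hypothetical, not the actual dispatcher data structures):

object PriorityOrderingSketch extends App {
  // Hypothetical representation of a queued driver (illustration only,
  // not the actual dispatcher data structure).
  case class QueuedDriver(id: String, priority: Float, submitDateMs: Long)

  // Higher priority is served first; ties are broken by the earlier submission date.
  val driverOrdering: Ordering[QueuedDriver] =
    Ordering.by((d: QueuedDriver) => (-d.priority, d.submitDateMs))

  val queued = Seq(
    QueuedDriver("driver-1", 0.0f, 1000L),  // "default" priority
    QueuedDriver("driver-2", 1.0f, 2000L),  // higher priority
    QueuedDriver("driver-3", 1.0f, 3000L))  // same priority, submitted later

  // Prints: List(driver-2, driver-3, driver-1)
  println(queued.sorted(driverOrdering).map(_.id))
}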

To set up such priority queues, the following changes are proposed:
 * The Mesos Cluster Dispatcher can optionally be configured with the _spark.mesos.dispatcher.queue.[QueueName]_ property. This property takes a float as its value and adds a new queue named _QueueName_ for submitted drivers, with the specified priority.
Higher numbers indicate higher priority.
The user can declare multiple queues by repeating this property with different queue names.
 * A driver can be submitted to a specific queue with _spark.mesos.dispatcher.queue_. This property takes as its value the name of a queue previously declared in the dispatcher.

By default, the dispatcher has a single "default" queue with priority 0.0 (this cannot be overridden). If none of the properties above is specified, the behavior is the same as today (i.e. simple FIFO).
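
For instance, a dispatcher could declare several queues in addition to the built-in "default" one (the queue names and priorities below are made up for illustration):

# Conf on the dispatcher side
spark.mesos.dispatcher.queue.HIGH=2.0
spark.mesos.dispatcher.queue.LOW=0.5
# plus the implicit "default" queue with priority 0.0

A driver submitted with spark.mesos.dispatcher.queue=HIGH would then be offered resources before drivers waiting in the LOW or "default" queues.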


Additionally, a consistent, end-to-end workload management policy can be applied throughout the lifecycle of drivers (i.e. from the QUEUED state in the dispatcher to the final states in the Mesos cluster) by mapping these priority queues to weighted Mesos roles, if any, and by specifying a spark.mesos.role along with a spark.mesos.dispatcher.queue when submitting an application.


For example, with the URGENT Mesos role:
# Conf on the dispatcher side
spark.mesos.dispatcher.queue.URGENT=1.0

# Conf on the driver side
spark.mesos.dispatcher.queue=URGENT
spark.mesos.role=URGENT
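
A submission combining both properties could then look like the following spark-submit invocation (the master URL, main class and application jar are placeholders):

spark-submit \
  --master mesos://dispatcher-host:7077 \
  --deploy-mode cluster \
  --conf spark.mesos.dispatcher.queue=URGENT \
  --conf spark.mesos.role=URGENT \
  --class com.example.UrgentJob \
  http://example.com/urgent-job.jar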


