Posted to issues@spark.apache.org by "Pascal GILLET (JIRA)" <ji...@apache.org> on 2017/09/26 10:18:00 UTC

[jira] [Comment Edited] (SPARK-19606) Support constraints in spark-dispatcher

    [ https://issues.apache.org/jira/browse/SPARK-19606?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16076510#comment-16076510 ] 

Pascal GILLET edited comment on SPARK-19606 at 9/26/17 10:17 AM:
-----------------------------------------------------------------

+1 but with 'spark.mesos.dispatcher.driverDefault.spark.mesos.constraints'!

I tested the patch and it works well!
As stated originally, the 'spark.mesos.constraints' property is ignored by the Spark Dispatcher.
As a consequence, the Mesos slave where the Spark driver runs does not comply with the given Mesos constraints; on the other hand, the constraints are correctly applied to the Spark executors (without the need to patch anything).
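
For illustration, here is how the constraints are typically passed today (a sketch only; the dispatcher address, the jar, the job class and the 'node_type:worker' attribute are placeholders). The property reaches the executors but is silently dropped for the driver's placement:

    # Submit through the dispatcher in cluster mode; the constraint below
    # is honored for the executors only -- the driver may land on any agent.
    spark-submit \
      --master mesos://dispatcher-host:7077 \
      --deploy-mode cluster \
      --conf spark.mesos.constraints="node_type:worker" \
      --class org.example.MyJob myjob.jar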

*BUT* we do not necessarily want to apply the same Mesos constraints to the driver and to the executors.
For instance, we may need to run Spark drivers and executors on two mutually exclusive types of Mesos slaves:
- The dispatcher is given Mesos resources only for drivers
- Once a driver is launched, it becomes a Mesos framework itself and is responsible for reserving resources for its executors
- If we schedule too many jobs on a Mesos cluster through the dispatcher, the whole cluster can end up allocated to drivers, leaving no resources for the executors. A driver may be launched but then wait indefinitely for executor resources, which leads to congestion and then to a deadlock.
- A solution to work around this problem is to *not* mix the drivers and the executors on the same machines, by passing different Mesos constraints for the driver and for the executors (see the sketch after this list).
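
To make this concrete, the two exclusive pools can be carved out with Mesos agent attributes (a sketch; the 'spark_role' attribute and the ZooKeeper address are hypothetical, not something prescribed by Spark or Mesos):

    # Agents reserved for Spark drivers
    mesos-agent --master=zk://zk-host:2181/mesos --attributes="spark_role:driver" ...
    # Agents reserved for Spark executors
    mesos-agent --master=zk://zk-host:2181/mesos --attributes="spark_role:executor" ...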

The 'spark.mesos.constraints' property still applies to executors. As for the drivers, the generic 'spark.mesos.dispatcher.driverDefault.[PropertyName]' property seems ideal: by definition, it allows setting default properties for drivers submitted through the dispatcher.

I propose revising this patch to use 'spark.mesos.dispatcher.driverDefault.spark.mesos.constraints' instead of 'spark.mesos.constraints'.
What do you guys think?
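
Under the revised patch, the usage could look like this (a sketch only; it assumes the patch reads the driverDefault-prefixed property, and it reuses the hypothetical 'spark_role' attribute from above). The dispatcher is started with a driver-side constraint as a default for every submitted driver, via its --properties-file option:

    # conf/dispatcher.properties (hypothetical file name):
    spark.mesos.dispatcher.driverDefault.spark.mesos.constraints  spark_role:driver

    ./sbin/start-mesos-dispatcher.sh \
      --master mesos://zk://zk-host:2181/mesos \
      --properties-file conf/dispatcher.properties

    # Jobs then keep their own, executor-side constraint:
    spark-submit \
      --master mesos://dispatcher-host:7077 \
      --deploy-mode cluster \
      --conf spark.mesos.constraints="spark_role:executor" \
      --class org.example.MyJob myjob.jar

This way the driver can only be placed on 'spark_role:driver' agents while its executors are confined to 'spark_role:executor' agents, so the two workloads never compete for the same machines.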


was (Author: pgillet):
+1
We need to run Spark drivers and executors on two mutually exclusive types of Mesos slaves through a Mesos constraint.
What about the spark.mesos.dispatcher.driverDefault.spark.mesos.constraints property?

> Support constraints in spark-dispatcher
> ---------------------------------------
>
>                 Key: SPARK-19606
>                 URL: https://issues.apache.org/jira/browse/SPARK-19606
>             Project: Spark
>          Issue Type: New Feature
>          Components: Mesos
>    Affects Versions: 2.1.0
>            Reporter: Philipp Hoffmann
>
> The `spark.mesos.constraints` configuration is ignored by the spark-dispatcher. The constraints need to be passed in the Framework information when registering with Mesos.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

---------------------------------------------------------------------
To unsubscribe, e-mail: issues-unsubscribe@spark.apache.org
For additional commands, e-mail: issues-help@spark.apache.org