Posted to dev@spark.apache.org by Bi Linfeng <Li...@outlook.com> on 2016/09/22 11:11:29 UTC

A Spark resource scheduling order question

Hi,
I have a question about Spark's resource scheduling order, prompted by reading this code:

github.com/apache/spark/blob/master/core/src/main/scala/org/apache/spark/deploy/master/Master.scala

In the schedule() function, Spark starts drivers first and then starts executors.
I'm wondering why we schedule in this order. Won't resources be wasted if a driver has been started but there are no resources left for its executors?
Why don't we first start executors for the drivers that are already running?
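
For context, here is a heavily simplified, self-contained sketch of the ordering I see in schedule(). The names launchDriver and startExecutorsOnWorkers follow the source, but the types are stubbed for illustration, and the real method also shuffles the alive workers and checks the master's recovery state:

    // Simplified sketch of the ordering in Master.schedule();
    // types are stubbed, details omitted.
    object SchedulingOrderSketch {
      case class DriverDesc(mem: Int, cores: Int)
      case class DriverInfo(desc: DriverDesc)
      class WorkerInfo(var memoryFree: Int, var coresFree: Int)

      val workers = scala.collection.mutable.ArrayBuffer.empty[WorkerInfo]
      val waitingDrivers = scala.collection.mutable.ArrayBuffer.empty[DriverInfo]

      def launchDriver(worker: WorkerInfo, driver: DriverInfo): Unit = {
        // Reserve the driver's resources on the chosen worker.
        worker.memoryFree -= driver.desc.mem
        worker.coresFree -= driver.desc.cores
      }

      def startExecutorsOnWorkers(): Unit = {
        // Allocate executors for waiting apps out of whatever
        // resources the drivers left free (details omitted).
      }

      def schedule(): Unit = {
        // Step 1: place each waiting driver on a worker with enough
        // free memory and cores.
        for (driver <- waitingDrivers.toList) {
          workers.find(w =>
            w.memoryFree >= driver.desc.mem && w.coresFree >= driver.desc.cores
          ).foreach { worker =>
            launchDriver(worker, driver)
            waitingDrivers -= driver
          }
        }
        // Step 2: only after all drivers are placed, hand the
        // remaining resources to executors.
        startExecutorsOnWorkers()
      }
    }

So in this sketch, a driver can claim a worker's memory and cores in step 1 even when step 2 then finds nothing left for that application's executors, which is exactly the situation I'm asking about.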

Thanks,


Linfeng