Posted to dev@spark.apache.org by Jacek Laskowski <ja...@japila.pl> on 2016/06/23 21:41:54 UTC

Does CoarseGrainedSchedulerBackend care about cores only? And disregard memory?

Hi,

After reviewing makeOffers and launchTasks in
CoarseGrainedSchedulerBackend, I came to the following conclusion:

Scheduling in Spark relies on cores only (not memory), i.e. the number
of tasks Spark can run on an executor is constrained only by the number
of cores available. When submitting a Spark application for execution,
both memory and cores can be specified explicitly.
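To illustrate my reading, here is a minimal, self-contained sketch of what
makeOffers/resourceOffers effectively do. It does not use Spark's actual
classes; Offer stands in for WorkerOffer and cpusPerTask for spark.task.cpus.
The point is that an offer carries only a core count, and tasks are launched
as long as enough cores remain, so memory never enters the decision:

    // Simplified stand-in for the cores-only offer/launch logic.
    object CoresOnlySchedulingSketch {

      // Hypothetical stand-in for WorkerOffer: no memory field, only cores.
      final case class Offer(executorId: String, host: String, cores: Int)

      // Corresponds to spark.task.cpus (1 by default).
      val cpusPerTask = 1

      // Assign as many pending tasks as the offered cores allow.
      def resourceOffers(offers: Seq[Offer], pendingTasks: Int): Map[String, Int] = {
        var remaining = pendingTasks
        offers.map { offer =>
          val launched = math.min(offer.cores / cpusPerTask, remaining)
          remaining -= launched
          offer.executorId -> launched
        }.toMap
      }

      def main(args: Array[String]): Unit = {
        val offers = Seq(Offer("exec-1", "host-a", 4), Offer("exec-2", "host-b", 2))
        // 10 pending tasks but only 6 cores in total => 6 tasks launched,
        // regardless of how much memory the executors have.
        println(resourceOffers(offers, pendingTasks = 10))
      }
    }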

Would you agree? Am I missing anything important?

I was very surprised when I found this out, as I thought memory
would also be a limiting factor.

Pozdrawiam,
Jacek Laskowski
----
https://medium.com/@jaceklaskowski/
Mastering Apache Spark http://bit.ly/mastering-apache-spark
Follow me at https://twitter.com/jaceklaskowski

---------------------------------------------------------------------
To unsubscribe, e-mail: dev-unsubscribe@spark.apache.org
For additional commands, e-mail: dev-help@spark.apache.org