Posted to user@spark.apache.org by Sai Prasanna <an...@gmail.com> on 2014/02/17 06:32:45 UTC

Mesos Scheduler

Hi Everybody,

I am trying to understand the reasoning behind the "resource offer" design
of Mesos, as opposed to the "resource request" model of YARN. What are the
potential advantages and disadvantages?

Can you also shed more light on the implementation? Say F1 and F2 are two
frameworks running on top of Mesos, and R1, R2, ..., R10 are the available
resources. Will Mesos offer R1...R10 to both F1 and F2 simultaneously, or
does it show 5 to each? And if all resources are shown to both F1 and F2,
and both of them request the same resource, is the tie broken arbitrarily,
or how is it resolved?

Awaiting your suggestions, guys!



-- 
*Sai Prasanna. AN*
*II M.Tech (CS), SSSIHL*


*Entire water in the ocean can never sink a ship, Unless it gets inside.
All the pressures of life can never hurt you, Unless you let them in.*

Re: Mesos Scheduler

Posted by deric <ba...@gmail.com>.
Hi Sai,

in this simple case Mesos will offer roughly 50% to each framework. Let's
say R1-R5 are offered to F1 and R6-R10 are offered to F2. While a resource
is offered to one framework, it won't be offered to the other framework
unless the offer is declined (or until a certain timeout expires, after
which it will be re-offered). So if F2 declines all of its offers, R6-R10
will be offered to F1.
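
To make that lifecycle concrete, here is a tiny self-contained Scala
sketch of the bookkeeping I just described. To be clear, this is not the
Mesos allocator or its API; every name in it (Offer, offerAll, decline,
the timeout value) is made up purely for illustration:

object OfferCycleSketch {
  case class Offer(resource: String, framework: String, madeAtMs: Long)

  val offerTimeoutMs = 5000L
  var free = List("R6", "R7", "R8", "R9", "R10") // resources nobody holds an offer for
  var outstanding = List.empty[Offer]            // offers made but not yet answered

  // Offer every currently free resource to one framework; while an offer
  // is outstanding, its resource is invisible to the other framework.
  def offerAll(framework: String, nowMs: Long): Unit = {
    outstanding = outstanding ++ free.map(r => Offer(r, framework, nowMs))
    free = Nil
  }

  // Declining returns the resource to the free pool, ready to be re-offered.
  def decline(o: Offer): Unit = {
    outstanding = outstanding.filterNot(_ == o)
    free = o.resource :: free
  }

  // Offers left unanswered too long are treated as declined.
  def expire(nowMs: Long): Unit =
    outstanding.filter(o => nowMs - o.madeAtMs > offerTimeoutMs).foreach(decline)

  def main(args: Array[String]): Unit = {
    offerAll("F2", nowMs = 0)
    println(s"free while F2 holds the offers: $free") // List() -- F1 sees nothing
    outstanding.foreach(decline)                      // F2 rejects everything
    offerAll("F1", nowMs = 1)
    println(s"now offered to F1: ${outstanding.map(_.resource)}")
  }
}

The thing to notice is that an outstanding offer acts like a short-lived
lock on the resource, which is exactly why unanswered offers have to time
out.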

Mesos uses the Dominant Resource Fairness (DRF) algorithm, which tries to
allocate different types of resources (CPU and RAM) fairly. So F1 might get
e.g. 80% of the CPUs and F2 80% of the RAM (if the frameworks have
different dominant resources). If you want to allocate 100% of the CPU and
100% of the RAM at the same time, that might never happen, which is a sort
of disadvantage. You should write your frameworks so that they can run on
a smaller portion of the resources.
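
If you are curious what DRF actually does, here is a minimal
self-contained Scala sketch of its core loop: always hand the next task to
the framework with the smallest dominant share. The cluster size and task
shapes are the illustrative numbers from the DRF paper (Ghodsi et al.,
NSDI 2011), not anything taken from Mesos itself:

object DrfSketch {
  case class Res(cpu: Double, mem: Double)

  val total = Res(9, 18)          // cluster capacity: 9 CPUs, 18 GB RAM
  val demand = Map(               // per-task demand of each framework
    "F1" -> Res(3, 1),            // CPU-heavy tasks
    "F2" -> Res(1, 4))            // memory-heavy tasks

  def main(args: Array[String]): Unit = {
    var used = Map("F1" -> Res(0, 0), "F2" -> Res(0, 0))
    var free = total

    // Dominant share = the larger of a framework's CPU and memory fractions.
    def dominantShare(f: String): Double = {
      val u = used(f)
      math.max(u.cpu / total.cpu, u.mem / total.mem)
    }

    // DRF loop: keep giving one task to the framework with the smallest
    // dominant share, until that framework's next task no longer fits.
    var fits = true
    while (fits) {
      val f = used.keys.minBy(dominantShare)
      val d = demand(f)
      if (d.cpu <= free.cpu && d.mem <= free.mem) {
        used = used.updated(f, Res(used(f).cpu + d.cpu, used(f).mem + d.mem))
        free = Res(free.cpu - d.cpu, free.mem - d.mem)
      } else fits = false
    }

    used.foreach { case (f, u) =>
      println(f"$f: ${u.cpu}%.0f/${total.cpu}%.0f CPUs, " +
        f"${u.mem}%.0f/${total.mem}%.0f GB, dominant share ${dominantShare(f)}%.2f")
    }
    println(f"left unallocated: ${free.cpu}%.0f CPUs, ${free.mem}%.0f GB")
  }
}

Running this, F1 ends up with 6 of 9 CPUs and F2 with 12 of 18 GB (equal
dominant shares of about 0.67), while 4 GB stay unallocated because F1's
next CPU-heavy task no longer fits -- exactly the "you may never reach
100%" effect I mentioned above.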

Tomas

--
View this message in context: http://apache-spark-user-list.1001560.n3.nabble.com/Mesos-Scheduler-tp1603p2344.html
Sent from the Apache Spark User List mailing list archive at Nabble.com.