Posted to user@spark.apache.org by Gerard Maas <ge...@gmail.com> on 2014/12/09 15:20:02 UTC

Specifying number of executors in Mesos

Hi,

We have a number of Spark Streaming/Kafka jobs that would benefit from an even
spread of consumers over physical hosts in order to maximize network usage.
As far as I can see, the Spark Mesos scheduler accepts resource offers
until all required Mem + CPU allocation has been satisfied.

This basic resource allocation policy leads to a few large executors
concentrated on a small number of nodes, which places many Kafka consumers on
a single node (e.g. out of 12 consumers, I've seen allocations of 7/3/2).

Is there a way to tune this behavior to achieve executor allocation on a
given number of hosts?

-kr, Gerard.

Re: Specifying number of executors in Mesos

Posted by Andrew Ash <an...@andrewash.com>.
Gerard,

Are you familiar with spark.deploy.spreadOut
<http://spark.apache.org/docs/latest/spark-standalone.html> in Standalone
mode?  It sounds like you want the same thing in Mesos mode.
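
For reference, a minimal sketch of how that knob is set in Standalone mode
(it is a master-side property, typically passed via SPARK_MASTER_OPTS; it has
no effect on Mesos):

```shell
# spark-env.sh on the Standalone master (sketch -- this setting exists only
# in Standalone mode, not Mesos).
# spreadOut=true (the default) scatters an application's executors across as
# many workers as possible; false packs them onto as few workers as possible.
export SPARK_MASTER_OPTS="-Dspark.deploy.spreadOut=true"
```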


Re: Specifying number of executors in Mesos

Posted by Tathagata Das <ta...@gmail.com>.
Not that I am aware of. Spark will try to spread the tasks evenly
across executors; it's not aware of the workers at all. So if the
executor-to-worker allocation is uneven, I am not sure what can be
done. Maybe others can get some ideas.
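
For what it's worth, the settings that influence executor placement on Mesos
in this era are coarse-grained mode and the total core cap. A sketch (the
master URL and core count here are illustrative assumptions, not from the
thread):

```shell
# spark-submit flags (sketch). In coarse-grained mode, Mesos launches at most
# one Spark executor per slave, so capping the total cores with
# spark.cores.max limits how many cores can land on any single host -- it
# does not, however, guarantee an even spread across hosts.
spark-submit \
  --master mesos://zk://zk-host:2181/mesos \
  --conf spark.mesos.coarse=true \
  --conf spark.cores.max=12 \
  my-streaming-job.jar
```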


---------------------------------------------------------------------
To unsubscribe, e-mail: user-unsubscribe@spark.apache.org
For additional commands, e-mail: user-help@spark.apache.org