Posted to user@spark.apache.org by Jon Chase <jo...@gmail.com> on 2014/12/19 21:57:01 UTC

Yarn not running as many executors as I'd like

Running on Amazon EMR with Yarn and Spark 1.1.1, I have trouble getting Yarn
to use the number of executors that I specify in spark-submit:

--num-executors 2

On a cluster with two core nodes, this typically results in only one executor
running at a time.  I can play with the memory settings and the number of
cores per executor (--executor-cores), and sometimes I can get 2 executors
running at once, but I'm not sure what the secret formula is to make this
happen consistently.
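
For context, here's roughly what my invocation looks like (the class and jar
names below are placeholders, not my real application; only the resource
flags matter):

    # Hypothetical invocation; class/jar names are placeholders.
    spark-submit \
      --master yarn-cluster \
      --num-executors 2 \
      --executor-memory 2g \
      --executor-cores 1 \
      --class com.example.MyApp \
      my-app.jar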

Re: Yarn not running as many executors as I'd like

Posted by Marcelo Vanzin <va...@cloudera.com>.
How many cores and how much memory do you have available per NodeManager,
and how much of each are you requesting for your job?
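
If you're not sure, per-NodeManager capacity is controlled by these
yarn-site.xml properties (the values below are only illustrative; check
what your EMR cluster actually uses):

    <!-- Illustrative values only, not a recommendation. -->
    <property>
      <name>yarn.nodemanager.resource.memory-mb</name>
      <value>6144</value>
    </property>
    <property>
      <name>yarn.nodemanager.resource.cpu-vcores</name>
      <value>4</value>
    </property>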

Remember that in Yarn mode, Spark launches "num executors + 1" containers.
The extra container is the YARN Application Master; by default it reserves
1 core and about 1g of memory (more if running in cluster mode and
specifying "--driver-memory").

-- 
Marcelo
