Posted to user@spark.apache.org by canan chen <cc...@gmail.com> on 2015/08/18 11:35:04 UTC

Why doesn't standalone mode allow setting --num-executors?

--num-executors only works in YARN mode. In standalone mode, I have to
set --total-executor-cores and --executor-cores instead. Isn't that
unintuitive? Is there a reason for it?
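
For illustration (master URL, class name, and jar are placeholders),
roughly what I run today in standalone mode versus what I would like,
as on YARN:

    # Standalone today: the executor count is only implied by the core
    # settings (8 total cores / 2 cores each => 4 executors).
    spark-submit --master spark://master:7077 \
      --total-executor-cores 8 \
      --executor-cores 2 \
      --class com.example.MyApp my-app.jar

    # YARN: the executor count is stated directly.
    spark-submit --master yarn \
      --num-executors 4 \
      --executor-cores 2 \
      --class com.example.MyApp my-app.jar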

Re: Why doesn't standalone mode allow setting --num-executors?

Posted by Andrew Or <an...@databricks.com>.
Hi Canan,

This is mainly for legacy reasons. The default behavior in standalone
mode is that the application grabs all available resources in the
cluster. This effectively means one executor per worker, where each
executor grabs all the available cores and memory on that worker. In
this model it doesn't really make sense to express the number of
executors, because it is always equal to the number of workers.
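
A minimal sketch of that default (placeholder master URL and
application): with no core settings at all, the submission below grabs
every core in the cluster, one executor per worker.

    # No resource flags: the app claims all available cores and memory,
    # launching exactly one executor on each worker.
    spark-submit --master spark://master:7077 \
      --class com.example.MyApp my-app.jar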

In 1.4+, however, we do support multiple executors per worker. Since
that is not the default, though, we decided not to add support for the
--num-executors setting, to avoid potential confusion.
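
For example (hypothetical numbers, same placeholder app): on a cluster
of 8-core workers, capping the app at 16 cores in 2-core slices yields
16 / 2 = 8 executors, so one worker can host up to 4 of them.

    # Spark 1.4+ standalone: an --executor-cores value below a worker's
    # core count allows multiple executors per worker.
    spark-submit --master spark://master:7077 \
      --total-executor-cores 16 \
      --executor-cores 2 \
      --class com.example.MyApp my-app.jar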

-Andrew


2015-08-18 2:35 GMT-07:00 canan chen <cc...@gmail.com>:

> --num-executors only works in YARN mode. In standalone mode, I have to
> set --total-executor-cores and --executor-cores instead. Isn't that
> unintuitive? Is there a reason for it?
>