Posted to user@spark.apache.org by praveen S <my...@gmail.com> on 2015/08/17 12:34:57 UTC

Meaning of local[2]

What does this mean in .setMaster("local[2]")?

Is this applicable only for standalone mode?

Can I do this in a cluster setup, e.g.
.setMaster("<hostname:port>[2]")?

Is it the number of threads per worker node?

Re: Meaning of local[2]

Posted by Akhil Das <ak...@sigmoidanalytics.com>.
Just to add: you can also look into the SPARK_WORKER_INSTANCES configuration in
the spark-env.sh file.
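
For reference, a minimal spark-env.sh sketch for a standalone cluster (the
values here are illustrative, not recommendations):

export SPARK_WORKER_INSTANCES=2   # number of worker JVMs to start per machine
export SPARK_WORKER_CORES=4       # cores each worker instance is allowed to use

Together these bound how much parallelism each node can offer to executors.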

Re: Meaning of local[2]

Posted by Daniel Darabos <da...@lynxanalytics.com>.
Hi Praveen,

On Mon, Aug 17, 2015 at 12:34 PM, praveen S <my...@gmail.com> wrote:

> What does this mean in .setMaster("local[2]")?
>
Local mode (the executor runs in the same JVM as the driver) with 2 executor
threads.
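
For reference, a minimal self-contained sketch (the app name and the toy job
are illustrative):

import org.apache.spark.{SparkConf, SparkContext}

// "local[2]" runs Spark in this JVM with 2 worker threads;
// "local[*]" would use one thread per available core instead.
val conf = new SparkConf()
  .setAppName("LocalExample")   // hypothetical app name
  .setMaster("local[2]")
val sc = new SparkContext(conf)
println(sc.parallelize(1 to 100).sum())  // tiny job that can use both threads
sc.stop()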

> Is this applicable only for standalone mode?
>
It is not applicable to standalone mode, only to local mode.

> Can I do this in a cluster setup, e.g.
> .setMaster("<hostname:port>[2]")?
>
No. It's faster to try than to ask a mailing list, actually. Also, it's
documented at
http://spark.apache.org/docs/latest/submitting-applications.html#master-urls.
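
For contrast, a minimal sketch of connecting to a standalone cluster (hostname
is a placeholder; 7077 is the default master port). There is no [N] suffix on
cluster master URLs; to bound cores you would set spark.cores.max instead:

import org.apache.spark.{SparkConf, SparkContext}

// Standalone master URL; note: no [N] suffix on cluster URLs.
val conf = new SparkConf()
  .setAppName("ClusterExample")          // hypothetical app name
  .setMaster("spark://hostname:7077")    // standalone master URL
  .set("spark.cores.max", "2")           // cap total cores for this app
val sc = new SparkContext(conf)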

> Is it the number of threads per worker node?
>
You can control the total number of cores across all executors with
spark-submit's --total-executor-cores parameter, if that's what you're
looking for.
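
For example, a hypothetical submission (the master host and my-app.jar are
placeholders) capping the application at 8 cores across the whole standalone
cluster:

spark-submit --master spark://hostname:7077 --total-executor-cores 8 my-app.jar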