Posted to user@spark.apache.org by jamborta <ja...@gmail.com> on 2014/09/25 17:55:15 UTC

Yarn number of containers

Hi all,

I am running Spark with the default settings in YARN client mode. For some
reason YARN always allocates three containers to the application (where is
that set?), but only uses two of them.

Also, the CPUs on the cluster never go over 50%. I turned off the fair
scheduler and set spark.cores.max high. Are there some additional settings
I am missing?

Thanks,






Re: Yarn number of containers

Posted by jamborta <ja...@gmail.com>.
Thanks.


On Thu, Sep 25, 2014 at 10:25 PM, Marcelo Vanzin [via Apache Spark
User List] <ml...@n3.nabble.com> wrote:
> From spark-submit --help:
>
>  YARN-only:
>   --executor-cores NUM        Number of cores per executor (Default: 1).
>   --queue QUEUE_NAME          The YARN queue to submit to (Default: "default").
>   --num-executors NUM         Number of executors to launch (Default: 2).
>   --archives ARCHIVES         Comma separated list of archives to be
>                               extracted into the working directory of
>                               each executor.
>
> On Thu, Sep 25, 2014 at 2:20 PM, Tamas Jambor <[hidden email]> wrote:
>
>> Thank you.
>>
>> Where is the number of containers set?
>>
>> On Thu, Sep 25, 2014 at 7:17 PM, Marcelo Vanzin <[hidden email]> wrote:
>>> On Thu, Sep 25, 2014 at 8:55 AM, jamborta <[hidden email]> wrote:
>>>> I am running Spark with the default settings in YARN client mode. For
>>>> some reason YARN always allocates three containers to the application
>>>> (where is that set?), but only uses two of them.
>>>
>>> The default number of executors in YARN mode is 2, so you have 2
>>> executors plus the application master: 3 containers in total.
>>>
>>>> Also, the CPUs on the cluster never go over 50%. I turned off the fair
>>>> scheduler and set spark.cores.max high. Are there some additional
>>>> settings I am missing?
>>>
>>> You probably need to request more cores (--executor-cores). I don't
>>> remember whether that is respected in YARN, but it should be.
>>>
>>> --
>>> Marcelo
>
>
>
> --
> Marcelo





Re: Yarn number of containers

Posted by Marcelo Vanzin <va...@cloudera.com>.
From spark-submit --help:

 YARN-only:
  --executor-cores NUM        Number of cores per executor (Default: 1).
  --queue QUEUE_NAME          The YARN queue to submit to (Default: "default").
  --num-executors NUM         Number of executors to launch (Default: 2).
  --archives ARCHIVES         Comma separated list of archives to be
                              extracted into the working directory of
                              each executor.
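
For example, a submission that sets these flags explicitly might look like
the sketch below (the jar name and the resource numbers are placeholders,
not recommendations):

  # 4 executors at 4 cores each; YARN adds one more container
  # for the application master, so 5 containers in total
  spark-submit \
    --master yarn-client \
    --num-executors 4 \
    --executor-cores 4 \
    your-app.jar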

On Thu, Sep 25, 2014 at 2:20 PM, Tamas Jambor <ja...@gmail.com> wrote:
> Thank you.
>
> Where is the number of containers set?
>
> On Thu, Sep 25, 2014 at 7:17 PM, Marcelo Vanzin <va...@cloudera.com> wrote:
>> On Thu, Sep 25, 2014 at 8:55 AM, jamborta <ja...@gmail.com> wrote:
>>> I am running Spark with the default settings in YARN client mode. For some
>>> reason YARN always allocates three containers to the application (where is
>>> that set?), but only uses two of them.
>>
>> The default number of executors in YARN mode is 2, so you have 2
>> executors plus the application master: 3 containers in total.
>>
>>> Also, the CPUs on the cluster never go over 50%. I turned off the fair
>>> scheduler and set spark.cores.max high. Are there some additional settings
>>> I am missing?
>>
>> You probably need to request more cores (--executor-cores). I don't
>> remember whether that is respected in YARN, but it should be.
>>
>> --
>> Marcelo



-- 
Marcelo



Re: Yarn number of containers

Posted by Tamas Jambor <ja...@gmail.com>.
Thank you.

Where is the number of containers set?

On Thu, Sep 25, 2014 at 7:17 PM, Marcelo Vanzin <va...@cloudera.com> wrote:
> On Thu, Sep 25, 2014 at 8:55 AM, jamborta <ja...@gmail.com> wrote:
>> I am running Spark with the default settings in YARN client mode. For some
>> reason YARN always allocates three containers to the application (where is
>> that set?), but only uses two of them.
>
> The default number of executors in YARN mode is 2, so you have 2
> executors plus the application master: 3 containers in total.
>
>> Also, the CPUs on the cluster never go over 50%. I turned off the fair
>> scheduler and set spark.cores.max high. Are there some additional settings
>> I am missing?
>
> You probably need to request more cores (--executor-cores). I don't
> remember whether that is respected in YARN, but it should be.
>
> --
> Marcelo



Re: Yarn number of containers

Posted by Marcelo Vanzin <va...@cloudera.com>.
On Thu, Sep 25, 2014 at 8:55 AM, jamborta <ja...@gmail.com> wrote:
> I am running Spark with the default settings in YARN client mode. For some
> reason YARN always allocates three containers to the application (where is
> that set?), but only uses two of them.

The default number of executors in YARN mode is 2, so you have 2
executors plus the application master: 3 containers in total.
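
As a rough sketch (the jar name is a placeholder), requesting four
executors would therefore give you five containers:

  # 4 executors + 1 application master = 5 containers
  spark-submit --master yarn-client --num-executors 4 your-app.jar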

> Also, the CPUs on the cluster never go over 50%. I turned off the fair
> scheduler and set spark.cores.max high. Are there some additional settings
> I am missing?

You probably need to request more cores (--executor-cores). I don't
remember whether that is respected in YARN, but it should be.
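
For instance (again with a placeholder jar name):

  # each of the default 2 executors gets 4 cores instead of 1
  spark-submit --master yarn-client --executor-cores 4 your-app.jar

Note that spark.cores.max is only read by the standalone and Mesos
coarse-grained backends, not by YARN, which would explain why setting it
had no visible effect here.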

-- 
Marcelo
