Posted to user@spark.apache.org by Sanjeev Verma <sa...@gmail.com> on 2016/01/14 19:17:51 UTC

strange behavior in spark yarn-client mode

I am seeing some strange behaviour while running Spark in yarn-client mode. I
am observing this on a single-node YARN cluster. In spark-defaults.conf I have
configured the executor memory as 2g and started the spark shell as follows:

bin/spark-shell --master yarn-client

which triggers 2 executors on the node, each showing 1060MB of memory. I was
able to figure out that if you don't specify --num-executors, it will spawn 2
executors on the node by default.
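
For reference, the relevant entry in conf/spark-defaults.conf looks roughly
like this (a sketch, not my exact file):

# executor heap size; YARN adds spark.yarn.executor.memoryOverhead on top of this
spark.executor.memory 2g
# spark.executor.instances is not set here; it defaults to 2 on YARN, and
# --num-executors on the command line is a shorthand for it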


Now when I run it again with

bin/spark-shell --master yarn-client --num-executors 1

Now it spawns a single executor, still showing 1060MB. I am not able to
understand why the executor gets 1G + overhead instead of the 2G I
specified.

Why am I seeing this strange behavior?

Re: strange behavior in spark yarn-client mode

Posted by Marcelo Vanzin <va...@cloudera.com>.
Please reply to the list.

The web UI does not show the total size of the executor's heap. It
shows the amount of memory available for caching data, which is, give
or take, 60% of the heap by default.
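
As a rough sketch of where the 1060MB figure comes from, assuming the legacy
memory manager defaults (spark.storage.memoryFraction = 0.6 and
spark.storage.safetyFraction = 0.9; the exact formula can vary by version):

// Back-of-the-envelope in plain Scala. Runtime.maxMemory with -Xmx2g is a
// bit under 2048MB because the JVM excludes one survivor space from the
// reported maximum.
val executorHeap = 1960L * 1024 * 1024                // ~Runtime.maxMemory for -Xmx2g
val cacheMemMb = executorHeap * 0.6 * 0.9 / (1024 * 1024)
println(f"$cacheMemMb%.0f MB available for caching")  // ~1058 MB, i.e. the UI's 1060MB

So the executor did get the 2g you configured; the UI is only reporting the
slice of that heap reserved for cached blocks.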

On Thu, Jan 14, 2016 at 11:03 AM, Sanjeev Verma
<sa...@gmail.com> wrote:
> I am looking at the web UI of the Spark application master (Executors tab).
>
> On Fri, Jan 15, 2016 at 12:08 AM, Marcelo Vanzin <va...@cloudera.com>
> wrote:
>>
>> On Thu, Jan 14, 2016 at 10:17 AM, Sanjeev Verma
>> <sa...@gmail.com> wrote:
>> > Now it spawns a single executor, still showing 1060MB. I am not able to
>> > understand why the executor gets 1G + overhead instead of the 2G I
>> > specified.
>>
>> Where are you looking for the memory size for the container?
>>
>> --
>> Marcelo
>
>



-- 
Marcelo



Re: strange behavior in spark yarn-client mode

Posted by Marcelo Vanzin <va...@cloudera.com>.
On Thu, Jan 14, 2016 at 10:17 AM, Sanjeev Verma
<sa...@gmail.com> wrote:
> Now it spawns a single executor, still showing 1060MB. I am not able to
> understand why the executor gets 1G + overhead instead of the 2G I
> specified.

Where are you looking for the memory size for the container?

-- 
Marcelo
