Posted to user@spark.apache.org by Pala M Muthaia <mc...@rocketfuelinc.com> on 2014/12/16 01:53:21 UTC

Executor memory

Hi,

Running Spark 1.0.1 on YARN 2.5.

When I specify --executor-memory 4g, the Spark UI shows each executor as
having only 2.3 GB, and similarly for 8g, only 4.6 GB.

I am guessing that the executor memory corresponds to the container memory,
and that the task JVM gets only a percentage of the container total memory.
Is there a YARN or Spark parameter to tune this so that my task JVM
actually gets 6 GB out of the 8 GB, for example?


Thanks.
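
A quick way to see how much heap each executor JVM actually gets, independent
of the storage number the UI reports, is to read Runtime.maxMemory on the
executors. A rough sketch, assuming a running SparkContext named sc (e.g. in
spark-shell on the cluster):

    // Sketch: print the max heap (in MB) seen by each executor JVM.
    // Assumes a live SparkContext named sc; partitions are spread out so
    // that tasks land on (hopefully) every executor.
    val heaps = sc.parallelize(1 to 1000, 100)
      .map(_ => (java.net.InetAddress.getLocalHost.getHostName,
                 Runtime.getRuntime.maxMemory / (1024 * 1024)))
      .distinct()
      .collect()
    heaps.foreach { case (host, mb) => println(host + ": ~" + mb + " MB max heap") }

The reported value is typically somewhat below the configured -Xmx.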

Re: Executor memory

Posted by Pala M Muthaia <mc...@rocketfuelinc.com>.
Thanks for the clarifications. I misunderstood what the number on the UI meant.

On Mon, Dec 15, 2014 at 7:00 PM, Sean Owen <so...@cloudera.com> wrote:

> I believe this corresponds to the 0.6 of the whole heap that is
> allocated for caching partitions. See spark.storage.memoryFraction in the
> configuration docs (http://spark.apache.org/docs/latest/configuration.html).
> 0.6 of 4 GB comes to roughly the 2.3 GB you see.
>
> The note there is important: you probably don't want this cache size to
> exceed the JVM old generation size.
>
> On Tue, Dec 16, 2014 at 12:53 AM, Pala M Muthaia
> <mc...@rocketfuelinc.com> wrote:
> > Hi,
> >
> > Running Spark 1.0.1 on Yarn 2.5
> >
> > When I specify --executor-memory 4g, the Spark UI shows each executor as
> > having only 2.3 GB, and similarly for 8g, only 4.6 GB.
> >
> > I am guessing that the executor memory corresponds to the container
> > memory, and that the task JVM gets only a percentage of the container
> > total memory. Is there a YARN or Spark parameter to tune this so that my
> > task JVM actually gets 6GB out of the 8GB for example?
> >
> >
> > Thanks.
> >
> >
>

Re: Executor memory

Posted by Sean Owen <so...@cloudera.com>.
I believe this corresponds to the 0.6 of the whole heap that is
allocated for caching partitions. See spark.storage.memoryFraction in the
configuration docs (http://spark.apache.org/docs/latest/configuration.html).
0.6 of 4 GB comes to roughly the 2.3 GB you see.

The note there is important: you probably don't want this cache size to
exceed the JVM old generation size.
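
As a back-of-the-envelope check of that arithmetic (a sketch only; the 3.8 GB
figure assumes the JVM reports somewhat less than the configured -Xmx via
Runtime.maxMemory, which is typical):

    // Rough estimate of the storage memory the executors page reports.
    // Assumes the Spark 1.x default spark.storage.memoryFraction = 0.6 and
    // that a -Xmx4g executor JVM reports roughly 3.8 GB as its max heap.
    val reportedHeapGb = 3.8
    val memoryFraction = 0.6
    val storageGb = reportedHeapGb * memoryFraction
    println(f"Storage memory shown in the UI: ~$storageGb%.1f GB") // ~2.3 GB

The same arithmetic with an 8g executor (roughly 7.6 GB of reported heap)
gives about 4.6 GB, matching the second observation.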

On Tue, Dec 16, 2014 at 12:53 AM, Pala M Muthaia
<mc...@rocketfuelinc.com> wrote:
> Hi,
>
> Running Spark 1.0.1 on Yarn 2.5
>
> When I specify --executor-memory 4g, the Spark UI shows each executor as
> having only 2.3 GB, and similarly for 8g, only 4.6 GB.
>
> I am guessing that the executor memory corresponds to the container memory,
> and that the task JVM gets only a percentage of the container total memory.
> Is there a YARN or Spark parameter to tune this so that my task JVM actually
> gets 6GB out of the 8GB for example?
>
>
> Thanks.
>
>

Re: Executor memory

Posted by sa...@cloudera.com.
Hi Pala,

Spark executors reserve only spark.storage.memoryFraction (default 0.6) of their spark.executor.memory for caching RDDs, and the Spark UI displays that amount.

spark.executor.memory controls the executor heap size. spark.yarn.executor.memoryOverhead controls the extra memory tacked on top of the heap when sizing the YARN container.

-Sandy
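
For concreteness, a minimal sketch of how those settings might be supplied
programmatically (illustrative values only, not recommendations):

    // Illustrative Spark 1.x configuration keys; values are examples only.
    import org.apache.spark.{SparkConf, SparkContext}

    val conf = new SparkConf()
      .setAppName("executor-memory-example")
      // Heap size (-Xmx) of each executor JVM.
      .set("spark.executor.memory", "8g")
      // Extra headroom (MB) YARN adds on top of the heap when sizing
      // the container.
      .set("spark.yarn.executor.memoryOverhead", "512")
      // Fraction of the heap reserved for caching RDDs; this is what
      // the UI's storage column reflects.
      .set("spark.storage.memoryFraction", "0.6")

    val sc = new SparkContext(conf)

In practice these are usually set with spark-submit (--executor-memory 8g
--conf spark.yarn.executor.memoryOverhead=512) or in spark-defaults.conf.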

> On Dec 15, 2014, at 7:53 PM, Pala M Muthaia <mc...@rocketfuelinc.com> wrote:
> 
> Hi,
> 
> Running Spark 1.0.1 on Yarn 2.5
> 
> When I specify --executor-memory 4g, the Spark UI shows each executor as having only 2.3 GB, and similarly for 8g, only 4.6 GB.
> 
> I am guessing that the executor memory corresponds to the container memory, and that the task JVM gets only a percentage of the container total memory. Is there a YARN or Spark parameter to tune this so that my task JVM actually gets 6 GB out of the 8 GB, for example?
> 
> 
> Thanks.
> 
> 
