Posted to user@spark.apache.org by Tsai Li Ming <ma...@ltsai.com> on 2014/03/28 06:48:34 UTC

Setting SPARK_MEM higher than available memory in driver

Hi,

My worker nodes have more memory than the host I'm submitting my driver program from, but it seems that SPARK_MEM also sets the -Xmx of the spark-shell JVM?

$ SPARK_MEM=100g MASTER=spark://XXX:7077 bin/spark-shell

Java HotSpot(TM) 64-Bit Server VM warning: INFO: os::commit_memory(0x00007f736e130000, 205634994176, 0) failed; error='Cannot allocate memory' (errno=12)
#
# There is insufficient memory for the Java Runtime Environment to continue.
# Native memory allocation (malloc) failed to allocate 205634994176 bytes for committing reserved memory.

I want to allocate at least 100GB of memory per executor. Does the memory allocated on each executor depend on the -Xmx heap size of the driver?
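
As a quick check from inside the shell (a sketch; spark-shell is itself a JVM, so the standard Runtime API reports the driver's effective heap limit):

    // Run inside spark-shell: print the driver JVM's max heap in GiB.
    // (The reported value is usually slightly below the nominal -Xmx.)
    val maxHeapGiB = Runtime.getRuntime.maxMemory.toDouble / (1024L * 1024 * 1024)
    println(f"driver max heap: $maxHeapGiB%.1f GiB")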

Thanks!




Re: Setting SPARK_MEM higher than available memory in driver

Posted by Tsai Li Ming <ma...@ltsai.com>.
Thanks. Got it working.

On 28 Mar, 2014, at 2:02 pm, Aaron Davidson <il...@gmail.com> wrote:

> Assuming you're using a new enough version of Spark, you should use spark.executor.memory to set the memory for your executors, without changing the driver memory. See the docs for your version of Spark.

Re: Setting SPARK_MEM higher than available memory in driver

Posted by Aaron Davidson <il...@gmail.com>.
Assuming you're using a new enough version of Spark, you should use
spark.executor.memory to set the memory for your executors, without
changing the driver memory. See the docs for your version of Spark.
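
For example, a minimal sketch assuming Spark 0.9+ (where SparkConf is available); the master URL is the placeholder from the original post and the app name is hypothetical:

    import org.apache.spark.{SparkConf, SparkContext}

    // Give each executor a 100g heap without raising the driver's own -Xmx.
    val conf = new SparkConf()
      .setMaster("spark://XXX:7077")        // placeholder master from the post
      .setAppName("ExecutorMemoryExample")  // hypothetical app name
      .set("spark.executor.memory", "100g")
    val sc = new SparkContext(conf)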

