Posted to user@spark.apache.org by Muler <mu...@gmail.com> on 2015/08/31 10:06:06 UTC

Standalone mode: is SPARK_WORKER_MEMORY per SPARK_WORKER_INSTANCE?

Hi,

Is SPARK_WORKER_MEMORY defined per SPARK_WORKER_INSTANCE (just like
SPARK_WORKER_CORES, where the documentation is clear), or is it per
node?

For example, if I have 80g of available memory on a node and I start my
Spark app with SPARK_WORKER_INSTANCES=8 (i.e. 8 worker JVMs), does
SPARK_WORKER_MEMORY represent the memory that each of these 8 worker JVMs
can hand out to its executor JVMs? If so, would the appropriate value be
SPARK_WORKER_MEMORY = 80g/8 = 10g?

Thanks,

Re: Standalone mode: is SPARK_WORKER_MEMORY per SPARK_WORKER_INSTANCE?

Posted by Akhil Das <ak...@sigmoidanalytics.com>.
It's per worker instance, I guess. So to utilize your 80g across 8
worker instances, you should set SPARK_WORKER_MEMORY=10g.
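
For example, a minimal conf/spark-env.sh along these lines should do it
(the core count below is just an assumed value; adjust it for your
machine):

    # conf/spark-env.sh -- minimal sketch for one node with 80g available
    export SPARK_WORKER_INSTANCES=8   # 8 worker JVMs on this node
    export SPARK_WORKER_MEMORY=10g    # per worker instance: 8 x 10g = 80g total
    export SPARK_WORKER_CORES=2       # also per worker instance (assumed value)

Note that each executor's spark.executor.memory then has to fit within a
single worker's 10g, since an executor is launched by exactly one worker.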

Thanks
Best Regards
