Posted to user@spark.apache.org by Yadid Ayzenberg <ya...@media.mit.edu> on 2014/01/23 21:34:05 UTC

heterogeneous cluster - problems setting spark.executor.memory

Hi Community,

I'm running Spark in standalone mode, and in my current cluster each 
slave has 8GB of RAM.
I wanted to add one more powerful machine with 100GB of RAM as a slave 
to the cluster and encountered some difficulty.
If I don't set spark.executor.memory, every slave allocates only the 
default 512MB of RAM to the job.
However, I can't set spark.executor.memory above 8GB: an executor that 
large no longer fits on my existing slaves, so they are never used.
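For reference, this is roughly how I'm setting it, via SparkConf (a 
minimal sketch; the master URL, app name, and the "8g" value are just 
placeholders for my setup):

    import org.apache.spark.{SparkConf, SparkContext}

    // spark.executor.memory is a single, cluster-wide value: every
    // executor gets the same amount, regardless of how much RAM its
    // worker machine actually has.
    val conf = new SparkConf()
      .setMaster("spark://master-host:7077")  // placeholder master URL
      .setAppName("MyJob")                    // placeholder app name
      .set("spark.executor.memory", "8g")     // capped by the smallest slave

    val sc = new SparkContext(conf)

So with "8g" the 100GB machine is mostly idle, and with anything 
larger the 8GB slaves drop out entirely.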
It seems Spark was designed mainly for a homogeneous cluster. Can anyone 
suggest a way around this?

Thanks,

Yadid