Posted to user@spark.apache.org by Ashok Kumar <as...@yahoo.com.INVALID> on 2017/10/30 19:16:11 UTC

The parameter spark.yarn.executor.memoryOverhead

Hi Gurus,

The parameter spark.yarn.executor.memoryOverhead is documented as follows:

spark.yarn.executor.memoryOverhead
Default: executorMemory * 0.10, with minimum of 384
The amount of off-heap memory (in megabytes) to be allocated per executor. This is memory that accounts for things like VM overheads, interned strings, other native overheads, etc. This tends to grow with the executor size (typically 6-10%).
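
For reference, here is a minimal sketch of how that documented default would work out for a 10 GB executor, assuming the values are taken in megabytes as the docs state (this is just the arithmetic from the description above, not Spark's internal code):

    // default per the docs: max(executorMemory * 0.10, 384), values in MB
    val executorMemoryMb = 10 * 1024                     // 10 GB executor
    val overheadMb = math.max((executorMemoryMb * 0.10).toLong, 384L)
    println(overheadMb)                                  // 1024 MB, i.e. ~1 GB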

So does that mean that for an executor with 10 GB of memory this should ideally be set to ~10%, i.e. about 1 GB?
What would happen if we set it higher, say to 30% (~3 GB)?
What exactly is this memory used for (as opposed to the memory allocated to the executor itself)?
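
(For context, if one did want to try the 30% figure mentioned above, it would be set explicitly rather than relying on the default; a sketch of the two usual ways follows, where 3072 MB is just the 30%-of-10GB example, not a recommendation:

    // via spark-submit:
    //   --conf spark.yarn.executor.memoryOverhead=3072
    // or programmatically when building the SparkConf:
    import org.apache.spark.SparkConf
    val conf = new SparkConf()
      .set("spark.executor.memory", "10g")
      .set("spark.yarn.executor.memoryOverhead", "3072")  // MB

)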

Thanking you