Posted to user@spark.apache.org by Sea <26...@qq.com> on 2015/07/14 14:44:48 UTC

About extra memory on yarn mode

Hi all:
I have a question about why Spark on YARN needs extra memory.
I requested 10 executors with 6g of executor memory each, but I find that YARN allocates 1g more per executor, for a total of 7g per executor.
I tried setting spark.yarn.executor.memoryOverhead, but it did not help.
1g per executor is too much. Does anyone know how to adjust its size?
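
For reference, a rough sketch (not Spark's actual source) of where the extra gigabyte can come from with Spark 1.x on YARN; the 0.10 overhead factor, the 384 MB floor, and the 1024 MB YARN rounding increment are assumed defaults and may differ on your cluster:

    // Approximate container sizing for one executor (assumed Spark 1.4-era defaults)
    val executorMemoryMb = 6 * 1024                                  // spark.executor.memory = 6g
    val overheadMb = math.max((0.10 * executorMemoryMb).toInt, 384)  // default memoryOverhead
    val requestedMb = executorMemoryMb + overheadMb                  // 6144 + 614 = 6758 MB
    val yarnMinAllocMb = 1024                                        // yarn.scheduler.minimum-allocation-mb
    val containerMb =
      math.ceil(requestedMb.toDouble / yarnMinAllocMb).toInt * yarnMinAllocMb
    println(containerMb)                                             // 7168 MB, i.e. roughly 7g per executor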

Re: About extra memory on yarn mode

Posted by Jong Wook Kim <jo...@nyu.edu>.
spark.executor.memory only sets the maximum heap size of the executor JVM. The JVM also needs non-heap memory to store class metadata, interned strings, and other native overheads coming from networking libraries, off-heap storage levels, etc. These are (of course) legitimate uses of resources, and you'll have to plan your cluster's resources accordingly. If 6g is a hard limit on your cluster, try reducing spark.executor.memory to 5g and setting spark.yarn.executor.memoryOverhead to 1g. If disk spill is working correctly, it won't hurt performance much.
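
To make that concrete, a minimal sketch of setting those two values programmatically; the figures (5g heap, 1024 MB overhead, 10 executors) just mirror the advice above, and in Spark 1.x spark.yarn.executor.memoryOverhead takes an integer number of megabytes:

    import org.apache.spark.{SparkConf, SparkContext}

    // Keep heap + overhead within the 6g-per-executor budget
    val conf = new SparkConf()
      .setAppName("memory-overhead-example")
      .set("spark.executor.instances", "10")              // 10 executors, as in the question
      .set("spark.executor.memory", "5g")                 // executor JVM heap
      .set("spark.yarn.executor.memoryOverhead", "1024")  // off-heap overhead, in MB
    val sc = new SparkContext(conf)

The same settings can also be passed with --conf on spark-submit; the key point is that heap plus overhead, not heap alone, is what YARN reserves for the container.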

Jong Wook


> On Jul 14, 2015, at 21:44, Sea <26...@qq.com> wrote:
> 
> Hi all:
> I have a question about why Spark on YARN needs extra memory.
> I requested 10 executors with 6g of executor memory each, but I find that YARN allocates 1g more per executor, for a total of 7g per executor.
> I tried setting spark.yarn.executor.memoryOverhead, but it did not help.
> 1g per executor is too much. Does anyone know how to adjust its size?