Posted to common-user@hadoop.apache.org by 麦树荣 <sh...@qunar.com> on 2015/04/07 06:01:22 UTC

Re: Yarn container out of memory when using large memory mapped file

mapreduce.reduce.memory.mb means physical memory, not JVM heap.
The large mapped files (about 8 GB total) are more than 4 GB (mapreduce.reduce.memory.mb=4096), so you got the error.
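To make that concrete, here is a small sketch of my own (not anything from Hadoop's code) showing how the two limits differ. -Xmx only caps the Java heap, while the container's physical memory check is roughly against the whole process's resident memory, which on Linux you can see as VmRSS in /proc/self/status and which also includes the pages of any memory-mapped files the process has touched:

import java.nio.file.Files;
import java.nio.file.Paths;
import java.util.stream.Stream;

public class HeapVsRss {
    public static void main(String[] args) throws Exception {
        // Heap ceiling set by -Xmx (what mapreduce.reduce.java.opts controls).
        System.out.println("max heap bytes: " + Runtime.getRuntime().maxMemory());
        // Resident set size of the whole process (roughly what the container's
        // physical memory check sees on Linux); this includes touched pages of
        // memory-mapped files, direct buffers, JVM overhead, etc.
        try (Stream<String> lines = Files.lines(Paths.get("/proc/self/status"))) {
            lines.filter(l -> l.startsWith("VmRSS")).forEach(System.out::println);
        }
    }
}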

From: Yao, York [mailto:york.yao@here.com]
Sent: April 5, 2015 6:36
To: user@hadoop.apache.org
Subject: Yarn container out of memory when using large memory mapped file


Hello,

I am using Hadoop 2.4. The reducer uses several large memory-mapped files (about 8 GB total). The reducer itself uses very little memory. To my knowledge, a memory-mapped file (FileChannel.map(readonly)) also uses little JVM memory (it is managed by the OS instead of the JVM).
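Roughly the kind of mapping I mean (a simplified sketch, not my actual reducer code; the file path and the read loop are placeholders):

import java.io.RandomAccessFile;
import java.nio.MappedByteBuffer;
import java.nio.channels.FileChannel;

public class MapLargeFile {
    public static void main(String[] args) throws Exception {
        try (RandomAccessFile raf = new RandomAccessFile("/path/to/large-lookup-file", "r");
             FileChannel channel = raf.getChannel()) {
            // Read-only mapping: the pages live in the OS page cache, not in
            // the JVM heap, so -Xmx can stay small. Each single mapping must
            // be under 2 GB, which is fine for files of about 1.5 GB each.
            MappedByteBuffer buf = channel.map(FileChannel.MapMode.READ_ONLY, 0, channel.size());
            long sum = 0;
            // Touching pages faults them into this process's resident set.
            for (int i = 0; i < buf.limit(); i += 4096) {
                sum += buf.get(i);
            }
            System.out.println("touched " + buf.limit() + " bytes, sample sum " + sum);
        }
    }
}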

I got an error similar to this: Container [pid=26783,containerID=container_1389136889967_0009_01_000002] is running beyond physical memory limits. Current usage: 4.2 GB of 4 GB physical memory used; 5.2 GB of 8.4 GB virtual memory used. Killing container

Here were my settings:

mapreduce.reduce.java.opts=-Xmx2048m

mapreduce.reduce.memory.mb=4096

So I adjusted the parameters to this and it worked:

mapreduce.reduce.java.opts=-Xmx10240m

mapreduce.reduce.memory.mb=12288

I further adjusted the parameters and got it to work like this:

mapreduce.reduce.java.opts=-Xmx2048m

mapreduce.reduce.memory.mb=10240
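(For reference, the same two settings can also be set from the job driver; a minimal sketch assuming the standard Hadoop 2.x Job/Configuration API, with the class name made up here:)

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.mapreduce.Job;

public class JobWithMmapReducer {
    public static void main(String[] args) throws Exception {
        Configuration conf = new Configuration();
        // Heap for the reduce JVM (on-heap objects only).
        conf.set("mapreduce.reduce.java.opts", "-Xmx2048m");
        // Physical memory limit for the whole container process:
        // heap + JVM overhead + resident pages of the ~8 GB of mapped files.
        conf.setInt("mapreduce.reduce.memory.mb", 10240);
        Job job = Job.getInstance(conf, "reducer-with-mapped-files");
        // ... set jar, mapper, reducer, input/output paths as usual ...
        // System.exit(job.waitForCompletion(true) ? 0 : 1);
    }
}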

My question is: why does the YARN container need about 8 GB more memory than the JVM heap size? The culprit seems to be the large Java memory-mapped files I used (each about 1.5 GB, summing to about 8 GB). Aren't memory-mapped files managed by the OS, and aren't they supposed to be shareable by multiple processes (e.g. reducers)?

Thanks!

York


