Posted to common-user@hadoop.apache.org by Peter Romianowski <pr...@optivo.de> on 2009/01/22 19:16:41 UTC

watch out: Hadoop and Linux kernel 2.6.27

Hi,

we just came across a very serious problem with Hadoop (and any other
NIO-intensive Java application) on kernel 2.6.27.

Short story:
Increase the per-user epoll instance limit (/proc/sys/fs/epoll/max_user_instances)
to prevent "Too many open files" errors, regardless of your ulimit -n settings.
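For reference, a shell sketch of that workaround (the sysctl key matches the /proc path above; the value 4096 is just an illustrative choice, and the file only exists on kernels that enforce this limit):

```shell
# Check the current per-user epoll instance limit
# (2.6.27 defaults it to 128).
cat /proc/sys/fs/epoll/max_user_instances

# Raise it at runtime (as root):
sysctl -w fs.epoll.max_user_instances=4096
# equivalently:
#   echo 4096 > /proc/sys/fs/epoll/max_user_instances

# To persist across reboots, add this line to /etc/sysctl.conf:
#   fs.epoll.max_user_instances = 4096
```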

Long story:
http://pero.blogs.aprilmayjune.org/2009/01/22/hadoop-and-linux-kernel-2627-epoll-limits/


I just wanted to drop this note since it took us 2 days to figure it 
out... :(

Regards
Peter


Re: watch out: Hadoop and Linux kernel 2.6.27

Posted by Raghu Angadi <ra...@yahoo-inc.com>.
Thanks Peter for the heads up.

Note that the problem is more severe with the JVM's use of per-thread
selectors. https://issues.apache.org/jira/browse/HADOOP-4346 avoids
using the JVM's per-thread selectors. Even with HADOOP-4346, a limit of
128 is too small.
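To illustrate why the 128-instance default bites NIO-heavy code: on Linux, each java.nio.channels.Selector.open() creates its own epoll instance, so a process with many threads each opening a selector exhausts fs.epoll.max_user_instances long before ulimit -n is reached. A minimal sketch (the class name and counts are mine, not from the thread):

```java
import java.io.IOException;
import java.nio.channels.Selector;
import java.util.ArrayList;
import java.util.List;

public class SelectorDemo {
    // Opens n NIO selectors at once, then closes them and returns how
    // many were opened. Each Selector.open() maps to epoll_create() on
    // Linux, so with 2.6.27's default cap of 128 epoll instances per
    // user this throws IOException ("Too many open files") for n > 128,
    // even when ulimit -n is far higher.
    static int openSelectors(int n) throws IOException {
        List<Selector> selectors = new ArrayList<>();
        try {
            for (int i = 0; i < n; i++) {
                selectors.add(Selector.open());
            }
            return selectors.size();
        } finally {
            for (Selector s : selectors) {
                s.close();
            }
        }
    }

    public static void main(String[] args) throws IOException {
        // With per-thread selectors, every worker thread pays this cost.
        System.out.println("opened " + openSelectors(150) + " selectors");
    }
}
```

HADOOP-4346 reduces the pressure by pooling selectors instead of letting each thread hold its own, but as noted above, a cap of 128 is still easy to hit.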

I wish 4346 went into earlier versions of Hadoop.

Raghu.
