Posted to users@tomcat.apache.org by David Wall <d....@computer.org> on 2012/05/02 21:19:14 UTC
Tomcat 7 NIO Socket accept failed - Too many open files
I am running Tomcat 7.0.26 on Linux, and we received a lot of the
following exceptions during load testing:
May 2, 2012 3:04:03 AM org.apache.tomcat.util.net.NioEndpoint$Acceptor run
SEVERE: Socket accept failed
java.io.IOException: Too many open files
at sun.nio.ch.ServerSocketChannelImpl.accept0(Native Method)
at sun.nio.ch.ServerSocketChannelImpl.accept(ServerSocketChannelImpl.java:152)
at org.apache.tomcat.util.net.NioEndpoint$Acceptor.run(NioEndpoint.java:784)
at java.lang.Thread.run(Thread.java:662)
Is there something I can tune to remove this as a problem? My NIO+SSL
connector is configured like this:
<Connector port="8443"
protocol="org.apache.coyote.http11.Http11NioProtocol" SSLEnabled="true"
maxThreads="800" scheme="https" secure="true"
acceptCount="200" connectionTimeout="4000" acceptorThreadCount="2"
keystoreFile="keys/tomcatkeys" keystorePass="VALUEREMOVED"
clientAuth="false" sslProtocol="TLS" />
During the test, we had created about 1,800 concurrent sessions, though
I think many of those were still active because the exceptions kept the
users' transactions from completing, so their sessions never ended
normally.
Thanks,
David
---------------------------------------------------------------------
To unsubscribe, e-mail: users-unsubscribe@tomcat.apache.org
For additional commands, e-mail: users-help@tomcat.apache.org
Re: Tomcat 7 NIO Socket accept failed - Too many open files
Posted by Christopher Schultz <ch...@christopherschultz.net>.
Filip,
On 5/2/12 5:40 PM, Filip Hanik (mailing lists) wrote:
> OK, lsof -p <pid> (IIRC) should do the trick; it will list all the
> handles open for that process, and you can deduce where the
> problem stems from.
+1
If you have maxThreads="800" then you're already most of the way to
1024 without even counting things like stdin/stdout/stderr, all the
files the JVM keeps open for various reasons, etc.
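To see how close a running Tomcat is to that ceiling, you can compare the soft limit against the live descriptor count via /proc. A quick sketch on Linux; the pid below is a stand-in (the current shell) rather than a real Tomcat pid:

```shell
# Compare the soft file-descriptor limit with what the process has open.
# PID is a stand-in here; point it at the real Tomcat JVM pid instead.
PID=$$
echo "soft limit: $(ulimit -Sn)"
echo "open fds:   $(ls /proc/"$PID"/fd | wc -l)"
```

When the second number creeps toward the first under load, the next accept() is what fails.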
-chris
RE: Tomcat 7 NIO Socket accept failed - Too many open files
Posted by "Filip Hanik (mailing lists)" <de...@hanik.com>.
OK, lsof -p <pid> (IIRC) should do the trick; it will list all the handles open for that process, and you can deduce where the problem stems from.
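For example, tallying the TYPE column of the lsof output shows which kind of handle dominates (regular files, sockets, pipes), which usually points straight at the leak. A sketch, with the current shell standing in for the Tomcat pid and a /proc fallback in case lsof is not installed:

```shell
# Tally open handles by type (REG files, IPv4/IPv6 sockets, pipes, ...)
# so a leak stands out. PID is a placeholder; use the Tomcat JVM pid.
PID=$$
if command -v lsof >/dev/null 2>&1; then
  lsof -p "$PID" | awk 'NR > 1 { count[$5]++ } END { for (t in count) print count[t], t }' | sort -rn
else
  echo "open fds: $(ls /proc/"$PID"/fd | wc -l)"   # fallback: raw count
fi
```

A large count of IPv4/IPv6 entries would suggest connections not being closed; a large count of REG entries would suggest leaked file streams in the webapp.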
> -----Original Message-----
> From: David Wall [mailto:d.wall@computer.org]
> Sent: Wednesday, May 02, 2012 2:48 PM
> To: users@tomcat.apache.org
> Subject: Re: Tomcat 7 NIO Socket accept failed - Too many open files
>
>
>
> On 5/2/2012 12:34 PM, Pid * wrote:
> > It's an OS issue: google 'ulimit'.
> >
> >
> > p
>
> Yes, I am familiar with ulimit -Sn (it's 1024), but I suspect this could
> be a Tomcat issue that is somehow opening too many files and/or not
> releasing them. I had never seen this issue before we upgraded from
> Tomcat 5.5 (all using BIO) to Tomcat 7.0 (all using NIO). We run on lots
> of servers, and none have shown this error before (and they are all
> Linux servers, all set to 1024 open files). But we will give it a try by
> setting it to a higher number.
>
> The reason we suspect it's Tomcat is that we're getting other
> exceptions, too, ones that indicate our session/request objects are not
> valid while our JSPs are running (they of course work fine under normal
> load, but start to fail when we push lots of concurrent requests at
> Tomcat).
>
> David
>
Re: Tomcat 7 NIO Socket accept failed - Too many open files
Posted by David Wall <d....@computer.org>.
On 5/2/2012 12:34 PM, Pid * wrote:
> It's an OS issue: google 'ulimit'.
>
>
> p
Yes, I am familiar with ulimit -Sn (it's 1024), but I suspect this could
be a Tomcat issue that is somehow opening too many files and/or not
releasing them. I had never seen this issue before we upgraded from
Tomcat 5.5 (all using BIO) to Tomcat 7.0 (all using NIO). We run on lots
of servers, and none have shown this error before (and they are all
Linux servers, all set to 1024 open files). But we will give it a try by
setting it to a higher number.
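Raising the limit typically means both bumping the shell's soft limit before starting Tomcat and, for a persistent change, raising the per-user limit. A sketch; the "tomcat" user name and the 8192 value are illustrative assumptions, not values from this thread:

```shell
# One-off: raise the soft limit in the shell that will launch Tomcat.
ulimit -n 4096 2>/dev/null || true   # fails silently if above the hard limit
ulimit -n                            # show the soft limit now in effect

# Persistent (PAM-based logins): add to /etc/security/limits.conf, e.g.
#   tomcat  soft  nofile  8192
#   tomcat  hard  nofile  8192
```

Note that a soft limit can only be raised up to the hard limit; raising the hard limit itself requires root.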
The reason we suspect it's Tomcat is that we're getting other
exceptions, too, ones that indicate our session/request objects are not
valid while our JSPs are running (they of course work fine under normal
load, but start to fail when we push lots of concurrent requests at
Tomcat).
David
Re: Tomcat 7 NIO Socket accept failed - Too many open files
Posted by Pid * <pi...@pidster.com>.
On 2 May 2012, at 20:19, David Wall <d....@computer.org> wrote:
> I am running Tomcat 7.0.26 on Linux, and we received a lot of the following exceptions during load testing:
>
> May 2, 2012 3:04:03 AM org.apache.tomcat.util.net.NioEndpoint$Acceptor run
> SEVERE: Socket accept failed
> java.io.IOException: Too many open files
> at sun.nio.ch.ServerSocketChannelImpl.accept0(Native Method)
> at sun.nio.ch.ServerSocketChannelImpl.accept(ServerSocketChannelImpl.java:152)
> at org.apache.tomcat.util.net.NioEndpoint$Acceptor.run(NioEndpoint.java:784)
> at java.lang.Thread.run(Thread.java:662)
>
> Is there something I can tune to remove this as a problem?
It's an OS issue: google 'ulimit'.
p
> My NIO+SSL connector is configured like this:
>
> <Connector port="8443" protocol="org.apache.coyote.http11.Http11NioProtocol" SSLEnabled="true"
> maxThreads="800" scheme="https" secure="true" acceptCount="200" connectionTimeout="4000" acceptorThreadCount="2"
> keystoreFile="keys/tomcatkeys" keystorePass="VALUEREMOVED"
> clientAuth="false" sslProtocol="TLS" />
>
> During the test, we had created about 1,800 concurrent sessions, though I think many of those were still active because the exceptions kept the users' transactions from completing, so their sessions never ended normally.
>
> Thanks,
> David