Posted to users@tomcat.apache.org by Tobias Schulz-Hess <ts...@internetconsumerservices.com> on 2008/01/25 01:26:26 UTC

Too many open files exception under heavy load - need help!

Hi there,

We use the current Tomcat 6.0 on 2 machines. The hardware is brand new and really fast. We get lots of traffic, which is usually handled well by the Tomcats, and the load on those machines is between 1 and 6 when we have lots of traffic.
The machines have debian 4.1/64 as OS.

However, sometimes (especially if we have lots of traffic) we get the following exception:
INFO   | jvm 1    | 2008/01/23 15:28:18 | java.net.SocketException: Too many open files
INFO   | jvm 1    | 2008/01/23 15:28:18 |       at java.net.PlainSocketImpl.socketAccept(Native Method)
INFO   | jvm 1    | 2008/01/23 15:28:18 |       at java.net.PlainSocketImpl.accept(PlainSocketImpl.java:384)
INFO   | jvm 1    | 2008/01/23 15:28:18 |       at java.net.ServerSocket.implAccept(ServerSocket.java:453)
INFO   | jvm 1    | 2008/01/23 15:28:18 |       at java.net.ServerSocket.accept(ServerSocket.java:421)
INFO   | jvm 1    | 2008/01/23 15:28:18 |       at org.apache.tomcat.util.net.DefaultServerSocketFactory.acceptSocket(DefaultServerSocketFactory.java:61)
INFO   | jvm 1    | 2008/01/23 15:28:18 |       at org.apache.tomcat.util.net.JIoEndpoint$Acceptor.run(JIoEndpoint.java:310)
INFO   | jvm 1    | 2008/01/23 15:28:18 |       at java.lang.Thread.run(Thread.java:619)

We have already raised the ulimit from 1024 (the default) to 4096 (thereby proving: yes, I have used Google and read almost everything about this exception).

We also looked into the open files, and 95% of them are from or to the Tomcat port 8080. (The other 5% are open JARs, connections to memcached and MySQL, and SSL sockets.)

Most of the connections to port 8080 are in the CLOSE_WAIT state.
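
(For anyone wanting to reproduce the check: something along these lines gives the breakdown of socket states on the connector port and the total fd count of the JVM; <tomcat-pid> is just a placeholder for the process id.)

  # Summarize TCP states for sockets whose local address is port 8080
  netstat -tan | awk '$4 ~ /:8080$/ {print $6}' | sort | uniq -c | sort -rn

  # Total number of open file descriptors held by the Tomcat JVM
  lsof -p <tomcat-pid> | wc -l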

I have the strong feeling that something (Tomcat, the JVM, whatever) relies on the JVM garbage collection to close those open connections. However, when we have heavy load, garbage collection is suspended and then the connections pile up. But this is just a guess.

How can this problem be solved?

Thank you and kind regards,

Tobias.

-----------------------------------------------------------
Tobias Schulz-Hess
 
ICS - Internet Consumer Services GmbH
Mittelweg 162
20148 Hamburg
 
Tel: 	+49 (0) 40 238 49 141
Fax: 	+49 (0) 40 415 457 14
E-Mail: tsh@internetconsumerservices.com
Web: 	www.internetconsumerservices.com 

Projekte
www.dealjaeger.de 
www.verwandt.de

ICS Internet Consumer Services GmbH
Geschäftsführer: Dipl.-Kfm. Daniel Grözinger, Dipl.-Kfm. Sven Schmidt
Handelsregister: Amtsgericht Hamburg HRB 95149



Re: Too many open files exception under heavy load - need help!

Posted by Rainer Jung <ra...@kippdata.de>.
Tobias Schulz-Hess wrote:
> Hi Rainer,
> 
> Rainer Jung schrieb:
>> Hi,
>>
>> 1) How many fds does the process have, so is the question "why can't
>> we use all those 4096 fds configured", or is it "Where do those 4096
>> fds used by my process come from"?
> The latter. We can actually see the 4096 fds are used (by port 8080 in
> CLOSE_WAIT state...).
> Well, we're pretty sure that the fds actually are the connections from
> the HTTPConnector of Tomcat. The connector is set to use 200 connections
> simultaneously. So the question is: Why aren't those connections closed?...

Are you using the tcnative APR connector?

>> 2) CLOSE_WAIT means the remote side closed the connection and the
>> local side didn't yet close it. What's your remote side with respect to
>> TCP? Is it browsers, or a load balancer or stuff like that?
> We have NGINX as a proxy in front of the Tomcat (on another server). So
> requests from the Internet arrive at NGINX and are then forwarded to the
> Tomcat(s).
> By now, we're pretty happy with NGINX, since it is really fast and has a
> low footprint, but it could well be that it does not work well with Tomcat.
> 
> We have the problems on our live servers, so the application that
> actually initiates the connection is a browser.
> 
>> 3) Are you using keep alive (not implying that's the cause of your
>> problems, but keep alive makes the connection life cycle much more
>> complicated from the container point of view).
> As far as I understood NGINX, we only use keep-alive requests for the
> communication between client and NGINX. The communication between NGINX
> and Tomcat does not have settings for keep-alive, so I assume: no.
> 
> This is the relevant part of the NGINX configuration:
> 
>                 location / {
>                         proxy_pass         http://verwandt_de;
>                         proxy_redirect     off;
>        
>                         proxy_set_header   Host             $host;
>                         proxy_set_header   X-Real-IP        $remote_addr;
>                         proxy_set_header   X-Forwarded-For    $proxy_add_x_forwarded_for;
>        
>                         client_max_body_size       10m;
>                         client_body_temp_path      /var/nginx/client_body_temp;
>        
>                         proxy_buffering                 off;
>                         proxy_store                             off;
>        
>                         proxy_connect_timeout      30;
>                         proxy_send_timeout         80;
>                         proxy_read_timeout         80;
>                 }
>  
> 
> So, would you suggest that I move this topic over to an NGINX mailing list?

Not sure yet. It's interesting that there is a 30-second timeout in 
this config. Maybe you should investigate what those 30 seconds mean. 
On the other hand, 30 seconds is not that unusual as a default ...

What about experimenting with maxKeepAliveRequests=1 in your http 
connector (server.xml)?
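
The attribute goes on the HTTP connector, roughly like this (only 
maxKeepAliveRequests is the suggested change here; the port, maxThreads and 
connectionTimeout values are just an illustrative default connector, adjust 
to your own setup):

  <!-- illustrative HTTP connector; only maxKeepAliveRequests="1" is the suggested change -->
  <Connector port="8080" protocol="HTTP/1.1"
             maxThreads="200"
             connectionTimeout="20000"
             maxKeepAliveRequests="1" />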

> 
> Kind regards,
> 
> Tobias.
> 
>> Regards,
>> Rainer
>>
>>
>> Tobias Schulz-Hess wrote:
>>> Hi there,
>>>
>>> we use the current Tomcat 6.0 on 2 machines. The hardware is brand
>>> new and is really fast. We get lots of traffic which is usually
>>> handled well by the tomcats and the load on those machines is between
>>> 1 and 6 (when we have lots of traffic).
>>> The machines have debian 4.1/64 as OS.
>>>
>>> However, sometimes (especially if we have lots of traffic) we get the
>>> following exception:
>>> INFO   | jvm 1    | 2008/01/23 15:28:18 | java.net.SocketException:
>>> Too many open files
>>> INFO   | jvm 1    | 2008/01/23 15:28:18 |       at
>>> java.net.PlainSocketImpl.socketAccept(Native Method)
>>> INFO   | jvm 1    | 2008/01/23 15:28:18 |       at
>>> java.net.PlainSocketImpl.accept(PlainSocketImpl.java:384)
>>> INFO   | jvm 1    | 2008/01/23 15:28:18 |       at
>>> java.net.ServerSocket.implAccept(ServerSocket.java:453)
>>> INFO   | jvm 1    | 2008/01/23 15:28:18 |       at
>>> java.net.ServerSocket.accept(ServerSocket.java:421)
>>> INFO   | jvm 1    | 2008/01/23 15:28:18 |       at
>>> org.apache.tomcat.util.net.DefaultServerSocketFactory.acceptSocket(DefaultServe
>>>
>>> rSocketFactory.java:61)
>>> INFO   | jvm 1    | 2008/01/23 15:28:18 |       at
>>> org.apache.tomcat.util.net.JIoEndpoint$Acceptor.run(JIoEndpoint.java:310)
>>>
>>> INFO   | jvm 1    | 2008/01/23 15:28:18 |       at
>>> java.lang.Thread.run(Thread.java:619)
>>> I
>>>
>>> We already have altered the ulimit from 1024 (default) to 4096 (and
>>> therefore proofing: yes, I have used google and read almost
>>> everything about that exception).
>>>
>>> We also looked into the open files and all 95% of them are from or to
>>> the Tomcat Port 8080. (The other 5% are open JARs, connections to
>>> memcached and MySQL and SSL-Socket).
>>>
>>> Most of the connections to port 8080 are in the CLOSE_WAIT state.
>>>
>>> I have the strong feeling that something (tomcat, JVM, whatsoever)
>>> relies that the JVM garbage collection will kill those open
>>> connections. However, if we have heavy load, the garbage collection
>>> is suspended and then the connections pile up. But this is just a guess.
>>>
>>> How can this problem be solved?
>>>
>>> Thank you and kind regards,
>>>
>>> Tobias.
>>>
>>> -----------------------------------------------------------
>>> Tobias Schulz-Hess



Re: Too many open files exception under heavy load - need help!

Posted by Tobias Schulz-Hess <ts...@internetconsumerservices.com>.
Hi Rainer,

Rainer Jung schrieb:
> Hi,
>
> 1) How many fds does the process have, so is the question "why can't
> we use all those 4096 fds configured", or is it "Where do those 4096
> fds used by my process come from"?
The latter. We can actually see the 4096 fds are used (by port 8080 in
CLOSE_WAIT state...).
Well, we're pretty sure that the fds actually are the connections from
the HTTPConnector of Tomcat. The connector is set to use 200 connections
simultaneously. So the question is: Why aren't those connections closed?...
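
(One thing that helps narrow it down: listing the foreign address of the 
CLOSE_WAIT sockets shows who closed their end. If it is the nginx box, the 
leak is on the proxy-to-Tomcat hop rather than with the browsers. Roughly:)

  # Foreign addresses of port-8080 sockets stuck in CLOSE_WAIT
  netstat -tan | awk '$4 ~ /:8080$/ && $6 == "CLOSE_WAIT" {print $5}' | sort | uniq -c | sort -rn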


>
> 2) CLOSE_WAIT means the remote side closed the connection and the
> local side didn't yet close it. What's your remote side with respect to
> TCP? Is it browsers, or a load balancer or stuff like that?
We have NGINX as a proxy in front of the Tomcat (on another server). So
requests from the Internet arrive at NGINX and are then forwarded to the
Tomcat(s).
By now, we're pretty happy with NGINX, since it is really fast and has a
low footprint, but it could well be that it does not work well with Tomcat.

We have the problems on our live servers, so the application that
actually initiates the connection is a browser.

>
> 3) Are you using keep alive (not implying that's the cause of your
> problems, but keep alive makes the connection life cycle much more
> complicated from the container point of view).
As far as I understood NGINX, we only use keep-alive requests for the
communication between client and NGINX. The communication between NGINX
and Tomcat does not have settings for keep-alive, so I assume: no.

This is the relevant part of the NGINX configuration:

                location / {
                        proxy_pass         http://verwandt_de;
                        proxy_redirect     off;
       
                        proxy_set_header   Host             $host;
                        proxy_set_header   X-Real-IP        $remote_addr;
                        proxy_set_header   X-Forwarded-For    $proxy_add_x_forwarded_for;
       
                        client_max_body_size       10m;
                        client_body_temp_path      /var/nginx/client_body_temp;
       
                        proxy_buffering                 off;
                        proxy_store                             off;
       
                        proxy_connect_timeout      30;
                        proxy_send_timeout         80;
                        proxy_read_timeout         80;
                }
 

So, would you suggest that I move this topic over to an NGINX mailing list?

Kind regards,

Tobias.

>
> Regards,
> Rainer
>
>
> Tobias Schulz-Hess wrote:
>> Hi there,
>>
>> we use the current Tomcat 6.0 on 2 machines. The hardware is brand
>> new and is really fast. We get lots of traffic which is usually
>> handled well by the tomcats and the load on those machines is between
>> 1 and 6 (when we have lots of traffic).
>> The machines have debian 4.1/64 as OS.
>>
>> However, sometimes (especially if we have lots of traffic) we get the
>> following exception:
>> INFO   | jvm 1    | 2008/01/23 15:28:18 | java.net.SocketException:
>> Too many open files
>> INFO   | jvm 1    | 2008/01/23 15:28:18 |       at
>> java.net.PlainSocketImpl.socketAccept(Native Method)
>> INFO   | jvm 1    | 2008/01/23 15:28:18 |       at
>> java.net.PlainSocketImpl.accept(PlainSocketImpl.java:384)
>> INFO   | jvm 1    | 2008/01/23 15:28:18 |       at
>> java.net.ServerSocket.implAccept(ServerSocket.java:453)
>> INFO   | jvm 1    | 2008/01/23 15:28:18 |       at
>> java.net.ServerSocket.accept(ServerSocket.java:421)
>> INFO   | jvm 1    | 2008/01/23 15:28:18 |       at
>> org.apache.tomcat.util.net.DefaultServerSocketFactory.acceptSocket(DefaultServe
>>
>> rSocketFactory.java:61)
>> INFO   | jvm 1    | 2008/01/23 15:28:18 |       at
>> org.apache.tomcat.util.net.JIoEndpoint$Acceptor.run(JIoEndpoint.java:310)
>>
>> INFO   | jvm 1    | 2008/01/23 15:28:18 |       at
>> java.lang.Thread.run(Thread.java:619)
>> I
>>
>> We already have altered the ulimit from 1024 (default) to 4096 (and
>> therefore proofing: yes, I have used google and read almost
>> everything about that exception).
>>
>> We also looked into the open files and all 95% of them are from or to
>> the Tomcat Port 8080. (The other 5% are open JARs, connections to
>> memcached and MySQL and SSL-Socket).
>>
>> Most of the connections to port 8080 are in the CLOSE_WAIT state.
>>
>> I have the strong feeling that something (tomcat, JVM, whatsoever)
>> relies that the JVM garbage collection will kill those open
>> connections. However, if we have heavy load, the garbage collection
>> is suspended and then the connections pile up. But this is just a guess.
>>
>> How can this problem be solved?
>>
>> Thank you and kind regards,
>>
>> Tobias.
>>
>> -----------------------------------------------------------
>> Tobias Schulz-Hess
>



Re: Too many open files exception under heavy load - need help!

Posted by Rainer Jung <ra...@kippdata.de>.
Hi,

1) How many fds does the process have, so is the question "why can't we 
use all those 4096 fds configured", or is it "Where do those 4096 
fds used by my process come from"?

2) CLOSE_WAIT means the remote side closed the connection and the local 
side didn't yet close it. What's your remote side with respect to TCP? Is 
it browsers, or a load balancer or stuff like that?

3) Are you using keep alive (not implying that's the cause of your 
problems, but keep alive makes the connection life cycle much more 
complicated from the container point of view).

Regards,
Rainer


Tobias Schulz-Hess wrote:
> Hi there,
> 
> we use the current Tomcat 6.0 on 2 machines. The hardware is brand new and is really fast. We get lots of traffic which is usually handled well by the tomcats and the load on those machines is between 1 and 6 (when we have lots of traffic).
> The machines have debian 4.1/64 as OS.
> 
> However, sometimes (especially if we have lots of traffic) we get the following exception:
> INFO   | jvm 1    | 2008/01/23 15:28:18 | java.net.SocketException: Too many open files
> INFO   | jvm 1    | 2008/01/23 15:28:18 |       at java.net.PlainSocketImpl.socketAccept(Native Method)
> INFO   | jvm 1    | 2008/01/23 15:28:18 |       at java.net.PlainSocketImpl.accept(PlainSocketImpl.java:384)
> INFO   | jvm 1    | 2008/01/23 15:28:18 |       at java.net.ServerSocket.implAccept(ServerSocket.java:453)
> INFO   | jvm 1    | 2008/01/23 15:28:18 |       at java.net.ServerSocket.accept(ServerSocket.java:421)
> INFO   | jvm 1    | 2008/01/23 15:28:18 |       at org.apache.tomcat.util.net.DefaultServerSocketFactory.acceptSocket(DefaultServe
> rSocketFactory.java:61)
> INFO   | jvm 1    | 2008/01/23 15:28:18 |       at org.apache.tomcat.util.net.JIoEndpoint$Acceptor.run(JIoEndpoint.java:310)
> INFO   | jvm 1    | 2008/01/23 15:28:18 |       at java.lang.Thread.run(Thread.java:619)
> I
> 
> We already have altered the ulimit from 1024 (default) to 4096 (and therefore proofing: yes, I have used google and read almost everything about that exception).
> 
> We also looked into the open files and all 95% of them are from or to the Tomcat Port 8080. (The other 5% are open JARs, connections to memcached and MySQL and SSL-Socket).
> 
> Most of the connections to port 8080 are in the CLOSE_WAIT state.
> 
> I have the strong feeling that something (tomcat, JVM, whatsoever) relies that the JVM garbage collection will kill those open connections. However, if we have heavy load, the garbage collection is suspended and then the connections pile up. But this is just a guess.
> 
> How can this problem be solved?
> 
> Thank you and kind regards,
> 
> Tobias.
> 
> -----------------------------------------------------------
> Tobias Schulz-Hess



Re: Too many open files exception under heavy load - need help!

Posted by Rainer Traut <tr...@gmx.de>.
Tobias Schulz-Hess schrieb:
>> For Linux, this can be done dynamically by launching (from the OS
>> prompt):
>>
>>  echo "16384" >/proc/sys/fs/file-max
> When I do
> ~# cat /proc/sys/fs/file-max
> 203065

This setting is a kernel limit.

> This tells me that (at least this specific setting) is already
> sufficient...

You most likely hit shell limits.

What user runs your tomcat server?

When you have found out, go to
/etc/security/limits.conf
and adjust parameters like this (or according to your needs):

tomcat           soft    nofile          90000
tomcat           hard    nofile          90000

tomcat           soft    nproc           8192
tomcat           hard    nproc           8192


You can check these limits after relogin with your tomcat user with 
'ulimit -a'.
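
Note that limits.conf is applied by PAM at login time, so a Tomcat started 
from an init script at boot may not pick the new values up. A rough way to 
check what the running JVM actually has (assumes it runs as user "tomcat"; 
adjust the pgrep pattern to your own startup command):

  # pid of the Tomcat JVM (Bootstrap is the standard Catalina main class)
  PID=$(pgrep -u tomcat -f org.apache.catalina.startup.Bootstrap | head -n 1)

  # number of file descriptors the process currently holds
  ls /proc/$PID/fd | wc -l

  # break them down by type (sockets, JARs, ...)
  lsof -p $PID | awk '{print $5}' | sort | uniq -c | sort -rn

  # on 2.6.24+ kernels /proc/<pid>/limits shows the limit the process
  # inherited; on 2.6.18 check 'ulimit -n' in the script that launches Tomcat
  grep 'open files' /proc/$PID/limits 2>/dev/null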

Rainer



Re: Too many open files exception under heavy load - need help!

Posted by Tobias Schulz-Hess <ts...@internetconsumerservices.com>.
Hi Bruno,

thanks for your quick reply.

Bruno Vilardo schrieb:
> What Does "uname -a" say?
:~# uname -a
Linux bruder 2.6.18-5-amd64 #1 SMP Thu May 31 23:51:05 UTC 2007 x86_64
GNU/Linux

You are probably more interested in:
~# cat /proc/version
Linux version 2.6.18-5-amd64 (Debian 2.6.18.dfsg.1-13)
(dannf@debian.org) (gcc version 4.1.2 20061115 (prerelease) (Debian
4.1.1-21)) #1 SMP Thu May 31 23:51:05 UTC 2007

>

> The kernel parameter controlling that changes from one UNIX flavor to
> the next; generally it's named NFILES, MAXFILES or NINODE. I usually
> tune these parameters for our Progress databases.
> For Linux, this can be done dynamically by launching (from the OS
> prompt):
>
>  echo "16384" >/proc/sys/fs/file-max
When I do
~# cat /proc/sys/fs/file-max
203065

This tells me that (at least this specific setting) is already
sufficient...

Some more ideas?

As this is a problem related to sockets, could it help to use the
Tomcat / Apache Native Library?


Oh, for the records, we use:
~# java -version
java version "1.6.0_03"
Java(TM) SE Runtime Environment (build 1.6.0_03-b05)
Java HotSpot(TM) 64-Bit Server VM (build 1.6.0_03-b05, mixed mode)


Kind regards,

Tobias.
>
> On Jan 24, 2008 10:26 PM, Tobias Schulz-Hess
> <ts...@internetconsumerservices.com> wrote:
>   
>> Hi there,
>>
>> we use the current Tomcat 6.0 on 2 machines. The hardware is brand new and is really fast. We get lots of traffic which is usually handled well by the tomcats and the load on those machines is between 1 and 6 (when we have lots of traffic).
>> The machines have debian 4.1/64 as OS.
>>
>> However, sometimes (especially if we have lots of traffic) we get the following exception:
>> INFO   | jvm 1    | 2008/01/23 15:28:18 | java.net.SocketException: Too many open files
>> INFO   | jvm 1    | 2008/01/23 15:28:18 |       at java.net.PlainSocketImpl.socketAccept(Native Method)
>> INFO   | jvm 1    | 2008/01/23 15:28:18 |       at java.net.PlainSocketImpl.accept(PlainSocketImpl.java:384)
>> INFO   | jvm 1    | 2008/01/23 15:28:18 |       at java.net.ServerSocket.implAccept(ServerSocket.java:453)
>> INFO   | jvm 1    | 2008/01/23 15:28:18 |       at java.net.ServerSocket.accept(ServerSocket.java:421)
>> INFO   | jvm 1    | 2008/01/23 15:28:18 |       at org.apache.tomcat.util.net.DefaultServerSocketFactory.acceptSocket(DefaultServe
>> rSocketFactory.java:61)
>> INFO   | jvm 1    | 2008/01/23 15:28:18 |       at org.apache.tomcat.util.net.JIoEndpoint$Acceptor.run(JIoEndpoint.java:310)
>> INFO   | jvm 1    | 2008/01/23 15:28:18 |       at java.lang.Thread.run(Thread.java:619)
>> I
>>
>> We already have altered the ulimit from 1024 (default) to 4096 (and therefore proofing: yes, I have used google and read almost everything about that exception).
>>
>> We also looked into the open files and all 95% of them are from or to the Tomcat Port 8080. (The other 5% are open JARs, connections to memcached and MySQL and SSL-Socket).
>>
>> Most of the connections to port 8080 are in the CLOSE_WAIT state.
>>
>> I have the strong feeling that something (tomcat, JVM, whatsoever) relies that the JVM garbage collection will kill those open connections. However, if we have heavy load, the garbage collection is suspended and then the connections pile up. But this is just a guess.
>>
>> How can this problem be solved?
>>
>> Thank you and kind regards,
>>
>> Tobias.
>>
>> -----------------------------------------------------------
>> Tobias Schulz-Hess
>>
>> ICS - Internet Consumer Services GmbH
>> Mittelweg 162
>> 20148 Hamburg
>>
>> Tel:    +49 (0) 40 238 49 141
>> Fax:    +49 (0) 40 415 457 14
>> E-Mail: tsh@internetconsumerservices.com
>> Web:    www.internetconsumerservices.com
>>
>> Projekte
>> www.dealjaeger.de
>> www.verwandt.de
>>
>> ICS Internet Consumer Services GmbH
>> Geschäftsführer: Dipl.-Kfm. Daniel Grözinger, Dipl.-Kfm. Sven Schmidt
>> Handelsregister: Amtsgericht Hamburg HRB 95149
>>
>>
>>
>>     
>




Re: Too many open files exception under heavy load - need help!

Posted by Bruno Vilardo <br...@gmail.com>.
Tobias,

You probably need to tune some kernel parameters. I had some issues
where our application got "stuck" at some point and we needed to
restart everything. And since you said it is a brand new server, you
might have the default values set in there.

What Does "uname -a" say?

The kernel parameter controlling that changes from one UNIX flavor to
the next; generally it's named NFILES, MAXFILES or NINODE. I usually
tune these parameters for our Progress databases.
For Linux, this can be done dynamically by launching (from the OS
prompt):

 echo "16384" >/proc/sys/fs/file-max

Regards,

Bruno

On Jan 24, 2008 10:26 PM, Tobias Schulz-Hess
<ts...@internetconsumerservices.com> wrote:
> Hi there,
>
> we use the current Tomcat 6.0 on 2 machines. The hardware is brand new and is really fast. We get lots of traffic which is usually handled well by the tomcats and the load on those machines is between 1 and 6 (when we have lots of traffic).
> The machines have debian 4.1/64 as OS.
>
> However, sometimes (especially if we have lots of traffic) we get the following exception:
> INFO   | jvm 1    | 2008/01/23 15:28:18 | java.net.SocketException: Too many open files
> INFO   | jvm 1    | 2008/01/23 15:28:18 |       at java.net.PlainSocketImpl.socketAccept(Native Method)
> INFO   | jvm 1    | 2008/01/23 15:28:18 |       at java.net.PlainSocketImpl.accept(PlainSocketImpl.java:384)
> INFO   | jvm 1    | 2008/01/23 15:28:18 |       at java.net.ServerSocket.implAccept(ServerSocket.java:453)
> INFO   | jvm 1    | 2008/01/23 15:28:18 |       at java.net.ServerSocket.accept(ServerSocket.java:421)
> INFO   | jvm 1    | 2008/01/23 15:28:18 |       at org.apache.tomcat.util.net.DefaultServerSocketFactory.acceptSocket(DefaultServe
> rSocketFactory.java:61)
> INFO   | jvm 1    | 2008/01/23 15:28:18 |       at org.apache.tomcat.util.net.JIoEndpoint$Acceptor.run(JIoEndpoint.java:310)
> INFO   | jvm 1    | 2008/01/23 15:28:18 |       at java.lang.Thread.run(Thread.java:619)
> I
>
> We already have altered the ulimit from 1024 (default) to 4096 (and therefore proofing: yes, I have used google and read almost everything about that exception).
>
> We also looked into the open files and all 95% of them are from or to the Tomcat Port 8080. (The other 5% are open JARs, connections to memcached and MySQL and SSL-Socket).
>
> Most of the connections to port 8080 are in the CLOSE_WAIT state.
>
> I have the strong feeling that something (tomcat, JVM, whatsoever) relies that the JVM garbage collection will kill those open connections. However, if we have heavy load, the garbage collection is suspended and then the connections pile up. But this is just a guess.
>
> How can this problem be solved?
>
> Thank you and kind regards,
>
> Tobias.
>
> -----------------------------------------------------------
> Tobias Schulz-Hess
>
> ICS - Internet Consumer Services GmbH
> Mittelweg 162
> 20148 Hamburg
>
> Tel:    +49 (0) 40 238 49 141
> Fax:    +49 (0) 40 415 457 14
> E-Mail: tsh@internetconsumerservices.com
> Web:    www.internetconsumerservices.com
>
> Projekte
> www.dealjaeger.de
> www.verwandt.de
>
> ICS Internet Consumer Services GmbH
> Geschäftsführer: Dipl.-Kfm. Daniel Grözinger, Dipl.-Kfm. Sven Schmidt
> Handelsregister: Amtsgericht Hamburg HRB 95149
>
>
>



Re: Too many open files exception under heavy load - need help!

Posted by David Brown <da...@davidwbrown.name>.
IMHO, try JMeter from jakarta.apache.org. Take a close look at building a rigorous load/stress-testing Test Plan, ideally run as a distributed load test. HTH.
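
(For driving real load, JMeter's non-GUI mode is usually the way to go; a 
minimal sketch, with the test plan and result file names as placeholders:)

  # run a test plan without the GUI and log the samples
  jmeter -n -t testplan.jmx -l results.jtl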

Tobias Schulz-Hess wrote ..
> Hi there,
> 
> we use the current Tomcat 6.0 on 2 machines. The hardware is brand new and is really
> fast. We get lots of traffic which is usually handled well by the tomcats and the
> load on those machines is between 1 and 6 (when we have lots of traffic).
> The machines have debian 4.1/64 as OS.
> 
> However, sometimes (especially if we have lots of traffic) we get the following
> exception:
> INFO   | jvm 1    | 2008/01/23 15:28:18 | java.net.SocketException: Too many open
> files
> INFO   | jvm 1    | 2008/01/23 15:28:18 |       at java.net.PlainSocketImpl.socketAccept(Native
> Method)
> INFO   | jvm 1    | 2008/01/23 15:28:18 |       at java.net.PlainSocketImpl.accept(PlainSocketImpl.java:384)
> INFO   | jvm 1    | 2008/01/23 15:28:18 |       at java.net.ServerSocket.implAccept(ServerSocket.java:453)
> INFO   | jvm 1    | 2008/01/23 15:28:18 |       at java.net.ServerSocket.accept(ServerSocket.java:421)
> INFO   | jvm 1    | 2008/01/23 15:28:18 |       at org.apache.tomcat.util.net.DefaultServerSocketFactory.acceptSocket(DefaultServe
> rSocketFactory.java:61)
> INFO   | jvm 1    | 2008/01/23 15:28:18 |       at org.apache.tomcat.util.net.JIoEndpoint$Acceptor.run(JIoEndpoint.java:310)
> INFO   | jvm 1    | 2008/01/23 15:28:18 |       at java.lang.Thread.run(Thread.java:619)
> I
> 
> We already have altered the ulimit from 1024 (default) to 4096 (and therefore proofing:
> yes, I have used google and read almost everything about that exception).
> 
> We also looked into the open files and all 95% of them are from or to the Tomcat
> Port 8080. (The other 5% are open JARs, connections to memcached and MySQL and
> SSL-Socket).
> 
> Most of the connections to port 8080 are in the CLOSE_WAIT state.
> 
> I have the strong feeling that something (tomcat, JVM, whatsoever) relies that
> the JVM garbage collection will kill those open connections. However, if we have
> heavy load, the garbage collection is suspended and then the connections pile up.
> But this is just a guess.
> 
> How can this problem be solved?
> 
> Thank you and kind regards,
> 
> Tobias.
> 
> -----------------------------------------------------------
> Tobias Schulz-Hess
>  
> ICS - Internet Consumer Services GmbH
> Mittelweg 162
> 20148 Hamburg
>  
> Tel: 	+49 (0) 40 238 49 141
> Fax: 	+49 (0) 40 415 457 14
> E-Mail: tsh@internetconsumerservices.com
> Web: 	www.internetconsumerservices.com 
> 
> Projekte
> www.dealjaeger.de 
> www.verwandt.de
> 
> ICS Internet Consumer Services GmbH
> Geschäftsführer: Dipl.-Kfm. Daniel Grözinger, Dipl.-Kfm. Sven Schmidt
> Handelsregister: Amtsgericht Hamburg HRB 95149