Posted to users@tomcat.apache.org by di...@mazumdar.demon.co.uk on 2003/05/23 01:11:41 UTC

CLOSE_WAIT problems - please help

We recently migrated our servlet-based e-commerce web site from Oracle
Application Server to Tomcat 4.1.24. In the past three weeks, Tomcat has
locked up three times. Each time, the following message appeared in the log
files:

2003/05/21 08:06:17:269 GMT+00:00 [ERROR] ThreadPool - -All threads are
busy, waiting. Please increase maxThreads or check the servlet status (150 150)

The other symptom was hundreds of sockets stuck in the CLOSE_WAIT state,
i.e. the remote end had closed its side of the connection but the local
process had never called close() on its own socket.

Once the system entered this state, it could no longer process requests,
and the problem could only be resolved by restarting Tomcat. We tried
recycling the HTTP server alone, but this did not solve the problem.

Interestingly, even when only one Tomcat instance went into this state, the
entire system locked up and no more requests could be processed.

In terms of traffic, we handle about 50,000 servlet requests per day, with
an average response time below 2 seconds. 90% of requests complete in under
1 second; only 2% of hits (PDF downloads) take longer than 10 seconds.

Our peak hit rate is about 75-100 servlet requests per minute.

Our system is configured as follows:

We are using Apache HTTP Server 1.3.27 with mod_jk 1.2.0.
We have configured two load balanced instances of Tomcat 4.1.24.
Operating system is AIX 4.3.3 patch level 10.
We are using IBM JDK 1.3.1.

Each Tomcat worker is configured as follows:

worker.tomcat2.type=ajp13
worker.tomcat2.lbfactor=50
worker.tomcat2.socket_keepalive=0
worker.tomcat2.socket_timeout=300
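
For completeness, the load-balancing glue in workers.properties looks
roughly like this (tomcat1 is configured identically to tomcat2; the host
name and AJP port below are placeholders, not our real values):

worker.list=loadbalancer

worker.tomcat1.type=ajp13
worker.tomcat1.host=app1.example.com
worker.tomcat1.port=8009
worker.tomcat1.lbfactor=50
worker.tomcat1.socket_keepalive=0
worker.tomcat1.socket_timeout=300

worker.loadbalancer.type=lb
worker.loadbalancer.balanced_workers=tomcat1, tomcat2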

The Apache conf settings are as follows:

Timeout On
KeepAlive On
MaxKeepAliveRequests 50
KeepAliveTimeout 5
MinSpareServers 5
MaxSpareServers 10
StartServers 15
MaxClients 150
MaxRequestsPerChild 1000
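
One sizing relationship worth noting: if we understand mod_jk correctly,
each Apache child can hold one persistent AJP13 connection per Tomcat
worker, so at full load

    MaxClients x workers = 150 x 2 = 300 AJP connections in total,

i.e. up to 150 per Tomcat instance, which is exactly the maxProcessors
ceiling shown below.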

Each Tomcat instance is configured as follows:

<Connector className="org.apache.coyote.tomcat4.CoyoteConnector"
               port="10009" minProcessors="15" maxProcessors="150"
               enableLookups="false"
               acceptCount="10" debug="0" connectionTimeout="0"
               useURIValidationHack="false"

protocolHandlerClassName="org.apache.jk.server.JkCoyoteHandler"
               connectionLinger="1000" tcpNoDelay="true"
               disableUploadTimeout="true" />

We desperately need to resolve this problem because we cannot afford
further unscheduled outages like those of the past three weeks. As a
temporary workaround, we have introduced monitoring so that when more than
50 sockets enter the CLOSE_WAIT state, we automatically bounce the affected
Tomcat instance; the check is sketched below. This is not a good solution
because existing sessions are lost when an instance is bounced.
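
In outline, the check is roughly the following (a simplified sketch: it
counts CLOSE_WAIT lines in netstat -an output; the class name and the
bounce script path are placeholders for our site-specific pieces):

import java.io.BufferedReader;
import java.io.InputStreamReader;

public class CloseWaitMonitor {
    // Bounce threshold used by the monitoring described above.
    private static final int THRESHOLD = 50;

    public static void main(String[] args) throws Exception {
        // Count sockets currently in CLOSE_WAIT by parsing netstat output.
        Process p = Runtime.getRuntime().exec("netstat -an");
        BufferedReader in = new BufferedReader(
                new InputStreamReader(p.getInputStream()));
        int closeWait = 0;
        String line;
        while ((line = in.readLine()) != null) {
            if (line.indexOf("CLOSE_WAIT") != -1) {
                closeWait++;
            }
        }
        in.close();
        System.out.println(closeWait + " sockets in CLOSE_WAIT");
        if (closeWait > THRESHOLD) {
            // Placeholder: the site-specific script that bounces the
            // affected Tomcat instance.
            Runtime.getRuntime().exec("/usr/local/bin/bounce-tomcat");
        }
    }
}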

I would be grateful for help and advice from the Tomcat developers and power
users.

Thanks and Regards

Dibyendu Majumdar




Re: CLOSE_WAIT problems - please help

Posted by joseph lam <jo...@quotepower.com>.
I'm experiencing a similar problem and have seen only a few other people 
report it. I suspect that the Coyote connector sometimes has trouble 
closing and cleaning up sockets, and also its own threads. All I can do is 
increase Tomcat's maxProcessors to the biggest value I can afford, so that 
the instance lasts maybe a few days longer; the snippet below shows the 
kind of change I mean.
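
For example (750 is purely illustrative of "the biggest value I can
afford"; every other attribute is unchanged from the connector posted
above):

<Connector className="org.apache.coyote.tomcat4.CoyoteConnector"
           port="10009" minProcessors="15" maxProcessors="750"
           enableLookups="false"
           acceptCount="10" debug="0" connectionTimeout="0"
           useURIValidationHack="false"
           protocolHandlerClassName="org.apache.jk.server.JkCoyoteHandler"
           connectionLinger="1000" tcpNoDelay="true"
           disableUploadTimeout="true" />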

Joseph

dibyendu@mazumdar.demon.co.uk wrote:

>We recently migrated our servlet-based e-commerce web site from Oracle
>Application Server to Tomcat 4.1.24. In the past three weeks, Tomcat has
>locked up three times.
>[...]


---------------------------------------------------------------------
To unsubscribe, e-mail: tomcat-user-unsubscribe@jakarta.apache.org
For additional commands, e-mail: tomcat-user-help@jakarta.apache.org