Posted to dev@tomcat.apache.org by bu...@apache.org on 2003/12/31 18:06:06 UTC

DO NOT REPLY [Bug 25841] New: - Tomcat/JK thread starvation

DO NOT REPLY TO THIS EMAIL, BUT PLEASE POST YOUR BUG 
RELATED COMMENTS THROUGH THE WEB INTERFACE AVAILABLE AT
<http://nagoya.apache.org/bugzilla/show_bug.cgi?id=25841>.
ANY REPLY MADE TO THIS MESSAGE WILL NOT BE COLLECTED AND 
INSERTED IN THE BUG DATABASE.

http://nagoya.apache.org/bugzilla/show_bug.cgi?id=25841

Tomcat/JK thread starvation

           Summary: Tomcat/JK thread starvation
           Product: Tomcat 4
           Version: 4.1.24
          Platform: Sun
        OS/Version: Other
            Status: NEW
          Severity: Critical
          Priority: Other
         Component: Connector:JK/AJP (deprecated)
        AssignedTo: tomcat-dev@jakarta.apache.org
        ReportedBy: franz@franzzemen.com


We run Tomcat 4.1.24 inside JBoss 3.2.1.  We have observed that under load, an 
increasing number of Tomcat threads become "stuck" in a JK conversation with 
Apache (thread dump below).  As maxProcessors is approached and reached, this 
evidently degrades performance for users, who must wait for workers to become 
available.  Eventually, once every thread is in this state, no response is 
possible and the servers have to be restarted.  

The problem doesn't seem to appear except under load, which leads us to believe 
it is related to maxProcessors.  We have raised maxProcessors from 75 to 100 to 
150; each increase requires more concurrency to trigger the problem.  Raising 
maxProcessors indefinitely is not a solution, because we want to throttle 
requests through the appserver using the acceptCount/maxProcessors 
combination.

The thread dump for a typical thread in this condition probably doesn't tell 
you much, because it appears to be a perfectly normal situation (until you 
realize that all the threads are "stuck" in this state):

"Thread-334" daemon prio=5 tid=0x19e33c8 nid=0x835 runnable [9ae81000..9ae81994]
	at java.net.SocketInputStream.socketRead0(Native Method)
	at java.net.SocketInputStream.read(SocketInputStream.java:129)
	at java.io.BufferedInputStream.fill(BufferedInputStream.java:183)
	at java.io.BufferedInputStream.read1(BufferedInputStream.java:222)
	at java.io.BufferedInputStream.read(BufferedInputStream.java:277)
	- locked <d170f5f8> (a java.io.BufferedInputStream)
	at org.apache.jk.common.ChannelSocket.read(ChannelSocket.java:498)
	at org.apache.jk.common.ChannelSocket.receive(ChannelSocket.java:436)
	at org.apache.jk.common.ChannelSocket.processConnection(ChannelSocket.java:551)
	at org.apache.jk.common.SocketConnection.runIt(ChannelSocket.java:679)
	at org.apache.tomcat.util.threads.ThreadPool$ControlRunnable.run(ThreadPool.java:619)
	at java.lang.Thread.run(Thread.java:536)


The connector configuration is:

            <Connector className="org.apache.coyote.tomcat4.CoyoteConnector"
               port="9011" minProcessors="150" maxProcessors="150"
               enableLookups="false"
               acceptCount="50" debug="0" connectionTimeout="60000"
               useURIValidationHack="false"
               protocolHandlerClassName="org.apache.jk.server.JkCoyoteHandler"/>

A typical apache side worker.properties configuration is:

worker.syn1.port=9011
worker.syn1.host=prodapp01.parago.com
worker.syn1.type=ajp13
worker.syn1.cachesize=150
worker.syn1.cache_timeout=600
worker.syn1.lbfactor=100

---------------------------------------------------------------------
To unsubscribe, e-mail: tomcat-dev-unsubscribe@jakarta.apache.org
For additional commands, e-mail: tomcat-dev-help@jakarta.apache.org