Posted to dev@tomcat.apache.org by Dominik Pospisil <dp...@redhat.com> on 2007/02/22 12:17:16 UTC
org.apache.catalina.util.InstanceSupport.fireInstanceEvent
Hello,
I am trying to configure Tomcat to handle a very large number (5,000) of
simultaneous client connections via HTTP with keep-alive. All the clients
produce an equal, constant load.
The problem is that before I reach maximum CPU load, everything works fine:
clients are served equally, with very low response times (<50 ms). But once I
pass full server saturation, one would expect the server to still serve all
the clients, just with accordingly higher response times. That does not seem
to be the case. When I increase the number of clients, a nearly constant
number of clients remains served with low response times, while the rest are
not served at all; their connections are stalled.
I am using 5.5 now, but I checked the 6.0 sources and the implementation
appears to be the same.
I took a Tomcat thread dump and found that about 1/3 of the threads are
waiting in:
"http-10.68.1.19-8080-3077" daemon prio=1 tid=0xa99420a0
nid=0x29c6 runnable [0x45107000..0x45107e30]
at java.net.SocketInputStream.socketRead0(Native Method)
at java.net.SocketInputStream.read(SocketInputStream.java:129)
which is fine, but about 2/3 of the threads are waiting for a single lock in:
"http-10.68.1.19-8080-3078" daemon prio=1 tid=0xa9942e60
nid=0x29c7 waiting for monitor entry [0x45086000..0x45086eb0]
at org.apache.catalina.util.InstanceSupport.fireInstanceEvent(InstanceSupport.java:180)
- waiting to lock <0xb2c337a8> (a [Lorg.apache.catalina.InstanceListener;)
at org.apache.catalina.core.ApplicationFilterChain.internalDoFilter(ApplicationFilterChain.java:187)
So I am wondering if this could be the problem. Is it necessary for this
implementation to be synchronized?
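To make the pattern in the stack trace concrete, a simplified sketch of a fire path that synchronizes on a shared listener array (illustrative names only, not Tomcat's actual code) looks like this; every worker thread firing an event queues behind the same monitor:

```java
// Simplified sketch of the contention pattern from the thread dump:
// all worker threads synchronize on one shared listener array, so
// event firing serializes on a single monitor.
public class InstanceSupportSketch {
    private final Object[] listeners = new Object[] { "listener-a", "listener-b" };
    private int fired = 0;

    public void fireInstanceEvent() {
        Object[] snapshot;
        synchronized (listeners) {        // the single monitor every worker contends on
            snapshot = listeners.clone(); // the sync + clone seen at InstanceSupport.java:180
        }
        for (Object l : snapshot) {       // iteration happens outside the lock
            fired++;                      // stand-in for listener.instanceEvent(...)
        }
    }

    public int fired() { return fired; }
}
```

With thousands of request threads, even a short critical section like this becomes a serialization point.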
Thanks for any comments,
Dominik
---------------------------------------------------------------------
To unsubscribe, e-mail: dev-unsubscribe@tomcat.apache.org
For additional commands, e-mail: dev-help@tomcat.apache.org
Re: org.apache.catalina.util.InstanceSupport.fireInstanceEvent
Posted by Filip Hanik - Dev Lists <de...@hanik.com>.
Dominik Pospisil wrote:
>> Dominik Pospisil wrote:
>>
>>> So I am wondering if this could be the problem. Is it necessary for
>>> this implementation to be synchronized?
>>>
>> Given the implementation, you are not supposed to be using instance
>> listeners at this time except for debugging purposes.
>>
>> Since InstanceSupport is array based, I don't see the point of the
>> sync+clone which happens in there.
>>
>> Rémy
>>
>
> Rémy,
>
> thanks for fixing that issue. In my test I can see a ~30% improvement in the
> number of concurrent sessions correctly served by a single Tomcat instance.
> But it still did not solve the problem completely.
>
> My idea is that, given enough memory and I/O resources, Tomcat should be
> able to handle all the clients "equally". So if the clients produce an
> equal, constant load, they should all be served, with the same average
> response times. Do you think that is achievable?
>
> The question is what "equally" should mean exactly. I know that in real
> scenarios there are various clients with different connections and injection
> rates, so a simple FIFO rule would not be sufficient. Moreover, I am new to
> Tomcat internals and at this point I have no idea how it should work at
> all.
>
> But what about the general idea of having some scheduler that would somehow
> control thread execution? Is that a good idea, or something completely wrong?
>
There is no scheduler in Tomcat, but the same logic has been implemented.
For example, with the blocking connector, connections are simply handled in
the order they are accepted.
On the regular connector the notion of a scheduler is moot, since you will
never have more connections than you have threads; it is up to the operating
system's scheduler to determine how threads are swapped onto the CPU.
The Tomcat APR connector behaves the same as above: when a socket is
accepted, the acceptor blocks until a thread is available to process the
request.
For keep-alive connections, when the poll() events are invoked, the
connections are handled in that exact order; the APR poller blocks until a
worker thread is available.
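That blocking handoff can be modeled with a toy sketch (made-up names, not the real connector code): the acceptor offers each accepted socket through an unbuffered, fair queue, so it blocks until some worker takes it, and sockets are consumed in accept order.

```java
import java.util.concurrent.SynchronousQueue;

// Toy model of a blocking acceptor handoff: SynchronousQueue has no
// buffer, so put() blocks until a worker thread is ready to take the
// socket. fair = true gives FIFO order among waiting threads.
public class BlockingAcceptorSketch {
    private final SynchronousQueue<Integer> handoff = new SynchronousQueue<>(true);

    // Acceptor side: blocks until a worker takes the "socket".
    public void accept(int socketId) throws InterruptedException {
        handoff.put(socketId);
    }

    // Worker side: takes the next accepted socket, in accept order.
    public int takeNext() throws InterruptedException {
        return handoff.take();
    }
}
```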
The Tomcat 6 NIO connector works a little differently. Upon accepting a
connection, the NIO acceptor does not block waiting for an available worker
thread; instead it registers the socket with the poller. The idea behind
this is that an accepted socket does not necessarily have data to be read,
hence there is no need to block a thread.
Also, the NIO poller never blocks: if a socket is ready for read but no
worker threads are available, the poller simply processes other events, and
the socket is handed off once a thread becomes available.
So of these three scenarios, the only implementation that does not follow a
strict first-come-first-served order is the NIO implementation.
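The NIO poller's non-blocking handoff can be sketched roughly like this (a toy model with invented names, not the actual poller code): a ready socket is dispatched only if a worker permit is free; otherwise it is requeued and the poller moves on, which is exactly why strict first-come-first-served is not preserved.

```java
import java.util.ArrayDeque;
import java.util.Deque;
import java.util.concurrent.Semaphore;

// Toy model of the NIO poller's non-blocking handoff: a ready socket is
// dispatched only if a worker permit is free; otherwise the poller
// requeues it and keeps processing other events, retrying later.
public class NioPollerSketch {
    private final Deque<Integer> readySockets = new ArrayDeque<>();
    private final Semaphore workers;

    public NioPollerSketch(int workerThreads) {
        this.workers = new Semaphore(workerThreads);
    }

    // A socket became readable; remember it for the next poll iteration.
    public void socketReady(int socketId) {
        readySockets.add(socketId);
    }

    // One poll iteration: never blocks. Returns the dispatched socket, or -1.
    public int pollOnce() {
        Integer socket = readySockets.poll();
        if (socket == null) return -1;
        if (workers.tryAcquire()) {       // non-blocking: grab a worker if one is free
            return socket;                // stand-in for handing the socket to a worker
        }
        readySockets.addLast(socket);     // no worker free: requeue, handle other events
        return -1;
    }

    // A worker finished; its permit becomes available again.
    public void workerDone() {
        workers.release();
    }
}
```

Note how requeuing at the tail reorders sockets relative to arrival, illustrating the departure from strict FIFO.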
In a recent test with 12k concurrent connections, the equality looked very
good; example results are below.
Let me know if you have any more questions.
Filip
Server Software: Apache-Coyote/1.1
Server Hostname: testhost
Server Port: 8080
Document Path: /load/bd?size=64
Document Length: 65536 bytes
Concurrency Level: 12000
Time taken for tests: 290.520087 seconds
Complete requests: 1200000
Failed requests: 0
Write errors: 0
Keep-Alive requests: 1200000
Total transferred: 78958306058 bytes
HTML transferred: 78771832843 bytes
Requests per second: 4130.52 [#/sec] (mean)
Time per request: 2905.201 [ms] (mean)
Time per request: 0.242 [ms] (mean, across all concurrent requests)
Transfer rate: 265412.72 [Kbytes/sec] received
Connection Times (ms)
min mean[+/-sd] median max
Connect: 0 80 1342.3 0 93017
Processing: 523 2804 955.4 2510 9082
Waiting: 161 2382 957.0 2099 8186
Total: 523 2884 1739.6 2513 100406
Percentage of the requests served within a certain time (ms)
50% 2513
66% 2693
75% 2825
80% 3659
90% 4521
95% 4946
98% 5325
99% 6050
100% 100406 (longest request)
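For reference, output in this shape comes from ApacheBench (ab); a command along the following lines would produce a comparable run (the hostname and URL are taken from the report above, the exact flags are an assumption):

```shell
# Assumed ab invocation matching the report above:
#   -k  enable HTTP keep-alive (report shows 1,200,000 keep-alive requests)
#   -c  12,000 concurrent clients (Concurrency Level)
#   -n  1,200,000 total requests (Complete requests)
ab -k -c 12000 -n 1200000 "http://testhost:8080/load/bd?size=64"
```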
> Thanks for any comments,
>
> Dominik
>
Re: org.apache.catalina.util.InstanceSupport.fireInstanceEvent
Posted by Dominik Pospisil <dp...@redhat.com>.
> Dominik Pospisil wrote:
> > So I am wondering if this could be the problem. Is it necessary for
> > this implementation to be synchronized?
>
> Given the implementation, you are not supposed to be using instance
> listeners at this time except for debugging purposes.
>
> Since InstanceSupport is array based, I don't see the point of the
> sync+clone which happens in there.
>
> Rémy
Rémy,
thanks for fixing that issue. In my test I can see a ~30% improvement in the
number of concurrent sessions correctly served by a single Tomcat instance.
But it still did not solve the problem completely.
My idea is that, given enough memory and I/O resources, Tomcat should be
able to handle all the clients "equally". So if the clients produce an
equal, constant load, they should all be served, with the same average
response times. Do you think that is achievable?
The question is what "equally" should mean exactly. I know that in real
scenarios there are various clients with different connections and injection
rates, so a simple FIFO rule would not be sufficient. Moreover, I am new to
Tomcat internals and at this point I have no idea how it should work at
all.
But what about the general idea of having some scheduler that would somehow
control thread execution? Is that a good idea, or something completely wrong?
Thanks for any comments,
Dominik
Re: org.apache.catalina.util.InstanceSupport.fireInstanceEvent
Posted by Remy Maucherat <re...@apache.org>.
Dominik Pospisil wrote:
> So I am wondering if this could be the problem. Is it necessary for this
> implementation to be synchronized?
Given the implementation, you are not supposed to be using instance
listeners at this time except for debugging purposes.
Since InstanceSupport is array based, I don't see the point of the
sync+clone which happens in there.
Rémy
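Rémy's point can be illustrated with a copy-on-write sketch (illustrative code, not the actual fix): if the rare write path publishes a whole new array, the hot fire path needs neither the monitor nor the per-event clone, just a volatile read.

```java
// Sketch of a lock-free fire path (copy-on-write, illustrative only):
// writers build and publish a new array; readers iterate whatever
// reference they observe, with no lock and no clone per event.
public class LockFreeInstanceSupport {
    private volatile String[] listeners = new String[0];

    // Rare write path: copy, append, then publish the new reference.
    public synchronized void addListener(String l) {
        String[] next = java.util.Arrays.copyOf(listeners, listeners.length + 1);
        next[next.length - 1] = l;
        listeners = next;
    }

    // Hot read path: a single volatile read, then a plain loop.
    public int fireInstanceEvent() {
        String[] current = listeners;
        int fired = 0;
        for (String l : current) {
            fired++;               // stand-in for listener.instanceEvent(...)
        }
        return fired;
    }
}
```

The trade-off is extra allocation on registration, which is fine when listeners change rarely and events fire constantly, as in the contended stack trace above.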