Posted to dev@tomcat.apache.org by Annie Wang <lu...@gmail.com> on 2005/04/08 05:50:18 UTC

web application "request count" and "error count" in tomcat "manager" servlet (tomcat mbeans..)

Hi,

I have a question about the web application "request count" and "error
count" from the Tomcat "manager" servlet (which are also available via
Tomcat MBeans).

Initially, I was thinking that request count translates to the total
number of requests, and that a success count could be derived by
subtracting error count from request count.  However, this doesn't always
seem to be the case.  It seems to depend on how the web application is
configured via its web.xml file.

For instance: if I improperly access my web application by providing
a bad URL (e.g. http://127.0.0.1:8080/webapp/some_junk), both request
count and error count increment by one as expected.

However, if I configure my web app to always prompt for authentication
and access it with a bad URL: after giving the correct username/password,
error count is incremented correctly because of the bad URL, but request
count is NOT.  Not sure if this is a Tomcat MBean bug or by design?

Does anyone know the exact definition of "request count"?  Is it
supposed to be the total number of requests?

Thanks in advance!
-annie
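
For reference, the counters in question can also be read directly over JMX
from code running in the same JVM as Tomcat. A minimal sketch, assuming the
usual Catalina:type=GlobalRequestProcessor MBean naming and the
requestCount/errorCount attribute names (both can vary between Tomcat
versions):

    import java.util.Iterator;
    import java.util.Set;

    import javax.management.MBeanServer;
    import javax.management.MBeanServerFactory;
    import javax.management.ObjectName;

    public final class RequestCountProbe {

        // Call this from code running inside the Tomcat JVM (a servlet, JSP
        // or valve) so that Catalina's MBeanServer is visible.
        public static void dump() throws Exception {
            MBeanServer server =
                (MBeanServer) MBeanServerFactory.findMBeanServer(null).get(0);
            // One GlobalRequestProcessor MBean is registered per connector.
            ObjectName pattern =
                new ObjectName("Catalina:type=GlobalRequestProcessor,*");
            Set names = server.queryNames(pattern, null);
            for (Iterator it = names.iterator(); it.hasNext();) {
                ObjectName name = (ObjectName) it.next();
                Object requests = server.getAttribute(name, "requestCount");
                Object errors = server.getAttribute(name, "errorCount");
                System.out.println(name + " requestCount=" + requests
                        + " errorCount=" + errors);
            }
        }

        private RequestCountProbe() {
        }
    }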



Re: Tomcat performance patch (in development) to reduce concurrency...

Posted by Mladen Turk <mt...@apache.org>.
Costin Manolache wrote:
>>
>> Furthermore, I don't see how you can avoid keep-alive connection
>> problems without using a thread-per-connection model.
>> The point is that with 100 keep-alive connections you will still
>> have 100 busy threads.
> 
> Why? 100 keep-alive connections don't mean 100 active requests;
> in real servers there are many 'keep alive' connections that are just
> waiting for the next request.
>

Where are they waiting if in blocking mode?
IIRC each is inside a separate thread.

> In all servers I know, concurrency was higher than the configured number 
> of workers  - at peak time, at least, where performance matters.
>

Sure, but this is for accepting new connections, right?

Mladen.



Re: How to redirect all ports to use SSL?

Posted by Mark Thomas <ma...@apache.org>.
This is a question for tomcat-user, not tomcat-dev

Mark


Donny R Rota wrote:
> I want all my Tomcat requests to go through SSL. 
> 
> I want the URLs to look like  https://this/   and not   https://this:8443
> 
> I setup tomcat, and got ssl working on 8443.
> But I cannot redirect port 80 to 8443.  I keep getting 'access denied'.
> 
> Is there a way in Tomcat to redirect all port 80 requests to SSL(8443)?
> 
> I know you can do it the other way around 8443 -> 80.
> 
> I'm just running standalone Tomcat, no Apache.
> 
> advTHANKSance!
> ...Don...




How to redirect all ports to use SSL?

Posted by Donny R Rota <dr...@us.ibm.com>.
I want all my Tomcat requests to go through SSL. 

I want the URLs to look like  https://this/   and not   https://this:8443

I setup tomcat, and got ssl working on 8443.
But I cannot redirect port 80 to 8443.  I keep getting 'access denied'.

Is there a way in Tomcat to redirect all port 80 requests to SSL(8443)?

I know you can do it the other way around 8443 -> 80.

I'm just running standalone Tomcat, no Apache.

advTHANKSance!
...Don...
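
For reference, the usual standalone-Tomcat answer is a CONFIDENTIAL
transport guarantee in the webapp's web.xml plus a redirectPort on the
non-SSL connector; a minimal sketch (URL pattern and ports are illustrative):

    <!-- web.xml: force every request in this webapp onto SSL -->
    <security-constraint>
      <web-resource-collection>
        <web-resource-name>Entire application</web-resource-name>
        <url-pattern>/*</url-pattern>
      </web-resource-collection>
      <user-data-constraint>
        <transport-guarantee>CONFIDENTIAL</transport-guarantee>
      </user-data-constraint>
    </security-constraint>

    <!-- server.xml: the plain HTTP connector sends constrained requests to 8443 -->
    <Connector port="80" redirectPort="8443" />

To get URLs without the :8443 suffix, the SSL connector itself would have to
listen on 443; and note that binding to ports below 1024 on Unix normally
requires elevated privileges, which may be where the 'access denied' error
comes from.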

Re: Tomcat performance patch (in development) to reduce concurrency...

Posted by Costin Manolache <cm...@yahoo.com>.
Mladen Turk wrote:
> Scott Marlow wrote:
> 
>> Hi,
>> I wonder if anyone has any feedback on a performance change that I am
>> working on making.
> 
> 
> Can you compare the performance of your code with the standard
> implementation when the concurrency is lower than the maxThreads
> value?
> 
> I see no point in making patches that deal with cases presuming
> that the concurrency is always higher than the actual number of
> worker threads available.
> 
> IMHO this is a bad design approach for HTTP applications,
> and NIO performance is proof of that.
> It might help in cases where you have very, very slow clients.
> In any other case the thread context switching will kill
> the performance, though.
> 
> Furthermore, I don't see how you can avoid keep-alive connection
> problems without using a thread-per-connection model.
> The point is that with 100 keep-alive connections you will still
> have 100 busy threads.

Why? 100 keep-alive connections don't mean 100 active requests;
in real servers there are many 'keep alive' connections that are just
waiting for the next request.

In all servers I know, concurrency was higher than the configured number 
of workers  - at peak time, at least, where performance matters.

Costin




Re: Tomcat performance patch (in development) to reduce concurrency...

Posted by Mladen Turk <mt...@apache.org>.
Scott Marlow wrote:
> Hi, 
> 
> I wonder if anyone has any feedback on a performance change that I am
> working on making. 
>

Can you compare the performance of your code with the standard
implementation when the concurrency is lower than the maxThreads
value?

I see no point in making patches that deal with cases presuming
that the concurrency is always higher than the actual number of
worker threads available.

IMHO this is a bad design approach for HTTP applications,
and NIO performance is proof of that.
It might help in cases where you have very, very slow clients.
In any other case the thread context switching will kill
the performance, though.

Furthermore, I don't see how you can avoid keep-alive connection
problems without using a thread-per-connection model.
The point is that with 100 keep-alive connections you will still
have 100 busy threads.

Regards,
Mladen.




Re: Tomcat performance patch (in development) to reduce concurrency...

Posted by Scott Marlow <sc...@gmail.com>.
On Tue, 2005-07-26 at 16:55 +0200, Remy Maucherat wrote:
> Remy Maucherat wrote:
> > Scott Marlow wrote:
> > 
> >> Anyway, my point is that this could be a worthwhile enhancement for
> >> applications that run on Tomcat.  What I don't understand yet is whether
> >> the same functionality is already in Tomcat.
> >>
> >> I should point out that some applications shouldn't limit the max number
> >> of concurrent requests (long running requests won't benefit but maybe
> >> those applications shouldn't run on the web tier anyway :-)
> > 
> > I agree with the intent, but this is not implemented properly. I think
> > the idea is to restrict concurrency in the application layer, rather than at
> > the low level (where, AFAIK, concurrency isn't that expensive, and is
> > better addressed using a little non blocking IO). The performance
> > benefits for certain types of applications will be the same, but without
> > introducing any unwanted limitations or incorrect behavior at the
> > connector level.
> > 
> > I think you should write a ConcurrencyValve instead, which would do
> > something like:
> > 
> >     boolean shouldRelease = false;
> >     try {
> >         concurrencySemaphore.acquire();
> >         shouldRelease = true;
> >         getNext().invoke(request, response);
> >     } finally {
> >         if (shouldRelease)
> >             concurrencySemaphore.release();
> >     }
> > 
> > As it is a valve, you can set it globally, on a host, or on an
> > individual webapp, allowing you to control concurrency in a fine-grained
> > way. In theory, you can also add it on individual servlets, but it
> > requires some hacking. Since it's optional and independent, I think it
> > is acceptable to use Java 5 for it.
> > 
> > As you pointed out, some applications may run horribly with this (slow
> > upload is the most glaring example).
> 
> It took forever (given it's only 10 lines of code), but I added the 
> valve. The class is org.apache.catalina.valves.SemaphoreValve.
> 
> So you can add it at the engine level to add a concurrency constraint 
> for the whole servlet engine, without constraining the connector (which 
> might not be low thread count friendly).
> 
> Rémy
> 
> 
I tried SemaphoreValve today and it worked as expected. Nice job! :-)

I also tried a JDK1.4 flavor (SemaphoreValve14) which uses Doug Lea's
concurrent.jar and that worked as well (I omitted fairness support and
defaulted to fair.)

Depending on the Doug Lea concurrent jar will be a problem as that jar
is not used in Tomcat.  However, if someone wanted to build it
themselves with their own copy of concurrent.jar, that would work.
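
For what it's worth, a minimal standalone sketch of the Doug Lea API such a
1.4 valve would depend on - FIFOSemaphore hands out permits in arrival order,
i.e. "fair" - here throttling ten worker threads down to three concurrent
"requests" (the class name and numbers are illustrative):

    import EDU.oswego.cs.dl.util.concurrent.FIFOSemaphore;
    import EDU.oswego.cs.dl.util.concurrent.Semaphore;

    public class ThrottleDemo {

        public static void main(String[] args) {
            // At most 3 of the 10 worker threads may run their "request" at once.
            final Semaphore throttle = new FIFOSemaphore(3);
            for (int i = 0; i < 10; i++) {
                final int id = i;
                new Thread(new Runnable() {
                    public void run() {
                        try {
                            throttle.acquire();
                            try {
                                System.out.println("request " + id + " running");
                                Thread.sleep(500); // simulated work
                            } finally {
                                throttle.release();
                            }
                        } catch (InterruptedException e) {
                            // interrupted while waiting for a permit; give up
                        }
                    }
                }).start();
            }
        }
    }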

Should I post the Java 1.4 flavor of SemaphoreValve14 here?

Scott




Re: Tomcat performance patch (in development) to reduce concurrency...

Posted by Remy Maucherat <re...@apache.org>.
Remy Maucherat wrote:
> Scott Marlow wrote:
> 
>> Anyway, my point is that this could be a worthwhile enhancement for
>> applications that run on Tomcat.  What I don't understand yet is whether
>> the same functionality is already in Tomcat.
>>
>> I should point out that some applications shouldn't limit the max number
>> of concurrent requests (long running requests won't benefit but maybe
>> those applications shouldn't run on the web tier anyway :-)
> 
> I agree with the intent, but this is not implemented properly. I think
> the idea is to restrict concurrency in the application layer, rather than at
> the low level (where, AFAIK, concurrency isn't that expensive, and is
> better addressed using a little non blocking IO). The performance
> benefits for certain types of applications will be the same, but without
> introducing any unwanted limitations or incorrect behavior at the
> connector level.
> 
> I think you should write a ConcurrencyValve instead, which would do
> something like:
> 
>     boolean shouldRelease = false;
>     try {
>         concurrencySemaphore.acquire();
>         shouldRelease = true;
>         getNext().invoke(request, response);
>     } finally {
>         if (shouldRelease)
>             concurrencySemaphore.release();
>     }
> 
> As it is a valve, you can set it globally, on a host, or on an
> individual webapp, allowing you to control concurrency in a fine-grained
> way. In theory, you can also add it on individual servlets, but it
> requires some hacking. Since it's optional and independent, I think it
> is acceptable to use Java 5 for it.
> 
> As you pointed out, some applications may run horribly with this (slow
> upload is the most glaring example).

It took forever (given it's only 10 lines of code), but I added the 
valve. The class is org.apache.catalina.valves.SemaphoreValve.

So you can add it at the engine level to add a concurrency constraint 
for the whole servlet engine, without constraining the connector (which 
might not be low thread count friendly).
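
For example, something along these lines in server.xml (the concurrency
attribute name and value shown here are assumed - check the committed valve
for its actual attributes):

    <Engine name="Catalina" defaultHost="localhost">

      <!-- Let at most 20 requests into the servlet engine at any one time. -->
      <Valve className="org.apache.catalina.valves.SemaphoreValve"
             concurrency="20" />

      <Host name="localhost" appBase="webapps" />

    </Engine>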

Rémy



Re: Tomcat performance patch (in development) to reduce concurrency...

Posted by Remy Maucherat <re...@apache.org>.
Scott Marlow wrote:
> Anyway, my point is that this could be a worthwhile enhancement for
> applications that run on Tomcat.  What I don't understand yet is whether
> the same functionality is already in Tomcat.
> 
> I should point out that some applications shouldn't limit the max number
> of concurrent requests (long running requests won't benefit but maybe
> those applications shouldn't run on the web tier anyway :-)

I agree with the intent, but this is not implemented properly. I think
the idea is to restrict concurrency in the application layer, rather than at
the low level (where, AFAIK, concurrency isn't that expensive, and is
better addressed using a little non blocking IO). The performance
benefits for certain types of applications will be the same, but without
introducing any unwanted limitations or incorrect behavior at the
connector level.

I think you should write a ConcurrencyValve instead, which would do
something like:

    boolean shouldRelease = false;
    try {
        concurrencySemaphore.acquire();
        shouldRelease = true;
        getNext().invoke(request, response);
    } finally {
        if (shouldRelease)
            concurrencySemaphore.release();
    }
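
Fleshed out with java.util.concurrent.Semaphore, such a valve might look
roughly like the sketch below; the class name, the hard-coded permit count
and the fairness flag are illustrative, and the Valve API shown is the
Tomcat 5.5 style:

    import java.io.IOException;
    import java.util.concurrent.Semaphore;

    import javax.servlet.ServletException;

    import org.apache.catalina.connector.Request;
    import org.apache.catalina.connector.Response;
    import org.apache.catalina.valves.ValveBase;

    public class ConcurrencyValve extends ValveBase {

        // 10 permits, fair ordering; a real valve would expose this as an attribute.
        private final Semaphore semaphore = new Semaphore(10, true);

        public void invoke(Request request, Response response)
                throws IOException, ServletException {
            boolean shouldRelease = false;
            try {
                semaphore.acquire();
                shouldRelease = true;
                getNext().invoke(request, response);
            } catch (InterruptedException e) {
                // Interrupted while waiting for a permit; no permit was taken.
                throw new ServletException(e);
            } finally {
                if (shouldRelease) {
                    semaphore.release();
                }
            }
        }
    }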

As it is a valve, you can set it globally, on a host, or on an
individual webapp, allowing you to control concurrency in a fine-grained
way. In theory, you can also add it on individual servlets, but it
requires some hacking. Since it's optional and independent, I think it
is acceptable to use Java 5 for it.

As you pointed out, some applications may run horribly with this (slow
upload is the most glaring example).

Rémy




Re: Tomcat performance patch (in development) to reduce concurrency...

Posted by Scott Marlow <sc...@gmail.com>.
On Wed, 2005-05-04 at 16:02 +0200, Remy Maucherat wrote:
> Scott Marlow wrote:
> > Hi, 
> > 
> > I wonder if anyone has any feedback on a performance change that I am
> > working on making. 
> > 
> > One benefit of reducing concurrency in a server application is that a
> > small number of requests can complete more quickly than if they had to
> > compete against a large number of running threads for object locks (Java
> > or externally in a database). 
> > 
> > I would like to have a Tomcat configuration option to set the max number of
> > concurrent threads that can service user requests.  You might configure
> > Tomcat to handle 800 http client connections but set the max concurrent
> > requests to 20 (perhaps higher if you have more CPUs).  I like to refer
> > to the max concurrent requests setting as the throttle size (if there is
> > a better term, let me know).
> > 
> > I modified the Tomcat Thread.run code to use Doug Lea's semaphore
> > support but didn't expose a configuration option (haven't learned how to
> > do that yet). My basic change is to allow users to specify the max
> > number of concurrent servlet requests that can run. If an application
> > has a high level of concurrency, end users may get more consistent
> > response time with this change. If an application has a low level of
> > concurrency, my change doesn't help as their application only has a few
> > threads running concurrently anyway. 
> > 
> > This also reduces resource use on other tiers. For example, if you are
> > supporting 500 users with a Tomcat instance, you don't need a database
> > connection pool size of 500, instead set the throttle size to 20 and
> > create a database connection pool size of 20. 
> > 
> > Current status of the change: 
> > 
> > 1. org.apache.tomcat.util.threads.ThreadPool.CONCURRENT_THREADS is
> > hardcoded to a value of 18, should be a configurable option. 
> > 2. I hacked the build scripts to include Doug Lea's concurrent.jar but
> > probably didn't make these changes correctly.  I could switch to using
> > the Java 1.5 implementation of the Concurrent package but we would still
> > need to do something for Java 1.4 compatibility.
> > 
> > Any suggestions on completing this enhancement are appreciated.
> > 
> > Please include my smarlow@novell.com email address in your response.
> 
> I looked at this yesterday, and while it is a cool hack, it is not that 
> useful anymore (and we're also not going to use the concurrent utilities 
> in Tomcat, so it's not really an option before we require Java 5). The 
> main issue is that due to the fact keepalive is done in blocking mode, 
> actual concurrency in the servlet container is unpredictable (the amount 
> of processing threads - maxThreads - will usually be a lot higher than 
> the actual expected concurrency - let's say 100 per CPU). If that issue 
> is solved (we're trying to see if APR is a good solution for it), then 
> the problem goes away.
> 
> Your patch is basically a much nicer implementation of maxThreads 
> (assuming it doesn't reduce performance) which would be useful for the 
> regular HTTP connector, so it's cool, but not worth it. Overall, I think 
> the way maxThreads is done in the APR connector is the easiest (if the 
> amount of workers is too high, wait a bit without accepting anything).
> 
> However, reading the text of the message, you don't seem to realize that 
> a lot of the threads which would actually be doing processing are just 
> blocking for keepalive (hence not doing anything useful; maybe you don't 
> see it in your test). Anyway, congratulations for understanding that 
> ThreadPool code (I stopped using it for new code, since I think it has 
> some limitations and is too complex).
> 
> Rémy
> 
> 

Thank you for all of the replies!

The benefit of reducing concurrency is for the application code more
than for the web container.  I last saw the benefit in action on Novell's
IIOP container when I was working on publishing spec.org benchmark numbers
(http://www.spec.org/jAppServer2001/results/res2003q4/jAppServer2001-20031118-00016.html).

Prior to setting the max number of concurrent requests allowed to run at
once, I had about 800 communication threads that were also running
application requests.  The application requests would typically do some
local processing and quite a bit of database I/O (the database ran on a
different tier).  With 800 application threads running at once, there
was too much contention on shared Java objects (the Java unfair
scheduler made this worse) and in the database.  Some client
requests would take 2 seconds to complete while others would take 40
seconds.

Luckily the Novell CORBA ORB already had the ability to set the max
number of IIOP requests allowed to run concurrently.  Setting this to 18
didn't impact the communication threads' ability to send/receive but
instead restricted the number of application requests being processed at
once to 18.  This mostly eliminated the Java object contention and
tightened the database transactions, as there was much less contention
with only 18 requests running at once.  Running 18 requests at a time
gave a more consistent response time.

Other web containers also have the ability to cap the number of concurrent
requests allowed to run at once.  The MySQL InnoDB storage
engine assumes a small "max number of concurrent threads" (see
innodb_thread_concurrency on
http://dev.mysql.com/doc/mysql/en/innodb-start.html).  Other server
products have also encouraged keeping the number of concurrently running
application requests small for similar reasons.

As I mentioned before, I'm just getting started with the Tomcat change
and don't have benchmark results to show yet (and may not for a while).  No
worries; I am patient and will get to this at some point, or perhaps we
will try this in a customer application to see if it helps.

Anyway, my point is that this could be a worthwhile enhancement for
applications that run on Tomcat.  What I don't understand yet is whether
the same functionality is already in Tomcat.

I should point out that some applications shouldn't limit the max number
of concurrent requests (long running requests won't benefit but maybe
those applications shouldn't run on the web tier anyway :-)

I agree that it is difficult to deal with Java 1.4 versus 1.5 and the
concurrent Java utilities.  Perhaps we could use the 1.5 support and
implement this class in the Tomcat 1.4 compatibility layer.  The 1.5
java.util.concurrent.Semaphore class would probably be used for this
(http://java.sun.com/j2se/1.5.0/docs/api/java/util/concurrent/Semaphore.html).

-Scott




Re: Tomcat performance patch (in development) to reduce concurrency...

Posted by Mladen Turk <mt...@apache.org>.
Costin Manolache wrote:
>> No it doesn't. If the connection is keep-alive, and there is no activity
>> for 100ms, the socket is put in the poller, and that thread is freed.
>> When the next data on that socket arrives, the socket is signaled and
>> passed to the thread pool.
>>
>> Mladen.
> 
> 
> Sorry, I missed that. So we can have as many 'keep alive' idle connections as we 
> want - only those active are taking threads ?

Yes. You will need APR HEAD if using WIN32 and want more than 64 of them.
On most other platforms, the limit is controlled by ulimit.


> Which file implements this 
> ( the 100ms timeout and poller ) ?

When the active thread finishes a response on a connection that is going to
be kept alive, it reads with a 100ms timeout. Thus, if the next keep-alive
request comes within 100ms, it is handled immediately.
If not, the socket is passed to the poller.

> I assume this is only done in the APR 
> connector, or is it implemented in java as well ( nio )?
> 

jakarta-tomcat-connectors/jni

We have APR and a thin native JNI glue layer, basically dealing with APR
types and pointers.

Regards,
Mladen.



Re: Tomcat performance patch (in development) to reduce concurrency...

Posted by Mladen Turk <mt...@apache.org>.
Costin Manolache wrote:
> Which file implements this 
> ( the 100ms timeout and poller ) ?

Poller is inside:
/jakarta-tomcat-connectors/util/java/org/apache/tomcat/util/net/AprEndpoint.java
100ms timeout and passing to poller is in:
/jakarta-tomcat-connectors/http11/src/java/org/apache/coyote/http11/Http11AprProcessor.java

Rest is inside:
/jakarta-tomcat-connectors/jni

Mladen.



Re: Tomcat performance patch (in development) to reduce concurrency...

Posted by Remy Maucherat <re...@apache.org>.
Costin Manolache wrote:
> Mladen Turk wrote:
> 
>> Costin Manolache wrote:
>>
>>> I'm still trying to understand the APR connector, but from what I see 
>>> it is still mapping one socket ( 'keep alive' connection ) per thread. 
>>
>> No it doesn't. If the connection is keep-alive, and there is no activity
>> for 100ms, the socket is put in the poller, and that thread is freed.
>> When the next data on that socket arrives, the socket is signaled and
>> passed to the thread pool.
> 
> Sorry, I missed that. So we can have as many 'keep alive' idle connections as we 
> want - only those active are taking threads ? Which file implements this 
> ( the 100ms timeout and poller ) ? I assume this is only done in the APR 
> connector, or is it implemented in java as well ( nio )?

What I like is that it does it *and* it still is extremely similar to 
the regular blocking HTTP connector.

From Http11AprProcessor.process:

    if (!inputBuffer.parseRequestLine()) {
        // This means that no data is available right now
        // (long keepalive), so that the processor should be recycled
        // and the method should return true
        rp.setStage(org.apache.coyote.Constants.STAGE_ENDED);
        openSocket = true;
        // Add the socket to the poller
        endpoint.getPoller().add(socket, pool);
        break;
    }

The 100ms wait before going to the poller is there to optimize the
pipelining case a little (assuming it does optimize something - I don't know).

Rémy



Re: Tomcat performance patch (in development) to reduce concurrency...

Posted by Costin Manolache <cm...@yahoo.com>.
Mladen Turk wrote:
> Costin Manolache wrote:
> 
>>
>> I'm still trying to understand the APR connector, but from what I see 
>> it is still mapping one socket ( 'keep alive' connection ) per thread. 
> 
> 
> No it doesn't. If the connection is keep-alive, and there is no activity
> for 100ms, the socket is put in the poller, and that thread is freed.
> When the next data on that socket arrives, the socket is signaled and
> passed to the thread pool.
> 
> Mladen.

Sorry, I missed that. So we can have as many 'keep alive' idle connections as we 
want - only those active are taking threads ? Which file implements this 
( the 100ms timeout and poller ) ? I assume this is only done in the APR 
connector, or is it implemented in java as well ( nio )?

Costin




Re: Tomcat performance patch (in development) to reduce concurrency...

Posted by Mladen Turk <mt...@apache.org>.
Costin Manolache wrote:
> 
> I'm still trying to understand the APR connector, but from what I see it 
> is still mapping one socket ( 'keep alive' connection ) per thread. 

No it doesn't. If the connection is keep-alive, and there is no activity
for 100ms, the socket is put in the poller, and that thread is freed.
When the next data on that socket arrives, the socket is signaled and
passed to the thread pool.

Mladen.



Re: Tomcat performance patch (in development) to reduce concurrency...

Posted by Costin Manolache <cm...@yahoo.com>.
Remy Maucherat wrote:

> I looked at this yesterday, and while it is a cool hack, it is not that 
> useful anymore (and we're also not going to use the concurrent utilities 
> in Tomcat, so it's not really an option before we require Java 5). The 
> main issue is that due to the fact keepalive is done in blocking mode, 
> actual concurrency in the servlet container is unpredictable (the amount 
> of processing threads - maxThreads - will usually be a lot higher than 
> the actual expected concurrency - let's say 100 per CPU). If that issue 
> is solved (we're trying to see if APR is a good solution for it), then 
> the problem goes away.

I'm still trying to understand the APR connector, but from what I see it 
is still mapping one socket ( 'keep alive' connection ) per thread. 
That's how it always worked - but it's not necessarily the best 
solution. The only thing that is required is to have a thread per active 
request - the sleepy keep-alives don't need a thread ( that could be 
implemented using select in the apr, or nio in java )
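
A rough, self-contained sketch of the "nio in java" idea - idle keep-alive
sockets parked on a Selector by a single poller thread, with only readable
ones handed to the worker pool (all class and method names here are
illustrative; this is not how any existing Tomcat connector is written):

    import java.io.IOException;
    import java.nio.channels.SelectionKey;
    import java.nio.channels.Selector;
    import java.nio.channels.SocketChannel;
    import java.util.Iterator;
    import java.util.Queue;
    import java.util.concurrent.ConcurrentLinkedQueue;
    import java.util.concurrent.ExecutorService;

    public class KeepAlivePoller implements Runnable {

        private final Selector selector;
        private final Queue<SocketChannel> pending =
                new ConcurrentLinkedQueue<SocketChannel>();
        private final ExecutorService workers;

        public KeepAlivePoller(ExecutorService workers) throws IOException {
            this.selector = Selector.open();
            this.workers = workers;
        }

        // Park an idle keep-alive connection; it occupies no thread while idle.
        public void add(SocketChannel channel) {
            pending.offer(channel);
            selector.wakeup();
        }

        public void run() {
            while (!Thread.currentThread().isInterrupted()) {
                try {
                    // Register newly parked connections from the poller thread itself.
                    SocketChannel channel;
                    while ((channel = pending.poll()) != null) {
                        channel.configureBlocking(false);
                        channel.register(selector, SelectionKey.OP_READ);
                    }
                    selector.select();
                    Iterator<SelectionKey> it = selector.selectedKeys().iterator();
                    while (it.hasNext()) {
                        final SelectionKey key = it.next();
                        it.remove();
                        key.cancel();   // the worker thread takes over this socket
                        workers.execute(new Runnable() {
                            public void run() {
                                // Read and process the next request here, then
                                // either close the socket or add() it back.
                                handle((SocketChannel) key.channel());
                            }
                        });
                    }
                } catch (IOException e) {
                    break;
                }
            }
        }

        private void handle(SocketChannel channel) {
            // request processing would go here
        }
    }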



> 
> Your patch is basically a much nicer implementation of maxThreads 
> (assuming it doesn't reduce performance) which would be useful for the 
> regular HTTP connector, so it's cool, but not worth it. Overall, I think 
> the way maxThreads is done in the APR connector is the easiest (if the 
> amount of workers is too high, wait a bit without accepting anything).

That's a tricky issue :-) In some cases ( like load balancing ) not 
accepting is the right solution, but in other cases dropping connections 
is not what people want ( in particular if most of the threads are just 
waiting on keep alives ).

( sorry if I missed some details in the new implementation :-)

Costin





Re: Tomcat performance patch (in development) to reduce concurrency...

Posted by Remy Maucherat <re...@apache.org>.
Scott Marlow wrote:
> Hi, 
> 
> I wonder if anyone has any feedback on a performance change that I am
> working on making. 
> 
> One benefit of reducing concurrency in a server application is that a
> small number of requests can complete more quickly than if they had to
> compete against a large number of running threads for object locks (Java
> or externally in a database). 
> 
> I would like to have a Tomcat configuration option to set the max number of
> concurrent threads that can service user requests.  You might configure
> Tomcat to handle 800 http client connections but set the max concurrent
> requests to 20 (perhaps higher if you have more CPUs).  I like to refer
> to the max concurrent requests setting as the throttle size (if there is
> a better term, let me know).
> 
> I modified the Tomcat Thread.run code to use Doug Lea's semaphore
> support but didn't expose a configuration option (haven't learned how to
> do that yet). My basic change is to allow users to specify the max
> number of concurrent servlet requests that can run. If an application
> has a high level of concurrency, end users may get more consistent
> response time with this change. If an application has a low level of
> concurrency, my change doesn't help as their application only has a few
> threads running concurrently anyway. 
> 
> This also reduces resource use on other tiers. For example, if you are
> supporting 500 users with a Tomcat instance, you don't need a database
> connection pool size of 500, instead set the throttle size to 20 and
> create a database connection pool size of 20. 
> 
> Current status of the change: 
> 
> 1. org.apache.tomcat.util.threads.ThreadPool.CONCURRENT_THREADS is
> hardcoded to a value of 18, should be a configurable option. 
> 2. I hacked the build scripts to include Doug Lea's concurrent.jar but
> probably didn't make these changes correctly.  I could switch to using
> the Java 1.5 implementation of the Concurrent package but we would still
> need to do something for Java 1.4 compatibility.
> 
> Any suggestions on completing this enhancement are appreciated.
> 
> Please include my smarlow@novell.com email address in your response.

I looked at this yesterday, and while it is a cool hack, it is not that 
useful anymore (and we're also not going to use the concurrent utilities 
in Tomcat, so it's not really an option before we require Java 5). The 
main issue is that due to the fact keepalive is done in blocking mode, 
actual concurrency in the servlet container is unpredictable (the amount 
of processing threads - maxThreads - will usually be a lot higher than 
the actual expected concurrency - let's say 100 per CPU). If that issue 
is solved (we're trying to see if APR is a good solution for it), then 
the problem goes away.

Your patch is basically a much nicer implementation of maxThreads 
(assuming it doesn't reduce performance) which would be useful for the 
regular HTTP connector, so it's cool, but not worth it. Overall, I think 
the way maxThreads is done in the APR connector is the easiest (if the 
amount of workers is too high, wait a bit without accepting anything).

However, reading the text of the message, you don't seem to realize that 
a lot of the threads which would actually be doing processing are just 
blocking for keepalive (hence not doing anything useful; maybe you don't 
see it in your test). Anyway, congratulations for understanding that 
ThreadPool code (I stopped using it for new code, since I think it has 
some limitations and is too complex).

Rémy



RE: Tomcat performance patch (in development) to reduce concurrency...

Posted by Yoav Shapira <yo...@MIT.EDU>.
Hi,
Repeatable benchmarks showing a significant improvement for some use case
would be appreciated (certainly) and a prerequisite (probably) for addition
to this relatively core part of Tomcat.  I don't think this is much
different from setting the current maxThreads (and minSpareThreads /
maxSpareThreads) as opposed to acceptCount: one could set maxThreads to 20
and acceptCount to 500, for example.
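
In server.xml terms, something along these lines (values purely illustrative):

    <Connector port="8080" maxThreads="20" acceptCount="500"
               minSpareThreads="5" maxSpareThreads="10" />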

Yoav Shapira
System Design and Management Fellow
MIT Sloan School of Management / School of Engineering
Cambridge, MA USA
yoavsh@sloan.mit.edu / yoavs@computer.org

> -----Original Message-----
> From: Scott Marlow [mailto:scott.marlow.opensource@gmail.com]
> Sent: Wednesday, May 04, 2005 9:42 AM
> To: tomcat-dev@jakarta.apache.org
> Cc: smarlow@novell.com
> Subject: Tomcat performance patch (in development) to reduce concurrency...
> 
> Hi,
> 
> I wonder if anyone has any feedback on a performance change that I am
> working on making.
> 
> One benefit of reducing concurrency in a server application is that a
> small number of requests can complete more quickly than if they had to
> compete against a large number of running threads for object locks (Java
> or externally in a database).
> 
> I would like to have a Tomcat configuration option to set the max number of
> concurrent threads that can service user requests.  You might configure
> Tomcat to handle 800 http client connections but set the max concurrent
> requests to 20 (perhaps higher if you have more CPUs).  I like to refer
> to the max concurrent requests setting as the throttle size (if there is
> a better term, let me know).
> 
> I modified the Tomcat Thread.run code to use Doug Lea's semaphore
> support but didn't expose a configuration option (haven't learned how to
> do that yet). My basic change is to allow users to specify the max
> number of concurrent servlet requests that can run. If an application
> has a high level of concurrency, end users may get more consistent
> response time with this change. If an application has a low level of
> concurrency, my change doesn't help as their application only has a few
> threads running concurrently anyway.
> 
> This also reduces resource use on other tiers. For example, if you are
> supporting 500 users with a Tomcat instance, you don't need a database
> connection pool size of 500, instead set the throttle size to 20 and
> create a database connection pool size of 20.
> 
> Current status of the change:
> 
> 1. org.apache.tomcat.util.threads.ThreadPool.CONCURRENT_THREADS is
> hardcoded to a value of 18, should be a configurable option.
> 2. I hacked the build scripts to include Doug Lea's concurrent.jar but
> probably didn't make these changes correctly.  I could switch to using
> the Java 1.5 implementation of the Concurrent package but we would still
> need to do something for Java 1.4 compatibility.
> 
> Any suggestions on completing this enhancement are appreciated.
> 
> Please include my smarlow@novell.com email address in your response.
> 
> Thank you,
> Scott Marlow --- Tomcat newbie




Tomcat performance patch (in development) to reduce concurrency...

Posted by Scott Marlow <sc...@gmail.com>.
Hi, 

I wonder if anyone has any feedback on a performance change that I am
working on making. 

One benefit of reducing concurrency in a server application is that a
small number of requests can complete more quickly than if they had to
compete against a large number of running threads for object locks (Java
or externally in a database). 

I would like to have a Tomcat configuration option to set the max number of
concurrent threads that can service user requests.  You might configure
Tomcat to handle 800 http client connections but set the max concurrent
requests to 20 (perhaps higher if you have more CPUs).  I like to refer
to the max concurrent requests setting as the throttle size (if there is
a better term, let me know).

I modified the Tomcat Thread.run code to use Doug Lea's semaphore
support but didn't expose a configuration option (haven't learned how to
do that yet). My basic change is to allow users to specify the max
number of concurrent servlet requests that can run. If an application
has a high level of concurrency, end users may get more consistent
response time with this change. If an application has a low level of
concurrency, my change doesn't help as their application only has a few
threads running concurrently anyway. 

This also reduces resource use on other tiers. For example, if you are
supporting 500 users with a Tomcat instance, you don't need a database
connection pool size of 500, instead set the throttle size to 20 and
create a database connection pool size of 20. 

Current status of the change: 

1. org.apache.tomcat.util.threads.ThreadPool.CONCURRENT_THREADS is
hardcoded to a value of 18, should be a configurable option. 
2. I hacked the build scripts to include Doug Lea's concurrent.jar but
probably didn't make these changes correctly.  I could switch to using
the Java 1.5 implementation of the Concurrent package but we would still
need to do something for Java 1.4 compatibility.

Any suggestions on completing this enhancement are appreciated.

Please include my smarlow@novell.com email address in your response.

Thank you,
Scott Marlow --- Tomcat newbie