Posted to users@tomcat.apache.org by Filip Hanik - Dev Lists <de...@hanik.com> on 2007/03/09 19:45:13 UTC
Tomcat 6 Scales
I wrote a blog entry on how one of our connectors was developed and the
challenges you face doing that.
It's not super technical, as I'm saving the juicy details for ApacheCon.
And since no one reads my blog, I'll let you guys get it from here :)
http://blog.covalent.net/roller/covalent/entry/20070308
Filip
---------------------------------------------------------------------
To start a new topic, e-mail: users@tomcat.apache.org
To unsubscribe, e-mail: users-unsubscribe@tomcat.apache.org
For additional commands, e-mail: users-help@tomcat.apache.org
Re: Tomcat 6 Scales
Posted by Filip Hanik - Dev Lists <de...@hanik.com>.
Michael Clovis wrote:
> Filip,
> Great article. We were already having some memory issues using the
> NIO connector in 6.0.10, yet we REALLY need this functionality. Our quick
> question is the following: can we, in your estimation, take the nightly
> build of your code and apply it to 6.0.10 until 6.0.11 releases?
Yes, I would take the NIO connector as it is today and use it; it has
been greatly improved.
svn co http://svn.apache.org/repos/asf/tomcat/tc6.0.x/trunk
cd trunk
ant download
ant
then take tomcat-coyote.jar from output/build/lib and use that one
Filip
Re: Tomcat 6 Scales
Posted by Michael Clovis <mc...@mindbridge.com>.
Filip,
Great article. We were already having some memory issues using the
NIO connector in 6.0.10, yet we REALLY need this functionality. Our quick
question is the following: can we, in your estimation, take the nightly
build of your code and apply it to 6.0.10 until 6.0.11 releases?
Re: Tomcat 6 Scales
Posted by Mladen Turk <mt...@apache.org>.
Henri Gomez wrote:
> Great article !
>
I agree. But as Filip said, the entire NIO
(as well as APR) approach is sort of a hack.
It is obvious that the current JSE spec doesn't
fit hybrid logic (both blocking and non-blocking),
because the cost of switching between them is simply
too high for any practical purpose.
Since we have enough expertise in the subject, perhaps
we can propose some additions to the JSE NIO specs
that would satisfy the operations needed, instead of
constantly hacking NIO and trying to squeeze the
maximum out of it, or creating a new layer
powered by APR, for example.
IMHO the initial designers of NIO overlooked a couple
of important aspects, and that is confirmed by the
simple fact that the number of web-related designs
using NIO is negligible compared to the years NIO
has been part of the JSE spec.
Regards,
Mladen.
---------------------------------------------------------------------
To unsubscribe, e-mail: dev-unsubscribe@tomcat.apache.org
For additional commands, e-mail: dev-help@tomcat.apache.org
Re: Tomcat 6 Scales
Posted by Henri Gomez <he...@gmail.com>.
Great article !
I wonder now what could be done when AJP is used instead of the Coyote
HTTP connector?
How does it fit?
Regards
Re: Tomcat 6 Scales
Posted by Filip Hanik - Dev Lists <de...@hanik.com>.
correct, only 1GB of RAM, -Xmx512m for the Tomcat container.
Filip
Remy Maucherat wrote:
> Ok, I did read it in detail. Really good results since it has only 1GB
> of RAM (did I read it right ?).
Re: Tomcat 6 Scales
Posted by Remy Maucherat <re...@apache.org>.
William A. Rowe, Jr. wrote:
> Remy Maucherat wrote:
>> I don't really believe in this sort of solution (especially since APR
>> uses deferred accepts automagically).
>
> To clarify, httpd 2.2 automagically adds default socket filters (data,
> or http headers, where the platform supports them). AFAIK APR does not
> by default. If it did, the other 'half' of the protocols would be borked.
I meant that I was told it is supposed to do it when the right option
is added (which is done in the APR connector). I can imagine it could
cause problems if enabled by default.
Rémy
Re: Tomcat 6 Scales
Posted by "William A. Rowe, Jr." <wr...@rowe-clan.net>.
Remy Maucherat wrote:
>
> I don't really believe in this sort of solution (especially since APR
> uses deferred accepts automagically).
To clarify, httpd 2.2 automagically adds default socket filters (data,
or http headers, where the platform supports them). AFAIK APR does not
by default. If it did, the other 'half' of the protocols would be borked.
Re: Tomcat 6 Scales
Posted by Remy Maucherat <re...@apache.org>.
Filip Hanik - Dev Lists wrote:
Ok, I did read it in detail. Really good results since it has only 1GB
of RAM (did I read it right ?).
> There is an area of the AprEndpoint that needs to be fixed before it
> happens though, currently the "Acceptor" thread in APR does this
> long socket = Socket.accept(serverSock);
> if (!processSocketWithOptions(socket)) {
>     Socket.destroy(socket);
> }
>
> The processSocketWithOptions is a blocking call, hence you won't be able
> to accept new connections as long as your worker threads are all busy.
> What we need to do is set the socket options, then simply add the
> socket to the poller waiting for a read event; the poller will assign it
> a worker thread when one is available and the socket has data to be read.
I don't really believe in this sort of solution (especially since APR
uses deferred accepts automagically). For sure, I am against always
adding the socket to a poller right away; this would most likely be
useless overhead. If there are no threads available, then using the main
poller could be sensible, but it could be more productive (and easier)
to simply add the socket to a structure containing longs and process
it later.
If using a poller all the time, you can try a test where you bombard the
server with HTTP/1.0 requests - no keep-alive - and it would most likely
perform somewhat worse due to the poller overhead (could be worth
measuring).
> I will contact you before I run the test to make sure I got everything
> configured, I see
> But before ApacheCon I will have those numbers, as in my presentation I
> will focus on APR and NIO, the old connector is not of that much
> interest anymore :)
I am not sure blocking IO is bad, actually. It's very easy to program,
and IMO it could work quite well in the future with the exploding number
of CPU cores and smarter OSes, the only remaining issue being the
higher memory usage due to the resource cost of a thread (most likely
this could be fixed as well).
Rémy
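Rémy's "structure containing longs" can be sketched as a plain bounded queue between the acceptor and the worker threads: the acceptor parks native socket handles (the longs APR's Socket.accept returns) and an idle worker drains them later. A minimal illustration under assumed names — AcceptQueue, offer, and poll are not Tomcat API, and a real connector would also need timeouts for parked sockets:

```java
import java.util.ArrayDeque;

// Sketch of an accepted-socket holding structure: the acceptor thread adds
// native socket handles, workers pull them when a thread frees up. All
// names are illustrative only.
public class AcceptQueue {
    private final ArrayDeque<Long> pending = new ArrayDeque<>();
    private final int capacity;

    public AcceptQueue(int capacity) {
        this.capacity = capacity;
    }

    // Called by the acceptor thread; returns false when the queue is full,
    // in which case the caller would Socket.destroy() the connection.
    public synchronized boolean offer(long socket) {
        if (pending.size() >= capacity) {
            return false;
        }
        pending.addLast(socket);
        return true;
    }

    // Called by a worker thread when it becomes idle; -1 means nothing pending.
    public synchronized long poll() {
        Long s = pending.pollFirst();
        return s == null ? -1L : s;
    }

    public synchronized int size() {
        return pending.size();
    }
}
```

As Rémy notes, this behaves much like the OS backlog, just one layer up, without the per-socket cost of poller registration.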
Re: Tomcat 6 Scales
Posted by Remy Maucherat <re...@apache.org>.
Filip Hanik - Dev Lists wrote:
> Remy Maucherat wrote:
>> Mladen Turk wrote:
>>> (backlog)
>>
>> For some reason, I have yet to see that backlog behave like it is
>> supposed to in Tomcat.
>>
>> As my proposed long[] array is (supposedly) the same thing as the OS
>> backlog, maybe Filip can experiment with the "backlog" attribute (by
>> default, it's only 100, but could be set to a large value, like 20000,
>> to see what it does in his test).
> yes, I'll do that indeed, it will be an interesting test. One, it will
> test whether they actually do get backlogged, and two, whether the
> backlog will get serviced before it times out.
>
> Thanks everyone for the feedback, I'll let you know how everything
> progresses.
Ok, so I'm waiting for some more results before trying to change the
implementation.
Rémy
Re: Tomcat 6 Scales
Posted by Mladen Turk <mt...@apache.org>.
Filip Hanik - Dev Lists wrote:
> Thanks everyone for the feedback, I'll let you know how everything
> progresses.
Be sure to read the
http://www.faqs.org/rfcs/rfc1925.html :)
Regards,
Mladen.
Re: Tomcat 6 Scales
Posted by Mladen Turk <mt...@apache.org>.
Costin Manolache wrote:
> Yes, 100 concurrent requests is a sign you need a load balancer - serving 1000 on
Sometimes it is desirable to have the capability of serving 1000 concurrent
connections (not requests). The typical situation is when the frontend
server is used for delivering static content with higher concurrency
than the backend application server. The thread-per-request model as
implemented in the APR connector solves this problem. NIO would need
an AJP protocol implementation to be able to do that as well.
> one server is a false problem in most cases. I would rather have a
> server smartly reject requests and notify a load balancer rather than
> degrading all requests by accepting more than it can handle properly.
>
Agreed, but without some sort of IPC it's impossible.
Regards,
Mladen.
Re: Tomcat 6 Scales
Posted by Costin Manolache <co...@apache.org>.
Yes, 100 concurrent requests is a sign you need a load balancer - serving 1000 on
one server is a false problem in most cases. I would rather have a
server smartly reject requests and notify a load balancer rather than
degrading all requests by accepting more than it can handle properly.
Try adding a database access or some realistic operation to the test
servlet, and set the goal as 'no request above 1 second'. That would
be a nice problem.
Costin
Re: Tomcat 6 Scales
Posted by Henri Gomez <he...@gmail.com>.
2007/3/11, Costin Manolache <co...@gmail.com>:
> Great work - but I'm curious, wouldn't it be better to explore the
> alternative direction - i.e. detect when the server is too loaded and
> send a quick 502?
>
> Maybe with some extra logic - like serving existing sessions first,
> providing some notifications that can be used by a load balancer (or
> pager :-) to bring up more servers, or some notification to be used to
> disable some expensive functionality?
Something very welcome. Currently the HTTP/AJP couple is great for many
of us, but we still need better dynamic load switching. It's the same
old question: how can a load balancer, i.e. mod_jk in lb mode, know when
to use a less loaded worker?
Re: Tomcat 6 Scales
Posted by Filip Hanik - Dev Lists <de...@hanik.com>.
Costin Manolache wrote:
> Great work - but I'm curious, wouldn't it be better to explore the
> alternative direction - i.e. detect when the server is too loaded and
> send a quick 502?
I totally agree, and the way it's designed, this is totally doable:
whenever the existing connection count exceeds the configured "limit",
send a 502 through the acceptor thread and close the connection.
>
> Maybe with some extra logic - like serving existing sessions first,
> providing some notifications that can be used by a load balancer (or
> pager :-) to bring up more servers, or some notification to be used to
> disable some expensive functionality?
Yes, that would be a tougher one, trying to correlate a new connection
to an existing session.
If you wanna do that, you have to let the connection in on a worker
thread, and then it's pointless.
>
> In a real-world situation I would rather not have the server accept
> 20,000 connections and handle all of them badly. It's in the same line
> as 'fairness' and handling slashdotting, but in a different way.
I'm thinking of implementing the limit so that keep-alives get turned
off if there are too many connections. That way we should still be able
to keep everything rolling.
In a world where we get 20k connections, I doubt they would hammer
the server like ab does :) so it should still be possible to keep that
many connections alive.
Filip
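The limit Filip sketches (reject outright past a hard cap, disable keep-alive past a softer one) could look roughly like this. The class, method, and limit names are made up for illustration; none of this is Tomcat code:

```java
import java.util.concurrent.atomic.AtomicInteger;

// Illustrative connection-limit policy: below a soft limit behave normally,
// between the soft and hard limit accept the connection but disable
// keep-alive so it frees up quickly, above the hard limit reject it
// outright (e.g. with a canned 502 sent from the acceptor thread).
public class ConnectionGate {
    public enum Decision { KEEP_ALIVE, NO_KEEP_ALIVE, REJECT }

    private final int softLimit;
    private final int hardLimit;
    private final AtomicInteger open = new AtomicInteger();

    public ConnectionGate(int softLimit, int hardLimit) {
        this.softLimit = softLimit;
        this.hardLimit = hardLimit;
    }

    // Called once per accepted connection.
    public Decision onAccept() {
        int n = open.incrementAndGet();
        if (n > hardLimit) {
            open.decrementAndGet();   // not keeping this one
            return Decision.REJECT;
        }
        return n > softLimit ? Decision.NO_KEEP_ALIVE : Decision.KEEP_ALIVE;
    }

    // Called when a tracked connection closes.
    public void onClose() {
        open.decrementAndGet();
    }
}
```

The atomic counter keeps the check cheap enough to run on the acceptor thread itself, which is where Filip proposes sending the 502.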
Re: Tomcat 6 Scales
Posted by Costin Manolache <co...@gmail.com>.
Great work - but I'm curious, wouldn't it be better to explore the
alternative direction - i.e. detect when the server is too loaded and
send a quick 502?
Maybe with some extra logic - like serving existing sessions first,
providing some notifications that can be used by a load balancer (or
pager :-) to bring up more servers, or some notification to be used to
disable some expensive functionality?
In a real-world situation I would rather not have the server accept
20,000 connections and handle all of them badly. It's in the same line
as 'fairness' and handling slashdotting, but in a different way.
Costin
Re: Tomcat 6 Scales
Posted by Filip Hanik - Dev Lists <de...@hanik.com>.
Remy Maucherat wrote:
> Mladen Turk wrote:
>> (backlog)
>
> For some reason, I have yet to see that backlog behave like it is
> supposed to in Tomcat.
>
> As my proposed long[] array is (supposedly) the same thing as the OS
> backlog, maybe Filip can experiment with the "backlog" attribute (by
> default, it's only 100, but could be set to a large value, like 20000,
> to see what it does in his test).
yes, I'll do that indeed, it will be an interesting test. One, it will
test whether they actually do get backlogged, and two, whether the
backlog will get serviced before it times out.
Thanks everyone for the feedback, I'll let you know how everything
progresses.
Filip
Re: Tomcat 6 Scales
Posted by Remy Maucherat <re...@apache.org>.
Mladen Turk wrote:
> (backlog)
For some reason, I have yet to see that backlog behave like it is
supposed to in Tomcat.
As my proposed long[] array is (supposedly) the same thing as the OS
backlog, maybe Filip can experiment with the "backlog" attribute (by
default, it's only 100, but could be set to a large value, like 20000,
to see what it does in his test).
Rémy
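For anyone wanting to reproduce the backlog experiment outside Tomcat, a minimal standalone probe of the OS listen backlog (the same knob the connector's "backlog" attribute sets) might look like this. The helper name is made up, and the behavior once the backlog is exhausted is deliberately not asserted, since backlog semantics vary by OS:

```java
import java.io.IOException;
import java.net.InetSocketAddress;
import java.net.ServerSocket;
import java.net.Socket;
import java.util.ArrayList;
import java.util.List;

// Standalone illustration (not connector code) of the OS listen backlog.
// The listener below never calls accept(), so completed connections pile
// up in the kernel's backlog queue; how far past the configured backlog
// connects keep succeeding is OS-dependent.
public class BacklogDemo {
    // Open a listener on an ephemeral port with the given backlog, attempt
    // `attempts` connects without ever accepting, and return how many
    // completed before one timed out or was refused.
    public static int connectWithoutAccept(int backlog, int attempts)
            throws IOException {
        try (ServerSocket server = new ServerSocket(0, backlog)) {
            List<Socket> held = new ArrayList<>();
            int ok = 0;
            try {
                for (int i = 0; i < attempts; i++) {
                    Socket s = new Socket();
                    s.connect(new InetSocketAddress("127.0.0.1",
                            server.getLocalPort()), 250 /* ms */);
                    held.add(s);   // keep it open so its backlog slot stays used
                    ok++;
                }
            } catch (IOException timedOutOrRefused) {
                // the kernel stopped completing handshakes: backlog exhausted
            }
            for (Socket s : held) {
                s.close();
            }
            return ok;
        }
    }
}
```

With the backlog raised well above the attempt count (Rémy suggests values like 20000 in Tomcat), every connect should be queued by the kernel even though the application never accepts.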
Re: Tomcat 6 Scales
Posted by Remy Maucherat <re...@apache.org>.
Filip Hanik - Dev Lists wrote:
> Remy Maucherat wrote:
>> Filip Hanik - Dev Lists wrote:
>> We're doing pretty well with Comet, the only thing Comet is missing
>> is a non-blocking write.
>>
>> It is possible to do that without changing the API, in case it is
>> needed. It has a possibly significant cost however (buffering all data
>> which cannot be sent right away), so I am not sure it is a very good
>> idea.
> I need to think about this one for a bit. The NIO connector supports
> non-blocking write, i.e., if it can't be written it won't be; I am just
> starting to noodle on how this can be done in an easy-to-use-API way, if
> you know what I mean. Is there an API for APR to ask "can I write
> without blocking?"?
Yes, it's easy, but I am not convinced there's a real need right now
(and it introduces possibly heavy resource costs). Since it can be
transparent, it is possible to experiment.
Rémy
Re: Tomcat 6 Scales
Posted by Filip Hanik - Dev Lists <de...@hanik.com>.
Remy Maucherat wrote:
> Filip Hanik - Dev Lists wrote:
>> We're doing pretty well with Comet, the only thing Comet is missing
>> is a non-blocking write.
>
> It is possible to do that without changing the API, in case it is
> needed. It has a possibly significant cost however (buffering all data
> which cannot be sent right away), so I am not sure it is a very good
> idea.
I need to think about this one for a bit. The NIO connector supports
non-blocking write, i.e., if it can't be written it won't be; I am just
starting to noodle on how this can be done in an easy-to-use-API way, if
you know what I mean. Is there an API for APR to ask "can I write
without blocking?"?
Filip
Re: Tomcat 6 Scales
Posted by Remy Maucherat <re...@apache.org>.
Filip Hanik - Dev Lists wrote:
We're doing pretty well with Comet; the only thing Comet is missing is a
non-blocking write.
It is possible to do that without changing the API, in case it is
needed. It has a possibly significant cost however (buffering all data
which cannot be sent right away), so I am not sure it is a very good idea.
Rémy
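The buffering cost Rémy mentions can be made concrete with a small sketch: a non-blocking write pushes whatever the channel will take right now and parks the remainder in memory until the poller reports the channel writable again. A java.nio Pipe stands in for a client socket here, and all names are illustrative, not connector API:

```java
import java.nio.ByteBuffer;
import java.nio.channels.Pipe;
import java.util.ArrayDeque;

// Sketch of non-blocking write with buffering: unsendable bytes are copied
// into a backlog queue, which is exactly the memory cost being debated.
public class NonBlockingWriter {
    private final Pipe.SinkChannel sink;
    private final ArrayDeque<ByteBuffer> backlog = new ArrayDeque<>();

    public NonBlockingWriter(Pipe.SinkChannel sink) throws Exception {
        sink.configureBlocking(false);
        this.sink = sink;
    }

    // Returns true when everything (buffered + new data) has been written;
    // false means bytes are parked and a write-interest event is needed.
    public boolean write(ByteBuffer data) throws Exception {
        if (data.hasRemaining()) {
            backlog.addLast(copyOf(data));
        }
        return flush();
    }

    // Drain the backlog as far as the channel allows without blocking.
    public boolean flush() throws Exception {
        while (!backlog.isEmpty()) {
            ByteBuffer head = backlog.peekFirst();
            sink.write(head);                      // writes 0..n bytes
            if (head.hasRemaining()) {
                return false;                      // kernel buffer is full
            }
            backlog.pollFirst();
        }
        return true;
    }

    private static ByteBuffer copyOf(ByteBuffer b) {
        ByteBuffer c = ByteBuffer.allocate(b.remaining());
        c.put(b).flip();
        return c;
    }
}
```

The copy-and-queue step is the "possibly significant cost": a slow client can make the backlog grow without bound unless the caller caps it.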
Re: Tomcat 6 Scales
Posted by Filip Hanik - Dev Lists <de...@hanik.com>.
Mladen Turk wrote:
> Filip Hanik - Dev Lists wrote:
>> Mladen Turk wrote:
>>
>> The ultimate goal is to have 20k connections and still handle them
>> evenly.
>>
>
> The question is what will you do with those 20K connections.
The goal is that they will eventually get serviced, and that is the key.
In my test, with around 4k requests/second, each connection should get a
request in every 5 seconds. However, if fairness is not implemented,
some connections are very likely to never get serviced. If the
acceptor (new connections) is competing with the poller (keep-alive
connections), there is a risk of new connections not getting a say in
the game.
> The current servlet implementation, as well as the HTTP protocol,
> is transactional (request/response), and presumes that there
> are no thread context switches during that transaction.
> So, you are limited by design to handling the keep-alive only
> on the opened connection.
> If there is no keep-alive you are just filling the queue
> with incoming connections, which need to be served by a limited
> number of worker threads. The worker thread can be reused only
> when the transaction ends (request/response).
A very valid point; there should be a "limit" attribute for the max
number of keep-alive connections, or connections, period.
Otherwise, on Linux for example, if you get too many connections and
start taking up too much memory (socket buffers take up a lot), Linux
kills the Java process instead of letting it take up even more RAM. The
Linux kernel has a harsh but effective way of dealing with memory
exhaustion outside the Java heap. :)
>
>
> True async behavior would need a true async servlet specification,
We're doing pretty well with Comet; the only thing Comet is missing is a
non-blocking write.
I.e., if I have 1 thread servicing 1000 Comet connections, I can't
afford to get stuck on one.
If the Comet API will let me query the response to see if I can write
without blocking, I can achieve that pretty well.
> with all the thread-local data persisted to a socket key, but then again
> the memory consumption would be as large as the traditional one.
> Of course one can always try to propose something like Restlet
> and hope people will start programming concurrently :)
yeah, when folks like us barely program concurrently, it would be
wishful thinking :)
I'm planning
Filip
Re: Tomcat 6 Scales
Posted by Mladen Turk <mt...@apache.org>.
Filip Hanik - Dev Lists wrote:
> Mladen Turk wrote:
>
> The ultimate goal is to have 20k connections and still handle them evenly.
>
The question is what you will do with those 20K connections.
The current servlet implementation, as well as the HTTP protocol,
is transactional (request/response), and presumes that there
are no thread context switches during that transaction.
So, you are limited by design to handling the keep-alive only
on the opened connection.
If there is no keep-alive you are just filling the queue
with incoming connections, which need to be served by a limited
number of worker threads. The worker thread can be reused only
when the transaction ends (request/response).
True async behavior would need a true async servlet specification,
with all the thread-local data persisted to a socket key, but then again
the memory consumption would be as large as the traditional one.
Of course one can always try to propose something like Restlet
and hope people will start programming concurrently :)
Regards,
Mladen
Re: Tomcat 6 Scales
Posted by Filip Hanik - Dev Lists <de...@hanik.com>.
Mladen Turk wrote:
> Filip Hanik - Dev Lists wrote:
>>
>> The processSocketWithOptions is a blocking call, hence you won't be
>> able to accept new connections as long as your worker threads are
>> all busy.
>
> Not entirely true.
>
>> What we need to do is set the socket options, then simply add the
>> socket to the poller waiting for a read event; the poller will assign
>> it a worker thread when one is available and the socket has data to
>> be read.
>>
>> This was one of the major improvements I just did in the NIO
>> connector; otherwise you can't accept connections fast enough and
>> they will get dropped.
>>
>
> Basically you simulate the OS socket implementation's pending-connections
> queue (backlog) on OSI layer 7 instead of depending on the layer 5 one
> provided by the OS.
Well, you'd be nuts to run your webserver without a kernel filter,
either first-byte accept on Linux or the HTTP accept filter on FreeBSD.
My point goes beyond just accepting connections.
So if we use those filters, we are not simulating the backlog; the
backlog is not needed if the OS, through a kernel filter, already takes
care of accepting the connection from the client.
By relying on the backlog, you're simply running the risk of not
serving those connections. The backlog should only be used if your
acceptor can't accept connections fast enough.
>
> The only beneficiary of that would be a lab test environment where you
> have lots of burst connections, then a void, then a connection burst
> again (not a real-life situation, though). The point is that no matter
> how large your queue is (and how it's done), if the connection rate is
> higher than the processing rate, your connections will be rejected at
> some point. So, it's all about tuning.
You're missing the point. I believe I mentioned it in the article, that
the challenge during this extreme concurrency conditions, (burst or no
burst), will become fairness. I'm in the process of implementing the
connection fairness in the NIO connector. Relying on
synchronized/wait/notify from multiple threads (acceptor and poller)
does not guarantee you anything, and you completely lose control of what
connection gets handled vs what should should get handled.
So I'm simplifying it, since it's easier to implement proper fairness on
a single thread than on multiple threads. Hence, when an accept occurs,
set the socket options and register the socket with the poller.
Then let the poller become the "scheduler", if you will; i.e., the
poller gets to decide which connections get handled. And to properly
achieve this, neither the poller nor the acceptor can get stuck in a
synchronized/wait state.
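A minimal sketch of that handoff in plain Java NIO (illustrative names, not
the actual NioEndpoint code): the acceptor pushes an event onto a lock-free
queue and wakes the selector, so neither the acceptor nor the poller ever
parks in a synchronized/wait state, and the single poller thread decides
what runs next:

```java
import java.io.IOException;
import java.nio.channels.Selector;
import java.util.Queue;
import java.util.concurrent.ConcurrentLinkedQueue;

// Lock-free acceptor -> poller handoff sketch. The acceptor thread never
// blocks on a monitor; it enqueues an event and wakes the selector. The
// poller thread alone decides the order in which events and I/O are handled.
public class PollerHandoff {
    private final Selector selector;
    private final Queue<Runnable> events = new ConcurrentLinkedQueue<>();

    public PollerHandoff() throws IOException {
        this.selector = Selector.open();
    }

    // Called from the acceptor thread: O(1), no synchronized/wait/notify.
    public void offer(Runnable event) {
        events.add(event);
        selector.wakeup(); // interrupt select() so the event runs promptly
    }

    // One iteration of the poller loop: drain pending events, then poll.
    public void runOnce() throws IOException {
        Runnable r;
        while ((r = events.poll()) != null) {
            r.run(); // e.g. register a freshly accepted channel for OP_READ
        }
        selector.select(1000); // a real poller would dispatch ready keys here
    }
}
```

The single queue is what restores fairness: events are handled in arrival
order by one thread, instead of whichever thread wins a monitor race.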
Remember, my goal is not simply to be able to accept as many connections
as possible, that would be pointless if I can't serve requests on them
anyway.
The ultimate goal is to have 20k connections and still handle them evenly.
Hope that makes sense; the article itself was just to demonstrate how
these implementations handle different situations. Although, I wouldn't
claim that a 500-connection burst is purely a lab scenario. Post an
article on digg.com and you'll run the risk of getting a burst just
like that :)
Filip
>
> Regards,
> Mladen.
---------------------------------------------------------------------
To unsubscribe, e-mail: dev-unsubscribe@tomcat.apache.org
For additional commands, e-mail: dev-help@tomcat.apache.org
Re: Tomcat 6 Scales
Posted by Mladen Turk <mt...@apache.org>.
Filip Hanik - Dev Lists wrote:
>
> The processSocketWithOptions is a blocking call, hence you won't be able
> to accept new connections as long as your worker threads are all busy.
Not entirely true.
> What we need to do is set the socket options, then simply add the
> socket to the poller waiting for a read event; the poller will assign it
> a worker thread when one is available and the socket has data to be read.
>
> This was one of the major improvements I just made in the NIO connector;
> otherwise you can't accept connections fast enough and they will get
> dropped.
>
Basically you simulate the OS socket implementation's pending-connections
queue (backlog) on OSI layer 7 instead of depending on the layer 5 one
provided by the OS.
The only beneficiary of that would be a lab test environment where you
have lots of burst connections, then a void, then a connection burst
again (not a real-life situation, though). The point is that no matter
how large your queue is (and how it's done), if the connection rate is
higher than the processing rate, your connections will be rejected at
some point. So, it's all about tuning.
Regards,
Mladen.
Re: Tomcat 6 Scales
Posted by Filip Hanik - Dev Lists <de...@hanik.com>.
Remy Maucherat wrote:
> Filip Hanik - Dev Lists wrote:
>> I wrote a blog entry on how one of our connectors was developed and the
>> challenges you face doing that.
>> It's not super technical, as I'm saving the juicy details for ApacheCon.
>>
>> And since no one reads my blog, I'll let you guys get it from here :)
>>
>> http://blog.covalent.net/roller/covalent/entry/20070308
>
> And what about the APR connector (assuming you increase the pollerSize
> to an appropriate value, since it's "only" 8000 by default) ?
That will be in the next blog; this time I only wanted to compare the NIO
connector, as the coding of it gets complex since you have to "force"
blocking IO on a non-blocking API.
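For context, "forcing" blocking I/O on top of the non-blocking API usually
means parking the calling thread on a selector until the channel becomes
readable. A simplified sketch of the idea (not the actual Tomcat code; the
method name and timeout handling are illustrative, and the channel is
assumed to already be in non-blocking mode):

```java
import java.io.IOException;
import java.net.SocketTimeoutException;
import java.nio.ByteBuffer;
import java.nio.channels.ReadableByteChannel;
import java.nio.channels.SelectableChannel;
import java.nio.channels.SelectionKey;
import java.nio.channels.Selector;

// Emulates a blocking read on a non-blocking channel: try the read, and
// if nothing is available yet, block on a temporary selector until the
// channel is readable or the timeout expires.
public class BlockingOverNio {
    public static <C extends SelectableChannel & ReadableByteChannel>
            int blockingRead(C ch, ByteBuffer buf, long timeoutMs) throws IOException {
        int n = ch.read(buf);
        if (n != 0) {
            return n; // got data immediately (or EOF when n == -1)
        }
        try (Selector tmp = Selector.open()) {
            SelectionKey key = ch.register(tmp, SelectionKey.OP_READ);
            long deadline = System.currentTimeMillis() + timeoutMs;
            while ((n = ch.read(buf)) == 0) {
                long left = deadline - System.currentTimeMillis();
                if (left <= 0) {
                    throw new SocketTimeoutException("read timed out");
                }
                tmp.select(left); // park here instead of busy-spinning
            }
            key.cancel();
            return n;
        }
    }
}
```

The complexity Filip alludes to comes from doing this per read/write on many
concurrent connections without burning a selector (or a thread) on each one.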
There is an area of the AprEndpoint that needs to be fixed before that
happens, though. Currently the "Acceptor" thread in APR does this:

    long socket = Socket.accept(serverSock);
    if (!processSocketWithOptions(socket)) {
        Socket.destroy(socket);
    }
The processSocketWithOptions is a blocking call, hence you won't be able
to accept new connections as long as your worker threads are all busy.
What we need to do is set the socket options, then simply add the
socket to the poller waiting for a read event; the poller will assign it
a worker thread when one is available and the socket has data to be read.
This was one of the major improvements I just made in the NIO connector;
otherwise you can't accept connections fast enough and they will get
dropped.
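The proposed pattern, sketched in plain java.nio rather than the APR
bindings (class and method names here are illustrative, not the actual
AprEndpoint code): accept, set the socket options, register the socket with
the poller for OP_READ, and only hand it to a worker thread once data is
actually readable.

```java
import java.io.IOException;
import java.nio.ByteBuffer;
import java.nio.channels.SelectionKey;
import java.nio.channels.Selector;
import java.nio.channels.ServerSocketChannel;
import java.nio.channels.SocketChannel;
import java.util.Iterator;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;

public class AcceptAndPoll {

    // Serves exactly one connection, then returns (enough for a sketch).
    public static void serveOnce(ServerSocketChannel server) throws IOException {
        Selector poller = Selector.open();
        ExecutorService workers = Executors.newFixedThreadPool(2);
        server.configureBlocking(false);
        server.register(poller, SelectionKey.OP_ACCEPT);
        boolean served = false;
        while (!served) {
            poller.select();
            Iterator<SelectionKey> it = poller.selectedKeys().iterator();
            while (it.hasNext()) {
                SelectionKey key = it.next();
                it.remove();
                if (key.isAcceptable()) {
                    // The accept path stays cheap: set options and register
                    // for read -- do NOT block waiting for a worker here.
                    SocketChannel sc = server.accept();
                    sc.configureBlocking(false);
                    sc.socket().setTcpNoDelay(true);
                    sc.register(poller, SelectionKey.OP_READ);
                } else if (key.isReadable()) {
                    // Data has arrived: now spending a worker thread pays off.
                    key.cancel(); // the poller stops watching this socket
                    SocketChannel sc = (SocketChannel) key.channel();
                    workers.submit(() -> echo(sc));
                    served = true;
                }
            }
        }
        workers.shutdown();
    }

    // Worker: echoes back whatever arrived, then closes the connection.
    private static void echo(SocketChannel sc) {
        try (SocketChannel c = sc) {
            ByteBuffer buf = ByteBuffer.allocate(512);
            c.read(buf); // readable was signalled, so data is available
            buf.flip();
            while (buf.hasRemaining()) {
                c.write(buf); // non-blocking write; loop until drained
            }
        } catch (IOException ignored) {
        }
    }
}
```

The key property is that the accept branch never waits on the worker pool:
a burst of connections is absorbed by the poller's interest set instead of
stalling the acceptor.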
I'll attempt a patch that you can review, unless you wanna jump on it
directly.
I will contact you before I run the test to make sure I got everything
configured, I see
But before ApacheCon I will have those numbers, as in my presentation I
will focus on APR and NIO; the old connector is not of that much
interest anymore :)
Filip
Re: Tomcat 6 Scales
Posted by Remy Maucherat <re...@apache.org>.
Filip Hanik - Dev Lists wrote:
> I wrote a blog entry on how one of our connectors was developed and the
> challenges you face doing that.
> It's not super technical, as I'm saving the juicy details for ApacheCon.
>
> And since no one reads my blog, I'll let you guys get it from here :)
>
> http://blog.covalent.net/roller/covalent/entry/20070308
And what about the APR connector (assuming you increase the pollerSize
to an appropriate value, since it's "only" 8000 by default)?
Rémy