Posted to dev@tomcat.apache.org by Christopher Schultz <ch...@christopherschultz.net> on 2014/02/25 07:03:15 UTC

Connectors, blocking, and keepalive

All,

I'm looking at the comparison table at the bottom of the HTTP connectors
page, and I have a few questions about it.

First, what does "Polling size" mean?

Second, under the NIO connector, both "Read HTTP Body" and "Write HTTP
Response" say that they are "sim-Blocking"... does that mean that the
API itself is stream-based (i.e. blocking) but that the actual
under-the-covers behavior is to use non-blocking I/O? Is it important to
make that distinction, since the end user (the code) can't tell the
difference? Unless there is another thread pushing the bytes back to the
client, for instance, the request-processing thread is tied up performing
I/O whether it's doing blocking I/O or non-blocking I/O, right?

Third, under "Wait for next Request", only the BIO connector says
"blocking". Does "Wait for next Request" really mean
wait-for-next-keepalive-request-on-the-same-connection? That's the only
thing that would make sense to me.

Fourth, the "SSL Handshake" says non-blocking for NIO but blocking for
the BIO and APR connectors. Does that mean that SSL handshaking with the
NIO connector is done in such a way that it does not tie up a thread
from the pool for the entire SSL handshake and subsequent request?
Meaning that the thread(s) that handle the SSL handshake may not be the
same one(s) that begin processing the request itself?

Lastly, does anything change when Websocket is introduced into the mix?
For example, when a connection is upgraded from HTTP to Websocket, is
there another possibility for thread-switching or anything like that? Or
is the upgrade completely handled by the request-processing thread that
was already assigned to handle the HTTP request? Also, (forgive my
Websocket ignorance) once the connection has been upgraded for a single
request, does it stay upgraded or is the next (keepalive) request
expected to be a regular HTTP request that can also be upgraded? In the
event that the request "stays upgraded", does the connection go back
into the request queue to be handled by another thread, or does the
current thread handle subsequent requests (e.g. BIO-style behavior,
regardless of connector)?

I'm giving a talk at ApacheCon NA comparing the various connectors and
I'd like to build a couple of diagrams showing how threads are
allocated, cycled, etc. so the audience can get a better handle on where
the various efficiencies are for each, as well as what each
configuration setting can accomplish. I think I should be able to
re-write a lot of the Users' Guide section on connectors (currently a
mere 4 paragraphs) to help folks understand what the options are, why
they are available, and why they might want to use one over the other.

Thanks,
-chris


Re: Connectors, blocking, and keepalive

Posted by Rémy Maucherat <re...@apache.org>.
2014-03-25 15:57 GMT+01:00 Christopher Schultz <chris@christopherschultz.net>:

> What about when an Executor is used, where the number of threads can
> fluctuate (up to a maximum) but are (or can be) also shared with other
> connectors?
>
This is not really related: the connector stops using a thread when the
connection closes, so if there are two java.io connectors sharing one
executor, the thread count is the current connection count between the two
connectors.

Blocking on all I/O is a characteristic of java.io, and it's on its way to
deprecation for that reason. This limitation should be accepted and
embraced; attempts to work around it are mostly counterproductive: the
connector doesn't become more efficient, but its performance goes down.

Rémy
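
For reference, a minimal server.xml sketch of the shared-Executor setup
Rémy describes (Executor and Connector attribute names are standard Tomcat
configuration; the values and the BIO protocol class are illustrative):

<Executor name="sharedPool" namePrefix="exec-"
          maxThreads="250" minSpareThreads="10"/>

<!-- Two java.io (BIO) connectors drawing threads from the same pool.
     Each kept-alive connection holds a thread, so the pool size
     effectively caps the total connection count across both. -->
<Connector port="8080" protocol="org.apache.coyote.http11.Http11Protocol"
           executor="sharedPool" maxConnections="250"/>
<Connector port="8443" protocol="org.apache.coyote.http11.Http11Protocol"
           executor="sharedPool" SSLEnabled="true" scheme="https"
           secure="true"/>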

Re: Connectors, blocking, and keepalive

Posted by Christopher Schultz <ch...@christopherschultz.net>.
Mark,

On 3/24/14, 1:08 PM, Mark Thomas wrote:
> On 24/03/2014 16:56, Christopher Schultz wrote:
>> Mark,
> 
>> On 3/24/14, 5:37 AM, Mark Thomas wrote:
>>> On 24/03/2014 00:50, Christopher Schultz wrote:
>>>> Mark,
>>>
>>>> On 3/23/14, 6:12 PM, Mark Thomas wrote:
>>>>> On 23/03/2014 22:07, Christopher Schultz wrote:
>>>>>> Mark,
>>>>>
>>>>>> On 2/27/14, 12:56 PM, Christopher Schultz wrote:
>>>>>>> Mark,
>>>>>>>
>>>>>>> On 2/25/14, 3:31 AM, Mark Thomas wrote:
>>>>>>>> On 25/02/2014 06:03, Christopher Schultz wrote:
>>>>>>>>> All,
>>>>>>>>
>>>>>>>>> I'm looking at the comparison table at the bottom of
>>>>>>>>> the HTTP connectors page, and I have a few questions
>>>>>>>>> about it.
>>>>>>>>
>>>>>>>>> First, what does "Polling size" mean?
>>>>>>>>
>>>>>>>> Maximum number of connections in the poller. I'd
>>>>>>>> simply remove it from the table. It doesn't add
>>>>>>>> anything.
>>>>>>>
>>>>>>> Okay, thanks.
>>>>>>>
>>>>>>>>> Second, under the NIO connector, both "Read HTTP
>>>>>>>>> Body" and "Write HTTP Response" say that they are 
>>>>>>>>> "sim-Blocking"... does that mean that the API itself
>>>>>>>>> is stream-based (i.e. blocking) but that the actual 
>>>>>>>>> under-the-covers behavior is to use non-blocking
>>>>>>>>> I/O?
>>>>>>>>
>>>>>>>> It means simulated blocking. The low level writes use a
>>>>>>>>  non-blocking API but blocking is simulated by not
>>>>>>>> returning to the caller until the write completes.
>>>>>>>
>>>>>>> That's what I was thinking. Thanks for confirming.
>>>>>
>>>>>> Another quick question: during the sim-blocking for reading
>>>>>> the request-body, does the request go back into the poller
>>>>>> queue, or does it just sit waiting single-threaded-style? I
>>>>>> would assume the latter, otherwise we'd either violate the
>>>>>> spec (one thread serves the whole request) or spend a lot
>>>>>> of resources making sure we got the same thread back, etc.
>>>>>
>>>>> Both.
>>>>>
>>>>> The socket gets added to the BlockPoller and the thread waits
>>>>> on a latch for the BlockPoller to signal that data can be read.
>>>
>>>> Okay, but it's still one-thread-one-request... /The/ thread
>>>> will stay with that request until it's complete, right? The
>>>> BlockPoller will just wake-up the same waiting thread.. no
>>>> funny-business? ;)
>>>
>>> Correct.
>>>
>>>> Okay, one more related question: for the BIO connector, does
>>>> the request/connection go back into any kind of queue after
>>>> the initial (keep-alive) request has completed, or does the
>>>> thread that has already processed the first request on the
>>>> connection keep going until there are no more keep-alive
>>>> requests? I can't see a mechanism in the BIO connector to
>>>> ensure any kind of fairness with respect to request priority:
>>>> once the client is in, it can make as many requests as it wants
>>>> (up to maxKeepAliveRequests) without getting back in line.
>>>
>>> Correct. Although keep in mind that for BIO it doesn't make sense
>>> to have connections > threads so it really comes down to how the
>>> threads are scheduled for processing.
> 
>> Understood, but say there are 1000 connections waiting in the
>> accept queue and only 250 threads available: if my connection gets
>> accept()ed, then I get to make as many requests as I want without
>> having to get back in line. Yes, I have to compete for CPU time with
>> the other 249 threads, but I don't have to wait in the
>> 1000-connection-long line.
> 
> I knew something was bugging me about this.
> 
> You need to look at the end of the while loop in
> AbstractHttp11Processor.process() and the call to breakKeepAliveLoop()
> 
> What happens is that if there is no evidence of a pipelined request at
> that point, the socket goes back into the socket/processor map and the
> thread is used to process another socket so you can end up with more
> concurrent connections than threads but only if you explicitly set
> maxConnections > maxThreads which I would maintain is a bad idea for
> BIO anyway as you can end up with some threads waiting huge amounts of
> time to be processed.

s/some threads/some connections/?

So the BIO connector actually attempts to enforce some "fairness"
amongst pipelined requests? But pipelined requests are very likely to
include... shall we say "prompt"(?) additional requests, so that
fairness will rarely come into play? And in the event that there is a
pipeline stall, the connection may be unfairly ignored for a while
whilst the other connections are serviced to completion?

> Given that this feature offers little/no benefit at the price of
> having to run through a whole pile of code only to end up back where
> you started, I'm tempted to hard-code the return value of
> breakKeepAliveLoop() to false for BIO HTTP.

So your suggestion is that BIO fairness should be removed, so that the
situation I described above is actually the case: pipelined requests are
no longer fairly scheduled amongst all connections vying for attention?

When faced with the decision between unfair (priority) pipeline
processing and negatively unfair (starvation) pipeline processing, I
think I prefer the former. Most (non-malicious) clients don't make too
many pipelined requests, anyway. maxKeepAliveRequests can be used to
thwart that kind of DoS.

> Rémy Maucherat said:
> Yes please [that's how it used to be]. The rule for that connector is one
> thread <-> one connection, that's its only way of doing something useful
> for some users.

What about when an Executor is used, where the number of threads can
fluctuate (up to a maximum) but are (or can be) also shared with other
connectors?

-chris


Re: Connectors, blocking, and keepalive

Posted by Rémy Maucherat <re...@apache.org>.
2014-03-24 18:08 GMT+01:00 Mark Thomas <ma...@apache.org>:

> Given that this feature offers little/no benefit at the price of
> having to run through a whole pile of code only to end up back where
> you started, I'm tempted to hard-code the return value of
> breakKeepAliveLoop() to false for BIO HTTP.
>
Yes please [that's how it used to be]. The rule for that connector is one
thread <-> one connection, that's its only way of doing something useful
for some users.

Rémy

Re: Connectors, blocking, and keepalive

Posted by Mark Thomas <ma...@apache.org>.

On 24/03/2014 16:56, Christopher Schultz wrote:
> Mark,
> 
> On 3/24/14, 5:37 AM, Mark Thomas wrote:
>> On 24/03/2014 00:50, Christopher Schultz wrote:
>>> Mark,
>> 
>>> On 3/23/14, 6:12 PM, Mark Thomas wrote:
>>>> On 23/03/2014 22:07, Christopher Schultz wrote:
>>>>> Mark,
>>>> 
>>>>> On 2/27/14, 12:56 PM, Christopher Schultz wrote:
>>>>>> Mark,
>>>>>> 
>>>>>> On 2/25/14, 3:31 AM, Mark Thomas wrote:
>>>>>>> On 25/02/2014 06:03, Christopher Schultz wrote:
>>>>>>>> All,
>>>>>>> 
>>>>>>>> I'm looking at the comparison table at the bottom of
>>>>>>>> the HTTP connectors page, and I have a few questions
>>>>>>>> about it.
>>>>>>> 
>>>>>>>> First, what does "Polling size" mean?
>>>>>>> 
>>>>>>> Maximum number of connections in the poller. I'd
>>>>>>> simply remove it from the table. It doesn't add
>>>>>>> anything.
>>>>>> 
>>>>>> Okay, thanks.
>>>>>> 
>>>>>>>> Second, under the NIO connector, both "Read HTTP
>>>>>>>> Body" and "Write HTTP Response" say that they are 
>>>>>>>> "sim-Blocking"... does that mean that the API itself
>>>>>>>> is stream-based (i.e. blocking) but that the actual 
>>>>>>>> under-the-covers behavior is to use non-blocking
>>>>>>>> I/O?
>>>>>>> 
>>>>>>> It means simulated blocking. The low level writes use a
>>>>>>>  non-blocking API but blocking is simulated by not
>>>>>>> returning to the caller until the write completes.
>>>>>> 
>>>>>> That's what I was thinking. Thanks for confirming.
>>>> 
>>>>> Another quick question: during the sim-blocking for reading
>>>>> the request-body, does the request go back into the poller
>>>>> queue, or does it just sit waiting single-threaded-style? I
>>>>> would assume the latter, otherwise we'd either violate the
>>>>> spec (one thread serves the whole request) or spend a lot
>>>>> of resources making sure we got the same thread back, etc.
>>>> 
>>>> Both.
>>>> 
>>>> The socket gets added to the BlockPoller and the thread waits
>>>> on a latch for the BlockPoller to signal that data can be read.
>> 
>>> Okay, but it's still one-thread-one-request... /The/ thread
>>> will stay with that request until it's complete, right? The
>>> BlockPoller will just wake-up the same waiting thread.. no
>>> funny-business? ;)
>> 
>> Correct.
>> 
>>> Okay, one more related question: for the BIO connector, does
>>> the request/connection go back into any kind of queue after
>>> the initial (keep-alive) request has completed, or does the
>>> thread that has already processed the first request on the
>>> connection keep going until there are no more keep-alive
>>> requests? I can't see a mechanism in the BIO connector to
>>> ensure any kind of fairness with respect to request priority:
>>> once the client is in, it can make as many requests as it wants
>>> (up to maxKeepAliveRequests) without getting back in line.
>> 
>> Correct. Although keep in mind that for BIO it doesn't make sense
>> to have connections > threads so it really comes down to how the
>> threads are scheduled for processing.
> 
> Understood, but say there are 1000 connections waiting in the
> accept queue and only 250 threads available: if my connection gets
> accept()ed, then I get to make as many requests as I want without
> having to get back in line. Yes, I have to compete for CPU time with
> the other 249 threads, but I don't have to wait in the
> 1000-connection-long line.

I knew something was bugging me about this.

You need to look at the end of the while loop in
AbstractHttp11Processor.process() and the call to breakKeepAliveLoop()

What happens is that if there is no evidence of a pipelined request at
that point, the socket goes back into the socket/processor map and the
thread is used to process another socket so you can end up with more
concurrent connections than threads but only if you explicitly set
maxConnections > maxThreads which I would maintain is a bad idea for
BIO anyway as you can end up with some threads waiting huge amounts of
time to be processed.

Given that this feature offers little/no benefit at the price of
having to run through a whole pile of code only to end up back where
you started, I'm tempted to hard-code the return value of
breakKeepAliveLoop() to false for BIO HTTP.

Mark
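
For reference, the loop in question looks schematically like this (a
simplified paraphrase of AbstractHttp11Processor.process() with invented
helper names, not the actual Tomcat source):

// Simplified paraphrase; helper names are invented for illustration.
while (keepAlive && !error && requestCount < maxKeepAliveRequests) {
    parseRequestLine();
    parseHeaders();
    service();            // run the servlet, write the response
    finishResponse();
    requestCount++;
    if (breakKeepAliveLoop(socketWrapper)) {
        // No pipelined bytes already buffered: return the socket to the
        // socket/processor map and free this thread for another socket.
        break;
    }
}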




Re: Connectors, blocking, and keepalive

Posted by Christopher Schultz <ch...@christopherschultz.net>.
Mark,

On 3/24/14, 5:37 AM, Mark Thomas wrote:
> On 24/03/2014 00:50, Christopher Schultz wrote:
>> Mark,
> 
>> On 3/23/14, 6:12 PM, Mark Thomas wrote:
>>> On 23/03/2014 22:07, Christopher Schultz wrote:
>>>> Mark,
>>>
>>>> On 2/27/14, 12:56 PM, Christopher Schultz wrote:
>>>>> Mark,
>>>>>
>>>>> On 2/25/14, 3:31 AM, Mark Thomas wrote:
>>>>>> On 25/02/2014 06:03, Christopher Schultz wrote:
>>>>>>> All,
>>>>>>
>>>>>>> I'm looking at the comparison table at the bottom of the 
>>>>>>> HTTP connectors page, and I have a few questions about
>>>>>>> it.
>>>>>>
>>>>>>> First, what does "Polling size" mean?
>>>>>>
>>>>>> Maximum number of connections in the poller. I'd simply
>>>>>> remove it from the table. It doesn't add anything.
>>>>>
>>>>> Okay, thanks.
>>>>>
>>>>>>> Second, under the NIO connector, both "Read HTTP Body"
>>>>>>> and "Write HTTP Response" say that they are
>>>>>>> "sim-Blocking"... does that mean that the API itself is
>>>>>>> stream-based (i.e. blocking) but that the actual
>>>>>>> under-the-covers behavior is to use non-blocking I/O?
>>>>>>
>>>>>> It means simulated blocking. The low level writes use a 
>>>>>> non-blocking API but blocking is simulated by not returning
>>>>>> to the caller until the write completes.
>>>>>
>>>>> That's what I was thinking. Thanks for confirming.
>>>
>>>> Another quick question: during the sim-blocking for reading the
>>>>  request-body, does the request go back into the poller queue,
>>>> or does it just sit waiting single-threaded-style? I would
>>>> assume the latter, otherwise we'd either violate the spec (one
>>>> thread serves the whole request) or spend a lot of resources
>>>> making sure we got the same thread back, etc.
>>>
>>> Both.
>>>
>>> The socket gets added to the BlockPoller and the thread waits on
>>> a latch for the BlockPoller to signal that data can be read.
> 
>> Okay, but it's still one-thread-one-request... /The/ thread will
>> stay with that request until it's complete, right? The BlockPoller
>> will just wake-up the same waiting thread.. no funny-business? ;)
> 
> Correct.
> 
>> Okay, one more related question: for the BIO connector, does the 
>> request/connection go back into any kind of queue after the
>> initial (keep-alive) request has completed, or does the thread that
>> has already processed the first request on the connection keep
>> going until there are no more keep-alive requests? I can't see a
>> mechanism in the BIO connector to ensure any kind of fairness with
>> respect to request priority: once the client is in, it can make as
>> many requests as it wants (up to maxKeepAliveRequests) without
>> getting back in line.
> 
> Correct. Although keep in mind that for BIO it doesn't make sense to
> have connections > threads so it really comes down to how the threads
> are scheduled for processing.

Understood, but say there are 1000 connections waiting in the accept
queue and only 250 threads available: if my connection gets accept()ed,
then I get to make as many requests as I want without having to get back
in line. Yes, I have to compete for CPU time with the other 249 threads,
but I don't have to wait in the 1000-connection-long line.

Thanks,
-chris


Re: Connectors, blocking, and keepalive

Posted by Mark Thomas <ma...@apache.org>.

On 24/03/2014 00:50, Christopher Schultz wrote:
> Mark,
> 
> On 3/23/14, 6:12 PM, Mark Thomas wrote:
>> On 23/03/2014 22:07, Christopher Schultz wrote:
>>> Mark,
>> 
>>> On 2/27/14, 12:56 PM, Christopher Schultz wrote:
>>>> Mark,
>>>> 
>>>> On 2/25/14, 3:31 AM, Mark Thomas wrote:
>>>>> On 25/02/2014 06:03, Christopher Schultz wrote:
>>>>>> All,
>>>>> 
>>>>>> I'm looking at the comparison table at the bottom of the 
>>>>>> HTTP connectors page, and I have a few questions about
>>>>>> it.
>>>>> 
>>>>>> First, what does "Polling size" mean?
>>>>> 
>>>>> Maximum number of connections in the poller. I'd simply
>>>>> remove it from the table. It doesn't add anything.
>>>> 
>>>> Okay, thanks.
>>>> 
>>>>>> Second, under the NIO connector, both "Read HTTP Body"
>>>>>> and "Write HTTP Response" say that they are
>>>>>> "sim-Blocking"... does that mean that the API itself is
>>>>>> stream-based (i.e. blocking) but that the actual
>>>>>> under-the-covers behavior is to use non-blocking I/O?
>>>>> 
>>>>> It means simulated blocking. The low level writes use a 
>>>>> non-blocking API but blocking is simulated by not returning
>>>>> to the caller until the write completes.
>>>> 
>>>> That's what I was thinking. Thanks for confirming.
>> 
>>> Another quick question: during the sim-blocking for reading the
>>>  request-body, does the request go back into the poller queue,
>>> or does it just sit waiting single-threaded-style? I would
>>> assume the latter, otherwise we'd either violate the spec (one
>>> thread serves the whole request) or spend a lot of resources
>>> making sure we got the same thread back, etc.
>> 
>> Both.
>> 
>> The socket gets added to the BlockPoller and the thread waits on
>> a latch for the BlockPoller to signal that data can be read.
> 
> Okay, but it's still one-thread-one-request... /The/ thread will
> stay with that request until it's complete, right? The BlockPoller
> will just wake-up the same waiting thread.. no funny-business? ;)

Correct.

> Okay, one more related question: for the BIO connector, does the 
> request/connection go back into any kind of queue after the
> initial (keep-alive) request has completed, or does the thread that
> has already processed the first request on the connection keep
> going until there are no more keep-alive requests? I can't see a
> mechanism in the BIO connector to ensure any kind of fairness with
> respect to request priority: once the client is in, it can make as
> many requests as it wants (up to maxKeepAliveRequests) without
> getting back in line.

Correct. Although keep in mind that for BIO it doesn't make sense to
have connections > threads so it really comes down to how the threads
are scheduled for processing.

Mark




Re: Connectors, blocking, and keepalive

Posted by Christopher Schultz <ch...@christopherschultz.net>.
Mark,

On 3/23/14, 6:12 PM, Mark Thomas wrote:
> On 23/03/2014 22:07, Christopher Schultz wrote:
>> Mark,
> 
>> On 2/27/14, 12:56 PM, Christopher Schultz wrote:
>>> Mark,
>>>
>>> On 2/25/14, 3:31 AM, Mark Thomas wrote:
>>>> On 25/02/2014 06:03, Christopher Schultz wrote:
>>>>> All,
>>>>
>>>>> I'm looking at the comparison table at the bottom of the
>>>>> HTTP connectors page, and I have a few questions about it.
>>>>
>>>>> First, what does "Polling size" mean?
>>>>
>>>> Maximum number of connections in the poller. I'd simply remove
>>>> it from the table. It doesn't add anything.
>>>
>>> Okay, thanks.
>>>
>>>>> Second, under the NIO connector, both "Read HTTP Body" and
>>>>> "Write HTTP Response" say that they are "sim-Blocking"...
>>>>> does that mean that the API itself is stream-based (i.e.
>>>>> blocking) but that the actual under-the-covers behavior is to
>>>>> use non-blocking I/O?
>>>>
>>>> It means simulated blocking. The low level writes use a
>>>> non-blocking API but blocking is simulated by not returning to
>>>> the caller until the write completes.
>>>
>>> That's what I was thinking. Thanks for confirming.
> 
>> Another quick question: during the sim-blocking for reading the 
>> request-body, does the request go back into the poller queue, or
>> does it just sit waiting single-threaded-style? I would assume the
>> latter, otherwise we'd either violate the spec (one thread serves
>> the whole request) or spend a lot of resources making sure we got
>> the same thread back, etc.
> 
> Both.
> 
> The socket gets added to the BlockPoller and the thread waits on a
> latch for the BlockPoller to signal that data can be read.

Okay, but it's still one-thread-one-request... /The/ thread will stay
with that request until it's complete, right? The BlockPoller will just
wake-up the same waiting thread.. no funny-business? ;)

Okay, one more related question: for the BIO connector, does the
request/connection go back into any kind of queue after the initial
(keep-alive) request has completed, or does the thread that has already
processed the first request on the connection keep going until there are
no more keep-alive requests? I can't see a mechanism in the BIO
connector to ensure any kind of fairness with respect to request
priority: once the client is in, it can make as many requests as it
wants (up to maxKeepAliveRequests) without getting back in line.

Thanks,
-chris


Re: Connectors, blocking, and keepalive

Posted by Mark Thomas <ma...@apache.org>.

On 23/03/2014 22:07, Christopher Schultz wrote:
> Mark,
> 
> On 2/27/14, 12:56 PM, Christopher Schultz wrote:
>> Mark,
>> 
>> On 2/25/14, 3:31 AM, Mark Thomas wrote:
>>> On 25/02/2014 06:03, Christopher Schultz wrote:
>>>> All,
>>> 
>>>> I'm looking at the comparison table at the bottom of the
>>>> HTTP connectors page, and I have a few questions about it.
>>> 
>>>> First, what does "Polling size" mean?
>>> 
>>> Maximum number of connections in the poller. I'd simply remove
>>> it from the table. It doesn't add anything.
>> 
>> Okay, thanks.
>> 
>>>> Second, under the NIO connector, both "Read HTTP Body" and
>>>> "Write HTTP Response" say that they are "sim-Blocking"...
>>>> does that mean that the API itself is stream-based (i.e.
>>>> blocking) but that the actual under-the-covers behavior is to
>>>> use non-blocking I/O?
>>> 
>>> It means simulated blocking. The low level writes use a
>>> non-blocking API but blocking is simulated by not returning to
>>> the caller until the write completes.
>> 
>> That's what I was thinking. Thanks for confirming.
> 
> Another quick question: during the sim-blocking for reading the 
> request-body, does the request go back into the poller queue, or
> does it just sit waiting single-threaded-style? I would assume the
> latter, otherwise we'd either violate the spec (one thread serves
> the whole request) or spend a lot of resources making sure we got
> the same thread back, etc.

Both.

The socket gets added to the BlockPoller and the thread waits on a
latch for the BlockPoller to signal that data can be read.

Mark
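
A minimal sketch of that pattern (hypothetical names, not Tomcat's actual
BlockPoller): the worker thread parks on a CountDownLatch while a single
poller thread watches a Selector and releases the latch when the socket
becomes readable. Channels are assumed to already be in non-blocking mode.

import java.io.IOException;
import java.nio.channels.SelectionKey;
import java.nio.channels.Selector;
import java.nio.channels.SocketChannel;
import java.util.Queue;
import java.util.concurrent.ConcurrentLinkedQueue;
import java.util.concurrent.CountDownLatch;

public class MiniBlockPoller implements Runnable {

    private static final class Waiter {
        final SocketChannel channel;
        final CountDownLatch latch;
        Waiter(SocketChannel channel, CountDownLatch latch) {
            this.channel = channel;
            this.latch = latch;
        }
    }

    private final Queue<Waiter> pending = new ConcurrentLinkedQueue<>();
    private final Selector selector;

    public MiniBlockPoller() throws IOException {
        selector = Selector.open();
    }

    // Called by the request-processing thread: "blocks" (parks on the
    // latch) until the channel is readable, then the SAME thread resumes.
    public void awaitReadable(SocketChannel channel) throws InterruptedException {
        CountDownLatch latch = new CountDownLatch(1);
        pending.add(new Waiter(channel, latch));
        selector.wakeup();   // nudge the poller to register the channel
        latch.await();       // simulated blocking
    }

    @Override
    public void run() {
        while (selector.isOpen()) {
            try {
                Waiter w;
                while ((w = pending.poll()) != null) {
                    w.channel.register(selector, SelectionKey.OP_READ, w.latch);
                }
                selector.select();
                for (SelectionKey key : selector.selectedKeys()) {
                    key.interestOps(0);   // one-shot wake-up
                    ((CountDownLatch) key.attachment()).countDown();
                }
                selector.selectedKeys().clear();
            } catch (IOException e) {
                return;   // real code would log and recover
            }
        }
    }
}

That is the "no funny-business" guarantee above: the waiting thread never
changes, it just sleeps until the poller signals.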




Re: Connectors, blocking, and keepalive

Posted by Christopher Schultz <ch...@christopherschultz.net>.
Mark,

On 2/27/14, 12:56 PM, Christopher Schultz wrote:
> Mark,
> 
> On 2/25/14, 3:31 AM, Mark Thomas wrote:
>> On 25/02/2014 06:03, Christopher Schultz wrote:
>>> All,
>>
>>> I'm looking at the comparison table at the bottom of the HTTP
>>> connectors page, and I have a few questions about it.
>>
>>> First, what does "Polling size" mean?
>>
>> Maximum number of connections in the poller. I'd simply remove it from
>> the table. It doesn't add anything.
> 
> Okay, thanks.
> 
>>> Second, under the NIO connector, both "Read HTTP Body" and "Write
>>> HTTP Response" say that they are "sim-Blocking"... does that mean
>>> that the API itself is stream-based (i.e. blocking) but that the
>>> actual under-the-covers behavior is to use non-blocking I/O?
>>
>> It means simulated blocking. The low level writes use a non-blocking
>> API but blocking is simulated by not returning to the caller until the
>> write completes.
> 
> That's what I was thinking. Thanks for confirming.

Another quick question: during the sim-blocking for reading the
request-body, does the request go back into the poller queue, or does it
just sit waiting single-threaded-style? I would assume the latter,
otherwise we'd either violate the spec (one thread serves the whole
request) or spend a lot of resources making sure we got the same thread
back, etc.

Thanks,
-chris


Re: Connectors, blocking, and keepalive

Posted by Mark Thomas <ma...@apache.org>.

On 27/02/2014 17:56, Christopher Schultz wrote:
> On 2/25/14, 3:31 AM, Mark Thomas wrote:
>> On 25/02/2014 06:03, Christopher Schultz wrote:

>>> Is it important to make that distinction, since the end user
>>> (the code) can't tell the difference?
>> 
>> The end user shouldn't be able to tell the difference. It is
>> important and it indicates that there is some overhead associated
>> with the process.
> 
> Aah, okay. A "true" blocking read or write would be more efficient,
> but you can't have both blocking and non-blocking operations on a
> connection after it's been established?

Java NIO provides no way of doing a true blocking read.

>>> Fourth, the "SSL Handshake" says non-blocking for NIO but
>>> blocking for the BIO and APR connectors. Does that mean that
>>> SSL handshaking with the NIO connector is done in such a way
>>> that it does not tie-up a thread from the pool for the entire
>>> SSL handshake and subsequent request? Meaning that the
>>> thread(s) that handle the SSL handshake may not be the same
>>> one(s) that begin processing the request itself?
>> 
>> Correct. Once request processing starts (i.e. after the request 
>> headers have been read) the same thread is used. Up to that
>> point, different threads may be used as the input is read (with
>> the NIO connector) using non-blocking IO.
> 
> Good. Are there multiple stages of SSL handshaking (I know there
> are at the TCP/IP and SSL level themselves -- I mean in the Java
> code to set it up) where multiple threads could participate --
> serially, of course -- in the handshake? I want to develop a
> pipeline diagram and want to make sure it's accurate. If the
> (current) reality is that a single thread does the SSL handshake
> and then another thread (possibly the same one) handles the actual
> request, then the diagram will be simpler.

There are multiple stages in the handshake but as far as Tomcat is
concerned it does this:

start handshake
while (need to read more data to complete handshake) {
  read data
  try and do more of the handshake
}

Each iteration of that loop may be handled by a different thread (with
the socket going back to the poller if there is no data available at
the moment). So it could be one thread, it could be as many threads as
there are bytes in the handshake.
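
In JSSE terms that loop is driven through SSLEngine. A compressed sketch
(buffer sizing, underflow/overflow handling and error paths elided;
illustrative only, not Tomcat's handshake code):

import javax.net.ssl.SSLEngine;
import javax.net.ssl.SSLEngineResult.HandshakeStatus;
import java.io.IOException;
import java.nio.ByteBuffer;
import java.nio.channels.SocketChannel;

public class HandshakeSketch {
    static void handshake(SSLEngine engine, SocketChannel ch, ByteBuffer netIn,
                          ByteBuffer netOut, ByteBuffer appIn) throws IOException {
        engine.beginHandshake();
        HandshakeStatus hs = engine.getHandshakeStatus();
        while (hs != HandshakeStatus.FINISHED
                && hs != HandshakeStatus.NOT_HANDSHAKING) {
            if (hs == HandshakeStatus.NEED_UNWRAP) {
                ch.read(netIn);        // NIO: may read 0 bytes; Tomcat parks the
                netIn.flip();          // socket in the poller instead of spinning
                engine.unwrap(netIn, appIn);
                netIn.compact();
            } else if (hs == HandshakeStatus.NEED_WRAP) {
                netOut.clear();
                engine.wrap(ByteBuffer.allocate(0), netOut);
                netOut.flip();
                while (netOut.hasRemaining()) {
                    ch.write(netOut);
                }
            } else if (hs == HandshakeStatus.NEED_TASK) {
                Runnable task;
                while ((task = engine.getDelegatedTask()) != null) {
                    task.run();        // offloadable crypto work
                }
            }
            hs = engine.getHandshakeStatus();
        }
    }
}

Each NEED_UNWRAP pass may have to wait for more bytes from the client, and
with NIO that wait happens in the poller rather than on a worker thread,
which is why the table calls the NIO handshake non-blocking.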

> Let me take this opportunity to mention that while I could go read
> the code, I've never used Java's NIO package and would probably
> spend a lot of time figuring out basic things instead of answering
> the higher-level questions I'd like to handle, here. Not to mention
> that the connector-related code is more complicated than one would
> expect given the fairly small perceived set of requirements they
> have (i.e. take an incoming connection and allocate a thread, then
> dispatch). It's obviously far more complicated than that and there
> is a lot of code to handle some very esoteric requirements, etc.
> 
> I appreciate you taking the time to answer directly instead of 
> recommending that I read the code. You are saving me an enormous
> amount of time. ;)

I was tempted to say go and read the code but I know from experience
that is a time consuming task. The refactoring I did to reduce code
duplication was immensely instructive. I still get lost in that code
sometimes but it happens a lot less often.

>> The upgrade process is handled by the request processing thread
>> but once the upgrade is complete (i.e. the 101 response has been
>> returned) that thread returns to the pool.
> 
> Okay, so the upgrade occurs and the remainder of the request gets 
> re-queued. Or, rather, a thread is re-assigned when an IO event
> occurs.

Correct.

> Is there any priority assigned to events, or are they processed 
> essentially serially, in the order that they occurred -- that is, 
> dispatched to threads from the pool in the order that the IO events
> arrived?

It is the same poller as for the HTTP connections. Roughly they'll be
processed in arrival order but there may be a little re-ordering. It
depends on the behaviour of the selector.

>>> Also, (forgive my Websocket ignorance) once the connection has
>>> been upgraded for a single request, does it stay upgraded or is
>>> the next (keepalive) request expected to be a regular HTTP
>>> request that can also be upgraded?
>> 
>> The upgrade is permanent. When the WebSocket processing ends,
>> the socket is closed.
> 
> Okay, so if a client played its cards right, it could send a
> traditional HTTP request with keepalive, make several more requests
> over the same connection, and then finally upgrade to Websocket for
> the final request. After that, the connection is terminated
> entirely.

Yes.

> There is an implication there that if you want to use Websocket,
> don't use it for tiny request/response activities because
> performance will actually drop. One would be foolish to "replace"
> plain-old HTTP with Websocket but try to treat them the same.

Lots of tiny request/responses over a long period of time would be
fine (and more efficient than HTTP). For a single request there is no
point switching to WebSocket.

The real benefit of WebSocket is that it is true two-way
communication. It is not limited to request, response, request,
response, etc.

>>> In the event that the request "stays upgraded", does the
>>> connection go back into the request queue to be handled by
>>> another thread, or does the current thread handle subsequent
>>> requests (e.g. BIO-style behavior, regardless of connector).
>> 
>> Either. It depends how the upgrade handler is written. WebSocket
>> uses Servlet 3.1 NIO so everything becomes non-blocking.
> 
> I think you answered this question above: the connection is closed 
> entirely, so there will never be another "next request" on that 
> connection, right?

For any upgraded connection that is correct. There is no way to
downgrade back to HTTP (at least not in the Servlet API anyway).

>>> I'm giving a talk at ApacheCon NA comparing the various
>>> connectors and I'd like to build a couple of diagrams showing
>>> how threads are allocated, cycled, etc. so the audience can get
>>> a better handle on where the various efficiencies are for each,
>>> as well as what each configuration setting can accomplish. I
>>> think I should be able to re-write a lot of the Users' Guide
>>> section on connectors (a currently mere 4 paragraphs) to help
>>> folks understand what the options are, why they are available,
>>> and why they might want to use one over the other.
>> 
>> I'd really encourage you to spend some time poking around in the 
>> low-level connector code debugging a few sample requests through
>> the process.
> 
> I will definitely do that, but I wanted to get a mental framework
> before I did. There's a lot of code in there... even the BIO
> connector isn't as fall-off-a-log simple as one might expect.

Some of that complexity is so we can share code between the
connectors. Some of it is there because we need to simulate
non-blocking (e.g. for WebSocket) and some of it is just old code that
needs cleaning up :)

I'll be sure to sit in on your talk. I'll try not to heckle too much
;) You can return the favour for my talks.

Mark





Re: Connectors, blocking, and keepalive

Posted by Christopher Schultz <ch...@christopherschultz.net>.
Mark,

On 2/25/14, 3:31 AM, Mark Thomas wrote:
> On 25/02/2014 06:03, Christopher Schultz wrote:
>> All,
> 
>> I'm looking at the comparison table at the bottom of the HTTP
>> connectors page, and I have a few questions about it.
> 
>> First, what does "Polling size" mean?
> 
> Maximum number of connections in the poller. I'd simply remove it from
> the table. It doesn't add anything.

Okay, thanks.

>> Second, under the NIO connector, both "Read HTTP Body" and "Write
>> HTTP Response" say that they are "sim-Blocking"... does that mean
>> that the API itself is stream-based (i.e. blocking) but that the
>> actual under-the-covers behavior is to use non-blocking I/O?
> 
> It means simulated blocking. The low level writes use a non-blocking
> API but blocking is simulated by not returning to the caller until the
> write completes.

That's what I was thinking. Thanks for confirming.

>> Is it important to make that distinction, since the end user (the
>> code) can't tell the difference?
> 
> The end user shouldn't be able to tell the difference. It is important
> and it indicates that there is some overhead associated with the process.

Aah, okay. A "true" blocking read or write would be more efficient, but
you can't have both blocking and non-blocking operations on a connection
after it's been established?

>> Third, under "Wait for next Request", only the BIO connector says 
>> "blocking". Does "Wait for next Request" really mean 
>> wait-for-next-keepalive-request-on-the-same-connection? That's the
>> only thing that would make sense to me.
> 
> Correct.

Good.

>> Fourth, the "SSL Handshake" says non-blocking for NIO but blocking
>> for the BIO and APR connectors. Does that mean that SSL handshaking
>> with the NIO connector is done in such a way that it does not
>> tie-up a thread from the pool for the entire SSL handshake and
>> subsequent request? Meaning that the thread(s) that handle the SSL
>> handshake may not be the same one(s) that begin processing the
>> request itself?
> 
> Correct. Once request processing starts (i.e. after the request
> headers have been read) the same thread is used. Up to that point,
> different threads may be used as the input is read (with the NIO
> connector) using non-blocking IO.

Good. Are there multiple stages of SSL handshaking (I know there are at
the TCP/IP and SSL level themselves -- I mean in the Java code to set it
up) where multiple threads could participate -- serially, of course --
in the handshake? I want to develop a pipeline diagram and want to make
sure it's accurate. If the (current) reality is that a single thread
does the SSL handshake and then another thread (possibly the same one)
handles the actual request, then the diagram will be simpler.

Let me take this opportunity to mention that while I could go read the
code, I've never used Java's NIO package and would probably spend a lot
of time figuring out basic things instead of answering the higher-level
questions I'd like to handle, here. Not to mention that the
connector-related code is more complicated than one would expect given
the fairly small perceived set of requirements they have (i.e. take an
incoming connection and allocate a thread, then dispatch). It's
obviously far more complicated than that and there is a lot of code to
handle some very esoteric requirements, etc.

I appreciate you taking the time to answer directly instead of
recommending that I read the code. You are saving me an enormous amount
of time. ;)

>> Lastly, does anything change when Websocket is introduced into the
>> mix?
> 
> Yes. Lots.
> 
>> For example, when a connection is upgraded from HTTP to Websocket,
>> is there another possibility for thread-switching or anything like
>> that?
> 
> Yes. Everything switches to non-blocking mode (or simulated
> non-blocking in the case of BIO).
> 
>> Or is the upgrade completely-handled by the request-processing
>> thread that was already assigned to handle the HTTP request?
> 
> The upgrade process is handled by the request processing thread but
> once the upgrade is complete (i.e. the 101 response has been returned)
> that thread returns to the pool.

Okay, so the upgrade occurs and the remainder of the request gets
re-queued. Or, rather, a thread is re-assigned when an IO event occurs.
Is there any priority assigned to events, or are they processed
essentially serially, in the order that they occurred -- that is,
dispatched to threads from the pool in the order that the IO events arrived?

>> Also, (forgive my Websocket ignorance) once the connection has been
>> upgraded for a single request, does it stay upgraded or is the next
>> (keepalive) request expected to be a regular HTTP request that can
>> also be upgraded?
> 
> The upgrade is permanent. When the WebSocket processing ends, the
> socket is closed.

Okay, so if a client played its cards right, it could send a traditional
HTTP request with keepalive, make several more requests over the same
connection, and then finally upgrade to Websocket for the final request.
After that, the connection is terminated entirely.

There is an implication there that if you want to use Websocket, don't
use it for tiny request/response activities because performance will
actually drop. One would be foolish to "replace" plain-old HTTP with
Websocket but try to treat them the same.
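
On the wire, the upgrading request is just one more HTTP request on the
kept-alive connection. The (abridged) example exchange from RFC 6455:

GET /chat HTTP/1.1
Host: server.example.com
Upgrade: websocket
Connection: Upgrade
Sec-WebSocket-Key: dGhlIHNhbXBsZSBub25jZQ==
Sec-WebSocket-Version: 13

HTTP/1.1 101 Switching Protocols
Upgrade: websocket
Connection: Upgrade
Sec-WebSocket-Accept: s3pPLMBiTxaQ9kYGzzhZRbK+xOo=

After the 101 response the connection speaks only WebSocket frames; as
Mark says above, it never reverts to HTTP.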

>> In the event that the request "stays upgraded", does the connection
>> go back into the request queue to be handled by another thread, or
>> does the current thread handle subsequent requests (e.g. BIO-style
>> behavior, regardless of connector).
> 
> Either. It depends how the upgrade handler is written. WebSocket uses
> Servlet 3.1 NIO so everything becomes non-blocking.

I think you answered this question above: the connection is closed
entirely, so there will never be another "next request" on that
connection, right?

>> I'm giving a talk at ApacheCon NA comparing the various connectors
>> and I'd like to build a couple of diagrams showing how threads are 
>> allocated, cycled, etc. so the audience can get a better handle on
>> where the various efficiencies are for each, as well as what each 
>> configuration setting can accomplish. I think I should be able to 
>> re-write a lot of the Users' Guide section on connectors (a
>> currently mere 4 paragraphs) to help folks understand what the
>> options are, why they are available, and why they might want to use
>> one over the other.
> 
> I'd really encourage you to spend some time poking around in the
> low-level connector code debugging a few sample requests through the
> process.

I will definitely do that, but I wanted to get a mental framework before
I did. There's a lot of code in there... even the BIO connector isn't as
fall-off-a-log simple as one might expect.

-chris


Re: Connectors, blocking, and keepalive

Posted by Konstantin Kolinko <kn...@gmail.com>.
2014-02-25 12:31 GMT+04:00 Mark Thomas <ma...@apache.org>:
> On 25/02/2014 06:03, Christopher Schultz wrote:
>> All,
>>
>> I'm looking at the comparison table at the bottom of the HTTP
>> connectors page, and I have a few questions about it.
>>
>> First, what does "Polling size" mean?
>
> Maximum number of connections in the poller. I'd simply remove it from
> the table. It doesn't add anything.
>
>> Second, under the NIO connector, both "Read HTTP Body" and "Write
>> HTTP Response" say that they are "sim-Blocking"... does that mean
>> that the API itself is stream-based (i.e. blocking) but that the
>> actual under-the-covers behavior is to use non-blocking I/O?
>
> It means simulated blocking. The low level writes use a non-blocking
> API but blocking is simulated by not returning to the caller until the
> write completes.

s/Sim/Simulated/ on the page

>
>> Is it important to make that distinction, since the end user (the
>> code) can't tell the difference?
>
> The end user shouldn't be able to tell the difference. It is important
> and it indicates that there is some overhead associated with the process.
>
>> Unless there is another thread pushing the bytes back to the client
>> for instance, the request-processing thread is tied-up performing
>> I/O whether it's doing blocking I/O or non-blocking I/O, right?
>
> Correct. (excluding sendFile, async, WebSocket, Comet)

It is worth adding those four (sendfile etc.) as rows into the table.

Best regards,
Konstantin Kolinko



Re: Connectors, blocking, and keepalive

Posted by Mark Thomas <ma...@apache.org>.

On 25/02/2014 06:03, Christopher Schultz wrote:
> All,
> 
> I'm looking at the comparison table at the bottom of the HTTP
> connectors page, and I have a few questions about it.
> 
> First, what does "Polling size" mean?

Maximum number of connections in the poller. I'd simply remove it from
the table. It doesn't add anything.

> Second, under the NIO connector, both "Read HTTP Body" and "Write
> HTTP Response" say that they are "sim-Blocking"... does that mean
> that the API itself is stream-based (i.e. blocking) but that the
> actual under-the-covers behavior is to use non-blocking I/O?

It means simulated blocking. The low level writes use a non-blocking
API but blocking is simulated by not returning to the caller until the
write completes.

> Is it important to make that distinction, since the end user (the
> code) can't tell the difference?

The end user shouldn't be able to tell the difference. It is important
and it indicates that there is some overhead associated with the process.

> Unless there is another thread pushing the bytes back to the client
> for instance, the request-processing thread is tied-up performing 
> I/O whether it's doing blocking I/O or non-blocking I/O, right?

Correct. (excluding sendFile, async, WebSocket, Comet)

> Third, under "Wait for next Request", only the BIO connector says 
> "blocking". Does "Wait for next Request" really mean 
> wait-for-next-keepalive-request-on-the-same-connection? That's the
> only thing that would make sense to me.

Correct.

> Fourth, the "SSL Handshake" says non-blocking for NIO but blocking
> for the BIO and APR connectors. Does that mean that SSL handshaking
> with the NIO connector is done in such a way that it does not
> tie-up a thread from the pool for the entire SSL handshake and
> subsequent request? Meaning that the thread(s) that handle the SSL
> handshake may not be the same one(s) that begin processing the
> request itself?

Correct. Once request processing starts (i.e. after the request
headers have been read) the same thread is used. Up to that point,
different threads may be used as the input is read (with the NIO
connector) using non-blocking IO.

> Lastly, does anything change when Websocket is introduced into the
> mix?

Yes. Lots.

> For example, when a connection is upgraded from HTTP to Websocket,
> is there another possibility for thread-switching or anything like
> that?

Yes. Everything switches to non-blocking mode (or simulated
non-blocking in the case of BIO).

> Or is the upgrade completely-handled by the request-processing
> thread that was already assigned to handle the HTTP request?

The upgrade process is handled by the request processing thread but
once the upgrade is complete (i.e. the 101 response has been returned)
that thread returns to the pool.

> Also, (forgive my Websocket ignorance) once the connection has been
> upgraded for a single request, does it stay upgraded or is the next
> (keepalive) request expected to be a regular HTTP request that can
> also be upgraded?

The upgrade is permanent. When the WebSocket processing ends, the
socket is closed.

> In the event that the request "stays upgraded", does the connection
> go back into the request queue to be handled by another thread, or
> does the current thread handle subsequent requests (e.g. BIO-style
> behavior, regardless of connector).

Either. It depends how the upgrade handler is written. WebSocket uses
Servlet 3.1 NIO so everything becomes non-blocking.
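
To make "depends how the upgrade handler is written" concrete, here is a
minimal Servlet 3.1 HttpUpgradeHandler using the non-blocking ReadListener
API (the class and its behavior are hypothetical, not Tomcat's WebSocket
implementation):

import java.io.IOException;
import javax.servlet.ReadListener;
import javax.servlet.ServletInputStream;
import javax.servlet.http.HttpUpgradeHandler;
import javax.servlet.http.WebConnection;

public class EchoUpgradeHandler implements HttpUpgradeHandler {

    @Override
    public void init(WebConnection connection) {
        try {
            final ServletInputStream in = connection.getInputStream();
            in.setReadListener(new ReadListener() {
                private final byte[] buf = new byte[1024];

                @Override
                public void onDataAvailable() throws IOException {
                    // When isReady() goes false, the thread returns to the
                    // pool; the container re-dispatches (possibly a
                    // different thread) when more bytes arrive.
                    while (in.isReady() && in.read(buf) != -1) {
                        // handle the upgraded protocol's bytes here
                    }
                }

                @Override
                public void onAllDataRead() { /* peer closed its side */ }

                @Override
                public void onError(Throwable t) { /* tear down */ }
            });
        } catch (IOException e) {
            throw new IllegalStateException(e);
        }
    }

    @Override
    public void destroy() { }
}

A servlet would trigger it with request.upgrade(EchoUpgradeHandler.class).
Whether onDataAvailable() runs on the same pool thread each time is up to
the container, which is the "either" in Mark's answer.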

> I'm giving a talk at ApacheCon NA comparing the various connectors
> and I'd like to build a couple of diagrams showing how threads are 
> allocated, cycled, etc. so the audience can get a better handle on
> where the various efficiencies are for each, as well as what each 
> configuration setting can accomplish. I think I should be able to 
> re-write a lot of the Users' Guide section on connectors (a
> currently mere 4 paragraphs) to help folks understand what the
> options are, why they are available, and why they might want to use
> one over the other.

I'd really encourage you to spend some time poking around in the
low-level connector code debugging a few sample requests through the
process.

Mark

