Posted to dev@tomcat.apache.org by Christopher Schultz <ch...@christopherschultz.net> on 2014/03/23 23:07:58 UTC

Re: Connectors, blocking, and keepalive

Mark,

On 2/27/14, 12:56 PM, Christopher Schultz wrote:
> Mark,
> 
> On 2/25/14, 3:31 AM, Mark Thomas wrote:
>> On 25/02/2014 06:03, Christopher Schultz wrote:
>>> All,
>>
>>> I'm looking at the comparison table at the bottom of the HTTP
>>> connectors page, and I have a few questions about it.
>>
>>> First, what does "Polling size" mean?
>>
>> Maximum number of connections in the poller. I'd simply remove it from
>> the table. It doesn't add anything.
> 
> Okay, thanks.
> 
>>> Second, under the NIO connector, both "Read HTTP Body" and "Write
>>> HTTP Response" say that they are "sim-Blocking"... does that mean
>>> that the API itself is stream-based (i.e. blocking) but that the
>>> actual under-the-covers behavior is to use non-blocking I/O?
>>
>> It means simulated blocking. The low level writes use a non-blocking
>> API but blocking is simulated by not returning to the caller until the
>> write completes.
> 
> That's what I was thinking. Thanks for confirming.
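
Concretely, I take a "simulated blocking" write to mean something like
the sketch below (my own made-up helper on top of plain NIO, not the
actual NioEndpoint code):

    import java.io.IOException;
    import java.nio.ByteBuffer;
    import java.nio.channels.SelectionKey;
    import java.nio.channels.Selector;
    import java.nio.channels.SocketChannel;

    public class SimBlockingWrite {
        // The low-level writes are non-blocking, but we do not return
        // to the caller until the whole buffer is written, so to the
        // caller this looks like an ordinary blocking write.
        // Assumes ch is already in non-blocking mode.
        static void writeFully(SocketChannel ch, ByteBuffer buf,
                Selector sel) throws IOException {
            SelectionKey key = ch.register(sel, SelectionKey.OP_WRITE);
            try {
                while (buf.hasRemaining()) {
                    if (ch.write(buf) == 0) {   // socket send buffer full
                        sel.select();           // park until writable
                        sel.selectedKeys().clear();
                    }
                }
            } finally {
                key.cancel();
            }
        }
    }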

Another quick question: during the sim-blocking for reading the
request-body, does the request go back into the poller queue, or does it
just sit waiting single-threaded-style? I would assume the latter,
otherwise we'd either violate the spec (one thread serves the whole
request) or spend a lot of resources making sure we got the same thread
back, etc.

Thanks,
-chris


Re: Connectors, blocking, and keepalive

Posted by Rémy Maucherat <re...@apache.org>.
2014-03-24 18:08 GMT+01:00 Mark Thomas <ma...@apache.org>:

> Given that this feature offers little/no benefit at the price of
> having to run through a whole pile of code only to end up back where
> you started, I'm tempted to hard-code the return value of
> breakKeepAliveLoop() to false for BIO HTTP.

Yes please [that's how it used to be]. The rule for that connector is one
thread <-> one connection; that's its only way of doing something useful
for some users.
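
In other words, the model is just the following (an illustrative sketch
with made-up names, not the actual JIoEndpoint code):

    import java.io.IOException;
    import java.net.ServerSocket;
    import java.net.Socket;
    import java.util.concurrent.ExecutorService;

    class BioSketch {
        // One thread owns one connection for the connection's whole
        // life, serving keep-alive requests in a blocking loop.
        static void acceptLoop(ServerSocket server, ExecutorService pool)
                throws IOException {
            while (true) {
                Socket socket = server.accept();
                pool.execute(() -> {
                    try {
                        boolean keepAlive = true;
                        while (keepAlive) {
                            keepAlive = serviceOneRequest(socket);
                        }
                    } catch (IOException e) {
                        // drop the connection on error
                    } finally {
                        try { socket.close(); } catch (IOException ignored) {}
                    }
                });
            }
        }

        // Parse one request with blocking reads, write the response with
        // blocking writes, and report whether to keep the connection.
        static boolean serviceOneRequest(Socket socket) throws IOException {
            return false; // placeholder for the real HTTP processing
        }
    }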

Rémy

Re: Connectors, blocking, and keepalive

Posted by Rémy Maucherat <re...@apache.org>.
2014-03-25 15:57 GMT+01:00 Christopher Schultz <chris@christopherschultz.net>:

> What about when an Executor is used, where the number of threads can
> fluctuate (up to a maximum) but are (or can be) also shared with other
> connectors?

This is not really related: the connector stops using a thread when the
connection closes, so if there are two java.io connectors sharing one
executor, the thread count is the current connection count between the two
connectors.
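
For reference, the arrangement in question looks like this in server.xml
(illustrative values; Http11Protocol is the java.io implementation):

    <Executor name="sharedPool" namePrefix="catalina-exec-"
              maxThreads="200" minSpareThreads="10"/>

    <Connector executor="sharedPool" port="8080"
               protocol="org.apache.coyote.http11.Http11Protocol"/>
    <Connector executor="sharedPool" port="8081"
               protocol="org.apache.coyote.http11.Http11Protocol"/>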

Blocking on all I/O is a characteristic of java.io, and it's on its way to
deprecation for that reason. This limitation should be accepted and
embraced; attempts to work around it are mostly counterproductive: the
connector doesn't become more efficient, but its performance goes down.

Rémy

Re: Connectors, blocking, and keepalive

Posted by Christopher Schultz <ch...@christopherschultz.net>.
Mark,

On 3/24/14, 1:08 PM, Mark Thomas wrote:
> On 24/03/2014 16:56, Christopher Schultz wrote:
>> Mark,
> 
>> On 3/24/14, 5:37 AM, Mark Thomas wrote:
>>> On 24/03/2014 00:50, Christopher Schultz wrote:
>>>> Mark,
>>>
>>>> On 3/23/14, 6:12 PM, Mark Thomas wrote:
>>>>> On 23/03/2014 22:07, Christopher Schultz wrote:
>>>>>> Mark,
>>>>>
>>>>>> On 2/27/14, 12:56 PM, Christopher Schultz wrote:
>>>>>>> Mark,
>>>>>>>
>>>>>>> On 2/25/14, 3:31 AM, Mark Thomas wrote:
>>>>>>>> On 25/02/2014 06:03, Christopher Schultz wrote:
>>>>>>>>> All,
>>>>>>>>
>>>>>>>>> I'm looking at the comparison table at the bottom of
>>>>>>>>> the HTTP connectors page, and I have a few questions
>>>>>>>>> about it.
>>>>>>>>
>>>>>>>>> First, what does "Polling size" mean?
>>>>>>>>
>>>>>>>> Maximum number of connections in the poller. I'd
>>>>>>>> simply remove it from the table. It doesn't add
>>>>>>>> anything.
>>>>>>>
>>>>>>> Okay, thanks.
>>>>>>>
>>>>>>>>> Second, under the NIO connector, both "Read HTTP
>>>>>>>>> Body" and "Write HTTP Response" say that they are 
>>>>>>>>> "sim-Blocking"... does that mean that the API itself
>>>>>>>>> is stream-based (i.e. blocking) but that the actual 
>>>>>>>>> under-the-covers behavior is to use non-blocking
>>>>>>>>> I/O?
>>>>>>>>
>>>>>>>> It means simulated blocking. The low level writes use a
>>>>>>>>  non-blocking API but blocking is simulated by not
>>>>>>>> returning to the caller until the write completes.
>>>>>>>
>>>>>>> That's what I was thinking. Thanks for confirming.
>>>>>
>>>>>> Another quick question: during the sim-blocking for reading
>>>>>> the request-body, does the request go back into the poller
>>>>>> queue, or does it just sit waiting single-threaded-style? I
>>>>>> would assume the latter, otherwise we'd either violate the
>>>>>> spec (one thread serves the whole request) or spend a lot
>>>>>> of resources making sure we got the same thread back, etc.
>>>>>
>>>>> Both.
>>>>>
>>>>> The socket gets added to the BlockPoller and the thread waits
>>>>> on a latch until the BlockPoller signals that data can be read.
>>>
>>>> Okay, but it's still one-thread-one-request... /The/ thread
>>>> will stay with that request until it's complete, right? The
>>>> BlockPoller will just wake up the same waiting thread... no
>>>> funny-business? ;)
>>>
>>> Correct.
>>>
>>>> Okay, one more related question: for the BIO connector, does
>>>> the request/connection go back into any kind of queue after
>>>> the initial (keep-alive) request has completed, or does the
>>>> thread that has already processed the first request on the
>>>> connection keep going until there are no more keep-alive
>>>> requests? I can't see a mechanism in the BIO connector to
>>>> ensure any kind of fairness with respect to request priority:
>>>> once the client is in, it can make as many requests as it wants
>>>> (up to maxKeepAliveRequests) without getting back in line.
>>>
>>> Correct. Although keep in mind that for BIO it doesn't make sense
>>> to have connections > threads so it really comes down to how the
>>> threads are scheduled for processing.
> 
>> Understood, but if there are, say, 1000 connections waiting in the
>> accept queue and only 250 threads available, and my connection gets
>> accept()ed, then I get to make as many requests as I want without
>> having to get back in line. Yes, I have to compete for CPU time with
>> the other 249 threads, but I don't have to wait in the
>> 1000-connection-long line.
> 
> I knew something was bugging me about this.
> 
> You need to look at the end of the while loop in
> AbstractHttp11Processor.process() and the call to breakKeepAliveLoop().
> 
> What happens is that if there is no evidence of a pipelined request at
> that point, the socket goes back into the socket/processor map and the
> thread is used to process another socket. So you can end up with more
> concurrent connections than threads, but only if you explicitly set
> maxConnections > maxThreads, which I would maintain is a bad idea for
> BIO anyway, as you can end up with some threads waiting huge amounts of
> time to be processed.

s/some threads/some connections/?

So the BIO connector actually attempts to enforce some "fairness"
amongst pipelined requests? But a pipelining client is, by nature, very
likely to have its next request already on the wire, so in practice the
loop will rarely be broken and that fairness is unlikely to kick in? And
if there is a pipeline stall, the connection may be unfairly ignored for
a while whilst the other connections are serviced to completion?

> Given that this feature offers little/no benefit at the price of
> having to run through a whole pile of code only to end up back where
> you started, I'm tempted to hard-code the return value of
> breakKeepAliveLoop() to false for BIO HTTP.

So your suggestion is that BIO fairness should be removed, so that the
situation I described above is actually the case: pipelined requests are
no longer fairly scheduled amongst all connections vying for attention?

When faced with the decision between unfair (priority) pipeline
processing and negatively unfair (starvation) pipeline processing, I
think I prefer the former. Most (non-malicious) clients don't make too
many pipelined requests, anyway. maxKeepAliveRequests can be used to
thwart that kind of DoS.
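
For the archives, that cap is a per-Connector attribute in server.xml;
100 is the default, 1 disables keep-alive entirely, and -1 removes the
limit:

    <Connector port="8080" protocol="HTTP/1.1"
               maxKeepAliveRequests="100"/>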

> Rémy Maucherat said:
> Yes please [that's how it used to be]. The rule for that connector is one
> thread <-> one connection; that's its only way of doing something useful
> for some users.

What about when an Executor is used, where the number of threads can
fluctuate (up to a maximum) but are (or can be) also shared with other
connectors?

-chris


Re: Connectors, blocking, and keepalive

Posted by Mark Thomas <ma...@apache.org>.
On 24/03/2014 16:56, Christopher Schultz wrote:
> Mark,
> 
> On 3/24/14, 5:37 AM, Mark Thomas wrote:
>> On 24/03/2014 00:50, Christopher Schultz wrote:
>>> Mark,
>> 
>>> On 3/23/14, 6:12 PM, Mark Thomas wrote:
>>>> On 23/03/2014 22:07, Christopher Schultz wrote:
>>>>> Mark,
>>>> 
>>>>> On 2/27/14, 12:56 PM, Christopher Schultz wrote:
>>>>>> Mark,
>>>>>> 
>>>>>> On 2/25/14, 3:31 AM, Mark Thomas wrote:
>>>>>>> On 25/02/2014 06:03, Christopher Schultz wrote:
>>>>>>>> All,
>>>>>>> 
>>>>>>>> I'm looking at the comparison table at the bottom of
>>>>>>>> the HTTP connectors page, and I have a few questions
>>>>>>>> about it.
>>>>>>> 
>>>>>>>> First, what does "Polling size" mean?
>>>>>>> 
>>>>>>> Maximum number of connections in the poller. I'd
>>>>>>> simply remove it from the table. It doesn't add
>>>>>>> anything.
>>>>>> 
>>>>>> Okay, thanks.
>>>>>> 
>>>>>>>> Second, under the NIO connector, both "Read HTTP
>>>>>>>> Body" and "Write HTTP Response" say that they are 
>>>>>>>> "sim-Blocking"... does that mean that the API itself
>>>>>>>> is stream-based (i.e. blocking) but that the actual 
>>>>>>>> under-the-covers behavior is to use non-blocking
>>>>>>>> I/O?
>>>>>>> 
>>>>>>> It means simulated blocking. The low level writes use a
>>>>>>>  non-blocking API but blocking is simulated by not
>>>>>>> returning to the caller until the write completes.
>>>>>> 
>>>>>> That's what I was thinking. Thanks for confirming.
>>>> 
>>>>> Another quick question: during the sim-blocking for reading
>>>>> the request-body, does the request go back into the poller
>>>>> queue, or does it just sit waiting single-threaded-style? I
>>>>> would assume the latter, otherwise we'd either violate the
>>>>> spec (one thread serves the whole request) or spend a lot
>>>>> of resources making sure we got the same thread back, etc.
>>>> 
>>>> Both.
>>>> 
>>>> The socket gets added to the BlockPoller and the thread waits
>>>> on a latch until the BlockPoller signals that data can be read.
>> 
>>> Okay, but it's still one-thread-one-request... /The/ thread
>>> will stay with that request until it's complete, right? The
>>> BlockPoller will just wake up the same waiting thread... no
>>> funny-business? ;)
>> 
>> Correct.
>> 
>>> Okay, one more related question: for the BIO connector, does
>>> the request/connection go back into any kind of queue after
>>> the initial (keep-alive) request has completed, or does the
>>> thread that has already processed the first request on the
>>> connection keep going until there are no more keep-alive
>>> requests? I can't see a mechanism in the BIO connector to
>>> ensure any kind of fairness with respect to request priority:
>>> once the client is in, it can make as many requests as it wants
>>> (up to maxKeepAliveRequests) without getting back in line.
>> 
>> Correct. Although keep in mind that for BIO it doesn't make sense
>> to have connections > threads so it really comes down to how the
>> threads are scheduled for processing.
> 
> Understood, but if there are, say, 1000 connections waiting in the
> accept queue and only 250 threads available, and my connection gets
> accept()ed, then I get to make as many requests as I want without
> having to get back in line. Yes, I have to compete for CPU time with
> the other 249 threads, but I don't have to wait in the
> 1000-connection-long line.

I knew something was bugging me about this.

You need to look at the end of the while loop in
AbstractHttp11Processor.process() and the call to breakKeepAliveLoop().

What happens is that if there is no evidence of a pipelined request at
that point, the socket goes back into the socket/processor map and the
thread is used to process another socket. So you can end up with more
concurrent connections than threads, but only if you explicitly set
maxConnections > maxThreads, which I would maintain is a bad idea for
BIO anyway, as you can end up with some threads waiting huge amounts of
time to be processed.

Given that this feature offers little/no benefit at the price of
having to run through a whole pile of code only to end up back where
you started, I'm tempted to hard-code the return value of
breakKeepAliveLoop() to false for BIO HTTP.
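
In outline, the loop looks something like this (a heavily simplified
sketch; apart from process() and breakKeepAliveLoop(), the names are
made up):

    enum SocketState { OPEN, CLOSED }

    class KeepAliveLoopSketch {
        int maxKeepAliveRequests = 100;

        SocketState process(Conn conn) throws java.io.IOException {
            int served = 0;
            boolean keepAlive = true;
            while (keepAlive) {
                serviceOneRequest(conn);            // blocking I/O
                keepAlive = ++served < maxKeepAliveRequests
                        && conn.clientWantsKeepAlive();
                if (keepAlive && breakKeepAliveLoop(conn)) {
                    // No pipelined bytes already buffered: hand the
                    // socket back to the endpoint so this thread can
                    // pick up another socket. For BIO this round trip
                    // buys little, hence the temptation to hard-code
                    // the break to false.
                    return SocketState.OPEN;
                }
            }
            return SocketState.CLOSED;
        }

        // "Break" the loop if no pipelined data is already available.
        boolean breakKeepAliveLoop(Conn conn) throws java.io.IOException {
            return conn.in().available() == 0;
        }

        void serviceOneRequest(Conn conn) { /* parse, service, respond */ }

        interface Conn {
            boolean clientWantsKeepAlive();
            java.io.InputStream in();
        }
    }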

Mark


Re: Connectors, blocking, and keepalive

Posted by Christopher Schultz <ch...@christopherschultz.net>.
Mark,

On 3/24/14, 5:37 AM, Mark Thomas wrote:
> On 24/03/2014 00:50, Christopher Schultz wrote:
>> Mark,
> 
>> On 3/23/14, 6:12 PM, Mark Thomas wrote:
>>> On 23/03/2014 22:07, Christopher Schultz wrote:
>>>> Mark,
>>>
>>>> On 2/27/14, 12:56 PM, Christopher Schultz wrote:
>>>>> Mark,
>>>>>
>>>>> On 2/25/14, 3:31 AM, Mark Thomas wrote:
>>>>>> On 25/02/2014 06:03, Christopher Schultz wrote:
>>>>>>> All,
>>>>>>
>>>>>>> I'm looking at the comparison table at the bottom of the 
>>>>>>> HTTP connectors page, and I have a few questions about
>>>>>>> it.
>>>>>>
>>>>>>> First, what does "Polling size" mean?
>>>>>>
>>>>>> Maximum number of connections in the poller. I'd simply
>>>>>> remove it from the table. It doesn't add anything.
>>>>>
>>>>> Okay, thanks.
>>>>>
>>>>>>> Second, under the NIO connector, both "Read HTTP Body"
>>>>>>> and "Write HTTP Response" say that they are
>>>>>>> "sim-Blocking"... does that mean that the API itself is
>>>>>>> stream-based (i.e. blocking) but that the actual
>>>>>>> under-the-covers behavior is to use non-blocking I/O?
>>>>>>
>>>>>> It means simulated blocking. The low level writes use a 
>>>>>> non-blocking API but blocking is simulated by not returning
>>>>>> to the caller until the write completes.
>>>>>
>>>>> That's what I was thinking. Thanks for confirming.
>>>
>>>> Another quick question: during the sim-blocking for reading the
>>>>  request-body, does the request go back into the poller queue,
>>>> or does it just sit waiting single-threaded-style? I would
>>>> assume the latter, otherwise we'd either violate the spec (one
>>>> thread serves the whole request) or spend a lot of resources
>>>> making sure we got the same thread back, etc.
>>>
>>> Both.
>>>
>>> The socket gets added to the BlockPoller and the thread waits on
>>> a latch until the BlockPoller signals that data can be read.
> 
>> Okay, but it's still one-thread-one-request... /The/ thread will
>> stay with that request until it's complete, right? The BlockPoller
>> will just wake up the same waiting thread... no funny-business? ;)
> 
> Correct.
> 
>> Okay, one more related question: for the BIO connector, does the 
>> request/connection go back into any kind of queue after the
>> initial (keep-alive) request has completed, or does the thread that
>> has already processed the first request on the connection keep
>> going until there are no more keep-alive requests? I can't see a
>> mechanism in the BIO connector to ensure any kind of fairness with
>> respect to request priority: once the client is in, it can make as
>> many requests as it wants (up to maxKeepAliveRequests) without
>> getting back in line.
> 
> Correct. Although keep in mind that for BIO it doesn't make sense to
> have connections > threads so it really comes down to how the threads
> are scheduled for processing.

Understood, but if there are, say, 1000 connections waiting in the accept
queue and only 250 threads available, and my connection gets accept()ed,
then I get to make as many requests as I want without having to get back
in line. Yes, I have to compete for CPU time with the other 249 threads,
but I don't have to wait in the 1000-connection-long line.

Thanks,
-chris


Re: Connectors, blocking, and keepalive

Posted by Mark Thomas <ma...@apache.org>.
On 24/03/2014 00:50, Christopher Schultz wrote:
> Mark,
> 
> On 3/23/14, 6:12 PM, Mark Thomas wrote:
>> On 23/03/2014 22:07, Christopher Schultz wrote:
>>> Mark,
>> 
>>> On 2/27/14, 12:56 PM, Christopher Schultz wrote:
>>>> Mark,
>>>> 
>>>> On 2/25/14, 3:31 AM, Mark Thomas wrote:
>>>>> On 25/02/2014 06:03, Christopher Schultz wrote:
>>>>>> All,
>>>>> 
>>>>>> I'm looking at the comparison table at the bottom of the 
>>>>>> HTTP connectors page, and I have a few questions about
>>>>>> it.
>>>>> 
>>>>>> First, what does "Polling size" mean?
>>>>> 
>>>>> Maximum number of connections in the poller. I'd simply
>>>>> remove it from the table. It doesn't add anything.
>>>> 
>>>> Okay, thanks.
>>>> 
>>>>>> Second, under the NIO connector, both "Read HTTP Body"
>>>>>> and "Write HTTP Response" say that they are
>>>>>> "sim-Blocking"... does that mean that the API itself is
>>>>>> stream-based (i.e. blocking) but that the actual
>>>>>> under-the-covers behavior is to use non-blocking I/O?
>>>>> 
>>>>> It means simulated blocking. The low level writes use a 
>>>>> non-blocking API but blocking is simulated by not returning
>>>>> to the caller until the write completes.
>>>> 
>>>> That's what I was thinking. Thanks for confirming.
>> 
>>> Another quick question: during the sim-blocking for reading the
>>>  request-body, does the request go back into the poller queue,
>>> or does it just sit waiting single-threaded-style? I would
>>> assume the latter, otherwise we'd either violate the spec (one
>>> thread serves the whole request) or spend a lot of resources
>>> making sure we got the same thread back, etc.
>> 
>> Both.
>> 
>> The socket gets added to the BlockPoller and the thread waits on
>> a latch until the BlockPoller signals that data can be read.
> 
> Okay, but it's still one-thread-one-request... /The/ thread will
> stay with that request until it's complete, right? The BlockPoller
> will just wake up the same waiting thread... no funny-business? ;)

Correct.

> Okay, one more related question: for the BIO connector, does the 
> request/connection go back into any kind of queue after the
> initial (keep-alive) request has completed, or does the thread that
> has already processed the first request on the connection keep
> going until there are no more keep-alive requests? I can't see a
> mechanism in the BIO connector to ensure any kind of fairness with
> respect to request priority: once the client is in, it can make as
> many requests as it wants (up to maxKeepAliveRequests) without
> getting back in line.

Correct. Although keep in mind that for BIO it doesn't make sense to
have connections > threads so it really comes down to how the threads
are scheduled for processing.
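
For reference, with BIO maxConnections defaults to the value of
maxThreads for exactly this reason; allowing more connections than
threads has to be configured explicitly, along the lines of this
illustrative server.xml fragment:

    <Connector port="8080"
               protocol="org.apache.coyote.http11.Http11Protocol"
               maxThreads="250" maxConnections="1000"/>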

Mark


Re: Connectors, blocking, and keepalive

Posted by Christopher Schultz <ch...@christopherschultz.net>.
Mark,

On 3/23/14, 6:12 PM, Mark Thomas wrote:
> On 23/03/2014 22:07, Christopher Schultz wrote:
>> Mark,
> 
>> On 2/27/14, 12:56 PM, Christopher Schultz wrote:
>>> Mark,
>>>
>>> On 2/25/14, 3:31 AM, Mark Thomas wrote:
>>>> On 25/02/2014 06:03, Christopher Schultz wrote:
>>>>> All,
>>>>
>>>>> I'm looking at the comparison table at the bottom of the
>>>>> HTTP connectors page, and I have a few questions about it.
>>>>
>>>>> First, what does "Polling size" mean?
>>>>
>>>> Maximum number of connections in the poller. I'd simply remove
>>>> it from the table. It doesn't add anything.
>>>
>>> Okay, thanks.
>>>
>>>>> Second, under the NIO connector, both "Read HTTP Body" and
>>>>> "Write HTTP Response" say that they are "sim-Blocking"...
>>>>> does that mean that the API itself is stream-based (i.e.
>>>>> blocking) but that the actual under-the-covers behavior is to
>>>>> use non-blocking I/O?
>>>>
>>>> It means simulated blocking. The low level writes use a
>>>> non-blocking API but blocking is simulated by not returning to
>>>> the caller until the write completes.
>>>
>>> That's what I was thinking. Thanks for confirming.
> 
>> Another quick question: during the sim-blocking for reading the 
>> request-body, does the request go back into the poller queue, or
>> does it just sit waiting single-threaded-style? I would assume the
>> latter, otherwise we'd either violate the spec (one thread serves
>> the whole request) or spend a lot of resources making sure we got
>> the same thread back, etc.
> 
> Both.
> 
> The socket gets added to the BlockPoller and the thread waits on a
> latch until the BlockPoller signals that data can be read.

Okay, but it's still one-thread-one-request... /The/ thread will stay
with that request until it's complete, right? The BlockPoller will just
wake up the same waiting thread... no funny-business? ;)

Okay, one more related question: for the BIO connector, does the
request/connection go back into any kind of queue after the initial
(keep-alive) request has completed, or does the thread that has already
processed the first request on the connection keep going until there are
no more keep-alive requests? I can't see a mechanism in the BIO
connector to ensure any kind of fairness with respect to request
priority: once the client is in, it can make as many requests as it
wants (up to maxKeepAliveRequests) without getting back in line.

Thanks,
-chris


Re: Connectors, blocking, and keepalive

Posted by Mark Thomas <ma...@apache.org>.
On 23/03/2014 22:07, Christopher Schultz wrote:
> Mark,
> 
> On 2/27/14, 12:56 PM, Christopher Schultz wrote:
>> Mark,
>> 
>> On 2/25/14, 3:31 AM, Mark Thomas wrote:
>>> On 25/02/2014 06:03, Christopher Schultz wrote:
>>>> All,
>>> 
>>>> I'm looking at the comparison table at the bottom of the
>>>> HTTP connectors page, and I have a few questions about it.
>>> 
>>>> First, what does "Polling size" mean?
>>> 
>>> Maximum number of connections in the poller. I'd simply remove
>>> it from the table. It doesn't add anything.
>> 
>> Okay, thanks.
>> 
>>>> Second, under the NIO connector, both "Read HTTP Body" and
>>>> "Write HTTP Response" say that they are "sim-Blocking"...
>>>> does that mean that the API itself is stream-based (i.e.
>>>> blocking) but that the actual under-the-covers behavior is to
>>>> use non-blocking I/O?
>>> 
>>> It means simulated blocking. The low level writes use a
>>> non-blocking API but blocking is simulated by not returning to
>>> the caller until the write completes.
>> 
>> That's what I was thinking. Thanks for confirming.
> 
> Another quick question: during the sim-blocking for reading the 
> request-body, does the request go back into the poller queue, or
> does it just sit waiting single-threaded-style? I would assume the
> latter, otherwise we'd either violate the spec (one thread serves
> the whole request) or spend a lot of resources making sure we got
> the same thread back, etc.

Both.

The socket gets added to the BlockPoller and the thread waits on a
latch until the BlockPoller signals that data can be read.
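
The handshake is roughly the following (a sketch with made-up names;
the real BlockPoller in NioEndpoint is considerably more involved):

    import java.util.concurrent.CountDownLatch;

    class BlockPollerSketch {
        static class Conn {
            volatile CountDownLatch readLatch;
        }

        // On the request-processing thread: a blocking read found no
        // data, so register with the poller and park on a latch. The
        // same thread resumes with the same request once data arrives.
        static void awaitReadable(Conn conn) throws InterruptedException {
            conn.readLatch = new CountDownLatch(1);
            addToBlockPoller(conn);    // poller selects on OP_READ
            conn.readLatch.await();
        }

        // On the BlockPoller thread, when the selector reports the
        // socket readable: wake exactly the thread parked above.
        static void onReadable(Conn conn) {
            conn.readLatch.countDown();
        }

        static void addToBlockPoller(Conn conn) {
            // hand the connection to the poller's event queue (assumed)
        }
    }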

Mark
