Posted to dev@hc.apache.org by Oleg Kalnichevski <ol...@apache.org> on 2005/08/19 23:17:47 UTC

[HttpCommon] The trouble with NIO is not about performance

Folks,

I think we (and especially I) have been looking at the problem from the
wrong angle. Fundamentally, blocking NIO _IS_ faster than the old IO (see
the numbers below). This is especially the case for small requests /
responses where the message content is only a couple of times larger than
the message head. NIO _DOES_ significantly speed up parsing of HTTP
message headers:

tests.performance.PerformanceTest 8080 200 OldIO
================================================
Request: GET /tomcat-docs/changelog.html HTTP/1.1
Average (nanosec): 10,109,390
Request: GET /servlets-examples/servlet/RequestInfoExample HTTP/1.1
Average (nanosec): 4,262,260
Request: POST /servlets-examples/servlet/RequestInfoExample HTTP/1.1
Average (nanosec): 7,813,805

tests.performance.PerformanceTest 8080 200 NIO
================================================
Request: GET /tomcat-docs/changelog.html HTTP/1.1
Average (nanosec): 8,681,050
Request: GET /servlets-examples/servlet/RequestInfoExample HTTP/1.1
Average (nanosec): 1,993,590
Request: POST /servlets-examples/servlet/RequestInfoExample HTTP/1.1
Average (nanosec): 6,062,200

The performance of NIO starts degrading dramatically only when a
socket channel is put into non-blocking mode and registered with a
selector. The sole reason we need to use selectors is to implement a
socket read timeout. To make matters worse, we are forced to use one
selector per channel only to simulate blocking I/O. This is extremely
wasteful. NIO is not meant to be used this way.
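The one-selector-per-channel pattern described above can be sketched as follows. This is a hypothetical illustration of the wasteful emulation, not code from HttpCommon; the class name is made up:

```java
import java.io.IOException;
import java.net.SocketTimeoutException;
import java.nio.ByteBuffer;
import java.nio.channels.SelectionKey;
import java.nio.channels.Selector;
import java.nio.channels.SocketChannel;

// A dedicated Selector per channel, used only to emulate a blocking
// read with a timeout. Every connection pays for a full Selector.
public final class TimedChannelReader {
    private final SocketChannel channel;
    private final Selector selector;   // one selector serving exactly one channel

    public TimedChannelReader(SocketChannel channel) throws IOException {
        this.channel = channel;
        this.channel.configureBlocking(false);
        this.selector = Selector.open();
        this.channel.register(this.selector, SelectionKey.OP_READ);
    }

    // Behaves like a blocking read() with SO_TIMEOUT: waits at most
    // timeoutMillis for readability, then reads whatever is available.
    public int read(ByteBuffer dst, long timeoutMillis) throws IOException {
        if (selector.select(timeoutMillis) == 0) {
            throw new SocketTimeoutException("Read timed out");
        }
        selector.selectedKeys().clear();
        return channel.read(dst);
    }

    public void close() throws IOException {
        selector.close();
        channel.close();
    }
}
```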

Fundamentally, the whole issue is about the trouble of timing out idle NIO
connections, not about NIO performance. What if we just decided NOT to
support socket timeouts on NIO connections? Consider this. On the client
side we could easily work around the problem by choosing the type of
connection depending upon the value of the SO_TIMEOUT parameter.
Besides, there are enough client-side applications where the socket read
timeout is less important than the total request time, and which require a
monitor thread anyway. Applications of this kind could benefit greatly
from NIO connections without losing a bit of functionality. The server
side is far more problematic, because on the server side the socket read
timeout is a convenient way to manage idle connections. However, an
extra thread to monitor and drop idle connections may well be worth the
extra performance of NIO.
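The extra monitor thread suggested above could look roughly like this. A hypothetical sketch only; class and method names are illustrative, not an HttpCommon API:

```java
import java.io.Closeable;
import java.io.IOException;
import java.util.Iterator;
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

// Instead of relying on SO_TIMEOUT, track the last activity of each
// connection and have one extra thread sweep and close the ones that
// have been idle for too long.
public final class IdleConnectionMonitor implements Runnable {
    private final Map<Closeable, Long> lastActivity = new ConcurrentHashMap<Closeable, Long>();
    private final long idleLimitMillis;
    private final long sweepIntervalMillis;
    private volatile boolean running = true;

    public IdleConnectionMonitor(long idleLimitMillis, long sweepIntervalMillis) {
        this.idleLimitMillis = idleLimitMillis;
        this.sweepIntervalMillis = sweepIntervalMillis;
    }

    // Called by the I/O code whenever a connection sends or receives data.
    public void touch(Closeable connection) {
        lastActivity.put(connection, System.currentTimeMillis());
    }

    public void remove(Closeable connection) {
        lastActivity.remove(connection);
    }

    public void shutdown() {
        running = false;
    }

    public void run() {
        while (running) {
            long now = System.currentTimeMillis();
            for (Iterator<Map.Entry<Closeable, Long>> it =
                     lastActivity.entrySet().iterator(); it.hasNext();) {
                Map.Entry<Closeable, Long> entry = it.next();
                if (now - entry.getValue() > idleLimitMillis) {
                    try { entry.getKey().close(); } catch (IOException ignore) {}
                    it.remove();   // drop the idle connection from the registry
                }
            }
            try {
                Thread.sleep(sweepIntervalMillis);
            } catch (InterruptedException e) {
                return;
            }
        }
    }
}
```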

What do you think?

Oleg


---------------------------------------------------------------------
To unsubscribe, e-mail: httpclient-dev-unsubscribe@jakarta.apache.org
For additional commands, e-mail: httpclient-dev-help@jakarta.apache.org


Re: [HttpCommon] The trouble with NIO is not about performance

Posted by Sam Berlin <sb...@gmail.com>.
I should also mention that there should be a way to use a single
Selector for all the channels -- not doing so, as you found out, is
extremely taxing, to the point of being useless.

Thanks,
 Sam



Re: [HttpCommon] The trouble with NIO is not about performance

Posted by Oleg Kalnichevski <ol...@apache.org>.
Odi,

I am not much of a writer, as you know, but I'll try to find time to
summarize the findings and put them on the Wiki.

A helping hand would be very much appreciated, though ;-)

Oleg




Re: [HttpCommon] The trouble with NIO is not about performance

Posted by Ortwin Glück <od...@odi.ch>.
I think what we are currently finding out about NIO and old IO should be 
documented on a Wiki page somewhere. It may be a good reference for 
anyone who wants to really optimize the use of HttpClient for their use 
pattern.

Odi



-- 
[web]  http://www.odi.ch/
[blog] http://www.odi.ch/weblog/
[pgp]  key 0x81CF3416
        finger print F2B1 B21F F056 D53E 5D79  A5AF 02BE 70F5 81CF 3416



Re: [HttpCommon] The trouble with NIO is not about performance

Posted by Sam Berlin <sb...@gmail.com>.
Hi Oleg,

Sorry I cannot read/reply more frequently -- I'm in the middle of a
trip across the states.

It may be a bit premature to say that NIO will always be inferior to
classic I/O under a thousand connections.  The result depends entirely
on the type of machine the program is running on.  On beefed-up servers
that aren't doing much else, classic I/O will always be easier.  On
normal consumer boxes, non-blocking I/O becomes a better choice almost
immediately.

I don't have the capability to run your sample code right now, but I
never noticed such a drastic performance decrease when converting
classic I/O code to non-blocking code before.  In fact, the conversion
significantly increased performance, because fewer threads were waiting
on locks and there were far fewer context switches.

Tomcat is pretty much designed to always run on a server machine, but
HttpClient has the potential to work within any application, some of
which may be designed for casual users.  It would be worthwhile, for
those applications, to be able to use HttpClient with a non-blocking
engine.

Thanks,
 Sam




Re: [HttpCommon] The trouble with NIO is not about performance

Posted by Oleg Kalnichevski <ol...@apache.org>.
On Sat, 2005-08-20 at 09:09 -0500, Sam Berlin wrote:
> It is possible to use timeouts when using NIO, however you have to add
> the behaviour in (and the timing will never be exact).  Essentially,
> you just need to maintain a secondary object per selection-attachment
> that keeps track of the timeout required for that operation, and do
> short timed-out selects.

Sam, 

The trouble is that as soon as the channel is put into non-blocking
mode and selectors get involved, the raw I/O throughput takes a
significant hit (10-20% on reads, 50-100% on writes). Using selectors
only to calculate timeouts on what is essentially a blocking connection
simply does not make sense in terms of performance, no matter whether
one selector per multiple channels or one selector per channel is used.
Non-blocking NIO starts paying off only when the cost of switching
between several thousand mostly idle connection threads starts exceeding
the performance penalty caused by the use of selectors.

Basically, I lean toward concluding that NIO is inferior to old I/O
in all but a few special cases. Only when the number of concurrent
connections exceeds a thousand would I consider using NIO. This is the
reason why the Tomcat team has repeatedly rejected proposals to port
the Tomcat HTTP connector to NIO. A pool of worker threads acting on a
queue of relatively short-lived connections will always outperform the
one-thread / many-channels model as long as the number of concurrent
connections does not reach a thousand.
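The worker-pool model described here can be sketched like this. A minimal, hypothetical illustration with classic blocking I/O, not Tomcat's actual connector code; the class name is made up:

```java
import java.io.IOException;
import java.net.ServerSocket;
import java.net.Socket;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;

// A fixed pool of worker threads taking short-lived blocking connections
// off a queue; the executor's internal work queue plays the role of the
// connection queue.
public final class BlockingWorkerPoolServer {
    public static void serve(ServerSocket serverSocket, int workers) throws IOException {
        ExecutorService pool = Executors.newFixedThreadPool(workers);
        try {
            while (!serverSocket.isClosed()) {
                Socket socket = serverSocket.accept();   // blocking accept
                socket.setSoTimeout(30000);              // classic I/O read timeout just works
                pool.execute(() -> handle(socket));      // hand off to a worker
            }
        } finally {
            pool.shutdown();
        }
    }

    private static void handle(Socket socket) {
        try {
            // read the request / write the response with plain blocking streams
            socket.getOutputStream().write(
                    "HTTP/1.1 200 OK\r\nContent-Length: 0\r\n\r\n".getBytes());
            socket.getOutputStream().flush();
        } catch (IOException ignore) {
        } finally {
            try { socket.close(); } catch (IOException ignore) {}
        }
    }
}
```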

For what it is worth.

Cheers,

Oleg



Re: [HttpCommon] The trouble with NIO is not about performance

Posted by Sam Berlin <sb...@gmail.com>.
It is possible to use timeouts with NIO, but you have to add the
behaviour in yourself (and the timing will never be exact).  Essentially,
you just need to maintain a secondary object per selection attachment
that keeps track of the timeout required for that operation, and do
short timed-out selects.  If the current time after the select
finishes exceeds the time allotted for an event, that SelectionKey is
cancelled and the associated channels are closed.  This is much easier
to do with connects, because it's a one-time behaviour -- but it is
possible to do with reads (and with writes it's also possible, even
though there never was a parameter for timing out on writes with
blocking streams).
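The timed-select scheme described here can be sketched as follows. A hypothetical illustration; the Deadline attachment class and loop are made up, and the read handling is elided:

```java
import java.io.IOException;
import java.nio.channels.ClosedSelectorException;
import java.nio.channels.SelectionKey;
import java.nio.channels.Selector;

// The per-key attachment carries an absolute deadline for the pending operation.
final class Deadline {
    final long expiresAt;   // absolute time in millis
    Deadline(long timeoutMillis) {
        this.expiresAt = System.currentTimeMillis() + timeoutMillis;
    }
}

public final class TimeoutSelectLoop {
    public static void loop(Selector selector) throws IOException {
        while (selector.isOpen()) {
            try {
                selector.select(100);               // short timed-out select
            } catch (ClosedSelectorException e) {
                return;
            }
            long now = System.currentTimeMillis();
            // Sweep all keys: cancel and close anything past its deadline.
            for (SelectionKey key : selector.keys()) {
                Deadline d = (Deadline) key.attachment();
                if (d != null && now > d.expiresAt) {
                    key.cancel();
                    try { key.channel().close(); } catch (IOException ignore) {}
                }
            }
            for (SelectionKey key : selector.selectedKeys()) {
                if (key.isValid() && key.isReadable()) {
                    // ... perform the read and refresh the attachment's deadline ...
                }
            }
            selector.selectedKeys().clear();
        }
    }
}
```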

Thanks,
 Sam

On 8/20/05, Oleg Kalnichevski <ol...@apache.org> wrote:
> On Fri, 2005-08-19 at 22:10 -0400, Michael Becke wrote:
> > Is using selectors the only way to support read timeout?
> 
> The only one I know of, and I have been working with NIO for quite some
> time.
> 
> > We certainly
> > could choose which factory to use based upon SO_TIMEOUT, but it seems
> > like a bit of a hack.  There must be a better way.  Would it be
> > possible to use blocking NIO and the old method for handling
> > SO_TIMEOUT and still see some of the performance benefits of NIO?
> >
> 
> Not that I know of. This is what the javadocs say:
> "...Enable/disable SO_TIMEOUT with the specified timeout, in
> milliseconds. With this option set to a non-zero timeout, a read() call
> on the InputStream associated with this Socket will block for only this
> amount of time..."
> 
> SO_TIMEOUT will have an effect on
> channel.socket().getInputStream().read(stuff);
> 
> SO_TIMEOUT will have NO effect on
> channel.read(stuff);
> 
> There are enough people who have been complaining loudly about it,
> because this pretty much renders blocking NIO useless.
> 
> Oleg
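Assuming a JDK where the socket adaptor honours SO_TIMEOUT as the quoted javadoc promises, the asymmetry can be demonstrated with a short, hypothetical sketch (the class and method names are made up):

```java
import java.io.IOException;
import java.io.InputStream;
import java.net.SocketTimeoutException;
import java.nio.channels.SocketChannel;

// SO_TIMEOUT is honoured by the stream obtained via channel.socket(),
// but a plain channel.read() on the same blocking channel ignores it
// and may block indefinitely.
public final class SoTimeoutDemo {
    // Returns true if the adaptor-stream read timed out as SO_TIMEOUT promises.
    public static boolean streamReadTimesOut(SocketChannel channel, int timeoutMillis)
            throws IOException {
        channel.socket().setSoTimeout(timeoutMillis);
        InputStream in = channel.socket().getInputStream();
        try {
            in.read();                    // honours SO_TIMEOUT
            return false;
        } catch (SocketTimeoutException expected) {
            return true;
        }
        // By contrast, channel.read(buffer) would not observe SO_TIMEOUT at all.
    }
}
```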
> > > side is by far more problematic because on the server side socket read
> > > timeout is a convenient way to manage idle connections. However, an
> > > extra thread to monitor and drop idle connections may well be worth the
> > > extra performance of NIO.
> > >
> > > What do you think?
> > >
> > > Oleg
> > >
> > >
> > > ---------------------------------------------------------------------
> > > To unsubscribe, e-mail: httpclient-dev-unsubscribe@jakarta.apache.org
> > > For additional commands, e-mail: httpclient-dev-help@jakarta.apache.org
> > >
> > >
> >
> > ---------------------------------------------------------------------
> > To unsubscribe, e-mail: httpclient-dev-unsubscribe@jakarta.apache.org
> > For additional commands, e-mail: httpclient-dev-help@jakarta.apache.org
> >
> >
> 
> 
> ---------------------------------------------------------------------
> To unsubscribe, e-mail: httpclient-dev-unsubscribe@jakarta.apache.org
> For additional commands, e-mail: httpclient-dev-help@jakarta.apache.org
> 
>

---------------------------------------------------------------------
To unsubscribe, e-mail: httpclient-dev-unsubscribe@jakarta.apache.org
For additional commands, e-mail: httpclient-dev-help@jakarta.apache.org


Re: [HttpCommon] The trouble with NIO is not about performance

Posted by Oleg Kalnichevski <ol...@apache.org>.
On Fri, 2005-08-19 at 22:10 -0400, Michael Becke wrote:
> Is using selectors the only way to support read timeout?  

It is the only one I know of, and I have been working with NIO for
quite some time.

> We certainly
> could choose which factory to use based upon SO_TIMEOUT, but it seems
> like a bit of a hack.  There must be a better way.  Would it be
> possible to use blocking NIO and the old method for handling
> SO_TIMEOUT and still see some of the performance benefits of NIO?
> 

Not that I know of. This is what the javadocs say:
"...Enable/disable SO_TIMEOUT with the specified timeout, in
milliseconds. With this option set to a non-zero timeout, a read() call
on the InputStream associated with this Socket will block for only this
amount of time..."

SO_TIMEOUT will have an effect on
channel.socket().getInputStream().read(stuff);

SO_TIMEOUT will have NO effect on
channel.read(stuff);

There are enough people who have been complaining loudly about it,
because this pretty much renders blocking NIO useless.
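The difference should be easy to demonstrate with a throwaway local server socket (my own sketch, not project code): the server accepts the connection but never sends a byte, so only the stream read honours the timeout.

```java
import java.io.InputStream;
import java.net.InetSocketAddress;
import java.net.ServerSocket;
import java.net.Socket;
import java.net.SocketTimeoutException;
import java.nio.channels.SocketChannel;

public class SoTimeoutDemo {
    public static void main(String[] args) throws Exception {
        // A local server that accepts the connection but never sends a byte.
        try (ServerSocket server = new ServerSocket(0)) {
            SocketChannel channel = SocketChannel.open(
                    new InetSocketAddress("127.0.0.1", server.getLocalPort()));
            Socket peer = server.accept();

            channel.socket().setSoTimeout(250);   // 250 ms read timeout

            // Reading through the socket's stream honours SO_TIMEOUT ...
            InputStream in = channel.socket().getInputStream();
            try {
                in.read();
            } catch (SocketTimeoutException expected) {
                System.out.println("stream read timed out");
            }
            // ... whereas channel.read(buffer) at this point would block
            // indefinitely, because channel reads ignore SO_TIMEOUT.
            peer.close();
            channel.close();
        }
    }
}
```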

Oleg



---------------------------------------------------------------------
To unsubscribe, e-mail: httpclient-dev-unsubscribe@jakarta.apache.org
For additional commands, e-mail: httpclient-dev-help@jakarta.apache.org


Re: [HttpCommon] The trouble with NIO is not about performance

Posted by Michael Becke <mb...@gmail.com>.
Is using selectors the only way to support read timeout?  We certainly
could choose which factory to use based upon SO_TIMEOUT, but it seems
like a bit of a hack.  There must be a better way.  Would it be
possible to use blocking NIO and the old method for handling
SO_TIMEOUT and still see some of the performance benefits of NIO?

Mike

On 8/19/05, Oleg Kalnichevski <ol...@apache.org> wrote:
> Folks,
> 
> I think we (and especially I) have been looking at the problem from the
> wrong angle. Fundamentally, blocking NIO _IS_ faster than the old IO (see
> the numbers below). This is especially the case for small requests /
> responses, where the message content is only a couple of times larger than
> the message head. NIO _DOES_ significantly speed up parsing of HTTP
> message headers.
> 
> tests.performance.PerformanceTest 8080 200 OldIO
> ================================================
> Request: GET /tomcat-docs/changelog.html HTTP/1.1
> Average (nanosec): 10,109,390
> Request: GET /servlets-examples/servlet/RequestInfoExample HTTP/1.1
> Average (nanosec): 4,262,260
> Request: POST /servlets-examples/servlet/RequestInfoExample HTTP/1.1
> Average (nanosec): 7,813,805
> 
> tests.performance.PerformanceTest 8080 200 NIO
> ================================================
> Request: GET /tomcat-docs/changelog.html HTTP/1.1
> Average (nanosec): 8,681,050
> Request: GET /servlets-examples/servlet/RequestInfoExample HTTP/1.1
> Average (nanosec): 1,993,590
> Request: POST /servlets-examples/servlet/RequestInfoExample HTTP/1.1
> Average (nanosec): 6,062,200
> 
> The performance of NIO starts degrading dramatically only when a
> socket channel is made non-blocking and registered with a selector. The sole
> reason we need to use selectors is to implement a socket read timeout. To
> make matters worse, we are forced to use one selector per channel only to
> simulate blocking I/O. This is extremely wasteful. NIO is not meant to
> be used this way.
> 
> Fundamentally, the whole issue is about the trouble of timing out idle NIO
> connections, not about NIO performance. What if we just decided NOT to
> support socket timeouts on NIO connections? Consider this. On the client
> side we could easily work around the problem by choosing the type of
> connection depending upon the value of the SO_TIMEOUT parameter.
> Besides, there are enough client-side applications where the socket read
> timeout matters less than the total request time, and which require a
> monitor thread anyway. Such applications could benefit greatly
> from NIO connections without losing a bit of functionality. The server
> side is far more problematic, because on the server side the socket read
> timeout is a convenient way to manage idle connections. However, an
> extra thread to monitor and drop idle connections may well be worth the
> extra performance of NIO.
> 
> What do you think?
> 
> Oleg
> 
> 
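The server-side monitor thread that the quoted text suggests could be sketched roughly as follows (class and method names are my own invention for illustration): I/O code calls touch() whenever a connection sees traffic, and a daemon thread closes anything quiet for longer than the threshold.

```java
import java.io.IOException;
import java.net.Socket;
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

// Sketch of a server-side idle-connection reaper. I/O code calls
// touch() whenever a connection sees traffic; the monitor thread
// closes connections that have been quiet for too long.
public class IdleConnectionMonitor extends Thread {

    private final Map<Socket, Long> lastActive =
            new ConcurrentHashMap<Socket, Long>();
    private final long idleTimeoutMillis;

    public IdleConnectionMonitor(long idleTimeoutMillis) {
        this.idleTimeoutMillis = idleTimeoutMillis;
        setDaemon(true);
    }

    public void touch(Socket socket) {
        lastActive.put(socket, Long.valueOf(System.currentTimeMillis()));
    }

    public void run() {
        while (!isInterrupted()) {
            long now = System.currentTimeMillis();
            for (Map.Entry<Socket, Long> e : lastActive.entrySet()) {
                if (now - e.getValue().longValue() > idleTimeoutMillis) {
                    try {
                        e.getKey().close();   // drop the idle connection
                    } catch (IOException ignored) {
                    }
                    lastActive.remove(e.getKey());
                }
            }
            try {
                Thread.sleep(idleTimeoutMillis / 2);
            } catch (InterruptedException ex) {
                return;
            }
        }
    }

    public static void main(String[] args) throws Exception {
        IdleConnectionMonitor monitor = new IdleConnectionMonitor(100);
        Socket socket = new Socket();          // unconnected, for the demo
        monitor.touch(socket);
        monitor.start();
        Thread.sleep(300);                     // give the reaper time to act
        System.out.println("closed: " + socket.isClosed());
    }
}
```

The trade-off is one extra thread per server (not per connection), which is exactly the cost the quoted paragraph argues may be worth the NIO performance gain.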

---------------------------------------------------------------------
To unsubscribe, e-mail: httpclient-dev-unsubscribe@jakarta.apache.org
For additional commands, e-mail: httpclient-dev-help@jakarta.apache.org