Posted to dev@tomcat.apache.org by Filip Hanik - Dev Lists <de...@hanik.com> on 2006/06/23 01:06:09 UTC

NIO vs BIO speed

I've attached two test runs, NIO vs BIO.
The results are very similar. In a regular scenario, blocking IO should 
be a little faster, since it doesn't have to poll and then hand off to a 
separate thread before reading.
The benefit of NIO, of course, is that the number of open sockets is no 
longer limited by the thread count.
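The trade-off described above can be sketched in plain Java: the NIO side runs one poller thread that multiplexes accept and read for every socket, while the BIO side simply blocks on its own thread with no poll step. This is an illustrative sketch only, not Tomcat connector code; all names here are made up:

```java
import java.io.DataInputStream;
import java.net.InetSocketAddress;
import java.net.Socket;
import java.nio.ByteBuffer;
import java.nio.channels.*;
import java.util.Iterator;

public class Main {
    public static void main(String[] args) throws Exception {
        // NIO side: one poller thread multiplexes accept + read for all sockets.
        Selector selector = Selector.open();
        ServerSocketChannel server = ServerSocketChannel.open();
        server.bind(new InetSocketAddress("127.0.0.1", 0));
        server.configureBlocking(false);
        server.register(selector, SelectionKey.OP_ACCEPT);
        int port = ((InetSocketAddress) server.getLocalAddress()).getPort();

        Thread poller = new Thread(() -> {
            try {
                while (selector.isOpen()) {
                    selector.select(200);                 // the poll step BIO doesn't need
                    Iterator<SelectionKey> it = selector.selectedKeys().iterator();
                    while (it.hasNext()) {
                        SelectionKey key = it.next();
                        it.remove();
                        if (key.isAcceptable()) {
                            SocketChannel ch = server.accept();
                            if (ch != null) {
                                ch.configureBlocking(false);
                                ch.register(selector, SelectionKey.OP_READ);
                            }
                        } else if (key.isReadable()) {
                            SocketChannel ch = (SocketChannel) key.channel();
                            ByteBuffer buf = ByteBuffer.allocate(64);
                            int n = ch.read(buf);
                            if (n > 0) { buf.flip(); ch.write(buf); }     // echo back
                            else if (n < 0) { key.cancel(); ch.close(); }
                        }
                    }
                }
            } catch (Exception ignored) { }               // selector closed: exit
        });
        poller.start();

        // BIO side: a plain blocking socket; the thread just reads, no polling.
        try (Socket s = new Socket("127.0.0.1", port)) {
            s.getOutputStream().write("ping".getBytes());
            byte[] reply = new byte[4];
            new DataInputStream(s.getInputStream()).readFully(reply);
            System.out.println(new String(reply));
        }
        selector.close();
    }
}
```

The extra select() hop on the NIO path is exactly where the small BIO advantage in these tests comes from; the payoff is that idle sockets cost no thread.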

Remy, can you run your tests again, are you still seeing a huge difference?
thanks
Filip


Remy Maucherat wrote:
> Filip Hanik - Dev Lists wrote:
>>> That's unfortunate.  So regular is better?  What are they doing with 
>>> Grizzly?
>> no, it's me :)
>> I need to come up with a wait algorithm that doesn't eat CPU.
>>
>> once that is fixed, the performance should be close to identical
>
> For some reason, the performance went down further for me with that 
> patch (although I noticed the CPU is being used properly this time, 
> while before Tomcat was not using all the CPU available).
>
> Rémy
>
>
> ---------------------------------------------------------------------
> To unsubscribe, e-mail: dev-unsubscribe@tomcat.apache.org
> For additional commands, e-mail: dev-help@tomcat.apache.org
>
>


-- 


Filip Hanik

Re: NIO vs BIO speed

Posted by Filip Hanik - Dev Lists <de...@hanik.com>.
https://svn.apache.org/viewvc/tomcat/tc6.0.x/trunk/java/org/apache/tomcat/util/net/NioEndpoint.java?view=log
https://svn.apache.org/viewvc/tomcat/tc6.0.x/trunk/java/org/apache/coyote/http11/Http11NioProcessor.java?view=log
https://svn.apache.org/viewvc/tomcat/tc6.0.x/trunk/java/org/apache/coyote/http11/Http11NioProtocol.java?view=log
https://svn.apache.org/viewvc/tomcat/tc6.0.x/trunk/java/org/apache/coyote/http11/InternalNioInputBuffer.java?view=log
https://svn.apache.org/viewvc/tomcat/tc6.0.x/trunk/java/org/apache/coyote/http11/InternalNioOutputBuffer.java?view=log

Dakota Jack wrote:
> I am interested in this code.  Is there a way I can see the code?  Thanks.
>
>
>
> On 6/22/06, *Filip Hanik - Dev Lists* <devlists@hanik.com 
> <ma...@hanik.com>> wrote:
>
>     Remy Maucherat wrote:
>>     Filip Hanik - Dev Lists wrote:
>>>     Here is another test that I ran from a remote machine, setting
>>>     maxThreads="25" and ab concurrency to 50 and keepalive on.
>>>     In this case, NIO is a lot faster. Turn off keepalive on ab, and
>>>     we get similar results to previous run, where BIO is a tad faster.
>>
>>     Scaling the thread per connection model is done by increasing the
>>     amount of threads. This particular test demonstrates the obvious.
>     yes, what is interesting, though, is that my NIO connector is not
>     really true NIO, as it ties up a thread while polling for data. A
>     true implementation would not have invoked that thread yet, but for
>     that I would have had to rewrite the connector from scratch, and I
>     couldn't have taken advantage of code already written and tested.
>     It was done this way because it let me reuse almost all the code
>     from the APR connector.
>     So you could call it a semi-"thread-per-connection" model, yet it
>     handles load better than the true thread-per-connection model.
>     APR does the same thing.
>
>
>>
>>     If I understand the results correctly, the results would be
>>     acceptable on Unix.
>     yes, I think they are looking pretty good. And I am fairly
>     confident in this new code, as most of it is old, tested APR code.
>
>     Filip
>
>     -- 
>
>
>     Filip Hanik
>
>
>
>
> -- 
> "You can lead a horse to water but you cannot make it float on its back."
> ~Dakota Jack~


-- 


Filip Hanik



Re: NIO vs BIO speed

Posted by Dakota Jack <da...@gmail.com>.
I am interested in this code.  Is there a way I can see the code?  Thanks.



On 6/22/06, Filip Hanik - Dev Lists <de...@hanik.com> wrote:
>
>  Remy Maucherat wrote:
>
> Filip Hanik - Dev Lists wrote:
>
> Here is another test that I ran from a remote machine, setting
> maxThreads="25" and ab concurrency to 50 and keepalive on.
> In this case, NIO is a lot faster. Turn off keepalive on ab, and we get
> similar results to previous run, where BIO is a tad faster.
>
>
> Scaling the thread per connection model is done by increasing the amount
> of threads. This particular test demonstrates the obvious.
>
> yes, what is interesting, though, is that my NIO connector is not really
> true NIO, as it ties up a thread while polling for data. A true
> implementation would not have invoked that thread yet, but for that I
> would have had to rewrite the connector from scratch, and I couldn't have
> taken advantage of code already written and tested.
> It was done this way because it let me reuse almost all the code from the
> APR connector.
> So you could call it a semi-"thread-per-connection" model, yet it handles
> load better than the true thread-per-connection model.
> APR does the same thing.
>
>
>
> If I understand the results correctly, the results would be acceptable on
> Unix.
>
> yes, I think they are looking pretty good. And I am fairly confident in
> this new code, as most of it is old, tested APR code.
>
> Filip
>
> --
>
>
> Filip Hanik
>



-- 
"You can lead a horse to water but you cannot make it float on its back."
~Dakota Jack~

Re: NIO vs BIO speed

Posted by Remy Maucherat <re...@apache.org>.
Bill Barker wrote:
> I haven't tested Filip's connector yet, but that's also my experience with
> the AJP/NIO connector:  NIO is pretty much useless on Windows. 

That's what I see, then, but it's not what Sun says.

Rémy



RE: NIO vs BIO speed

Posted by Bill Barker <wb...@wilshire.com>.
I haven't tested Filip's connector yet, but that's also my experience with
the AJP/NIO connector:  NIO is pretty much useless on Windows. 

> -----Original Message-----
> From: Remy Maucherat [mailto:remm@apache.org] 
> Sent: Monday, June 26, 2006 4:58 AM
> To: Tomcat Developers List
> Subject: Re: NIO vs BIO speed
> 
> Filip Hanik - Dev Lists wrote:
> > yes, I think they are looking pretty good. And I am fairly confident
> > in this new code, as most of it is old, tested APR code.
> 
> (Of course, there haven't been any new changes, so it's no surprise it's
> not working any better for me; I did reboot in the meantime, though, and
> I am running JRE 1.5.0_06.)
> 
> Performance with NIO is still bad for me (3 times slower than the other
> connectors) with /usr/sbin/ab.exe -n 5000 -c 20 -k
> http://127.0.0.1:8081/tomcat.gif. Throughput seems to vary wildly during
> the test. With /usr/sbin/ab.exe -n 5000 -c 20
> http://127.0.0.1:8081/tomcat.gif (no keepalive), performance is
> horrible, and the test kills the poller thread after a few hundred
> iterations with the exception I reported earlier:
> Exception in thread "http-8081-Poller-0" java.lang.NullPointerException
>          at sun.nio.ch.WindowsSelectorImpl$FdMap.remove(Unknown Source)
>          at sun.nio.ch.WindowsSelectorImpl$FdMap.access$3000(Unknown Source)
>          at sun.nio.ch.WindowsSelectorImpl.implDereg(Unknown Source)
>          at sun.nio.ch.SelectorImpl.processDeregisterQueue(Unknown Source)
>          at sun.nio.ch.WindowsSelectorImpl.doSelect(Unknown Source)
>          at sun.nio.ch.SelectorImpl.lockAndDoSelect(Unknown Source)
>          at sun.nio.ch.SelectorImpl.select(Unknown Source)
>          at org.apache.tomcat.util.net.NioEndpoint$Poller.run(NioEndpoint.java:1189)
>          at java.lang.Thread.run(Unknown Source)
> 
> Rémy
> 
> 







Re: NIO vs BIO speed

Posted by de...@hanik.com.
man, that sucks.
I can run the same tests, even a hundred thousand iterations; I get
slightly worse performance than the blocking connector and slightly
better than the APR connector, and it never crashes for me.

I'm in Dublin at ApacheCon this week, but will continue as soon as I get back.

have a good week!

> Filip Hanik - Dev Lists wrote:
>> yes, I think they are looking pretty good. And I am fairly confident in
>> this new code, as most of it is old, tested APR code.
>
> (Of course, there haven't been any new changes, so it's no surprise it's
> not working any better for me; I did reboot in the meantime, though, and
> I am running JRE 1.5.0_06.)
>
> Performance with NIO is still bad for me (3 times slower than the other
> connectors) with /usr/sbin/ab.exe -n 5000 -c 20 -k
> http://127.0.0.1:8081/tomcat.gif. Throughput seems to vary wildly during
> the test. With /usr/sbin/ab.exe -n 5000 -c 20
> http://127.0.0.1:8081/tomcat.gif (no keepalive), performance is
> horrible, and the test kills the poller thread after a few hundred
> iterations with the exception I reported earlier:
> Exception in thread "http-8081-Poller-0" java.lang.NullPointerException
>          at sun.nio.ch.WindowsSelectorImpl$FdMap.remove(Unknown Source)
>          at sun.nio.ch.WindowsSelectorImpl$FdMap.access$3000(Unknown Source)
>          at sun.nio.ch.WindowsSelectorImpl.implDereg(Unknown Source)
>          at sun.nio.ch.SelectorImpl.processDeregisterQueue(Unknown Source)
>          at sun.nio.ch.WindowsSelectorImpl.doSelect(Unknown Source)
>          at sun.nio.ch.SelectorImpl.lockAndDoSelect(Unknown Source)
>          at sun.nio.ch.SelectorImpl.select(Unknown Source)
>          at org.apache.tomcat.util.net.NioEndpoint$Poller.run(NioEndpoint.java:1189)
>          at java.lang.Thread.run(Unknown Source)
>
> Rémy
>
>





Re: NIO vs BIO speed

Posted by Remy Maucherat <re...@apache.org>.
Filip Hanik - Dev Lists wrote:
> yes, I think they are looking pretty good. And I am fairly confident in 
> this new code, as most of it is old, tested APR code.

(Of course, there haven't been any new changes, so it's no surprise it's 
not working any better for me; I did reboot in the meantime, though, and 
I am running JRE 1.5.0_06.)

Performance with NIO is still bad for me (3 times slower than the other 
connectors) with /usr/sbin/ab.exe -n 5000 -c 20 -k 
http://127.0.0.1:8081/tomcat.gif. Throughput seems to vary wildly during 
the test. With /usr/sbin/ab.exe -n 5000 -c 20 
http://127.0.0.1:8081/tomcat.gif (no keepalive), performance is 
horrible, and the test kills the poller thread after a few hundred iterations 
with the exception I reported earlier:
Exception in thread "http-8081-Poller-0" java.lang.NullPointerException
         at sun.nio.ch.WindowsSelectorImpl$FdMap.remove(Unknown Source)
         at sun.nio.ch.WindowsSelectorImpl$FdMap.access$3000(Unknown Source)
         at sun.nio.ch.WindowsSelectorImpl.implDereg(Unknown Source)
         at sun.nio.ch.SelectorImpl.processDeregisterQueue(Unknown Source)
         at sun.nio.ch.WindowsSelectorImpl.doSelect(Unknown Source)
         at sun.nio.ch.SelectorImpl.lockAndDoSelect(Unknown Source)
         at sun.nio.ch.SelectorImpl.select(Unknown Source)
         at org.apache.tomcat.util.net.NioEndpoint$Poller.run(NioEndpoint.java:1189)
         at java.lang.Thread.run(Unknown Source)

Rémy
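The trace above points at a deregistration racing with the selector's own select() on Windows. A common way to avoid touching a selector's key set from foreign threads is to queue register/cancel events and drain them on the poller thread after a wakeup(). This is only a sketch of that general pattern, not NioEndpoint's actual code; the queue and names are invented for illustration:

```java
import java.nio.channels.*;
import java.util.Queue;
import java.util.concurrent.ConcurrentLinkedQueue;
import java.util.concurrent.CountDownLatch;

public class Main {
    public static void main(String[] args) throws Exception {
        Selector selector = Selector.open();
        Queue<SelectableChannel> toRegister = new ConcurrentLinkedQueue<>();
        CountDownLatch registered = new CountDownLatch(1);

        Thread poller = new Thread(() -> {
            try {
                while (selector.isOpen()) {
                    selector.select(200);
                    // Drain the event queue on the poller thread itself, so
                    // register/cancel never races with select() on another thread.
                    SelectableChannel ch;
                    while ((ch = toRegister.poll()) != null) {
                        ch.register(selector, SelectionKey.OP_READ);
                        registered.countDown();
                    }
                    selector.selectedKeys().clear();
                }
            } catch (Exception ignored) { }   // selector closed: exit
        });
        poller.start();

        // Any other thread hands channels to the poller instead of registering
        // directly; a Pipe stands in for a real socket here.
        Pipe pipe = Pipe.open();
        pipe.source().configureBlocking(false);
        toRegister.add(pipe.source());        // enqueue the channel...
        selector.wakeup();                    // ...and wake the poller to drain it
        registered.await();
        System.out.println("registered on poller thread: "
                + (pipe.source().keyFor(selector) != null));
        selector.close();
    }
}
```

Whether this works around the WindowsSelectorImpl NPE specifically would need testing; the point is only that all selector mutations happen on one thread.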




Re: NIO vs BIO speed

Posted by Filip Hanik - Dev Lists <de...@hanik.com>.
Remy Maucherat wrote:
> Filip Hanik - Dev Lists wrote:
>> Here is another test that I ran from a remote machine, setting 
>> maxThreads="25" and ab concurrency to 50 and keepalive on.
>> In this case, NIO is a lot faster. Turn off keepalive on ab, and we 
>> get similar results to previous run, where BIO is a tad faster.
>
> Scaling the thread per connection model is done by increasing the 
> amount of threads. This particular test demonstrates the obvious.
yes, what is interesting, though, is that my NIO connector is not really 
true NIO, as it ties up a thread while polling for data. A true 
implementation would not have invoked that thread yet, but for that I 
would have had to rewrite the connector from scratch, and I couldn't 
have taken advantage of code already written and tested.
It was done this way because it let me reuse almost all the code from 
the APR connector.
So you could call it a semi-"thread-per-connection" model, yet it 
handles load better than the true thread-per-connection model.
APR does the same thing.
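The "semi thread-per-connection" flow described here — a worker thread already bound to the connection that polls for readability itself before reading — can be sketched as follows. This is illustrative only, using a Pipe in place of a real socket and invented names, not the connector's code:

```java
import java.nio.ByteBuffer;
import java.nio.channels.Pipe;
import java.nio.channels.SelectionKey;
import java.nio.channels.Selector;

public class Main {
    public static void main(String[] args) throws Exception {
        Pipe pipe = Pipe.open();
        pipe.source().configureBlocking(false);

        // Semi thread-per-connection: the worker is already assigned to this
        // connection and blocks in its own select() while waiting for data,
        // tying the thread up. A true NIO design would dispatch the worker
        // only after a shared poller reports the channel readable.
        Thread worker = new Thread(() -> {
            try (Selector sel = Selector.open()) {
                pipe.source().register(sel, SelectionKey.OP_READ);
                sel.select();                         // thread "tied up" here
                ByteBuffer buf = ByteBuffer.allocate(16);
                pipe.source().read(buf);
                buf.flip();
                System.out.println(new String(buf.array(), 0, buf.limit()));
            } catch (Exception e) {
                e.printStackTrace();
            }
        });
        worker.start();

        // The "client" side produces data; the worker's select() then returns.
        pipe.sink().write(ByteBuffer.wrap("data".getBytes()));
        worker.join();
    }
}
```

The upside of this layout is exactly what the mail says: the read/write path after select() is the same blocking-style code the APR connector already used.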

>
> If I understand the results correctly, the results would be acceptable 
> on Unix.
yes, I think they are looking pretty good. And I am fairly confident in 
this new code, as most of it is old, tested APR code.

Filip

-- 


Filip Hanik

Re: NIO vs BIO speed

Posted by Remy Maucherat <re...@apache.org>.
Filip Hanik - Dev Lists wrote:
> Here is another test that I ran from a remote machine, setting 
> maxThreads="25" and ab concurrency to 50 and keepalive on.
> In this case, NIO is a lot faster. Turn off keepalive on ab, and we get 
> similar results to previous run, where BIO is a tad faster.

Scaling the thread per connection model is done by increasing the amount 
of threads. This particular test demonstrates the obvious.

If I understand the results correctly, the results would be acceptable 
on Unix.

Rémy




Re: NIO vs BIO speed

Posted by Filip Hanik - Dev Lists <de...@hanik.com>.
Here is another test that I ran from a remote machine, setting 
maxThreads="25" and ab concurrency to 50 and keepalive on.
In this case, NIO is a lot faster. Turn off keepalive on ab, and we get 
similar results to previous run, where BIO is a tad faster.

[filip@fedora4 bin]$ ./ab -n 20000 -k -c 50 
http://192.168.3.105:8080/tomcat.gif (BIO)
This is ApacheBench, Version 2.0.41-dev <$Revision: 1.121.2.12 $> apache-2.0
Copyright (c) 1996 Adam Twiss, Zeus Technology Ltd, http://www.zeustech.net/
Copyright (c) 1998-2002 The Apache Software Foundation, 
http://www.apache.org/

Benchmarking 192.168.3.105 (be patient)
Completed 2000 requests
Completed 4000 requests
Completed 6000 requests
Completed 8000 requests
Completed 10000 requests
Completed 12000 requests
Completed 14000 requests
Completed 16000 requests
Completed 18000 requests
Finished 20000 requests


Server Software:        Apache-Coyote/1.1
Server Hostname:        192.168.3.105
Server Port:            8080

Document Path:          /tomcat.gif
Document Length:        1934 bytes

Concurrency Level:      50
Time taken for tests:   7.41056 seconds
Complete requests:      20000
Failed requests:        0
Write errors:           0
Keep-Alive requests:    10028
Total transferred:      43239331 bytes
HTML transferred:       38687126 bytes
Requests per second:    2840.48 [#/sec] (mean)
Time per request:       17.603 [ms] (mean)
Time per request:       0.352 [ms] (mean, across all concurrent requests)
Transfer rate:          5996.97 [Kbytes/sec] received

Connection Times (ms)
              min  mean[+/-sd] median   max
Connect:        0    5  22.1      0    3009
Processing:     0   11   5.8     11     273
Waiting:        0   10   5.4     11     226
Total:          0   16  22.9     16    3027

Percentage of the requests served within a certain time (ms)
  50%     16
  66%     21
  75%     23
  80%     24
  90%     26
  95%     28
  98%     30
  99%     32
 100%   3027 (longest request)
[filip@fedora4 bin]$ ./ab -n 20000 -k -c 50 
http://192.168.3.105:8081/tomcat.gif
This is ApacheBench, Version 2.0.41-dev <$Revision: 1.121.2.12 $> apache-2.0
Copyright (c) 1996 Adam Twiss, Zeus Technology Ltd, http://www.zeustech.net/
Copyright (c) 1998-2002 The Apache Software Foundation, 
http://www.apache.org/

Benchmarking 192.168.3.105 (be patient)
Completed 2000 requests
Completed 4000 requests
Completed 6000 requests
Completed 8000 requests
Completed 10000 requests
Completed 12000 requests
Completed 14000 requests
Completed 16000 requests
Completed 18000 requests
Finished 20000 requests


Server Software:        Apache-Coyote/1.1
Server Hostname:        192.168.3.105
Server Port:            8081

Document Path:          /tomcat.gif
Document Length:        1934 bytes

Concurrency Level:      50
Time taken for tests:   5.526996 seconds
Complete requests:      20000
Failed requests:        0
Write errors:           0
Keep-Alive requests:    20000
Total transferred:      43281248 bytes
HTML transferred:       38681018 bytes
Requests per second:    3618.60 [#/sec] (mean)
Time per request:       13.817 [ms] (mean)
Time per request:       0.276 [ms] (mean, across all concurrent requests)
Transfer rate:          7647.19 [Kbytes/sec] received

Connection Times (ms)
              min  mean[+/-sd] median   max
Connect:        0    0   0.1      0       3
Processing:     1   13   3.7     13     220
Waiting:        1   13   3.7     13     220
Total:          1   13   3.8     13     220

Percentage of the requests served within a certain time (ms)
  50%     13
  66%     14
  75%     15
  80%     16
  90%     17
  95%     19
  98%     21
  99%     22
 100%    220 (longest request)
[filip@fedora4 bin]$




Re: NIO vs BIO speed

Posted by Filip Hanik - Dev Lists <de...@hanik.com>.
Remy Maucherat wrote:
> Filip Hanik - Dev Lists wrote:
>> Remy, can you run your tests again, are you still seeing a huge 
>> difference?
>
> Obviously, you did not change anything.
yes, I changed from busy read to polling; I wasn't sure if you ran your 
tests after that.
I've run my tests both on Windows and on Fedora Core 4, with the same 
results on both machines.
If the concurrency is lower than maxThreads, BIO has a 1-10% gain.
But as soon as the concurrency goes higher than maxThreads, the NIO 
connector is way faster, in my case 66% faster.
In the concurrency > maxThreads case, my NIO connector is slightly 
faster than the APR connector as well.

Filip

>
> Rémy
>


-- 


Filip Hanik

Re: NIO vs BIO speed

Posted by Remy Maucherat <re...@apache.org>.
Filip Hanik - Dev Lists wrote:
> Remy, can you run your tests again, are you still seeing a huge difference?

Obviously, you did not change anything.

Rémy
