Posted to apreq-dev@httpd.apache.org by Joe Schaefer <jo...@sunstarsys.com> on 2004/07/23 01:47:03 UTC

micro benchmarks, redux

After being peeved by something I read on the modperl@ list
recently, here's a rehash of the apreq1 micro benchmarks Stas 
ran way back when:

<URL: http://perl.apache.org/docs/1.0/guide/performance.html#Apache__args_vs__Apache__Request__param_vs__CGI__param >

I just ran them on apreq2 to see how the numbers compared, but 
first here's Stas' table reorganized by test:

Apache 1 (prefork):

     TEST PARAMETERS       REQUESTS / SEC     SPEED RATIO
  val_len pairs query_len   cgi_pm  apreq1  (apreq1/cgi_pm)

    10      2      25        559     945        169 %
    50      2     105        573     907        158 %
     5     26     207        263     754        286 %
    10     26     337        262     742        283 %
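
For the curious, the query_len column follows from the other two
parameters if you assume single-letter keys.  Here's a small Perl
sketch of how such query strings could be built (the 'a'..'z' key
names and the 'X' padding are my guesses at the test data, not
necessarily what the original benchmark used):

```perl
#!/usr/bin/perl
use strict;
use warnings;

# Reconstruct a test query string from the table parameters:
# $pairs single-letter keys ('a', 'b', ...), each with a value of
# $val_len repeated characters, joined by '&'.  Under this assumption
# the total length works out to the query_len column:
#   pairs * (1 + 1 + val_len) + (pairs - 1)
sub make_query {
    my ($pairs, $val_len) = @_;
    my @keys = ('a' .. 'z')[0 .. $pairs - 1];
    return join '&', map { "$_=" . ('X' x $val_len) } @keys;
}

print length(make_query(2, 10)), "\n";    # 25, matching row 1
print length(make_query(26, 5)), "\n";    # 207, matching row 3
```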


Here are my results using the exact same tests, with current cvs 
and magic keys disabled (it turns out that magic keys actually hurt 
performance a little, so I'm going to remove them from cvs soon):

Apache 2 (prefork):

     TEST PARAMETERS       REQUESTS / SEC     SPEED RATIO
  val_len pairs query_len   cgi_pm  apreq2  (apreq2/cgi_pm)

    10      2      25        1906    4349       228 %
    50      2     105        1893    3569       188 %
     5     26     207        1061    2781       262 %
    10     26     337        1047    2743       261 %
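
As a sanity check, the speed-ratio column in both tables is just the
two requests/sec figures divided out and rounded to whole percent,
e.g. for the first Apache 2 row:

```perl
#!/usr/bin/perl
use strict;
use warnings;

# speed ratio = apreq requests/sec over cgi_pm requests/sec
my ($cgi_pm, $apreq2) = (1906, 4349);
printf "%.0f %%\n", 100 * $apreq2 / $cgi_pm;    # prints "228 %"
```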


FWIW, I also ran the above benchmarks as POST tests instead 
of GET, and the numbers scaled about the same way (apreq2 
being ~2-3 times faster).  I also took the mfd data in t/parsers, 
put it in a file and benchmarked the handlers again with 

  % ab -n 5000 -c 50 -T "multipart/form-data; boundary=AaB03x" \
       -p mfd_post_data $URL

and got

     REQUESTS / SEC  SPEED RATIO
     cgi_pm  apreq2  (apreq2/cgi_pm)

      894     3260      364 %

which is a little better.


My immediate reaction is that this is all pretty good news, 
because despite the fact that apreq2 is about an order of magnitude
more complex than apreq1, we've managed to keep the performance
gain over cgi_pm about the same as (or better than) it was for apreq1, 
even on silly little benchmarks like these. I expect the gap to
be even wider for real-world data, but I'm satisfied that we've 
done pretty well with apreq2 so far.

-- 
Joe Schaefer


Re: micro benchmarks, redux

Posted by Joe Schaefer <jo...@sunstarsys.com>.
Stas Bekman <st...@stason.org> writes:

[...]

> Neat. Though for some reason I was expecting v2 to be faster :)

There's only so much you can do with a tiny query string :-).  At this
level the apreq2 C-machinery isn't really kicking in.  The gap should 
widen significantly as you increase either the size of the POST
data or the number of parameters; that's where the zero-copy design 
will start to show its colors.


> I need to rewrite my old crafty Apache::Benchmark to use Apache-Test,
> and then we could keep the actual benchmarks somewhere in cvs. 
> But I probably won't have time for that in the near future, so if
> someone wants to take up this job, let me know and I'll give you the
> latest version of Apache::Benchmark. (I wrote it originally for the
> practical mod_perl book)

Now that we've got the --enable-profiling configure option, it would be
nice to actually have some code in cvs that stress-tests the library a
bit.  Otherwise it's probably pretty hard to guess where the current
bottlenecks are.

> The good thing about keeping those around is to run them as you
> add/remove features, to make sure things don't get slower.

Sounds quite handy; hopefully someone will take you up on it.

-- 
Joe Schaefer


Re: micro benchmarks, redux

Posted by Stas Bekman <st...@stason.org>.
Joe Schaefer wrote:

> My immediate reaction is that this is all pretty good news, 
> because despite the fact that apreq2 is about an order of magnitude
> more complex than apreq1, we've managed to keep the performance
> gain over cgi_pm about the same as (or better than) it was for apreq1, 
> even on silly little benchmarks like these. I expect the gap to
> be even wider for real-world data, but I'm satisfied that we've 
> done pretty well with apreq2 so far.

Neat. Though for some reason I was expecting v2 to be faster :)

I need to rewrite my old crafty Apache::Benchmark to use Apache-Test, 
and then we could keep the actual benchmarks somewhere in cvs. But I 
probably won't have time for that in the near future, so if someone 
wants to take up this job, let me know and I'll give you the latest 
version of Apache::Benchmark. (I wrote it originally for the practical 
mod_perl book)

The good thing about keeping those around is to run them as you 
add/remove features, to make sure things don't get slower.

-- 
__________________________________________________________________
Stas Bekman            JAm_pH ------> Just Another mod_perl Hacker
http://stason.org/     mod_perl Guide ---> http://perl.apache.org
mailto:stas@stason.org http://use.perl.org http://apacheweek.com
http://modperlbook.org http://apache.org   http://ticketmaster.com