Posted to dev@tomcat.apache.org by "Christopher K. St. John" <ck...@distributopia.com> on 2002/06/24 18:38:48 UTC

Performance Test Workload: Small Static File

 I've changed the subject line since this is moving away
from the proposal.


costinm@covalent.net wrote:
> 
> And I can assure you that everyone
> working on performance seriously is running those tests
> and evaluating the performance periodically.
> 

 Nah, I'm not going to take your word for it. Taking your
assurance on performance would be unprofessional. That
doesn't mean I think you're dumb, it means I don't trust
anybody's word on performance without more information.

 But how about this: I'll show you mine if you show me
yours. Post the workload you've used to measure the
performance improvement, and I'll post the one I used for
testing static file serving performance. Actual results
can wait, I just want to see the benchmark.

 If you're really using ab against HelloWorldServlet, then
just say it that way, one sentence, no need to get all
formal.

 I have to dig up my notes, but here's a peek at mine:

 - http_load used to retrieve a very small http file. This
 isn't necessarily a servlet test. In Tomcat's case it
 ends up testing things like:

   - How fast the defaultservlet runs. This is
   uninteresting to many people, but very important to
   others.

   - How fast the http/ajp13 connector code runs.

   - The speed of the network stack on your test
     computers.

 - I call the results bogo-rps (requests per second),
 because it's a totally bogus way to measure system
 performance. (But it's useful for doing specific kinds
 of tuning)

 - results and details of test setup are a separate issue,
 I need to re-run against Coyote in any case. But at
 least:

  - test with and without an Apache front-end

    - Apache serving static files (not a tomcat test at
      all, but a baseline)
    - Apache mapping static file serving to tomcat
    - Tomcat standalone
 
  - No other workload on Tomcat. Restart for
    every run. (but include "warmup" so hotspot
    stabilizes)

  - Test run on at least two machines (loopback interface
    skews results)
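For concreteness, the kind of bogo-rps number the workload above produces can be sketched as a self-contained program. This is not my actual harness: the class name, file name, and request counts are made up, and it uses a single-threaded client against the JDK's built-in HTTP server rather than http_load against Tomcat, so it measures latency more than throughput.

```java
import com.sun.net.httpserver.HttpServer;
import java.io.InputStream;
import java.io.OutputStream;
import java.net.HttpURLConnection;
import java.net.InetSocketAddress;
import java.net.URL;
import java.nio.charset.StandardCharsets;

public class BogoRps {

    // Serve one very small static file, like the test file described above.
    static HttpServer startServer() throws Exception {
        HttpServer server = HttpServer.create(new InetSocketAddress(0), 0);
        byte[] body = "hello\n".getBytes(StandardCharsets.US_ASCII);
        server.createContext("/small.txt", exchange -> {
            exchange.sendResponseHeaders(200, body.length);
            try (OutputStream os = exchange.getResponseBody()) {
                os.write(body);
            }
        });
        server.start();
        return server;
    }

    // Fetch the URL n times and return requests per second.
    static double measure(String url, int n) throws Exception {
        long start = System.nanoTime();
        for (int i = 0; i < n; i++) {
            HttpURLConnection conn =
                (HttpURLConnection) new URL(url).openConnection();
            try (InputStream in = conn.getInputStream()) {
                in.readAllBytes();
            }
            conn.disconnect();
        }
        return n / ((System.nanoTime() - start) / 1e9);
    }

    public static void main(String[] args) throws Exception {
        HttpServer server = startServer();
        String url = "http://127.0.0.1:"
            + server.getAddress().getPort() + "/small.txt";
        measure(url, 50);                // warmup pass, so the JIT stabilizes
        double rps = measure(url, 200);  // the "bogo-rps" number
        System.out.println("bogo-rps positive: " + (rps > 0));
        server.stop(0);
    }
}
```

A real run would replace the client loop with http_load (or ab) on a second machine, per the two-machine bullet above; the warmup pass corresponds to the hotspot-stabilization point.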


> Please stop this line of arguments - I personally feel
> you treat me like a stupid who doesn't know anything
> about that and has to be reminded of the basics.
>

 I was responding to Remy's -1. Restating the obvious isn't
a personal slam, it's a way to reveal hidden assumptions
and find the roots of a technical disagreement.

 It also helps non-specialists to follow the lines of
reasoning (there are 1000's of other people on this list,
and I wasn't just talking to Remy). It's an
entirely reasonable thing to do, and I'm certainly not
going to stop. It is very definitely not meant to imply
that you, Costin, personally don't know this stuff, and I
will try to be more clear about that in future posts.


-- 
Christopher St. John cks@distributopia.com
DistribuTopia http://www.distributopia.com

--
To unsubscribe, e-mail:   <ma...@jakarta.apache.org>
For additional commands, e-mail: <ma...@jakarta.apache.org>


Re: Performance Test Workload: Small Static File

Posted by co...@covalent.net.
On Mon, 24 Jun 2002, Christopher K. St. John wrote:

> > And I can assure you that everyone
> > working on performance seriously is running those tests
> > and evaluating the performance periodically.
> > 
> 
>  Nah, I'm not going to take your word for it. Taking your
> assurance on performance would be unprofessional. That
> doesn't mean I think you're dumb, it means I don't trust
> anybody's word on performance without more information.

:-)

What don't you believe? That I actually do test performance
periodically? That my tests show that the factors that matter
to me are improving?
 
>  But how about this: I'll show you mine if you show me
> yours. Post the workload you've used to measure the
> performance improvement, and I'll post the one I used for
> testing static file serving performance. Actual results
> can wait, I just want to see the benchmark.

I'm not using tomcat for static files, so I don't test it or care
too much about it.

What I test most of the time is:
- the overhead of a servlet (HelloW; two test cases, one with a writer,
  one with a stream)
- the overhead of a JSP
- startup time (I like a small number here)

I test with the HTTP/1.0 implementation in 3.3 as a reference and,
sometimes, as a php/mod_perl equivalent, a static page with the same
content served by Apache 2.0. And of course mod_jk in various
combinations (always the socket channel, sometimes the unix channel
and JNI).
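For anyone following along who hasn't wired up mod_jk: the "Apache in front of tomcat" combinations boil down to a worker definition plus a mount, roughly like this (host, port, and paths are illustrative, and directive details vary between mod_jk versions):

```
# workers.properties -- one ajp13 worker (host and port are examples)
worker.list=ajp13
worker.ajp13.type=ajp13
worker.ajp13.host=localhost
worker.ajp13.port=8009

# httpd.conf additions -- everything under /examples goes to tomcat,
# anything else is served by Apache directly
JkWorkersFile conf/workers.properties
JkMount /examples/* ajp13
```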

>  If you're really using ab against HelloWorldServlet, then
> just say it that way, one sentence, no need to get all
> formal.

I don't - HelloWorldServlet calls some resource bundle code,
and that's extra overhead.

I use the HelloW servlet (which only outputs, nothing else),
a JSP with the same content, and a servlet that uses a stream for
output. That's for testing container overhead.

I use ab most of the time, but sometimes JMeter ( for the nice
graphs - and better showing peaks ).


>  - http_load used to retrieve a very small http file. This
>  isn't necessarily a servlet test. In Tomcat's case it
>  ends up testing things like:
> 
>    - How fast the defaultservlet runs. This is
>    uninteresting to many people, but very important to
>    others.
> 
>    - How fast the http/ajp13 connector code runs.
> 
>    - The speed of the network stack on your test
>      computers.
> 
>  - I call the results bogo-rps (requests per second),
>  because it's a totally bogus way to measure system
>  performance. (But it's useful for doing specific kinds
>  of tuning)
> 
>  - results and details of test setup are a separate issue,
>  I need to re-run against Coyote in any case. But at
>  least:
> 
>   - test with and without an Apache front-end
> 
>     - Apache serving static files (not a tomcat test at
>       all, but a baseline)
>     - Apache mapping static file serving to tomcat
>     - Tomcat standalone
>  
>   - No other workload on Tomcat. Restart for
>     every run. (but include "warmup" so hotspot
>     stabilizes)
> 
>   - Test run on at least two machines (loopback interface
>     skews results)

That's a perfectly valid test case. Not the perfect one
(I don't think such a thing exists).

Are you saying that with this test scenario you can't see
any improvements between 3.2 and 3.3, or between 4.0 and 4.1?

>  I was responding to Remy's -1. Restating the obvious isn't
> a personal slam, it's a way to reveal hidden assumptions
> and find the roots of a technical disagreement.

Remy's -1 referred to the scope (i.e. he believes a performance test
should be in commons).

I agree with your position - that it should be included in the tomcat5
test suite, if someone is willing to do the work (i.e. you :-).

Running the tests is very time-consuming, and publishing results 
or creating a formal framework is even more work - I prefer
to stick with my own private tests.


>  It also helps non-specialists to follow the lines of reasoning (there
> are 1000's of other people on this list, and I wasn't just talking to
> Remy). It's an entirely reasonable thing to do, and I'm certainly not
> going to stop. It is very definitely not meant to imply that you,
> Costin, personally don't know this stuff, and I will try to be more
> clear about that in future posts.

The 5.0 proposal is intended for tomcat committers who are actively
working on the code. I think we can safely assume that most of us know
the basics, and that we do our homework.

And the goal is to get an agreement on the next tomcat - and create 
a plan that is acceptable to all. I don't think putting numbers
on the performance goal helps in any way - what's important 
is agreeing that we want more than we have, and that this is a goal
for 5.0.

Implementation details can be discussed separately.

Costin



