Posted to dev@tomcat.apache.org by Roy Wilson <de...@bellatlantic.net> on 2000/11/11 19:34:32 UTC

What does ab measure, etc.


>>>>>>>>>>>>>>>>>> Original Message <<<<<<<<<<<<<<<<<<

On 11/11/00, 11:15:21 AM, cmanolache@yahoo.com wrote regarding Re: 
Maximum processing times:


> As you said, all this is philosophy and personal taste.

> My point was that 9 sec max response time is _bad_ - it doesn't matter if
> it happens for 1% or 0.1% of the users.

Agreed that 9sec is bad, but if it happens to 10% of the users that's 
worse (from a loss standpoint) than if it's 1%.

> Many sites do have something to do with making/losing money - and 9 sec (
> overhead only in tomcat !! - you must add the application overhead ) is
> more than a normal person will want to wait.

This is where I don't follow you. As far as I can tell from looking at 
the code, ab measures from connection attempt to receipt of response. To 
me that implies that servlet processing time must be included. OTOH, my 
understanding of socket-related processing could be better :-). Please 
clarify.

> Of course, here it comes the hardware issue - you can limit the number of
> connections to 20/instance ( since at 20 the performance is decent ) and
> use a bigger pool. Or buy faster hardware. ( or choose a different
> container - Resin and Orion are known, or claim, to be very fast ).

WIRED magazine had a piece in the August 2K issue that talked about how 
Loudcloud grew out of watching sites link up to AOL and then crash 
because they couldn't handle the additional load. Apparently no one had 
thought that this might be an issue. Should those interested in making 
money size their systems through a trial-and-error process? The question 
becomes: when is the best time to commit to one of the options you 
mention?

> Anyway, I'm happy we're having this discussion.

Same here.

> What about using JMeter - it shows you a nice graph of response times (
> and if you enable verbose GC you'll notice some patterns :-). (That's why
> so much time was spent in 3.x changing the architecture for more reuse.)

I've got OptimizeIt to figure out, then I'll look at JMeter. Can it be 
used in localhost mode like ab?

> Some time ago I used a Perl program ( that was testing a real application
> - i.e. did login, accessed a number of pages in a certain order, etc) and
> saved all response times in a file, then used StarOffice (the Excel side
> ) to do nice graphs.

I might check back with you later on that app.

> If you have the time ( because it's going to take a huge amount of time
> ) - I'm sure the data will be much better. That's the problem with
> performance tuning - you save response time, but it's taking (too much
> of) your own time...

What an irony :-)

Roy


> Costin




> > Costin, good point about the importance of the maximum, as Craig also
> > noted. Here's the data (all times in ms) I left out in the today's
> > earlier post on ReqInfoExample:
> >
> > C     avg   max   max/avg    avg    max    max/avg
> >       con   con    ratio     proc   proc    ratio
> >  1     0      4      4+        12    100     8.3
> > 10     0     47     47+       147    190     1.29
> > 20     0     42     42+       291   3361    11.55
> > 30     0      4      4+       441   9368    21.24
> > 40     0      5      5+       612   9732    15.90
> >
> > Here's some data also for HelloWorldExample (C is less than 30 because of
> > thread dumping)
> >
> >  1     0      5      5+        25    484    19.36
> > 10     0    130    130+       138    393     2.85
> > 20     0    128    128+       316   3240    10.25
> >
> > So, what is a "good" max/avg ratio? And, for what machine? I'd be
> > surprised if someone saw these ratios on a Pentium 650Mhz.
> >
> > BTW, it is possible to calculate (after making some assumptions) the
> > percentage of requests that will have response times larger than some
> > value (like 10 - Z seconds, where Z represents some level of network
> > delay).
> >
> > Roy
> >
> > Roy Wilson
> > E-mail: designrw@bellatlantic.net
> >
> > ---------------------------------------------------------------------
> > To unsubscribe, e-mail: tomcat-dev-unsubscribe@jakarta.apache.org
> > For additional commands, e-mail: tomcat-dev-help@jakarta.apache.org
> >



Here we go again :-)

Posted by Roy Wilson <de...@bellatlantic.net>.
Costin,

See below.

> Yes, losing 100 customers per day is worse than losing 10. But you should
> design your server in a way that will have all times less than 9 secs; I
> see no point in running a server if you know that 1% of requests will take
> 9 secs.

OK. Load levels are unpredictable, so you probably can't rule out a few 9 
sec response times in advance. To me, that requires a probability 
calculation or earplugs to keep from hearing users scream :-). 
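One simple, assumption-light version of that probability calculation is the empirical one: record per-request response times and compute the fraction that exceed a chosen threshold. A minimal sketch (the sample values are the avg/max processing times from the table earlier in the thread, and the 9000 ms cutoff is just illustrative):

```java
import java.util.Arrays;

public class TailFraction {
    // Fraction of recorded response times (ms) exceeding a threshold.
    static double fractionOver(long[] timesMs, long thresholdMs) {
        long over = Arrays.stream(timesMs)
                          .filter(t -> t > thresholdMs)
                          .count();
        return (double) over / timesMs.length;
    }

    public static void main(String[] args) {
        // Illustrative samples, taken from the avg/max proc columns above.
        long[] samples = {12, 100, 147, 190, 291, 3361, 441, 9368, 612, 9732};
        // With Z = 1 sec of assumed network delay, flag anything over 9000 ms.
        System.out.println(fractionOver(samples, 9000)); // prints 0.2
    }
}
```

With enough samples this gives the percentage of users who would actually see the painful response times, without assuming any particular distribution.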

<snip>

> Yes, ab measures the time it takes to send a request and receive the
> response.

> If your servlet is doing a database access or something like that you have
> to add this time - if the Ping servlet takes 2 sec, and a database query
> takes 1 sec you'll probably get about 3 sec per request.

Here's where I need additional clarification. I assume that, in localhost 
mode (ignoring TCP/IP processing time for the loopback interface), the 
time from ab sending the request to ab receiving the complete response 
occurs in a single thread, so that the elapsed time calculated by ab 
would be (using your example) 2 seconds. 

If you are saying that database activity isn't fully synchronized with 
the servlet, so that the database may be busy doing something after 
sending data back to the servlet, I agree. But that affects system CPU 
and disk utilization, not start-to-end servlet processing time. The 
database processing time needed to search, retrieve, and send data to the 
servlet would, however, be included in the end-to-end time recorded by ab. 
Again, this is based on my reading of the ab code. I hope anybody 
listening will set me straight with a procedure/line-number reference to 
ab if I am mistaken (which rarely happens :-)). 

<snip>

> > > What about using JMeter - it shows you a nice graph of response times (
> > > and if you enable verbose GC you'll notice some patterns :-). (That's why
> > > so much time was spent in 3.x changing the architecture for more reuse.)
> >
> > I've got OptimizeIt to figure out, then I'll look at JMeter. Can it be
> > used in localhost mode like ab?

> Yes, but it's a bit harder to use it with many connections ( it's nice up
> to 20 ). I used to combine it with ab ( i.e. use ab to load the server,
> like 40 concurrent connections, and JMeter to display a chart with an
> additional 10 connections ).

Thanks for the info. I'll try it.

<snip>

> What's in bad shape is parameter and cookie handling - but that's not
> visible in a simple request ( only in a simple request with parameters :-).

Thanks for the tips.

> It's "proprietary code", and I don't have it ( one of the previous jobs ).
> But it's very easy to write a small program to do that.

I'll plan to get to that in the future.

> I was thinking of an "ant" task, like the GTest used in tomcat's tests and
> watchdog ( a few enhancements are needed ). Then you can do your
> "scripts" in XML.

I'll take a look at GTest. I just started fiddling with Ant today. Like 
an idiot, I took a full-blown build.xml used to generate a production 
system and wondered why I couldn't instantaneously see what was what. 
Then [light bulb flashes] I thought, "Why not start simple, from scratch?" 
As the General Electric Corporation in the US could have said, "(Slow) 
Progress Is Our Most Important Product". :-)
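For anyone else starting simple, a minimal build.xml along these lines (the project, target, and directory names are just examples) is enough to compile a tree of sources:

```xml
<project name="simple" default="compile" basedir=".">
  <target name="prepare">
    <mkdir dir="build"/>
  </target>
  <target name="compile" depends="prepare">
    <javac srcdir="src" destdir="build"/>
  </target>
  <target name="clean">
    <delete dir="build"/>
  </target>
</project>
```

Everything else in a production build file is usually layered on top of a skeleton like this, which makes it much easier to see what each target actually does.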

Roy
-- 
Roy Wilson
E-mail: designrw@bellatlantic.net

Re: What does ab measure, etc.

Posted by cm...@yahoo.com.
> > My point was that 9 sec max response time is _bad_ - it doesn't matter if
> > it happens for 1% or 0.1% of the users.
> 
> Agreed that 9sec is bad, but if it happens to 10% of the users that's 
> worse (from a loss standpoint) than if it's 1%.

Yes, losing 100 customers per day is worse than losing 10. But you should
design your server in a way that will have all times less than 9 secs; I
see no point in running a server if you know that 1% of requests will take
9 secs.


> > Many sites do have something to do with making/losing money - and 9 sec (
> > overhead only in tomcat !! - you must add the application overhead ) is
> > more than a normal person will want to wait.
> 
> This is where I don't follow you. As far as I can tell from looking at 
> the code, ab measures from connection attempt to receipt of response. To 
> me that implies that servlet processing time must be included. OTOH, my 
> understanding of socket-related processing could be better :-). Please 
> clarify.

Yes, ab measures the time it takes to send a request and receive the
response. 

If your servlet is doing a database access or something like that you have
to add this time - if the Ping servlet  takes 2 sec, and a database query
takes 1 sec you'll probably get about 3 sec per request.
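As a sanity check on what ab records per request, here is a minimal, hypothetical sketch that measures the same way: from connection attempt to receipt of the full response. The embedded JDK HttpServer and the /ping endpoint are stand-ins for Tomcat and a real servlet, not anything from ab itself:

```java
import com.sun.net.httpserver.HttpServer;
import java.io.InputStream;
import java.net.HttpURLConnection;
import java.net.InetSocketAddress;
import java.net.URL;
import java.nio.charset.StandardCharsets;

public class AbStyleTimer {
    // Time one request end-to-end: connect, send, and drain the response.
    // This mirrors the per-request interval ab records.
    static long timeLocalPing() throws Exception {
        // Tiny in-process server standing in for Tomcat (hypothetical /ping).
        HttpServer server = HttpServer.create(new InetSocketAddress(0), 0);
        server.createContext("/ping", exchange -> {
            byte[] body = "pong".getBytes(StandardCharsets.UTF_8);
            exchange.sendResponseHeaders(200, body.length);
            exchange.getResponseBody().write(body);
            exchange.close();
        });
        server.start();
        int port = server.getAddress().getPort();

        long start = System.currentTimeMillis();
        HttpURLConnection conn = (HttpURLConnection)
                new URL("http://localhost:" + port + "/ping").openConnection();
        try (InputStream in = conn.getInputStream()) {
            while (in.read() != -1) { /* drain the full response */ }
        }
        long elapsed = System.currentTimeMillis() - start;
        server.stop(0);
        return elapsed;
    }

    public static void main(String[] args) throws Exception {
        System.out.println("end-to-end time: " + timeLocalPing() + " ms");
    }
}
```

On this view, a servlet that spends 2 sec handling the request plus a synchronous 1 sec database query would both fall inside the measured window, so the client would see roughly 3 sec.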


> WIRED magazine had a piece in the August 2K issue that talked about how 
> Loudcloud grew out of watching sites link up to AOL and then crash 
> because they couldn't handle the additional load. Apparently no one had 
> thought that this might be an issue. Should those interested in making 
> money size their systems through a trial-and-error process? The question 
> becomes: when is the best time to commit to one of the options you 
> mention?

And you may add the slashdot factor :-) Or Christmas. 

> > What about using JMeter - it shows you a nice graph of response times (
> > and if you enable verbose GC you'll notice some patterns :-). (That's why
> > so much time was spent in 3.x changing the architecture for more reuse.)
> 
> I've got OptimizeIt to figure out, then I'll look at JMeter. Can it be 
> used in localhost mode like ab?

Yes, but it's a bit harder to use it with many connections ( it's nice up
to 20 ). I used to combine it with ab ( i.e. use ab to load the server,
like 40 concurrent connections, and JMeter to display a chart with an
additional 10 connections ).

OptimizeIt is a very nice tool for finding what's wrong with the code - but
it's good only up to a point. For example, tomcat 3.3 has only a very small
amount of garbage per request ( and it's distributed in many places in the
code ), and most of the garbage will be removed when we finish the 
String->MessageByte conversion. After that I think we'll be very close to
0 GC, and you'll probably not see any further gains from reducing memory 
usage ( it's already very low ).

Regarding CPU use, it's also well distributed now ( with 2 or 3 hotspots
in byte-char conversion, and that's in the process of being solved ).

What's in bad shape is parameter and cookie handling - but that's not
visible in a simple request ( only in a simple request with parameters :-).

> > Some time ago I used a Perl program ( that was testing a real application
> > - i.e. did login, accessed a number of pages in a certain order, etc) and
> > saved all response times in a file, then used StarOffice (the Excel side
> > ) to do nice graphs.
> 
> I might check back with you later on that app.

It's "proprietary code", and I don't have it ( one of the previous jobs ).
But it's very easy to write a small program to do that.
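For reference, a minimal sketch of such a program. The step names, the empty Runnable bodies, and the times.csv filename are all made up; in a real script each Runnable would log in or fetch a page over HTTP:

```java
import java.io.FileWriter;
import java.io.IOException;
import java.io.PrintWriter;
import java.util.ArrayList;
import java.util.LinkedHashMap;
import java.util.List;
import java.util.Map;

public class ResponseLogger {
    // Run each named step in order, recording its elapsed time in ms.
    static List<String> timeSteps(LinkedHashMap<String, Runnable> steps) {
        List<String> rows = new ArrayList<>();
        for (Map.Entry<String, Runnable> e : steps.entrySet()) {
            long start = System.nanoTime();
            e.getValue().run();
            long ms = (System.nanoTime() - start) / 1_000_000;
            rows.add(e.getKey() + "," + ms);
        }
        return rows;
    }

    public static void main(String[] args) throws IOException {
        LinkedHashMap<String, Runnable> script = new LinkedHashMap<>();
        // Hypothetical scripted session; each step would do the real request.
        script.put("login", () -> {});
        script.put("listPage", () -> {});
        // Save all response times to a file for graphing in a spreadsheet.
        try (PrintWriter out = new PrintWriter(new FileWriter("times.csv"))) {
            out.println("step,ms");
            for (String row : timeSteps(script)) {
                out.println(row);
            }
        }
    }
}
```

The CSV can then be pulled into StarOffice (or any spreadsheet) to get the response-time graphs described above.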

I was thinking to an "ant" task, like the GTest used in tomcat's tests and
watchdog. ( few enhancements are needed ). Then you can do your
"scripts" in xml.


Costin