Posted to user@jmeter.apache.org by Ivan Rancati <iv...@sharpmind.de> on 2004/06/03 11:31:40 UTC

Beginner: Aggregate report vs. View results in Table

good morning,

I have some beginner questions about how exactly to read measurements for a 
simple FastCGI website (or perhaps I don't understand how to use 
loops), and I did not find an exact answer in the documentation.
My test plan is simple: a thread group with 20 threads, no ramp-up 
time, looping 15 times,
then
Interleave controller (ignore sub-controller blocks is selected)
    HTTP request 1
    HTTP request 2
    HTTP request 3
    HTTP request 4
View Results in table
View results in tree
Aggregate report

The 4 HTTP requests go to FastCGI pages that have very similar execution 
times, and each page has a simple Response Assertion (the same for 
all 4 pages).

At the end of the run, View results in Table gives me an average of 1884 
ms for the 300 samples.
Aggregate Report gives me a throughput rate between 2.2/sec and 2.8/sec 
for each of the 4 HTTP requests, and a total rate of 10.1/sec.

Now, if 10.1 requests in total are processed per second, and I have 20 
threads, where does the average sample time of 1884 ms come from? I 
would expect something like
99 ms (since a throughput of 10.1 requests per second works out to about 
1000/10.1 = 99 ms per request), or
1980 ms, which is 99 ms times 20 (the number of threads).

Also, occasionally, in View Results in Table, I see a sample time of 0 
ms, which looks odd.

I am running JMeter 2.0.1 on German Windows 2000 (with the latest service 
packs) and JVM 1.4.2_04. The server is a Linux box with Apache and FastCGI.

Apologies if this is already covered in some faq or in the docs, I could 
not find it.

thanks and best regards,
Ivan Rancati
QA engineer - SharpMind.de

---------------------------------------------------------------------
To unsubscribe, e-mail: jmeter-user-unsubscribe@jakarta.apache.org
For additional commands, e-mail: jmeter-user-help@jakarta.apache.org


Re: Beginner: Aggregate report vs. View results in Table

Posted by Michael Stover <ms...@apache.org>.
20 * 99 = 1980
average response = 1884
diff = 96 ms, or
     = 99 - 1884/20 = 99 - 94.2 = 4.8 ms/request

Remember, JMeter is calculating actual throughput - this includes time
spent on the JMeter side doing its own work.  So, about 4.8 ms/request are
spent in context switching and other JMeter processing.
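A quick back-of-the-envelope script makes the arithmetic explicit. The numbers are the ones quoted in the thread; the overhead figure is implied by the gap between the expected and observed averages, not something JMeter reports directly:

```python
# Check the relationship between threads, throughput, and average
# response time, using the figures from the run described above.

threads = 20
throughput_per_sec = 10.1   # total rate from Aggregate Report
avg_response_ms = 1884      # average from View Results in Table

# Time per request across the whole test, if requests were processed
# back-to-back: 1000 ms / 10.1 req/s, about 99 ms.
ideal_ms_per_request = 1000 / throughput_per_sec

# With 20 concurrent threads, each request can take up to
# threads * ideal_ms_per_request and still sustain the same total rate.
expected_avg_ms = threads * ideal_ms_per_request  # about 1980 ms

# The gap between expected and observed averages, spread per request,
# is the implied JMeter-side overhead.
overhead_ms = (expected_avg_ms - avg_response_ms) / threads  # about 4.8 ms

print(f"ideal per-request time: {ideal_ms_per_request:.1f} ms")
print(f"expected average:       {expected_avg_ms:.1f} ms")
print(f"implied overhead:       {overhead_ms:.1f} ms/request")
```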

-Mike

-- 
Michael Stover <ms...@apache.org>
Apache Software Foundation




To loop or not to loop, answer my question.

Posted by Steve Luong <lu...@hotmail.com>.
Hey guys,
I need a little help here.  Can someone explain to me the difference
between looping and running each test case individually?

I set up a test case to hit an SSL connection 50 times in a second.  I ran
the test without looping 3 times and took the average.  Then I looped the
test 3 times and took the average.  The single runs completed with no errors
and a lower average time than the looped run.

Which would be the best way to find the average time of the run:
loop or no loop?



Re: Beginner: Aggregate report vs. View results in Table

Posted by peter lin <jm...@yahoo.com>.
 
keep in mind the average response time is not a true representation. The actual throughput might be more useful in this case. Most likely the response times are varying dramatically, which causes the average to be higher than what you expect. In the nightly build, there's a new distribution graph I just added last week.
 
this may give you a better picture of what is happening. In the past I've seen this happen when the dataset clumps at opposite ends of the min/max response times. Even though 50% of the requests may be finishing within 100 ms, the average can be skewed if 20% of the response times are unusually long.
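As a hypothetical illustration (these sample values are invented, not taken from the run above), a small fraction of very slow responses can pull the mean far above the typical response time:

```python
# Hypothetical bimodal response times: most requests are fast,
# but a minority are very slow.
import statistics

# 80 fast responses around 100 ms, 20 slow ones around 9000 ms.
samples_ms = [100] * 80 + [9000] * 20

mean = statistics.mean(samples_ms)      # skewed upward by the slow tail
median = statistics.median(samples_ms)  # the "typical" response

print(f"mean:   {mean:.0f} ms")    # 1880 ms
print(f"median: {median:.0f} ms")  # 100 ms
```

Here half the requests finish in 100 ms, yet the average lands near 1900 ms, which is why a distribution graph tells you more than the mean alone.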
 
I hope that helps.
 
peter


