Posted to user@jmeter.apache.org by "Parvathaneni, Sireesha" <si...@gmail.com> on 2008/11/18 16:51:05 UTC

Jmeter Reporting

Hi
- Is it possible to run one test, controlled by a single JMeter script, and
produce one complete report?
- Is it possible, in one test, to easily change levels of threads (users)
for periods of time?
For example: 10 users for 10 minutes, then ramp to 25 users for 10 minutes,
then ramp to 50 users for 10 minutes, then ramp to 100 users for 10 minutes.

- #Runs / min response time / max RT / average RT / #errors / 95% RT /
throughput / data transfer rates

- Is it possible to differentiate the number of pass/fail samples for each
thread?
Attempted: (count)  Successful: (count)  Failed: (count)

- Produce graphs of the JMeter test results - show min, max, average, std.
dev, 95% line
    Some examples:
        - Response time vs. elapsed time
        - Concurrent users vs. elapsed time
        - Errors vs. elapsed time
        - Throughput vs. elapsed time
        - Response time vs. #requests/sec
        - Errors vs. #requests/sec
        - Response time vs. concurrent users
        - Throughput vs. concurrent users
        - Response time (min, max, avg) vs. request type (label)
        - Concurrent users vs. #requests/sec

Please let me know if there is a way to get the above.

Thanks,
Siri

Re: Jmeter Reporting

Posted by mahesh kumar <pm...@gmail.com>.
If you add an Aggregate Report listener and write the results to a .csv
file, it records each request along with its status code, so from the
status code we can tell whether a request passed or failed: if the status
code is less than 500 the request passed, otherwise it failed. We can also
add check strings (response assertions) to verify the request response.

As for graphical representation, JMeter will not provide these graphs; we
need to take the data above and plot the graphs ourselves.

--Pmk
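[Editorial sketch of the post-processing Pmk describes, assuming the results
were saved as CSV with a header row and JMeter's default column names - in
particular the per-sample `success` flag, written as "true"/"false". Counting
on that flag rather than on "status code < 500" also catches assertion
failures.]

```python
import csv
from collections import defaultdict

def pass_fail_counts(jtl_path):
    """Count attempted/successful/failed samples per request label in a
    JMeter CSV results (.jtl) file saved with a header row."""
    counts = defaultdict(lambda: {"attempted": 0, "successful": 0, "failed": 0})
    with open(jtl_path, newline="") as f:
        for row in csv.DictReader(f):
            c = counts[row["label"]]
            c["attempted"] += 1
            # JMeter records the per-sample outcome as "true"/"false"
            if row["success"].strip().lower() == "true":
                c["successful"] += 1
            else:
                c["failed"] += 1
    return dict(counts)
```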

On Wed, Nov 19, 2008 at 9:17 PM, Parvathaneni, Sireesha <
sireesha.parvathaneni@gmail.com> wrote:

> [quoted text trimmed]

Re: Jmeter Reporting

Posted by "Parvathaneni, Sireesha" <si...@gmail.com>.
OK, and what about the pass/fail counts from the tests? JMeter gives only
the total number of samples done in the test, but not the succeeded ones,
which skews the requests/sec results...

And I also need a graphical view of the results. Is there a way to get the
above two as well? Thanks!

On Tue, Nov 18, 2008 at 11:06 AM, Steve Kapinos
<St...@tandberg.com>wrote:

> [quoted text trimmed]

RE: Jmeter Reporting

Posted by Steve Kapinos <St...@tandberg.com>.
Yes, but such variation and computation isn't going to be turn-key.
This will require you to post-process logs to get the details you want.
It's probably simpler to make multiple runs with varying inputs, each
generating its own result fileset, and then post-process them as a group
to your preferences.
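[Editorial sketch of that post-processing, assuming a CSV results file with
a header row and JMeter's default `elapsed` column (milliseconds). The 95%
line here is a simple nearest-rank percentile, which may differ slightly
from what JMeter's own listeners report.]

```python
import csv
import math

def summarize(jtl_path):
    """Min/max/average response time and 95% line from the 'elapsed'
    column of a JMeter CSV results file (times in milliseconds)."""
    with open(jtl_path, newline="") as f:
        elapsed = sorted(int(row["elapsed"]) for row in csv.DictReader(f))
    n = len(elapsed)
    rank = max(math.ceil(0.95 * n) - 1, 0)  # nearest-rank index of 95% line
    return {
        "samples": n,
        "min": elapsed[0],
        "max": elapsed[-1],
        "avg": sum(elapsed) / n,
        "p95": elapsed[rank],
    }
```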

-----Original Message-----
From: Parvathaneni, Sireesha [mailto:sireesha.parvathaneni@gmail.com] 
Sent: Tuesday, November 18, 2008 10:51 AM
To: JMeter Users List
Subject: Jmeter Reporting

[quoted text trimmed]
---------------------------------------------------------------------
To unsubscribe, e-mail: jmeter-user-unsubscribe@jakarta.apache.org
For additional commands, e-mail: jmeter-user-help@jakarta.apache.org


Re: Jmeter Reporting

Posted by mahesh kumar <pm...@gmail.com>.
Hi,

You can create different thread groups, each with its own ramp-up and its
own scheduler, so that once one thread group completes its ramp-up the
next thread group starts the test. That is, we need to create a separate
thread group for each ramp-up stage. This is just one way to solve the
issue, and not exact :)
--Mahesh
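[One way to derive the Scheduler settings for Mahesh's multi-thread-group
approach: each stage becomes its own Thread Group, with a startup delay
equal to the total duration of the stages before it. The `stage_schedule`
helper is just an illustration, not a JMeter API.]

```python
def stage_schedule(stage_users, stage_minutes=10):
    """For a staged load profile (one Thread Group per stage), compute
    each group's Scheduler 'Startup delay' and 'Duration' in seconds,
    so the groups run back to back."""
    stage_s = stage_minutes * 60
    return [
        {"threads": users, "startup_delay_s": i * stage_s, "duration_s": stage_s}
        for i, users in enumerate(stage_users)
    ]
```

For the 10/25/50/100-user example above, this gives startup delays of 0,
600, 1200, and 1800 seconds, each stage lasting 600 seconds; those numbers
go into each Thread Group's Scheduler fields.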

On Tue, Nov 18, 2008 at 9:21 PM, Parvathaneni, Sireesha <
sireesha.parvathaneni@gmail.com> wrote:

> [quoted text trimmed]