Posted to user@jmeter.apache.org by Erik Pragt <er...@jworks.nl> on 2012/03/20 17:21:38 UTC

Display total execution time for test plan

Hi all,

I've created a test plan to put some load on a flow of pages we have.
I'm quite new to JMeter, and I have a small question on how to get the
information I'm looking for. I've got a working test plan: I can see
the samples, the throughput, etc., but I can't find anywhere what the
time was to execute this test plan, or a single loop of it when I
execute it multiple times.

Can someone give me a small heads up how I can record and view this time?

Kind regards,

Erik Pragt

---------------------------------------------------------------------
To unsubscribe, e-mail: user-unsubscribe@jmeter.apache.org
For additional commands, e-mail: user-help@jmeter.apache.org


RE: Display total execution time for test plan

Posted by "Robin D. Wilson" <rw...@gmail.com>.
Sorry, didn't mean to offend with the 'summarily closed' comment. Poor choice of words.

As for reproducing the problem - I have more-or-less confirmed that the problem is related to that setting somehow. Since JMeter 2.4
doesn't have that setting, I can't test it with that.

But basically what I was seeing is that in JMeter 2.4, I got substantially higher 'throughput' and lower 'average' times - but when
I started calculating the times and throughput based on the overall test duration, the average and throughput numbers didn't make
sense. When I tested the same test case on JMeter 2.6, I had similar problems with the average times, but the throughput numbers
(much lower than with JMeter 2.4) lined up properly with the overall test duration I was seeing. After changing that setting, both the
throughput and average numbers were in line with the overall test duration.

Recall that I also mentioned an issue where I created a test case that inserts a 2 second delay into the response of a web request
(at the web server). When I tested a simple test case against that request, it was showing average times of ~600ms - which was
impossible because the minimum time was 2 seconds (the built-in delay in the response). Once I changed that setting, average
response times showed ~2400ms, which was in line with what I expected.
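[Editorial sketch] That kind of sanity check can be written down directly: with N threads issuing requests back-to-back for D seconds, the per-sample average implied by the wall clock is N * D / total_samples, and it can never be lower than a fixed server-side delay. The numbers below are illustrative only, not taken from the actual test runs.

```python
# Sanity-check a reported average against the overall test duration.
# With `threads` threads running requests back-to-back for `duration_s`
# seconds, the average time per sample implied by the wall clock is
# threads * duration_s / total_samples.

def implied_average_ms(threads, duration_s, total_samples):
    """Average ms per sample implied by the overall test duration."""
    return threads * duration_s * 1000.0 / total_samples

# Illustrative numbers: 10 threads, a 600 s run, 2500 samples
# -> 2400 ms per sample, consistent with a 2 s built-in delay plus
# real work; a reported ~600 ms average would be impossible.
print(implied_average_ms(10, 600, 2500))
```

If the listener-reported average disagrees badly with this implied figure, one of the two measurements is wrong.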

--
Robin D. Wilson
Sr. Director of Web Development
KingsIsle Entertainment, Inc.
VOICE: 512-777-1861
www.KingsIsle.com

-----Original Message-----
From: Philippe Mouawad [mailto:philippe.mouawad@gmail.com] 
Sent: Wednesday, March 21, 2012 10:34 AM
To: JMeter Users List
Subject: Re: Display total execution time for test plan

Hello,
This is the original thread:

   - http://www.mail-archive.com/user@jmeter.apache.org/msg01008.html

Regarding the "(*it was summarily closed* when they couldn't reproduce the
same problem)": it was not "summarily closed"! It was closed after
investigation by 3 committers, see:

   - https://issues.apache.org/bugzilla/show_bug.cgi?id=52189


See Rainer Jung's comment and investigation.
On my side, I also profiled the 2 versions and compared them, and that
comparison didn't show the mentioned difference.
Sebb did the same.
Sebb finally answered that the 2 settings (which I gave you):

   - sampleresult.useNanoTime=false
   - sampleresult.nanoThreadSleep=0

were not an explanation for the problem you mention, although you answered
that they had fixed the response-time issue you faced (which is a separate
issue).
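[Editorial sketch] For readers following along: properties like the two above are normally applied by appending them to user.properties (kept next to jmeter.properties in JMeter's bin directory) or by passing them with -J, then running the plan in non-GUI mode. File names like plan.jmx and results.jtl below are placeholders.

```shell
# Apply the two properties (adjust paths to your JMeter installation).
# Option 1: append them to user.properties:
cat >> user.properties <<'EOF'
sampleresult.useNanoTime=false
sampleresult.nanoThreadSleep=0
EOF

# Option 2: pass them on the command line and run in non-GUI mode,
# whose summariser output also reports the elapsed time of the run:
jmeter -n -t plan.jmx -l results.jtl \
    -Jsampleresult.useNanoTime=false \
    -Jsampleresult.nanoThreadSleep=0
```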

BUT, if you are able to reproduce the slowness between 2.5.1 (2.6?) and
2.4 with a usable Test Plan, then feel free to reopen the issue.


Regards
Philippe M.
http://www.ubik-ingenierie.com

On Wed, Mar 21, 2012 at 4:23 PM, Robin D. Wilson <rw...@gmail.com> wrote:

> I did raise a thread on this list when I noticed the behavior. I even
> created a bug for it (it was summarily closed when they couldn't reproduce
> the same problem). I eventually narrowed down the problem to the config
> settings I showed. (I can't remember for sure, but I think you even
> responded to my thread... I know sebb did, but I think I even recall you
> responding too.)
>
> NOTE: this problem is apparent _only_ when running JMeter on WinXP. I did
> not see the same issue on Win7.
>
> All that being said, I completely disagree about all other factors
> affecting the test times - that assumes a complex test, and that I haven't
> accounted for those in my script. My benchmarks are specifically configured
> for a very limited set of test variables, and they are designed to test the
> same thing each time. My test environment is configured so that I can limit
> other variables from influencing my tests. I can virtually guarantee that
> if I run the same test 10 times (or 100 times), the test duration will vary
> by less than 1% on each iteration (and less than 1% between any given test
> runs). If it does vary by more than that, I start looking for problems in
> my test or my code.
>
> As an example, I have tests that test only "login" on my web system. I get
> the home page, POST a login, and get the home page again (after the user
> has logged in). My site is a very high volume web site (millions of pages
> per day). I need to know if I've introduced any delays in the login process
> - because that will adversely affect the end-user experience for my
> customers. On multiple occasions I've identified DB queries that had been
> improperly indexed because of this test. Likewise I have specific test
> cases for 'registrations', 'forgot password', 'forums reply' and many other
> use cases - each one very limited in scope, and each one very specific in
> its test configuration. Each one exercises a very specific portion of the
> system - and each one tells me if my developers have screwed up something
> in the code they've delivered for the 'next' release.
>
> I run each of these test cases for each new version of our system. I get a
> benchmark of the performance of each version. The overall test duration is
> a good 'aggregate' measure of that benchmark. I can guarantee if the test
> duration increases significantly, something is wrong. (And if it goes down
> significantly, something is potentially wrong with the test, or we've done
> a really good job with the code for that release.)
>
> I'll agree, significant changes in the test duration do not tell me
> explicitly what is wrong - but it does flag that I need to look deeper. So
> it is a good bellwether for checking my work.
>
> Please accept that there are ways to test that you might not need - but
> that are still useful to others.
>
> --
> Robin D. Wilson
> Sr. Director of Web Development
> KingsIsle Entertainment, Inc.
> VOICE: 512-777-1861
> www.KingsIsle.com
>
>
> -----Original Message-----
> From: Adrian Speteanu [mailto:asp.adieu@gmail.com]
> Sent: Wednesday, March 21, 2012 9:46 AM
> To: JMeter Users List
> Subject: Re: Display total execution time for test plan
>
> Hi Robin,
>
> We've all had situations where calculations were wrong and I see where
> you're going. But are you sure about total test time?
>
> Average is a very weak statistical indicator, true, unless maybe the system
> is actually very stable, which I've rarely seen on test environments. This
> is why I recommend to everybody I know to use the 90th percentile, or
> better yet the 95th percentile (too bad it's not configurable so you could
> get this directly in JMeter). In this case, where you suspect something is
> affecting slightly the results, it makes sense to add up the response times
> of all samples you are interested in and comparing the before and after
> results. Sure - this removes the impact of statistical aberrations on your
> comparisons. I've recently run into a use-case where the expected
> difference would have been theoretically so small (smaller than the
> standard deviation) that comparing the sums made more sense. But the sum,
> just like the average shows sample times which measure system under test
> performance.
>
> But the total runtime of the test? There are factors that don't depend on
> the application that might affect total execution time. Normally, I would
> like to exclude anything that is not strictly needed from a benchmark. What
> if you use random timers (gaussian, uniform) or timers to limit or shape the
> throughput in the script configuration? They make a lot of sense to use and
> keep in a test script, and they would affect total execution time.
>
> There are tools that monitor the application over time and can show
> detailed response time per methods. If you feel that the results are
> averaged incorrectly, then you should compare response times in JMeter with
> results from such tools.
>
> Overall, I don't see the benefits, but I don't really understand what
> you've noticed. Have you raised a thread when you noticed that behaviour? I
> don't remember it, but I would like to read it now.
>
> Adrian
>
> On Wed, Mar 21, 2012 at 3:50 PM, Robin D. Wilson <rw...@gmail.com>
> wrote:
>
> > I think it depends a lot on what you are testing. If you are trying to
> > benchmark system performance, total test duration can be a good indicator
> > (and a quick-glance check) of system performance. For example, my
> > performance benchmarks are configured to run (without ramp up) between 10
> > and 300 threads (depending on the test), in such a way as to guarantee
> that
> > I am exercising the system at near capacity (for each benchmark).
> Because I
> > am running the benchmarks for each release of our system, I have a
> history
> > of the test performance.
> >
> > The total test duration is a good "overall" measure of the performance of
> > any given benchmark. And it is what I used to figure out that JMeter
> wasn't
> > properly reporting the 'average' sample times - until I changed my config
> > to use the following settings:
> >
> >        sampleresult.useNanoTime=false
> >        sampleresult.nanoThreadSleep=0
> >
> > I was seeing the same 'average' times, but the total execution time for a
> > thread group was increasing with each successive new release of code.
> This
> > suggested that something was slowing things down in my code-base. After I
> > made the above config change to JMeter (2.6) I could see that the average
> > sample times were actually much higher than my benchmarks had been
> > recording.
>
>
> > Without being able to explicitly see the execution duration times (or
> > using the average sample times to calculate the test duration), I would
> > have missed the fact that my benchmarks were getting worse.
> >
> > --
> > Robin D. Wilson
> > Sr. Director of Web Development
> > KingsIsle Entertainment, Inc.
> > VOICE: 512-777-1861
> > www.KingsIsle.com
> >
> > -----Original Message-----
> > From: Adrian Speteanu [mailto:asp.adieu@gmail.com]
> > Sent: Wednesday, March 21, 2012 6:03 AM
> > To: JMeter Users List
> > Subject: Re: Display total execution time for test plan
> >
> > Hi,
> >
> > I suspect you weren't interested in the start / end times of the test. But
> > usually this is how you get the total test time :).
> >
> > It doesn't make sense to have a test that gradually starts 1000 users and
> > stops when all of them have finished their planned sessions. It's not even
> > useful to measure how long the test took.
> >
> > Why: no live application works like this in production conditions. At the
> > beginning and end of the test you have fewer than 1000 users logged in.
> > What if the ramp-up of the 1000 threads affects average results, or even
> > total execution time?
> >
> > Check out Sergio's reply. You simulate what users do - true, but at macro
> > level, and you design your test plan in such a manner as to respect your
> > requirements: 1000 sessions logged in and a maximum of 10 hits / s. When
> > you have such a test, then you check out the statistics from Aggregate
> > Graph, Summary Report + make some nice graphs with some of the cooler
> > things that you monitor. Don't forget CPU, RAM, Network usage on the
> > server side. That's what you measure and compare before and after a change.
> >
> > And if a change affects a particular request, focus measurements and
> > reporting on that specific request.
> >
> > It's good to know what one user does, but it's better to know what workload
> > your app receives:
> >  - 1000 logged in, unique and active sessions
> >  - 80% make page views in section X
> >  - 10% use the forum (or whatever)
> >  - 1% upload files
> >  - 2% download stuff during their session
> >  ....
> > etc - this is just an example...
> >
> > If you get this right for your particular application, then you need to
> > measure the statistics of the response time: avg, median, 90th line. See
> > how they evolve during the test (this is even better than looking at the
> > values for the entire period) and so on. But all this makes measuring
> > total time largely irrelevant in 90% of tests or more.
> >
> > Adrian
> >
> > On Wed, Mar 21, 2012 at 12:06 AM, sergio <se...@bosoconsulting.it>
> wrote:
> >
> > > Hi Adrian and Eric,
> > >
> > > maybe I'm missing some point, but to me the total duration of the test
> > > is rarely important or predictable.
> > >
> > > If you need it as a baseline, you can use an aggregate result listener,
> > > run some test (maybe with one or two users) and then
> > > multiply the number of samples (possibly divided by the number of
> > > loops executed) by the average execution time.
> > > So you can easily get the net time needed for a single loop.
> > > This is net of time spent on timers.
> > >
> > > But when you start having 1000 users, you have a lot of parallelization,
> > > but obviously not 100% (which would be ideal).
> > > Also, in some cases, you have to add the ramp-up time.
> > >
> > > In my experience, we usually end up measuring the behaviour of a few key
> > > transactions (e.g. submit the order, or login/logout), under different
> > > situations and loads.
> > > The relationship between average, median, 90th percentile and max gives
> > > an idea of the way things go.
> > > Note that these transactions are also the longest.
> > >
> > > A static page or an image takes a few msec to download, and most of the
> > > time spent is due to network latency,
> > > which is not something we can easily optimize.
> > >
> > > This is my point of view, feel free to share your thoughts.
> > > best regards
> > >
> > > Sergio
> > >
> > > Il 20/03/2012 17:55, Erik Pragt ha scritto:
> > >
> > >  Hi Adrian,
> > >>
> > >> Thanks for the super quick reply. I'm a bit surprised by your first
> > >> remark though, so maybe I'm taking the wrong approach here.
> > >>
> > >> I'm currently developing an application which might have some
> > >> performance issues. Our current target is around 1000 simultaneous
> > >> logged in users, and around 10 concurrent 'clicks'. My current
> > >> approach was to sort of simulate that behavior in JMeter, check how
> > >> long it takes for the simulated users to finish their flows, make some
> > >> adjustments, test again, and check if my simulated users are faster
> > >> than before. Based on this, I need the total execution time, but
> > >> apparently this is not the usual approach, else it would certainly
> > >> have been in there somewhere.
> > >>
> > >> Could you recommend what would be a better way to test my scenario?
> > >> I'm not a performance rock star at all, so I'm very curious what would
> > >> be an effective way of improving the application while using JMeter as
> > >> the load generator.
> > >>
> > >> Kind regards,
> > >>
> > >> Erik Pragt
> > >>
> > >> On Tue, Mar 20, 2012 at 5:44 PM, Adrian Speteanu<as...@gmail.com>
> > >>  wrote:
> > >>
> > >>> Hi Erik,
> > >>>
> > >>> A very interesting idea.
> > >>>
> > >>> You can find the start / stop time in JMeter's log. When running from a
> > >>> console in non-GUI mode, you also get some more statistics than in the
> > >>> GUI (how long the test ran). You can also schedule a test to run for a
> > >>> certain amount of time, or to start / stop at certain hours (so you
> > >>> don't have to worry about this stuff).
> > >>>
> > >>> If, however, you are interested in response times - the sum of all
> > >>> requests - then things get more complicated.
> > >>>
> > >>> Adrian
> > >>>
> > >>> On Tue, Mar 20, 2012 at 6:21 PM, Erik Pragt<er...@jworks.nl>
> > >>>  wrote:
> > >>>
> > >>>  Hi all,
> > >>>>
> > >>>> I've created a test plan to put some load on a flow of pages we
> have.
> > >>>> I'm quite new to JMeter, and I have a small question on how to get
> the
> > >>>> information I'm looking for. I've got a working test plan, I can see
> > >>>> the samples, the throughput, etc, but I can't find anywhere what the
> > >>>> time was to execute this testplan, or a single loop of this testplan
> > >>>> when I execute it multiple times.
> > >>>>
> > >>>> Can someone give me a small heads up how I can record and view this
> > >>>> time?
> > >>>>
> > >>>> Kind regards,
> > >>>>
> > >>>> Erik Pragt
> > >>>>
> > >>>>
> > >>>>
> > >>
> > >>
> > >
> > > --
> > >
> > > Ing. Sergio Boso
> > >
> > >
> > >
> > >
> > >
> > >
> > >
> > >
> > >
> >
> >
> >
> >
>
>
>
>


-- 
Cordialement.
Philippe Mouawad.




> > >> ---------
> > >> To unsubscribe, e-mail: user-unsubscribe@jmeter.**apache.org<
> > user-unsubscribe@jmeter.apache.org>
> > >> For additional commands, e-mail: user-help@jmeter.apache.org
> > >>
> > >>
> > >
> > > --
> > >
> > > Ing. Sergio Boso
> > >
> > > In caso di erronea ricezione da parte di persona diversa, siete pregati
> > di
> > > eliminare il messaggio e i suoi allegati in modo definitivo dai vostri
> > > archivi e di volercelo comunicare immediatamente restituendoci il
> > messaggio
> > > via e-mail al seguente indirizzosergio@**bosoconsulting.it<
> > indirizzosergio@bosoconsulting.it><mailto:
> > > sergioboso@yahoo.it>
> > > L’interessato può, inoltre, esercitare tutti i diritti di accesso sui
> > > propri dati previsti dal decreto 196/2003, tra i quali i diritti di
> > > rettifica, aggiornamento e cancellazione, inviando un messaggio all’
> > > indirizzo:sergio@**bosoconsulting.it<
> > indirizzo%3Asergio@bosoconsulting.it><mailto:
> > > sergioboso@yahoo.it>
> > >
> > >
> > >
> > >
> > >
> > >
> > >
> ------------------------------**------------------------------**---------
> > > To unsubscribe, e-mail: user-unsubscribe@jmeter.**apache.org<
> > user-unsubscribe@jmeter.apache.org>
> > > For additional commands, e-mail: user-help@jmeter.apache.org
> > >
> > >
> >
> >
> > ---------------------------------------------------------------------
> > To unsubscribe, e-mail: user-unsubscribe@jmeter.apache.org
> > For additional commands, e-mail: user-help@jmeter.apache.org
> >
> >
>
>
> ---------------------------------------------------------------------
> To unsubscribe, e-mail: user-unsubscribe@jmeter.apache.org
> For additional commands, e-mail: user-help@jmeter.apache.org
>
>


-- 
Regards,
Philippe Mouawad.

RE: Display total execution time for test plan

Posted by "Robin D. Wilson" <rw...@gmail.com>.
I did raise a thread on this list when I noticed the behavior. I even created a bug for it (it was summarily closed when they couldn't reproduce the same problem). I eventually narrowed down the problem to the config settings I showed. (I can't remember for sure, but I think you even responded to my thread... I know sebb did, but I think I even recall you responding too.)

NOTE: this problem is apparent _only_ when running JMeter on WinXP. I did not see the same issue on Win7.

All that being said, I completely disagree about all the other factors affecting the test times - that assumes a complex test, and that I haven't accounted for those factors in my script. My benchmarks are specifically configured for a very limited set of test variables, and they are designed to test the same thing each time. My test environment is configured so that I can limit other variables from influencing my tests. I can virtually guarantee that if I run the same test 10 times (or 100 times), the test duration will vary by less than 1% on each iteration (and by less than 1% between any given pair of test runs). If it does vary by more than that, I start looking for problems in my test or my code.

As an example, I have tests that test only "login" on my web system. I get the home page, POST a login, and get the home page again (after the user has logged in). My site is a very high-volume web site (millions of pages per day). I need to know if I've introduced any delays in the login process - because that will adversely affect the end-user experience for my customers. On multiple occasions I've identified DB queries that had been improperly indexed because of this test. Likewise, I have specific test cases for 'registrations', 'forgot password', 'forums reply' and many other use cases - each one very limited in scope, and each one very specific in its test configuration. Each one exercises a very specific portion of the system - and each one tells me if my developers have screwed up something in the code they've delivered for the 'next' release.

I run each of these test cases for each new version of our system. I get a benchmark of the performance of each version. The overall test duration is a good 'aggregate' measure of that benchmark. I can guarantee if the test duration increases significantly, something is wrong. (And if it goes down significantly, something is potentially wrong with the test, or we've done a really good job with the code for that release.)

I'll agree, a significant change in the test duration does not tell me explicitly what is wrong - but it does flag that I need to look deeper. So it is a good bellwether for checking my work.

Please accept that there are ways to test that you might not need - but that are still useful to others.
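
The 1% rule of thumb described above can be expressed as a small helper (a sketch; the function name and threshold are illustrative, not anything JMeter provides):

```python
def duration_regression(baseline_ms, current_ms, threshold=0.01):
    """Fractional change in total test duration against a baseline run.

    Returns (change, suspicious): `suspicious` is True when the duration
    moved by more than `threshold` (1% by default) in either direction,
    which flags the run for a closer look.
    """
    change = (current_ms - baseline_ms) / baseline_ms
    return change, abs(change) > threshold
```

Feeding it the wall-clock durations of two benchmark runs gives a quick pass/fail check before digging into per-sampler statistics.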

--
Robin D. Wilson
Sr. Director of Web Development
KingsIsle Entertainment, Inc.
VOICE: 512-777-1861
www.KingsIsle.com




---------------------------------------------------------------------
To unsubscribe, e-mail: user-unsubscribe@jmeter.apache.org
For additional commands, e-mail: user-help@jmeter.apache.org


Re: Display total execution time for test plan

Posted by Adrian Speteanu <as...@gmail.com>.
Hi Robin,

We've all had situations where calculations were wrong, and I see where
you're going. But are you sure about total test time?

Average is a very weak statistical indicator, true, unless the system
is actually very stable, which I've rarely seen in test environments. This
is why I recommend to everybody I know to use the 90th percentile, or
better yet the 95th percentile (too bad it's not configurable so you could
get this directly in JMeter). In this case, where you suspect something is
slightly affecting the results, it makes sense to add up the response times
of all the samples you are interested in and compare the before and after
results. Sure - this removes the impact of statistical aberrations on your
comparisons. I've recently run into a use case where the expected
difference was theoretically so small (smaller than the standard
deviation) that comparing the sums made more sense. But the sum,
just like the average, shows sample times, which measure system-under-test
performance.
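
Since the 95th percentile isn't available in the GUI, one option is to post-process the JTL results file. A rough sketch, assuming the JTL was saved in CSV format with field names (so an 'elapsed' column exists) and using the nearest-rank percentile method:

```python
import csv
import math
import statistics

def load_elapsed(jtl_path):
    """Read per-sample elapsed times (ms) from a CSV-format JTL file.

    Assumes the file was saved with field names, so an 'elapsed'
    header column is present.
    """
    with open(jtl_path, newline="") as f:
        return [int(row["elapsed"]) for row in csv.DictReader(f)]

def percentile(values, pct):
    """Nearest-rank percentile: smallest value covering pct% of samples."""
    ordered = sorted(values)
    k = max(0, math.ceil(pct / 100 * len(ordered)) - 1)
    return ordered[k]

def summarize(values):
    """The statistics discussed in this thread, including the plain sum."""
    return {
        "avg": statistics.mean(values),
        "median": statistics.median(values),
        "90th": percentile(values, 90),
        "95th": percentile(values, 95),
        "sum": sum(values),
    }
```

Running `summarize(load_elapsed("results.jtl"))` on the before and after runs makes the sum comparison above a one-liner.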

But the total runtime of the test? There are factors that don't depend on
the application that might affect total execution time. Normally, I would
like to exclude anything that is not strictly needed from a benchmark. What if
you use random timers (Gaussian, uniform) or timers to limit or shape the
throughput in the script configuration? They make a lot of sense to use and
keep in a test script, and they would affect total execution time.

There are tools that monitor the application over time and can show
detailed response times per method. If you feel that the results are
averaged incorrectly, then you should compare the response times in JMeter
with results from such tools.

Overall, I don't see the benefits, but I don't really understand what
you've noticed. Did you raise a thread when you noticed that behaviour? I
don't remember it, but I would like to read it now.

Adrian


RE: Display total execution time for test plan

Posted by "Robin D. Wilson" <rw...@gmail.com>.
I think it depends a lot on what you are testing. If you are trying to benchmark system performance, total test duration can be a good indicator (and a quick-glance check) of system performance. For example, my performance benchmarks are configured to run (without ramp up) between 10 and 300 threads (depending on the test), in such a way as to guarantee that I am exercising the system at near capacity (for each benchmark). Because I am running the benchmarks for each release of our system, I have a history of the test performance.

The total test duration is a good "overall" measure of the performance of any given benchmark. And it is what I used to figure out that JMeter wasn't properly reporting the 'average' sample times - until I changed my config to use the following settings:

	sampleresult.useNanoTime=false
	sampleresult.nanoThreadSleep=0

I was seeing the same 'average' times, but the total execution time for a thread group was increasing with each successive new release of code. This suggested that something was slowing things down in my code-base. After I made the above config change to JMeter (2.6) I could see that the average sample times were actually much higher than my benchmarks had been recording.

Without being able to explicitly see the execution duration times (or using the average sample times to calculate the test duration), I would have missed the fact that my benchmarks were getting worse.
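
The cross-check described above - using the reported average sample times to estimate what the test duration should have been - can be sketched as follows (the names and the 10% tolerance are illustrative):

```python
def expected_duration_ms(total_samples, avg_sample_ms, threads):
    """Rough wall-clock estimate: threads running samples back-to-back,
    with no timers and near-perfect parallelism."""
    return total_samples * avg_sample_ms / threads

def averages_look_consistent(total_samples, avg_sample_ms, threads,
                             measured_duration_ms, tolerance=0.10):
    """True when the reported average agrees with the measured wall clock
    within `tolerance` (a fraction of the measured duration)."""
    expected = expected_duration_ms(total_samples, avg_sample_ms, threads)
    return abs(expected - measured_duration_ms) <= tolerance * measured_duration_ms
```

A large gap between the estimate and the stopwatch, as in the case above, points at either the timing source or the averages themselves.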

--
Robin D. Wilson
Sr. Director of Web Development
KingsIsle Entertainment, Inc.
VOICE: 512-777-1861
www.KingsIsle.com

-----Original Message-----
From: Adrian Speteanu [mailto:asp.adieu@gmail.com] 
Sent: Wednesday, March 21, 2012 6:03 AM
To: JMeter Users List
Subject: Re: Display total execution time for test plan

Hi,

I suspect you weren't interested in the start / end of the test. But usually
this is how you get total test time :).

It doesn't make sense to have a test that gradually starts 1000 users and
stops when all of them have finished their planned sessions. It's not even
useful to measure how long such a test took.

Why: no live application works like this in production. At the
beginning and end of the test you have fewer than 1000 users logged in. What if
the ramp-up of the 1000 threads affects the average results, or even the total
execution time?

Check out Sergio's reply. You simulate what users do - true, but at macro
level, and you design your test plan in such a manner as to respect your
requirements: 1000 sessions logged in and a maximum of 10 hits / s. When
you have such a test, then you check out the statistics from the Aggregate
Graph and Summary Report, and make some nice graphs with some of the cooler
things that you monitor. Don't forget CPU, RAM, and network usage on the server
side. That's what you measure and compare before and after a change.

And if a change affects a particular request, focus measurements and
reporting on that specific request.

It's good to know what one user does, but it's better to know what workload
your app receives:
  - 1000 logged in, unique and active sessions
  - 80% make page views in section X
  - 10% use the forum (or whatever)
  - 1% upload files
  - 2% download stuff during their session
 ....
etc - this is just an example...

If you get this right for your particular application, then you need to
measure the statistics of the response time: avg, median, 90th percentile. See
how they evolve during the test (this is even better than looking at the
values for the entire period) and so on. But all this makes the total
execution time largely irrelevant in 90% of tests or more.

Adrian

On Wed, Mar 21, 2012 at 12:06 AM, sergio <se...@bosoconsulting.it> wrote:

> Hi Adrian and Erik,
>
> maybe I'm missing some point, but to me the total duration of the test is
> rarely important or predictable.
>
> If you need it as a baseline, you can use an aggregate result listener,
> run some tests (maybe with one or two users) and then
> multiply the number of samples (possibly divided by the number of
> loops executed) by the average execution time.
> That easily gives you the net time needed for a single loop,
> excluding time spent in timers.
>
> But when you start having 1000 users, you have a lot of parallelization,
> but obviously not 100% (that would be ideal).
> Also, in some cases you have to add the ramp-up time.
>
> In my experience, we usually end up measuring the behaviour of a few key
> transactions (e.g. submitting the order, or login/logout), under different
> situations and loads.
> The relationship between the average, median, 90th percentile and max gives
> an idea of the way things go.
> Note that these transactions are also the longest.
>
> A static page or an image takes a few msec to download, and most of the time
> spent is due to network latency,
> which is not something we can easily optimize.
>
> This is my point of view, feel free to share your thoughts.
> best regards
>
> Sergio
>
> Il 20/03/2012 17:55, Erik Pragt ha scritto:
>
>  Hi Adrian,
>>
>> Thanks for the super quick reply. I'm a bit surprised by your first
>> remark though, so maybe I'm having a wrong approach here.
>>
>> I'm currently developing an application which might have some
>> performance issues. Our current target is around 1000 simultaneous
>> logged in users, and around 10 concurrent 'clicks'. My current
>> approach was to sort of simulate that behavior in JMeter, check how
>> long it takes for the simulated users to finish their flows, make some
>> adjustments, test again, and check if my simulated users are faster
>> than before. Based on this, I need the total execution time, but
>> apparently this is not the usual approach, else it would certainly
>> have been in there somewhere.
>>
>> Could you recommend what would be a better way to test my scenario?
>> I'm not a performance rock star at all, so I'm very curious what would
>> be an effective way in improving the application and using JMeter as
>> the load generator in that.
>>
>> Kind regards,
>>
>> Erik Pragt
>>
>> On Tue, Mar 20, 2012 at 5:44 PM, Adrian Speteanu<as...@gmail.com>
>>  wrote:
>>
>>> Hi Erik,
>>>
>>> A very interesting idea.
>>>
>>> You can find start / stop time in jmeter's log. When running from a
>>> console
>>> in non-gui mode, you also get some more statistics than in the GUI (how long
>>> the test ran). You can also schedule a test to run for a certain amount
>>> of
>>> time, or starting / stopping at certain hours (so you don't have to worry
>>> about this stuff).
>>>
>>> If you are interested in response times, however, the sum of all
>>> requests,
>>> then things get more complicated.
>>>
>>> Adrian
>>>
>>> On Tue, Mar 20, 2012 at 6:21 PM, Erik Pragt<er...@jworks.nl>
>>>  wrote:
>>>
>>>  Hi all,
>>>>
>>>> I've created a test plan to put some load on a flow of pages we have.
>>>> I'm quite new to JMeter, and I have a small question on how to get the
>>>> information I'm looking for. I've got a working test plan, I can see
>>>> the samples, the throughput, etc, but I can't find anywhere what the
>>>> time was to execute this testplan, or a single loop of this testplan
>>>> when I execute it multiple times.
>>>>
>>>> Can someone give me a small heads up how I can record and view this
>>>> time?
>>>>
>>>> Kind regards,
>>>>
>>>> Erik Pragt
>>>>
>>>> ---------------------------------------------------------------------
>>>> To unsubscribe, e-mail: user-unsubscribe@jmeter.apache.org
>>>> For additional commands, e-mail: user-help@jmeter.apache.org
>>>>
>>>>
>> ---------------------------------------------------------------------
>> To unsubscribe, e-mail: user-unsubscribe@jmeter.apache.org
>> For additional commands, e-mail: user-help@jmeter.apache.org
>>
>>
>
> --
>
> Ing. Sergio Boso
>
> In case of receipt in error by another person, please delete the message
> and its attachments permanently from your archives and notify us
> immediately by returning the message via e-mail to the address
> sergio@bosoconsulting.it
> The interested party may also exercise all the rights of access to their
> own data provided for by decree 196/2003, among them the rights of
> rectification, updating and deletion, by sending a message to the address:
> sergio@bosoconsulting.it
>
>
>
>
>
>
> ---------------------------------------------------------------------
> To unsubscribe, e-mail: user-unsubscribe@jmeter.apache.org
> For additional commands, e-mail: user-help@jmeter.apache.org
>
>


---------------------------------------------------------------------
To unsubscribe, e-mail: user-unsubscribe@jmeter.apache.org
For additional commands, e-mail: user-help@jmeter.apache.org


Re: Display total execution time for test plan

Posted by Adrian Speteanu <as...@gmail.com>.
Hi,

I suspect you weren't interested in start of / end of test. But usually
this is how you get total test time :).

It doesn't make sense to have a test that gradually starts 1000 users and
stops when they have all finished their planned sessions. It's not even useful
to measure how long such a test took.

Why: no live application works like this in production. At the
beginning and end of the test you have fewer than 1000 users logged in. What if
the ramp-up of the 1000 threads affects the average results, or even the total
execution time?

Check out Sergio's reply. You simulate what users do - true, but at macro
level, and you design your test plan in such a manner as to respect your
requirements: 1000 sessions logged in and a maximum of 10 hits / s. When
you have such a test, then you check out the statistics from the Aggregate
Graph and Summary Report, and make some nice graphs with some of the cooler
things that you monitor. Don't forget CPU, RAM, and network usage on the server
side. That's what you measure and compare before and after a change.

And if a change affects a particular request, focus measurements and
reporting on that specific request.

It's good to know what one user does, but it's better to know what workload
your app receives:
  - 1000 logged in, unique and active sessions
  - 80% make page views in section X
  - 10% use the forum (or whatever)
  - 1% upload files
  - 2% download stuff during their session
 ....
etc - this is just an example...

If you get this right for your particular application, then you need to
measure the statistics of the response time: avg, median, 90th percentile. See
how they evolve during the test (this is even better than looking at the
values for the entire period) and so on. But all this makes the total
execution time largely irrelevant in 90% of tests or more.
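[Editorial note: the statistics mentioned here (avg, median, 90th percentile) can also be recomputed offline from a CSV-format JTL. A minimal sketch, assuming the default `label` and `elapsed` columns; the 90th-percentile index is only an approximation of JMeter's "90% Line".]

```python
# Offline computation of avg / median / 90th percentile / max response
# times per sampler label, from a CSV-format JTL (default columns assumed).
import csv
from collections import defaultdict

def response_time_stats(jtl_path):
    by_label = defaultdict(list)
    with open(jtl_path, newline="") as f:
        for row in csv.DictReader(f):
            by_label[row["label"]].append(int(row["elapsed"]))
    stats = {}
    for label, times in by_label.items():
        times.sort()
        n = len(times)
        stats[label] = {
            "avg": sum(times) / n,
            "median": times[n // 2],
            "90th": times[min(n - 1, int(n * 0.9))],  # rough 90% line
            "max": times[-1],
        }
    return stats
```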

Adrian

On Wed, Mar 21, 2012 at 12:06 AM, sergio <se...@bosoconsulting.it> wrote:

> Hi Adrian and Erik,
>
> maybe I'm missing some point, but to me the total duration of the test is
> rarely important or predictable.
>
> If you need it as a baseline, you can use an aggregate result listener,
> run some tests (maybe with one or two users) and then
> multiply the number of samples (possibly divided by the number of
> loops executed) by the average execution time.
> That easily gives you the net time needed for a single loop,
> excluding time spent in timers.
>
> But when you start having 1000 users, you have a lot of parallelization,
> but obviously not 100% (that would be ideal).
> Also, in some cases you have to add the ramp-up time.
>
> In my experience, we usually end up measuring the behaviour of a few key
> transactions (e.g. submitting the order, or login/logout), under different
> situations and loads.
> The relationship between the average, median, 90th percentile and max gives
> an idea of the way things go.
> Note that these transactions are also the longest.
>
> A static page or an image takes a few msec to download, and most of the time
> spent is due to network latency,
> which is not something we can easily optimize.
>
> This is my point of view, feel free to share your thoughts.
> best regards
>
> Sergio
>
> Il 20/03/2012 17:55, Erik Pragt ha scritto:
>
>  Hi Adrian,
>>
>> Thanks for the super quick reply. I'm a bit surprised by your first
>> remark though, so maybe I'm having a wrong approach here.
>>
>> I'm currently developing an application which might have some
>> performance issues. Our current target is around 1000 simultaneous
>> logged in users, and around 10 concurrent 'clicks'. My current
>> approach was to sort of simulate that behavior in JMeter, check how
>> long it takes for the simulated users to finish their flows, make some
>> adjustments, test again, and check if my simulated users are faster
>> than before. Based on this, I need the total execution time, but
>> apparently this is not the usual approach, else it would certainly
>> have been in there somewhere.
>>
>> Could you recommend what would be a better way to test my scenario?
>> I'm not a performance rock star at all, so I'm very curious what would
>> be an effective way in improving the application and using JMeter as
>> the load generator in that.
>>
>> Kind regards,
>>
>> Erik Pragt
>>
>> On Tue, Mar 20, 2012 at 5:44 PM, Adrian Speteanu<as...@gmail.com>
>>  wrote:
>>
>>> Hi Erik,
>>>
>>> A very interesting idea.
>>>
>>> You can find start / stop time in jmeter's log. When running from a
>>> console
>>> in non-gui mode, you also get some more statistics than in the GUI (how long
>>> the test ran). You can also schedule a test to run for a certain amount
>>> of
>>> time, or starting / stopping at certain hours (so you don't have to worry
>>> about this stuff).
>>>
>>> If you are interested in response times, however, the sum of all
>>> requests,
>>> then things get more complicated.
>>>
>>> Adrian
>>>
>>> On Tue, Mar 20, 2012 at 6:21 PM, Erik Pragt<er...@jworks.nl>
>>>  wrote:
>>>
>>>  Hi all,
>>>>
>>>> I've created a test plan to put some load on a flow of pages we have.
>>>> I'm quite new to JMeter, and I have a small question on how to get the
>>>> information I'm looking for. I've got a working test plan, I can see
>>>> the samples, the throughput, etc, but I can't find anywhere what the
>>>> time was to execute this testplan, or a single loop of this testplan
>>>> when I execute it multiple times.
>>>>
>>>> Can someone give me a small heads up how I can record and view this
>>>> time?
>>>>
>>>> Kind regards,
>>>>
>>>> Erik Pragt
>>>>
>>>> ---------------------------------------------------------------------
>>>> To unsubscribe, e-mail: user-unsubscribe@jmeter.apache.org
>>>> For additional commands, e-mail: user-help@jmeter.apache.org
>>>>
>>>>
>> ---------------------------------------------------------------------
>> To unsubscribe, e-mail: user-unsubscribe@jmeter.apache.org
>> For additional commands, e-mail: user-help@jmeter.apache.org
>>
>>
>
> --
>
> Ing. Sergio Boso
>
>
>
>
>
>
>
> ---------------------------------------------------------------------
> To unsubscribe, e-mail: user-unsubscribe@jmeter.apache.org
> For additional commands, e-mail: user-help@jmeter.apache.org
>
>

Re: Display total execution time for test plan

Posted by sergio <se...@bosoconsulting.it>.
Hi Adrian and Erik,

maybe I'm missing some point, but to me the total duration of the test is rarely important or predictable.

If you need it as a baseline, you can use an aggregate result listener, run some tests (maybe with one or two users) and then
multiply the number of samples (possibly divided by the number of loops executed) by the average execution time.
That easily gives you the net time needed for a single loop,
excluding time spent in timers.
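[Editorial note: Sergio's arithmetic, spelled out as a sketch; all numbers below are hypothetical examples.]

```python
# Net time for a single loop, per Sergio's formula:
# (number of samples / loops executed) * average sample time.
# Timer delays are excluded, since JMeter's 'elapsed' does not include them.
def net_loop_time_ms(total_samples, loops_executed, avg_sample_ms):
    samples_per_loop = total_samples / loops_executed
    return samples_per_loop * avg_sample_ms

# e.g. 200 samples over 10 loops, averaging 150 ms each:
# 20 samples per loop * 150 ms = 3000 ms per loop
```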

But when you start having 1000 users, you have a lot of parallelization, but obviously not 100% (that would be ideal).
Also, in some cases you have to add the ramp-up time.

In my experience, we usually end up measuring the behaviour of a few key transactions (e.g. submitting the order, or login/logout), under
different situations and loads.
The relationship between the average, median, 90th percentile and max gives an idea of the way things go.
Note that these transactions are also the longest.

A static page or an image takes a few msec to download, and most of the time spent is due to network latency,
which is not something we can easily optimize.

This is my point of view, feel free to share your thoughts.
best regards

Sergio

Il 20/03/2012 17:55, Erik Pragt ha scritto:
> Hi Adrian,
>
> Thanks for the super quick reply. I'm a bit surprised by your first
> remark though, so maybe I'm having a wrong approach here.
>
> I'm currently developing an application which might have some
> performance issues. Our current target is around 1000 simultaneous
> logged in users, and around 10 concurrent 'clicks'. My current
> approach was to sort of simulate that behavior in JMeter, check how
> long it takes for the simulated users to finish their flows, make some
> adjustments, test again, and check if my simulated users are faster
> than before. Based on this, I need the total execution time, but
> apparently this is not the usual approach, else it would certainly
> have been in there somewhere.
>
> Could you recommend what would be a better way to test my scenario?
> I'm not a performance rock star at all, so I'm very curious what would
> be an effective way in improving the application and using JMeter as
> the load generator in that.
>
> Kind regards,
>
> Erik Pragt
>
> On Tue, Mar 20, 2012 at 5:44 PM, Adrian Speteanu<as...@gmail.com>  wrote:
>> Hi Erik,
>>
>> A very interesting idea.
>>
>> You can find start / stop time in jmeter's log. When running from a console
>> in non-gui mode, you also get some more statistics than in the GUI (how long
>> the test ran). You can also schedule a test to run for a certain amount of
>> time, or starting / stopping at certain hours (so you don't have to worry
>> about this stuff).
>>
>> If you are interested in response times, however, the sum of all requests,
>> then things get more complicated.
>>
>> Adrian
>>
>> On Tue, Mar 20, 2012 at 6:21 PM, Erik Pragt<er...@jworks.nl>  wrote:
>>
>>> Hi all,
>>>
>>> I've created a test plan to put some load on a flow of pages we have.
>>> I'm quite new to JMeter, and I have a small question on how to get the
>>> information I'm looking for. I've got a working test plan, I can see
>>> the samples, the throughput, etc, but I can't find anywhere what the
>>> time was to execute this testplan, or a single loop of this testplan
>>> when I execute it multiple times.
>>>
>>> Can someone give me a small heads up how I can record and view this time?
>>>
>>> Kind regards,
>>>
>>> Erik Pragt
>>>
>>> ---------------------------------------------------------------------
>>> To unsubscribe, e-mail: user-unsubscribe@jmeter.apache.org
>>> For additional commands, e-mail: user-help@jmeter.apache.org
>>>
>>>
> ---------------------------------------------------------------------
> To unsubscribe, e-mail: user-unsubscribe@jmeter.apache.org
> For additional commands, e-mail: user-help@jmeter.apache.org
>


-- 

Ing. Sergio Boso






---------------------------------------------------------------------
To unsubscribe, e-mail: user-unsubscribe@jmeter.apache.org
For additional commands, e-mail: user-help@jmeter.apache.org


RE: Display total execution time for test plan

Posted by "Robin D. Wilson" <rw...@gmail.com>.
I have a similar need, so I cobbled together a special threadgroup to time my actual 'test' threadgroups, and it can be used to time
the entire test plan as well... I'm not sure it would work well for the individual thread execution times, but those are really
going to be measured by the number of iterations divided into the total thread group time anyway. NOTE: this system really only
works if your threadgroups are run consecutively in the test plan - since the timers have to run after each threadgroup runs... But
it is a start.

BTW, I totally agree with you - this is something that should be included in the standard tools (e.g. fields in the summary report):
test execution elapsed time, and threadgroup execution elapsed time... The 'test execution elapsed time' needs to be a meta field in
the summary report - since it applies to the entire test. The 'threadgroup execution elapsed time' needs to be an optional row added
to the summary report output that identifies the threadgroup name and elapsed time.

Basically I add a threadgroup at the top of my test plan that gets the current time in milliseconds since epoch. Then after each
threadgroup that is actually 'sampling' my system, I add another threadgroup that updates the time elapsed with the current info.

If you save the below as a .jmx file, you can see what I'm talking about. Just add your threadgroup between the two that are defined
already, and run a test - it will show you a timer row _before_ the first thread group, and another _after_ the first thread group:

<?xml version="1.0" encoding="UTF-8"?>
<jmeterTestPlan version="1.2" properties="2.2">
  <hashTree>
    <TestPlan guiclass="TestPlanGui" testclass="TestPlan" testname="TG Timer Config" enabled="true">
      <stringProp name="TestPlan.comments"></stringProp>
      <boolProp name="TestPlan.functional_mode">false</boolProp>
      <boolProp name="TestPlan.serialize_threadgroups">true</boolProp>
      <elementProp name="TestPlan.user_defined_variables" elementType="Arguments" guiclass="ArgumentsPanel" testclass="Arguments"
testname="User Defined Variables" enabled="true">
        <collectionProp name="Arguments.arguments"/>
      </elementProp>
      <stringProp name="TestPlan.user_define_classpath"></stringProp>
      <collectionProp name="TestPlan.thread_groups"/>
    </TestPlan>
    <hashTree>
      <ThreadGroup guiclass="ThreadGroupGui" testclass="ThreadGroup" testname="Start TG Timer" enabled="true">
        <stringProp name="ThreadGroup.on_sample_error">continue</stringProp>
        <elementProp name="ThreadGroup.main_controller" elementType="LoopController" guiclass="LoopControlPanel"
testclass="LoopController" testname="Loop Controller" enabled="true">
          <boolProp name="LoopController.continue_forever">false</boolProp>
          <stringProp name="LoopController.loops">1</stringProp>
        </elementProp>
        <stringProp name="ThreadGroup.num_threads">1</stringProp>
        <stringProp name="ThreadGroup.ramp_time">0</stringProp>
        <longProp name="ThreadGroup.start_time">1320342967000</longProp>
        <longProp name="ThreadGroup.end_time">1320342967000</longProp>
        <boolProp name="ThreadGroup.scheduler">false</boolProp>
        <stringProp name="ThreadGroup.duration"></stringProp>
        <stringProp name="ThreadGroup.delay"></stringProp>
      </ThreadGroup>
      <hashTree>
        <UserParameters guiclass="UserParametersGui" testclass="UserParameters" testname="User Parameters" enabled="true">
          <collectionProp name="UserParameters.names">
            <stringProp name="1185546556">PrevStartTime</stringProp>
            <stringProp name="-1987142487">PrevTGCount</stringProp>
            <stringProp name="0"></stringProp>
            <stringProp name="1182835273">ElapsedTime</stringProp>
            <stringProp name="-1999321243">threadStartTime</stringProp>
            <stringProp name="-1409654180">tgCount</stringProp>
            <stringProp name="1984987727">setTime</stringProp>
            <stringProp name="647878586">setTGCount</stringProp>
          </collectionProp>
          <collectionProp name="UserParameters.thread_values">
            <collectionProp name="-1834623057">
              <stringProp name="-2078679218">${__P(ThreadStartTime,0)}</stringProp>
              <stringProp name="46204838">${__P(TGCount,1)}</stringProp>
              <stringProp name="0"></stringProp>
              <stringProp name="48">0</stringProp>
              <stringProp name="1008029728">${__javaScript(var ms = new Date; ms.getTime();)}</stringProp>
              <stringProp name="49">1</stringProp>
              <stringProp name="1759500564">${__setProperty(ThreadStartTime, ${threadStartTime})}</stringProp>
              <stringProp name="-1525630572">${__setProperty(TGCount, ${tgCount})}</stringProp>
            </collectionProp>
          </collectionProp>
          <boolProp name="UserParameters.per_iteration">false</boolProp>
        </UserParameters>
        <hashTree/>
        <DebugSampler guiclass="TestBeanGUI" testclass="DebugSampler" testname="TG ${tgCount} Start: ${threadStartTime} - Elapsed:
${ElapsedTime}" enabled="true">
          <boolProp name="displayJMeterProperties">false</boolProp>
          <boolProp name="displayJMeterVariables">false</boolProp>
          <boolProp name="displaySystemProperties">false</boolProp>
        </DebugSampler>
        <hashTree/>
      </hashTree>
      <ThreadGroup guiclass="ThreadGroupGui" testclass="ThreadGroup" testname="TG Timer" enabled="true">
        <stringProp name="ThreadGroup.on_sample_error">continue</stringProp>
        <elementProp name="ThreadGroup.main_controller" elementType="LoopController" guiclass="LoopControlPanel"
testclass="LoopController" testname="Loop Controller" enabled="true">
          <boolProp name="LoopController.continue_forever">false</boolProp>
          <stringProp name="LoopController.loops">1</stringProp>
        </elementProp>
        <stringProp name="ThreadGroup.num_threads">1</stringProp>
        <stringProp name="ThreadGroup.ramp_time">0</stringProp>
        <longProp name="ThreadGroup.start_time">1320342967000</longProp>
        <longProp name="ThreadGroup.end_time">1320342967000</longProp>
        <boolProp name="ThreadGroup.scheduler">false</boolProp>
        <stringProp name="ThreadGroup.duration"></stringProp>
        <stringProp name="ThreadGroup.delay"></stringProp>
      </ThreadGroup>
      <hashTree>
        <UserParameters guiclass="UserParametersGui" testclass="UserParameters" testname="User Parameters" enabled="true">
          <collectionProp name="UserParameters.names">
            <stringProp name="1185546556">PrevStartTime</stringProp>
            <stringProp name="-1987142487">PrevTGCount</stringProp>
            <stringProp name="0"></stringProp>
            <stringProp name="1182835273">ElapsedTime</stringProp>
            <stringProp name="-1999321243">threadStartTime</stringProp>
            <stringProp name="-1409654180">tgCount</stringProp>
            <stringProp name="1984987727">setTime</stringProp>
            <stringProp name="647878586">setTGCount</stringProp>
          </collectionProp>
          <collectionProp name="UserParameters.thread_values">
            <collectionProp name="-1451951225">
              <stringProp name="-2078679218">${__P(ThreadStartTime,0)}</stringProp>
              <stringProp name="46204838">${__P(TGCount,1)}</stringProp>
              <stringProp name="0"></stringProp>
              <stringProp name="-1898455245">${__javaScript(var ms = new Date; ms.getTime() - ${PrevStartTime};)}</stringProp>
              <stringProp name="1008029728">${__javaScript(var ms = new Date; ms.getTime();)}</stringProp>
              <stringProp name="-1527117377">${__javaScript(${PrevTGCount} + 1)}</stringProp>
              <stringProp name="1759500564">${__setProperty(ThreadStartTime, ${threadStartTime})}</stringProp>
              <stringProp name="-1525630572">${__setProperty(TGCount, ${tgCount})}</stringProp>
            </collectionProp>
          </collectionProp>
          <boolProp name="UserParameters.per_iteration">false</boolProp>
        </UserParameters>
        <hashTree/>
        <DebugSampler guiclass="TestBeanGUI" testclass="DebugSampler" testname="TG ${tgCount} Start: ${threadStartTime} - Elapsed:
${ElapsedTime}" enabled="true">
          <boolProp name="displayJMeterProperties">false</boolProp>
          <boolProp name="displayJMeterVariables">false</boolProp>
          <boolProp name="displaySystemProperties">false</boolProp>
        </DebugSampler>
        <hashTree/>
      </hashTree>
      <ResultCollector guiclass="SummaryReport" testclass="ResultCollector" testname="Summary Report" enabled="true">
        <boolProp name="ResultCollector.error_logging">false</boolProp>
        <objProp>
          <name>saveConfig</name>
          <value class="SampleSaveConfiguration">
            <time>true</time>
            <latency>true</latency>
            <timestamp>true</timestamp>
            <success>true</success>
            <label>true</label>
            <code>true</code>
            <message>true</message>
            <threadName>true</threadName>
            <dataType>true</dataType>
            <encoding>false</encoding>
            <assertions>true</assertions>
            <subresults>true</subresults>
            <responseData>false</responseData>
            <samplerData>false</samplerData>
            <xml>true</xml>
            <fieldNames>false</fieldNames>
            <responseHeaders>false</responseHeaders>
            <requestHeaders>false</requestHeaders>
            <responseDataOnError>false</responseDataOnError>
            <saveAssertionResultsFailureMessage>false</saveAssertionResultsFailureMessage>
            <assertionsResultsToSave>0</assertionsResultsToSave>
            <bytes>true</bytes>
          </value>
        </objProp>
        <stringProp name="filename"></stringProp>
      </ResultCollector>
      <hashTree/>
    </hashTree>
  </hashTree>
</jmeterTestPlan>
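[Editorial note: an alternative (or complement) to the timer thread groups above is to post-process the saved results. Since each sample row carries a timestamp, an elapsed time, and a thread name, per-threadgroup elapsed time can be derived afterwards. A sketch, assuming a CSV-format JTL with default columns and JMeter's usual "<group name> <group#>-<thread#>" threadName convention:]

```python
# Per-thread-group elapsed time from a CSV JTL: first sample start to
# last sample end, grouped by the thread-group part of threadName.
import csv
from collections import defaultdict

def threadgroup_elapsed_ms(jtl_path):
    starts = defaultdict(list)
    ends = defaultdict(list)
    with open(jtl_path, newline="") as f:
        for row in csv.DictReader(f):
            # threadName is typically "<group name> <group#>-<thread#>";
            # strip the trailing "<group#>-<thread#>" token.
            group = row["threadName"].rsplit(" ", 1)[0]
            ts = int(row["timeStamp"])
            starts[group].append(ts)
            ends[group].append(ts + int(row["elapsed"]))
    return {g: max(ends[g]) - min(starts[g]) for g in starts}
```

This only works cleanly when thread-group names don't themselves end in a space-separated token that looks like the thread suffix, and it measures sampling time only (ramp-up before the first sample is not counted).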
--
Robin D. Wilson
Sr. Director of Web Development
KingsIsle Entertainment, Inc.
VOICE: 512-777-1861
www.KingsIsle.com


-----Original Message-----
From: erik.pragt@gmail.com [mailto:erik.pragt@gmail.com] On Behalf Of Erik Pragt
Sent: Tuesday, March 20, 2012 11:55 AM
To: JMeter Users List
Subject: Re: Display total execution time for test plan

Hi Adrian,

Thanks for the super quick reply. I'm a bit surprised by your first
remark though, so maybe I'm having a wrong approach here.

I'm currently developing an application which might have some
performance issues. Our current target is around 1000 simultaneous
logged in users, and around 10 concurrent 'clicks'. My current
approach was to sort of simulate that behavior in JMeter, check how
long it takes for the simulated users to finish their flows, make some
adjustments, test again, and check if my simulated users are faster
than before. Based on this, I need the total execution time, but
apparently this is not the usual approach, else it would certainly
have been in there somewhere.

Could you recommend what would be a better way to test my scenario?
I'm not a performance rock star at all, so I'm very curious what would
be an effective way in improving the application and using JMeter as
the load generator in that.

Kind regards,

Erik Pragt

On Tue, Mar 20, 2012 at 5:44 PM, Adrian Speteanu <as...@gmail.com> wrote:
> Hi Erik,
>
> A very interesting idea.
>
> You can find start / stop time in jmeter's log. When running from a console
> in non-gui mode, you also get some more statistics than in the GUI (how long
> the test ran). You can also schedule a test to run for a certain amount of
> time, or starting / stopping at certain hours (so you don't have to worry
> about this stuff).
>
> If you are interested in response times, however, the sum of all requests,
> then things get more complicated.
>
> Adrian
>
> On Tue, Mar 20, 2012 at 6:21 PM, Erik Pragt <er...@jworks.nl> wrote:
>
>> Hi all,
>>
>> I've created a test plan to put some load on a flow of pages we have.
>> I'm quite new to JMeter, and I have a small question on how to get the
>> information I'm looking for. I've got a working test plan, I can see
>> the samples, the throughput, etc, but I can't find anywhere what the
>> time was to execute this testplan, or a single loop of this testplan
>> when I execute it multiple times.
>>
>> Can someone give me a small heads up how I can record and view this time?
>>
>> Kind regards,
>>
>> Erik Pragt
>>
>> ---------------------------------------------------------------------
>> To unsubscribe, e-mail: user-unsubscribe@jmeter.apache.org
>> For additional commands, e-mail: user-help@jmeter.apache.org
>>
>>

---------------------------------------------------------------------
To unsubscribe, e-mail: user-unsubscribe@jmeter.apache.org
For additional commands, e-mail: user-help@jmeter.apache.org


---------------------------------------------------------------------
To unsubscribe, e-mail: user-unsubscribe@jmeter.apache.org
For additional commands, e-mail: user-help@jmeter.apache.org


Re: Display total execution time for test plan

Posted by Erik Pragt <er...@jworks.nl>.
Hi Adrian,

Thanks for the super quick reply. I'm a bit surprised by your first
remark though, so maybe I'm taking the wrong approach here.

I'm currently developing an application which might have some
performance issues. Our current target is around 1000 simultaneous
logged in users, and around 10 concurrent 'clicks'. My current
approach was to sort of simulate that behavior in JMeter, check how
long it takes for the simulated users to finish their flows, make some
adjustments, test again, and check if my simulated users are faster
than before. Based on this, I need the total execution time, but
apparently this is not the usual approach, or it would certainly have
been built in somewhere.
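The approach described above, timing how long each simulated user takes to finish its flow, can be approximated from JMeter's CSV results file (.jtl) after the run. This is a sketch, not part of the thread, and it assumes the default CSV result fields `timeStamp` (sample start, epoch milliseconds), `elapsed` (response time in ms), and `threadName` (one JMeter thread per simulated user):

```python
import csv


def per_user_flow_times(jtl_path):
    """For each JMeter thread (simulated user), report the wall-clock
    span in seconds from its first sample starting to its last sample
    finishing. Assumes default CSV fields: timeStamp, elapsed, threadName."""
    spans = {}  # threadName -> (earliest start, latest end), both epoch ms
    with open(jtl_path, newline="") as f:
        for row in csv.DictReader(f):
            ts, el = int(row["timeStamp"]), int(row["elapsed"])
            name = row["threadName"]
            first, last = spans.get(name, (ts, ts + el))
            spans[name] = (min(first, ts), max(last, ts + el))
    return {name: (last - first) / 1000.0
            for name, (first, last) in spans.items()}
```

Comparing these per-user spans before and after a tuning change gives the "are my simulated users faster than before" answer directly, without relying on the GUI listeners.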

Could you recommend what would be a better way to test my scenario?
I'm not a performance rock star at all, so I'm very curious what would
be an effective way of improving the application, using JMeter as the
load generator for that.

Kind regards,

Erik Pragt

On Tue, Mar 20, 2012 at 5:44 PM, Adrian Speteanu <as...@gmail.com> wrote:
> Hi Erik,
>
> A very interesting idea.
>
> You can find start / stop time in jmeter's log. When running from a console
> in non-gui mode, you also get some more statistics than in the GUI (how long
> the test ran). You can also schedule a test to run for a certain amount of
> time, or starting / stopping at certain hours (so you don't have to worry
> about this stuff).
>
> If, however, you are interested in the sum of the response times of all
> requests, then things get more complicated.
>
> Adrian
>
> On Tue, Mar 20, 2012 at 6:21 PM, Erik Pragt <er...@jworks.nl> wrote:
>
>> Hi all,
>>
>> I've created a test plan to put some load on a flow of pages we have.
>> I'm quite new to JMeter, and I have a small question on how to get the
>> information I'm looking for. I've got a working test plan, I can see
>> the samples, the throughput, etc, but I can't find anywhere what the
>> time was to execute this testplan, or a single loop of this testplan
>> when I execute it multiple times.
>>
>> Can someone give me a small heads up how I can record and view this time?
>>
>> Kind regards,
>>
>> Erik Pragt
>>
>> ---------------------------------------------------------------------
>> To unsubscribe, e-mail: user-unsubscribe@jmeter.apache.org
>> For additional commands, e-mail: user-help@jmeter.apache.org
>>
>>

---------------------------------------------------------------------
To unsubscribe, e-mail: user-unsubscribe@jmeter.apache.org
For additional commands, e-mail: user-help@jmeter.apache.org


Re: Display total execution time for test plan

Posted by Adrian Speteanu <as...@gmail.com>.
Hi Erik,

A very interesting idea.

You can find start / stop time in jmeter's log. When running from a console
in non-gui mode, you also get some more statistics than in the GUI (how long
the test ran). You can also schedule a test to run for a certain amount of
time, or starting / stopping at certain hours (so you don't have to worry
about this stuff).

If, however, you are interested in the sum of the response times of all
requests, then things get more complicated.

Adrian

On Tue, Mar 20, 2012 at 6:21 PM, Erik Pragt <er...@jworks.nl> wrote:

> Hi all,
>
> I've created a test plan to put some load on a flow of pages we have.
> I'm quite new to JMeter, and I have a small question on how to get the
> information I'm looking for. I've got a working test plan, I can see
> the samples, the throughput, etc, but I can't find anywhere what the
> time was to execute this testplan, or a single loop of this testplan
> when I execute it multiple times.
>
> Can someone give me a small heads up how I can record and view this time?
>
> Kind regards,
>
> Erik Pragt
>
> ---------------------------------------------------------------------
> To unsubscribe, e-mail: user-unsubscribe@jmeter.apache.org
> For additional commands, e-mail: user-help@jmeter.apache.org
>
>