Posted to user@jmeter.apache.org by Shirish <sh...@gmail.com> on 2011/08/09 11:46:58 UTC

Diff between "Uniform Random Timer" and "Constant Timer"

Hello Friends,

I am working on an automotive web product. Whenever there is a lot of
load on the server and the server is not able to respond to any requests,
it creates a heap dump in the logs folder on the server side.
The performance testing requirement is to create different Thread Groups
for different functionalities and execute these Thread Groups with one
user for 12 hrs.


In accordance with the requirement, I created around 6 Thread Groups and
assigned one user to each TG. When I executed these TGs for 20 mins without
any timers, i.e. no "Uniform Random Timer" or "Constant Timer", the server
just went into an IDLE state and my error % rose above 60.

But when I added a "Uniform Random Timer" with a value of 5000/10000/20000
to the Test Plan and Constant Timers between the thread requests, and ran it
for anywhere from 20 mins up to 8 hrs, it worked well and the error % was
0 (zero).
Below are the screenshots of the timers utilized.

[screenshot: with Constant Timer]


Please tell me, what are these timers doing in my Test Plans? Does the
Uniform Random Timer make each and every thread group sleep for
5000/10000/20000 ms, or something else? And what are the Constant Timers
doing: are they waiting out the specified time, or something else?

Can we use Constant Timers between each thread request (wherever there is
a transition between web pages), or what would be the ideal practice for
pacing thread requests to a particular time while executing?

Awaiting your attention and assistance.


Thanks,
Shirish G.

Re: Diff between "Uniform Random Timer" and "Constant Timer"

Posted by Felix Frank <ff...@mpexnet.de>.
Good point, but then the original requirements are rather insane:
"The performance testing requirement is to create different Thread Groups
for different functionalities and execute these Thread Groups with one
user for 12 hrs."

I'm not sure about the practical use either but, well, there it is.

Of course "one user" is ambiguous and may well mean "using a single set
of login credentials" and say nothing about thread concurrency. If
that's indeed the case, the OP should take especial note of the points
just raised.

On 08/09/2011 01:11 PM, Oliver Lloyd wrote:
> There's a rather fundamental point missing here. You can plan your test
> to run with X threads and zero timer controls, or X threads and lots of
> timer controls; this might give you 1 TPS, 0.0005 TPS or 3678 TPS. But so
> what? Until you actually work out what your required target load is,
> what's the point? Why are you even bothering with performance testing if
> you don't know what your requirements are? How will you know when to stop?
> 
> Timers are tools that allow you to define a target throughput, esp. the
> Constant Throughput Timer, and not having them is usually a mistake. But
> simply adding them alone will not solve this problem for you. Just
> because they exist in your plan does not make your plan correct!

---------------------------------------------------------------------
To unsubscribe, e-mail: jmeter-user-unsubscribe@jakarta.apache.org
For additional commands, e-mail: jmeter-user-help@jakarta.apache.org


Re: Diff between "Uniform Random Timer" and "Constant Timer"

Posted by Adrian Speteanu <as...@gmail.com>.
On Tue, Aug 9, 2011 at 7:20 PM, Oliver Lloyd <ol...@hotmail.com> wrote:

> I know that it can feel like a luxury but I see it more like a mindset.
> It's
> not that hard to get everyone together in a room and work out some high
> level objectives - every project can do this, even if the best you can do
> is
> 'more than 5, less than 500'! This is very loose but it is still a
> requirement that you can test to.
>
> But yes, there is also a place for capacity / exploratory / stress testing
> (it has many names). But still, without any idea of what the system will
> get
> used for how can you identify problems? If a system call takes 4 minutes to
> respond, is that bad? Are you sure? I have system calls that take this long
> and it is within requirements. What if another call takes 300ms, is that
> OK?
> I have scenarios where this would be failed and rejected because we know
> that this call is made very, very often and the requirement is for 98% at
> 200ms.
>
> I know that there's a temptation to just 'see what it can do and tune it a
> bit' and, OK, sometimes it works fine. It's just not ideal.
>
Don't you just love how the rhetoric of how-tos and best practices gets
everybody worked up?

The point is not that this isn't the preferred way; the point is that in
reality we have lots of situations where the people who can make those
decisions either don't have the time and energy or are too lazy to bother.
Not to mention the situation where they are thousands of kilometres apart
or, even worse, a new project for an entire company with very little data
on the workload, so you have to make assumptions anyway - just as bad as
not having requirements. Learning by testing will help everybody decide
better what is acceptable or not, and it is the development team that can
make a better educated guess than anybody else. The approach should be
flexible and adaptable.


Although I agree that it would be ideal to have better know-how, my angle
is what it is because the OP was trying to test end-user behaviour instead
of performance testing the application's business logic. It's the
1214817181917575919th time that this approach has been advised against on
the mailing list, and if the test plan is wrong it is because the approach
to this type of testing tries to plan ahead waterfall-style, before having
all the required data, when the approach should have been more adaptive.
There is no way to make sure that the application will support a constant
throughput before testing it; that approach can be used once the
application is proven stable under load/stress. BUT then again, how many
situations are there in real life where load is constant?



> --
> View this message in context:
> http://jmeter.512774.n5.nabble.com/Diff-between-Uniform-Random-Timer-and-Constant-Timer-tp4681432p4682675.html
> Sent from the JMeter - User mailing list archive at Nabble.com.
>
> ---------------------------------------------------------------------
> To unsubscribe, e-mail: jmeter-user-unsubscribe@jakarta.apache.org
> For additional commands, e-mail: jmeter-user-help@jakarta.apache.org
>
>

Re: Diff between "Uniform Random Timer" and "Constant Timer"

Posted by Oliver Lloyd <ol...@hotmail.com>.
I know that it can feel like a luxury but I see it more like a mindset. It's
not that hard to get everyone together in a room and work out some high
level objectives - every project can do this, even if the best you can do is
'more than 5, less than 500'! This is very loose but it is still a
requirement that you can test to.

But yes, there is also a place for capacity / exploratory / stress testing
(it has many names). But still, without any idea of what the system will get
used for how can you identify problems? If a system call takes 4 minutes to
respond, is that bad? Are you sure? I have system calls that take this long
and it is within requirements. What if another call takes 300ms, is that OK?
I have scenarios where this would be failed and rejected because we know
that this call is made very, very often and the requirement is for 98% at
200ms.

I know that there's a temptation to just 'see what it can do and tune it a
bit' and, OK, sometimes it works fine. It's just not ideal.

--
View this message in context: http://jmeter.512774.n5.nabble.com/Diff-between-Uniform-Random-Timer-and-Constant-Timer-tp4681432p4682675.html
Sent from the JMeter - User mailing list archive at Nabble.com.

---------------------------------------------------------------------
To unsubscribe, e-mail: jmeter-user-unsubscribe@jakarta.apache.org
For additional commands, e-mail: jmeter-user-help@jakarta.apache.org


Re: Diff between "Uniform Random Timer" and "Constant Timer"

Posted by Adrian Speteanu <as...@gmail.com>.
On Tue, Aug 9, 2011 at 2:11 PM, Oliver Lloyd <ol...@hotmail.com> wrote:

> There's a rather fundamental point missing here. You can plan your test
> to run with X threads and zero timer controls, or X threads and lots of
> timer controls; this might give you 1 TPS, 0.0005 TPS or 3678 TPS. But so
> what? Until you actually work out what your required target load is,
> what's the point? Why are you even bothering with performance testing if
> you don't know what your requirements are? How will you know when to stop?
>

Having clear performance requirements is usually a luxury that not many can
afford. The purpose of performance testing can also be to give an image of
how the application will behave. While you do get requirements such as high
availability, response times below 2-3 s, the amount of new data per day,
or other general details - for a new application you might not get details
such as the exact structure of the input data (an insane situation I did
run into on actual projects, e.g. user-generated content, documents) or the
exact workload distribution (for new projects, again, this information is
rarely accurate). The goal was to build scenarios: it will work up to this
point and then this or that might happen, OR the application is stable when
using this or that input data.

The optimal TPS is also something you can establish yourself, with tests;
it is not necessarily needed prior to testing. You start with a little and
gradually increase the maximum allowed (using the Constant Throughput
Timer, for example) until you reach the limit of the system, where other
requirements are no longer met (like response times degrading too much -
also not needed as a requirement up front; the development team can easily
establish how much is too much).
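
In sketch form, that step-up search looks something like the following
Java (hypothetical numbers; measureP95AtTps is a made-up stand-in for
actually running a load test at a target TPS and reading the measured
95th-percentile response time):

// Rough sketch of the step-up approach described above: raise the
// allowed throughput until a response-time budget is broken.
public class StepUpSketch {
    // Hypothetical stand-in for running a real load test at a given
    // target TPS and measuring the 95th-percentile response time (ms).
    static double measureP95AtTps(double tps) {
        return 100 + tps * tps / 10; // fabricated shape, for the demo only
    }

    public static void main(String[] args) {
        double budgetMs = 500; // assumed response-time requirement
        double tps = 5;        // modest starting target
        while (measureP95AtTps(tps + 5) <= budgetMs) {
            tps += 5;          // step the constant-throughput target up
        }
        System.out.println("limit is somewhere around " + tps + " TPS");
    }
}

In a real run you would replace the stand-in with an actual JMeter
execution at each target and read the percentile from your listeners.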


>
> Timers are tools that allow you to define a target throughput, esp. the
> Constant Throughput Timer, and not having them is usually a mistake. But
> simply adding them alone will not solve this problem for you. Just
> because they exist in your plan does not make your plan correct!
>


Absolutely true. You will only realise whether your plan is correct by
monitoring the application and seeing the impact of each performance test.
In my personal opinion, it is the worst idea to fix the load you will test
the application with before testing for the first time - especially with
new technologies (new to the team, to those who test). It also creates
false expectations, and people get frustrated when they don't get the
results they wanted from the beginning. Optimising the application to
handle load is a process and requires time. Establishing the load you want
up front is best for acceptance-testing-type situations, where the
application is supposedly already stable and proven and is needed for a
particular scenario - but I do have to say that these tests are usually
rather synthetic, and I don't see how this is realistic for a larger
application, i.e. 100 thousand to a few million different active users per
month.



> --
> View this message in context:
> http://jmeter.512774.n5.nabble.com/Diff-between-Uniform-Random-Timer-and-Constant-Timer-tp4681432p4681622.html
> Sent from the JMeter - User mailing list archive at Nabble.com.
>
> ---------------------------------------------------------------------
> To unsubscribe, e-mail: jmeter-user-unsubscribe@jakarta.apache.org
> For additional commands, e-mail: jmeter-user-help@jakarta.apache.org
>
>

Re: Diff between "Uniform Random Timer" and "Constant Timer"

Posted by Oliver Lloyd <ol...@hotmail.com>.
There's a rather fundamental point missing here. You can plan your test
to run with X threads and zero timer controls, or X threads and lots of
timer controls; this might give you 1 TPS, 0.0005 TPS or 3678 TPS. But so
what? Until you actually work out what your required target load is,
what's the point? Why are you even bothering with performance testing if
you don't know what your requirements are? How will you know when to stop?

Timers are tools that allow you to define a target throughput, esp. the
Constant Throughput Timer, and not having them is usually a mistake. But
simply adding them alone will not solve this problem for you. Just because
they exist in your plan does not make your plan correct!
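
For a sense of what a throughput-targeting timer has to do, here is a
minimal pacing sketch in Java (an illustration under simple assumptions,
not JMeter's actual Constant Throughput Timer code): with N threads, each
thread keeping an interval of N * 60000 / targetPerMinute ms between its
own samples yields roughly targetPerMinute samples per minute overall.

import java.util.Random;

// Minimal pacing sketch (illustration only, not JMeter's real code).
public class PacingSketch {
    public static void main(String[] args) throws InterruptedException {
        int threads = 5;                // hypothetical thread count
        double targetPerMinute = 60.0;  // hypothetical target throughput
        // Interval each thread must keep between its own samples:
        long intervalMs = (long) (threads * 60_000 / targetPerMinute);

        Random rnd = new Random();
        for (int i = 0; i < 10; i++) {
            long before = System.currentTimeMillis();
            Thread.sleep(50 + rnd.nextInt(100)); // stand-in for the request
            long elapsed = System.currentTimeMillis() - before;
            if (elapsed < intervalMs) {
                Thread.sleep(intervalMs - elapsed); // pad up to the interval
            }
        }
    }
}

The numbers (5 threads, 60 samples/min) are made up, and the inner sleep
merely stands in for a real sampled request.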

--
View this message in context: http://jmeter.512774.n5.nabble.com/Diff-between-Uniform-Random-Timer-and-Constant-Timer-tp4681432p4681622.html
Sent from the JMeter - User mailing list archive at Nabble.com.

---------------------------------------------------------------------
To unsubscribe, e-mail: jmeter-user-unsubscribe@jakarta.apache.org
For additional commands, e-mail: jmeter-user-help@jakarta.apache.org


Re: Diff between "Uniform Random Timer" and "Constant Timer"

Posted by Shirish <sh...@gmail.com>.
Thanks a lot, Felix, for your elaboration.

On Tue, Aug 9, 2011 at 3:58 PM, Felix Frank <ff...@mpexnet.de> wrote:

> Hi,
>
> On 08/09/2011 12:22 PM, Shirish wrote:
> > By "Idle" state of the server means Server was not responding at all to
> any
> > requests. I read the User Manual, just correct me if I misunderstood
> > anything.
>
> I'd probably call that overloaded, quite the opposite of idle :-)
>
> > In the below screenshot, I placed various Constant Timers between the
> > thread requests; I take it to mean that after executing a thread request,
> > JMeter will pause for the time specified in the timer.
>
> Actually, the delay happens *before* the request is executed, if the
> structure is this:
>
> + ThreadGroup
> ++ Sampler
> +++ Timer
> ++ Sampler
> +++ Timer
> ...
>
> Don't do this:
>
> + ThreadGroup
> ++ Sampler
> ++ Timer
> ++ Sampler
> ++ Timer
> ...
>
> Each Timer will get applied to all Samplers this way, which is not what
> you want.
>
> > Or do I need to use other timers to make my requests wait for a
> > particular time, instead of using this many Constant Timers? If yes,
> > which one should I go with?
>
> Normally, pick a Timer (a uniform one may well be sufficient for your
> needs) and place one instance at the same scope as your Samplers:
>
> + Thread Group
> ++ Sampler
> ++ Sampler
> ++ Sampler
> ...
> ++ Timer
>
> This way, each Sample will be taken with a delay.
>
> > Also, Felix, please tell me: as above, when I add a Uniform Random Timer
> > with 20000 ms to the Test Plan, how is this time going to be distributed
> > amongst the TGs, let's say 5 TGs?
>
> I'm not sure. If this works at all, each Sampler in each Thread Group
> will get the full delay applied. If it doesn't, well, then you get no
> delays at all.
>
> HTH,
> Felix
>
> ---------------------------------------------------------------------
> To unsubscribe, e-mail: jmeter-user-unsubscribe@jakarta.apache.org
> For additional commands, e-mail: jmeter-user-help@jakarta.apache.org
>
>


-- 


Thanks,
Shirish G.

Re: Diff between "Uniform Random Timer" and "Constant Timer"

Posted by Felix Frank <ff...@mpexnet.de>.
Hi,

On 08/09/2011 12:22 PM, Shirish wrote:
> By "Idle" state of the server means Server was not responding at all to any
> requests. I read the User Manual, just correct me if I misunderstood
> anything.

I'd probably call that overloaded, quite the opposite of idle :-)

> In the below screenshot, I placed various Constant Timers between the
> thread requests; I take it to mean that after executing a thread request,
> JMeter will pause for the time specified in the timer.

Actually, the delay happens *before* the request is executed, if the
structure is this:

+ ThreadGroup
++ Sampler
+++ Timer
++ Sampler
+++ Timer
...

Don't do this:

+ ThreadGroup
++ Sampler
++ Timer
++ Sampler
++ Timer
...

Each Timer will get applied to all Samplers this way, which is not what
you want.

> Or do I need to use other timers to make my requests wait for a
> particular time, instead of using this many Constant Timers? If yes,
> which one should I go with?

Normally, pick a Timer (a uniform one may well be sufficient for your
needs) and place one instance at the same scope as your Samplers:

+ Thread Group
++ Sampler
++ Sampler
++ Sampler
...
++ Timer

This way, each Sample will be taken with a delay.
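
If it helps to see the semantics in code, the two timers' delays boil
down to roughly the following (a simplified Java sketch, not the actual
JMeter source):

import java.util.Random;

// Simplified sketch of the two timers' delay semantics
// (illustration only; the real classes live in JMeter's source).
public class TimerSketch {
    static final Random RND = new Random();

    // Constant Timer: always pauses for the configured delay.
    static long constantDelay(long delayMs) {
        return delayMs;
    }

    // Uniform Random Timer: constant offset plus a uniformly
    // distributed random part in [0, rangeMs).
    static long uniformRandomDelay(long offsetMs, long rangeMs) {
        return offsetMs + (long) (RND.nextDouble() * rangeMs);
    }

    public static void main(String[] args) {
        System.out.println(constantDelay(5_000));          // always 5000 ms
        System.out.println(uniformRandomDelay(0, 20_000)); // 0..19999 ms
    }
}

So a Uniform Random Timer configured with a 20000 ms maximum and no
offset delays each affected sample by anything between 0 and 20 s, each
value equally likely, rather than by a fixed 20 s.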

> Also, Felix, please tell me: as above, when I add a Uniform Random Timer
> with 20000 ms to the Test Plan, how is this time going to be distributed
> amongst the TGs, let's say 5 TGs?

I'm not sure. If this works at all, each Sampler in each Thread Group
will get the full delay applied. If it doesn't, well, then you get no
delays at all.

HTH,
Felix

---------------------------------------------------------------------
To unsubscribe, e-mail: jmeter-user-unsubscribe@jakarta.apache.org
For additional commands, e-mail: jmeter-user-help@jakarta.apache.org


Re: Diff between "Uniform Random Timer" and "Constant Timer"

Posted by Shirish <sh...@gmail.com>.
Hello Felix,

Thanks a lot for your help and time.

By "Idle" state of the server means Server was not responding at all to any
requests. I read the User Manual, just correct me if I misunderstood
anything.
In below screenshot, I placed various Constant timers in between thread
request, it means after executing the thread request JMeter will take a
pause of specified time as mentioned in the timer.



Or do I need to use other timers to make my requests wait for a
particular time, instead of using this many Constant Timers? If yes,
which one should I go with?

Also, Felix, please tell me: as above, when I add a Uniform Random Timer
with 20000 ms to the Test Plan, how is this time going to be distributed
amongst the TGs, let's say 5 TGs?

Thanks a lot for your help.

Thanks and Regards
Shirish G.

On Tue, Aug 9, 2011 at 3:26 PM, Felix Frank <ff...@mpexnet.de> wrote:

> Hi,
>
> On 08/09/2011 11:46 AM, Shirish wrote:
> > When I executed these TGs for 20 mins without any timers, i.e. no
> > "Uniform Random Timer" or "Constant Timer", the server just went into
> > an IDLE state and my error % rose above 60.
>
> Running a test without any timers can easily overload a server that runs
> a complex application and is not suited to replying to more than a couple
> of requests per minute/hour/day...
>
> I don't really comprehend what you mean by "idle" state. Is that in the
> server's log? Is the server still accessible from outside your test?
>
> > Please tell me, what are these timers doing in my Test Plans?
>
> Have you read the pertinent parts of the JMeter User's Manual?
>
> http://jakarta.apache.org/jmeter/usermanual/component_reference.html#timers
>
> If you haven't, please look it up. If any details remain unclear, feel
> free to ask for more specifics here.
>
> > Awaiting your attention and assistance.
>
> Please be aware that this is a group of volunteers. Nobody gets paid for
> their advice and we dispense it as we see fit.
>
> Regards,
> Felix
>
> ---------------------------------------------------------------------
> To unsubscribe, e-mail: jmeter-user-unsubscribe@jakarta.apache.org
> For additional commands, e-mail: jmeter-user-help@jakarta.apache.org
>
>


-- 


Thanks,
Shirish G.

Re: Diff between "Uniform Random Timer" and "Constant Timer"

Posted by Felix Frank <ff...@mpexnet.de>.
Hi,

On 08/09/2011 11:46 AM, Shirish wrote:
> When I executed these TGs for 20 mins without any timers, i.e. no
> "Uniform Random Timer" or "Constant Timer", the server just went into an
> IDLE state and my error % rose above 60.

Running a test without any timers can easily overload a server that runs
a complex application and is not suited to replying to more than a couple
of requests per minute/hour/day...
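
A quick worked example of why: without timers, JMeter runs closed-loop,
so each thread fires its next request the moment the previous response
arrives. The offered load is then roughly threads / average response time;
with your 6 thread groups of one user each and, say, a 200 ms average
response time (an assumed figure for illustration), that is already
6 / 0.2 = 30 requests per second, sustained for the whole run.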

I don't really comprehend what you mean by "idle" state. Is that in the
server's log? Is the server still accessible from outside your test?

> Please tell me, what are these timers doing in my Test Plans?

Have you read the pertinent parts of the JMeter User's Manual?

http://jakarta.apache.org/jmeter/usermanual/component_reference.html#timers

If you haven't, please look it up. If any details remain unclear, feel
free to ask for more specifics here.

> Awaiting your attention and assistance.

Please be aware that this is a group of volunteers. Nobody gets paid for
their advice and we dispense it as we see fit.

Regards,
Felix

---------------------------------------------------------------------
To unsubscribe, e-mail: jmeter-user-unsubscribe@jakarta.apache.org
For additional commands, e-mail: jmeter-user-help@jakarta.apache.org