Posted to user@jmeter.apache.org by William Oberman <ob...@gmail.com> on 2010/07/26 19:40:49 UTC

constant rate testing (again)

I found an old email thread about doing constant rate testing in the
archives.  I wanted to kick the idea up again, but first I'll review
the basic situation, and past advice:
-I want to simulate a constant inbound rate, so that if the server
falls behind, the inbound load keeps coming and crushes the server
-JMeter has a fixed pool size + every thread waits for a response, so
JMeter will in effect "slow down to accommodate" the load (see the
back-of-envelope numbers below)
-The advice from the old thread (my interpretation) was basically: find
a thread pool size large enough to crush the server
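
(Back-of-envelope illustration of that closed-loop effect; the numbers are
made up, not measured:

    throughput ≈ active threads / mean response time
    100 threads, 1 s responses  ->  ~100 req/sec
    100 threads, 4 s responses  ->  ~25 req/sec

so the generator's rate falls exactly as fast as the server slows down.)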

The "next step" problem the user in the old thread appeared to be
having was the inability to scale Jmeter to "crushing load" levels.  I
don't have that problem... I can create thread pools that create
crushing load.  But, I'd vastly prefer to create the real world use
case of constant inbound load.  I was wondering if there is a clever
use of the Constant Throughput Timer (CTT) that will help?

I haven't used the CTT much before, but based on the description, it
seems promising.   My basic idea was going to be:
-Have a thread group with size greater than the "crushing level"
-Use the CTT (or CTTs) to throttle the huge thread pool to keep most
threads idle
-As the server experiences load and slows down, the CTTs will continue
to let previously idle threads run (to keep the rate at a certain
level), hopefully slowly increasing the load to the crushing point

I'm obviously still playing around, trial and error style.  But, I
thought I'd check here in case there is a fundamental flaw I'm missing
(or an easier approach).  Or is all of this complicated setup == a
large thread group + long ramp up period?  I'll try that too...
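
Roughly what I mean by "constant inbound rate", as a plain-Java sketch
(this is not a JMeter feature; the target URL, rate and class names are
made up, and it uses the modern java.net.http client, newer than what I'm
actually running, but the idea carries over): a request is started on a
fixed schedule whether or not earlier ones have come back.

    import java.net.URI;
    import java.net.http.HttpClient;
    import java.net.http.HttpRequest;
    import java.net.http.HttpResponse;
    import java.util.concurrent.Executors;
    import java.util.concurrent.ScheduledExecutorService;
    import java.util.concurrent.TimeUnit;

    // Open-workload sketch: a new request is *started* every intervalMs,
    // regardless of how long earlier responses take, so a slow server just
    // accumulates in-flight requests instead of slowing the generator down.
    public class ConstantRateLoad {
        public static void main(String[] args) {
            String target = "http://localhost:8080/";  // hypothetical target URL
            long intervalMs = 10;                       // 10 ms between starts = ~100 req/sec
            HttpClient client = HttpClient.newHttpClient();
            HttpRequest request = HttpRequest.newBuilder(URI.create(target)).build();
            ScheduledExecutorService ticker = Executors.newSingleThreadScheduledExecutor();
            ticker.scheduleAtFixedRate(
                    () -> client.sendAsync(request, HttpResponse.BodyHandlers.discarding())
                                .thenAccept(r -> System.out.println(r.statusCode())),
                    0, intervalMs, TimeUnit.MILLISECONDS);
        }
    }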

will



Re: constant rate testing (again)

Posted by Felix Frank <ff...@mpexnet.de>.
> Or an easier option might be having two scripts, one normal & one with
> timeouts, and running them both at the same time?

When implementing a timeout-driven approach, using those two groups is a
must, because the timeout group *will* generate a large number of
errors and thus screw up your stats. You want to know how many requests
"really" time out, i.e. with browser-like timeout values. You also want
to know just how slow your servers get, and that is what the second
thread group is there to find out. It will be much smaller (say
15-25 threads) in my upcoming test plan; roughly the layout sketched below.
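
(A hedged sketch of that split; the thread counts and group names are
illustrative, not from my actual plan:

    Test Plan
      Thread Group "crush"   (hundreds of threads)
        HTTP Samplers with short connect/response timeouts
        -- errors expected here; keep them out of the response-time stats
      Thread Group "measure" (15-25 threads)
        the same HTTP Samplers, without timeouts
        -- listeners / aggregate report attached here only)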

Still, whether all of this makes any sense in the end remains to be seen.

Cheers,
Felix



Re: constant rate testing (again)

Posted by William Oberman <ob...@gmail.com>.
I started the timeout angle last night, and it was showing similar
promise (with similar flaws), but it's great to know I'm on a good
path!

One idea I had overnight (the brain doesn't turn off, even when
sleeping I guess): for things that require a full result (to parse the
output):
-Add two new variables, "XYZcount" and "maxXYZ" (if the resource to
be parsed is XYZ)
-Add two ifs: count < max, count >= max
-When less than max, don't put timeouts on XYZ; increase the count after
parsing XYZ completes, and decrease it once all child tasks are done
-When >= max, put a timeout on XYZ

I think by playing with the total number of threads and the number of
threads that do a "full XYZ" interaction (roughly the gate sketched
below), I can generate fairly high request load while simultaneously
generating "real" load.
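
A minimal sketch of that count/max gate in plain Java (the names are
hypothetical, and in JMeter this would live in properties and If
Controllers rather than code):

    import java.util.concurrent.atomic.AtomicInteger;

    // Shared by all load threads: the first maxFull threads to grab a slot
    // run the "full XYZ" interaction (no timeout, parse the response);
    // everyone else runs the cheap, timeout-bounded variant.
    class FullInteractionGate {
        private final AtomicInteger inFlight = new AtomicInteger();
        private final int maxFull;

        FullInteractionGate(int maxFull) { this.maxFull = maxFull; }

        boolean tryAcquire() {
            int current;
            do {
                current = inFlight.get();
                if (current >= maxFull) return false;  // over the cap: use timeouts
            } while (!inFlight.compareAndSet(current, current + 1));
            return true;                               // under the cap: full interaction
        }

        void release() { inFlight.decrementAndGet(); } // call after all child tasks finish
    }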

Or an easier option might be having two scripts, one normal & one with
timeouts, and running them both at the same time?

will

On Tue, Jul 27, 2010 at 2:27 AM, Felix Frank <ff...@mpexnet.de> wrote:
> Hi,
>
> that sounds familiar, and is more or less the point of my original inquiry.
>
> I managed to work around the problem using Connect and Response Timeouts
> for my HTTP Samplers. This is adequate for e.g. stressing a stunnel
> reverse proxy. However, I'm now facing a situation where I want the
> crushing load to navigate the site, and where the timeouts will make it
> impossible to extract good regexes. I guess the test plan will have to
> "flatten out" somewhat, fetching e.g. the homepage without a timeout, then
> repeatedly "clicking" different links found there (I hope that will work
> at all).
>
> In all, I have found the ability to use timeouts in requests for
> generating higher loads a large plus for JMeter. The Apache benchmark
> (ab) will never exceed the server's max req/s, and httperf, while it will
> easily do so, has performance/stability issues of its own. Note that
> timeouts will result in a higher number of requests, but your
> webserver will still not be stomped the way it would be by a large
> number of real clients, since, due to those very timeouts, it won't notice
> many of them.
>
> I was going to see whether JMeter in the cloud will generate more stress,
> but I doubt it, because of the effect you described.
> If anyone has more findings towards that end, they would be much appreciated.
>
> Cheers,
> Felix
>
> On 07/26/2010 10:32 PM, William Oberman wrote:
>> Well, this is weird/irritating.  No matter what I do, I can't create
>> more than a certain amount of load with JMeter.  For example, if I run
>> one server at full throttle, I might get 75 req/sec.  If I run two
>> servers with the same size thread pool, I then get ~37 req/sec.  If I
>> run three servers with the same size thread pool, I get 25 req/sec.
>> And so on.
>>
>> I guess this problem is more complicated than I thought, without JMeter
>> having a specific feature to generate constant inbound load (or to drop
>> connections slower than X seconds, which I think would also work)
>>
>> will
>


Re: constant rate testing (again)

Posted by Felix Frank <ff...@mpexnet.de>.
Hi,

that sounds familiar, and is more or less the point of my original inquiry.

I managed to work around the problem using Connect and Response Timeouts
for my HTTP Samplers. This is adequate for e.g. stressing a stunnel
reverse proxy. However, I'm now facing a situation where I want the
crushing load to navigate the site, and where the timeouts will make it
impossible to extract good regexes. I guess the test plan will have to
"flatten out" somewhat, fetching e.g. the homepage without a timeout, then
repeatedly "clicking" different links found there (I hope that will work
at all).
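
(For anyone unfamiliar with what those sampler timeouts do, the client-side
equivalent in plain Java looks roughly like this; the URL and limits are
made up:

    import java.io.IOException;
    import java.io.InputStream;
    import java.net.HttpURLConnection;
    import java.net.URL;

    public class TimeoutFetch {
        public static void main(String[] args) throws IOException {
            HttpURLConnection conn =
                    (HttpURLConnection) new URL("http://localhost:8080/").openConnection();
            conn.setConnectTimeout(2000);  // give up if the TCP connect takes over 2 s
            conn.setReadTimeout(5000);     // give up if the response stalls for over 5 s
            try (InputStream in = conn.getInputStream()) {
                byte[] buf = new byte[8192];
                while (in.read(buf) != -1) {
                    // drain and discard the body
                }
            } // a SocketTimeoutException here is the "timed out" case to count separately
        }
    }

The thread gives up quickly instead of standing by for minutes.)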

In all, I have found the ability to use timeouts in requests for
generating higher loads a large plus for JMeter. The Apache benchmark
(ab) will never exceed the server's max req/s, and httperf, while it will
easily do so, has performance/stability issues of its own. Note that
timeouts will result in a higher number of requests, but your
webserver will still not be stomped the way it would be by a large
number of real clients, since, due to those very timeouts, it won't notice
many of them.

I was going to see whether JMeter in the cloud will generate more stress,
but I doubt it, because of the effect you described.
If anyone has more findings towards that end, they would be much appreciated.

Cheers,
Felix

On 07/26/2010 10:32 PM, William Oberman wrote:
> Well, this is weird/irritating.  No matter what I do, I can't create
> more than a certain amount of load with JMeter.  For example, if I run
> one server at full throttle, I might get 75 req/sec.  If I run two
> servers with the same size thread pool, I then get ~37 req/sec.  If I
> run three servers with the same size thread pool, I get 25 req/sec.
> And so on.
> 
> I guess this problem is more complicated than I thought, without JMeter
> having a specific feature to generate constant inbound load (or to drop
> connections slower than X seconds, which I think would also work)
> 
> will



Re: constant rate testing (again)

Posted by Felix Frank <ff...@mpexnet.de>.
Unlikely. I think the notion of "JMeter slowing down to accommodate the
load" is quite accurate.

> Is the load from the other machines reaching the servers? It looks like only
> the first machine's load is able to reach the servers.



Re: constant rate testing (again)

Posted by William Oberman <ob...@gmail.com>.
In my case, I act as a web service, and I get a constant rate of
inbound requests no matter how responsive my server is.
As for what I'd expect users to do, I'm with Felix: I think users
click refresh no matter what (as I certainly do when a server gets
unresponsive!  maybe it will work this time) ;-)

will

On Sun, Aug 8, 2010 at 2:21 PM, sebb <se...@gmail.com> wrote:
> On 28 July 2010 08:13, Felix Frank <ff...@mpexnet.de> wrote:
>> Hi Deepak,
>>
>> all of the below is true and quite accurate. The trouble with JMeter is
>> that it is too "patient", and even starting 1000 threads or more won't
>> inject the same level of stress on your server as a couple hundred
>> real-world users would. That's because JMeter will gladly stand by for
>> minutes at a time. Finding your throughput plateau is fine and all, but
>> it would be nice if I could wreck the webserver the same way a swarm of
>> real users will.
>
> What you appear to be saying is that real users will give up waiting
> for a response if it takes too long, and resubmit the request.
> Is this really how users behave? I would have expected them to do
> something else, and try again later.
>
> But if your users really do keep hitting the unresponsive server, then
> by all means use timeouts.
>
> Also consider adding an Assertion to fail any samples that take too long.
>
> If the server does not respond sufficiently quickly under load, then
> that is a problem that needs to be addressed.
>
>> Regards,
>> Felix
>>
>> On 07/27/2010 10:57 PM, Deepak Goel wrote:
>>> Hey
>>>
>>> Namaskara~Nalama~Guten Tag
>>>
>>> Just another thought on this:
>>>
>>> If your load is reaching the servers, it looks like the max load your
>>> server system can handle is that of one JMeter server. When you add more
>>> servers, the throughput will reduce, as the max throughput of the system has
>>> already been reached. After the max throughput has been reached, if you
>>> increase the load, the throughput starts dropping because your server cannot
>>> handle so many concurrent sessions simultaneously, which creates overhead on
>>> the execution of all the requests in the system.
>>>
>>> For any system, you have to know the max throughput it can achieve, beyond
>>> which the response time starts increasing exponentially. The throughput then
>>> reaches a plateau, and if you increase the load further, the throughput
>>> starts decreasing and the system might even crash.
>>>
>>> I guess that's what happens in real-world scenarios too. For example, in
>>> normal shopping periods, the system is able to manage the real user load
>>> with reasonable response times. During festive periods, the system gets too
>>> drained by the incoming requests, and the response time increases
>>> exponentially. This causes throughput to flatten out and sometimes even
>>> causes the system to crash.
>>>
>>> Did you try this option?
>>> *****************************************************
>>> Or is all of this complicated setup == a
>>>> large thread group + long ramp up period?
>>> *****************************************************
>>> Deepak
>>>
>>


Re: constant rate testing (again)

Posted by sebb <se...@gmail.com>.
On 28 July 2010 08:13, Felix Frank <ff...@mpexnet.de> wrote:
> Hi Deepak,
>
> all of the below is true and quite accurate. The trouble with JMeter is
> that it is too "patient", and even starting 1000 threads or more won't
> inject the same level of stress on your server as a couple hundred
> real-world users would. That's because JMeter will gladly stand by for
> minutes at a time. Finding your throughput plateau is fine and all, but
> it would be nice if I could wreck the webserver the same way a swarm of
> real users will.

What you appear to be saying is that real users will give up waiting
for a response if it takes too long, and resubmit the request.
Is this really how users behave? I would have expected them to do
something else, and try again later.

But if your users really do keep hitting the unresponsive server, then
by all means use timeouts.

Also consider adding an Assertion to fail any samples that take too long.

If the server does not respond sufficiently quickly under load, then
that is a problem that needs to be addressed.
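
JMeter also ships a Duration Assertion that does this from the GUI. If you
prefer to script it, something along these lines in a BeanShell Assertion
should work (assuming that element's documented SampleResult, Failure and
FailureMessage bindings; the 5000 ms budget is just an example):

    // BeanShell Assertion body
    long elapsed = SampleResult.getTime();  // elapsed time of the sample, in ms
    if (elapsed > 5000) {
        Failure = true;
        FailureMessage = "Sample took " + elapsed + " ms, over the 5000 ms budget";
    }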

> Regards,
> Felix
>
> On 07/27/2010 10:57 PM, Deepak Goel wrote:
>> Hey
>>
>> Namaskara~Nalama~Guten Tag
>>
>> Just another thought on this:
>>
>> If your load is reaching the servers, it looks like the max load your
>> server system can handle is that of one JMeter server. When you add more
>> servers, the throughput will reduce, as the max throughput of the system has
>> already been reached. After the max throughput has been reached, if you
>> increase the load, the throughput starts dropping because your server cannot
>> handle so many concurrent sessions simultaneously, which creates overhead on
>> the execution of all the requests in the system.
>>
>> For any system, you have to know the max throughput it can achieve, beyond
>> which the response time starts increasing exponentially. The throughput then
>> reaches a plateau, and if you increase the load further, the throughput
>> starts decreasing and the system might even crash.
>>
>> I guess that's what happens in real-world scenarios too. For example, in
>> normal shopping periods, the system is able to manage the real user load
>> with reasonable response times. During festive periods, the system gets too
>> drained by the incoming requests, and the response time increases
>> exponentially. This causes throughput to flatten out and sometimes even
>> causes the system to crash.
>>
>> Did you try this option?
>> *****************************************************
>> Or is all of this complicated setup == a
>>> large thread group + long ramp up period?
>> *****************************************************
>> Deepak
>>
>


Re: constant rate testing (again)

Posted by Deepak Goel <de...@gmail.com>.
Hey Felix

Namaskara~Nalama~Guten Tag

I did try your scenario with Webload (www.webload.org) and it worked fine.
It could crash the webserver. I am unsure why JMeter is waiting. Is your
application's response time increasing so much that, no matter how much you
increase the threads, nothing happens?

This option should work in all probability.

> Or is all of this complicated setup == a
>> large thread group + long ramp up period?

Deepak


On Wed, Jul 28, 2010 at 12:43 PM, Felix Frank <ff...@mpexnet.de> wrote:

> Hi Deepak,
>
> all of the below is true and quite accurate. The trouble with JMeter is
> that it is too "patient", and even starting 1000 threads or more won't
> inject the same level of stress on your server as a couple hundred
> real-world users would. That's because JMeter will gladly stand by for
> minutes at a time. Finding your throughput plateau is fine and all, but
> it would be nice if I could wreck the webserver the same way a swarm of
> real users will.
>
> Regards,
> Felix
>
> On 07/27/2010 10:57 PM, Deepak Goel wrote:
> > Hey
> >
> > Namaskara~Nalama~Guten Tag
> >
> > Just another thought on this:
> >
> > If your load is reaching the servers, it looks like the max load your
> > server system can handle is that of one JMeter server. When you add more
> > servers, the throughput will reduce, as the max throughput of the system has
> > already been reached. After the max throughput has been reached, if you
> > increase the load, the throughput starts dropping because your server cannot
> > handle so many concurrent sessions simultaneously, which creates overhead on
> > the execution of all the requests in the system.
> >
> > For any system, you have to know the max throughput it can achieve, beyond
> > which the response time starts increasing exponentially. The throughput then
> > reaches a plateau, and if you increase the load further, the throughput
> > starts decreasing and the system might even crash.
> >
> > I guess that's what happens in real-world scenarios too. For example, in
> > normal shopping periods, the system is able to manage the real user load
> > with reasonable response times. During festive periods, the system gets too
> > drained by the incoming requests, and the response time increases
> > exponentially. This causes throughput to flatten out and sometimes even
> > causes the system to crash.
> >
> > Did you try this option?
> > *****************************************************
> > Or is all of this complicated setup == a
> >> large thread group + long ramp up period?
> > *****************************************************
> > Deepak
> >
>

Re: constant rate testing (again)

Posted by Felix Frank <ff...@mpexnet.de>.
Hi Deepak,

all of the below is true and quite accurate. The trouble with JMeter is
that it is too "patient", and even starting 1000 threads or more won't
inject the same level of stress on your server as a couple hundred
real-world users would. That's because JMeter will gladly stand by for
minutes at a time. Finding your throughput plateau is fine and all, but
it would be nice if I could wreck the webserver the same way a swarm of
real users will.

Regards,
Felix

On 07/27/2010 10:57 PM, Deepak Goel wrote:
> Hey
> 
> Namaskara~Nalama~Guten Tag
> 
> Just another thought on this:
>
> If your load is reaching the servers, it looks like the max load your
> server system can handle is that of one JMeter server. When you add more
> servers, the throughput will reduce, as the max throughput of the system has
> already been reached. After the max throughput has been reached, if you
> increase the load, the throughput starts dropping because your server cannot
> handle so many concurrent sessions simultaneously, which creates overhead on
> the execution of all the requests in the system.
>
> For any system, you have to know the max throughput it can achieve, beyond
> which the response time starts increasing exponentially. The throughput then
> reaches a plateau, and if you increase the load further, the throughput
> starts decreasing and the system might even crash.
>
> I guess that's what happens in real-world scenarios too. For example, in
> normal shopping periods, the system is able to manage the real user load
> with reasonable response times. During festive periods, the system gets too
> drained by the incoming requests, and the response time increases
> exponentially. This causes throughput to flatten out and sometimes even
> causes the system to crash.
> 
> Did you try this option?
> *****************************************************
> Or is all of this complicated setup == a
>> large thread group + long ramp up period?
> *****************************************************
> Deepak
> 



Re: constant rate testing (again)

Posted by Deepak Goel <de...@gmail.com>.
Hey

Namaskara~Nalama~Guten Tag

Just another thought on this:

If your load is reaching the servers, it looks like the max load your
server system can handle is that of one JMeter server. When you add more
servers, the throughput will reduce, as the max throughput of the system has
already been reached. After the max throughput has been reached, if you
increase the load, the throughput starts dropping because your server cannot
handle so many concurrent sessions simultaneously, which creates overhead on
the execution of all the requests in the system.

For any system, you have to know the max throughput it can achieve, beyond
which the response time starts increasing exponentially. The throughput then
reaches a plateau, and if you increase the load further, the throughput
starts decreasing and the system might even crash.

I guess that's what happens in real-world scenarios too. For example, in
normal shopping periods, the system is able to manage the real user load
with reasonable response times. During festive periods, the system gets too
drained by the incoming requests, and the response time increases
exponentially. This causes throughput to flatten out and sometimes even
causes the system to crash.
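
(A hedged queueing-theory footnote on why that plateau appears, using
William's own numbers; the 2 s figure is just arithmetic, not a measurement:

    concurrency = throughput x response time      (Little's law)
    if the server tops out at ~75 req/sec, then 150 in-flight requests
    push the mean response time to ~2 s; adding more threads only
    stretches the response time further, it cannot raise the 75 req/sec.)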

Did you try this option?
*****************************************************
Or is all of this complicated setup == a
> large thread group + long ramp up period?
*****************************************************
Deepak



On Tue, Jul 27, 2010 at 9:33 AM, Deepak Goel <de...@gmail.com> wrote:

> Hey
>
> Namaskara~Nalama~Guten Tag
>
> Is the load from the other machines reaching the servers? It looks like only
> the first machine's load is able to reach the servers.
>
> Deepak
>
> On Tue, Jul 27, 2010 at 2:02 AM, William Oberman <ob...@gmail.com>wrote:
>
>> Well, this is weird/irritating.  No matter what I do, I can't create
>> more than a certain amount of load with JMeter.  For example, if I run
>> one server at full throttle, I might get 75 req/sec.  If I run two
>> servers with the same size thread pool, I then get ~37 req/sec.  If I
>> run three servers with the same size thread pool, I get 25 req/sec.
>> And so on.
>>
>> I guess this problem is more complicated than I thought, without JMeter
>> having a specific feature to generate constant inbound load (or to drop
>> connections slower than X seconds, which I think would also work)
>>
>> will
>>
>> On Mon, Jul 26, 2010 at 1:40 PM, William Oberman <ob...@gmail.com>
>> wrote:
>> > I found an old email thread about doing constant rate testing in the
>> > archives.  I wanted to kick the idea up again, but first I'll review
>> > the basic situation, and past advice:
>> > -I want to simulate a constant inbound rate, so that if the server
>> > falls behind the inbound load keeps coming and crushes the server
>> > -Jmeter has a fixed pool size + every thread waits for a response, so
>> > jmeter will in effect "slow down to accommodate" the load
>> > -The advice from the old thread (my interpretation) was basically find
>> > a thread pool size large enough to crush the server
>> >
>> > The "next step" problem the user in the old thread appeared to be
>> > having was the inability to scale Jmeter to "crushing load" levels.  I
>> > don't have that problem... I can create thread pools that create
>> > crushing load.  But, I'd vastly prefer to create the real world use
>> > case of constant inbound load.  I was wondering if there is a clever
>> > use of the Constant Throughput Timer (CTT) that will help?
>> >
>> > I haven't used the CTT much before, but based on the description, it
>> > seems promising.   My basic idea was going to be:
>> > -Have a thread group with size greater than the "crushing level"
>> > -Use the CTT (or CTTs) to throttle the huge thread pool to keep most
>> > threads idle
>> > -As the server experiences load and slows down, the CTTs will continue
>> > to let previously idle threads run (to keep the rate at a certain
>> > level), hopefully slowly increasing the load to the crushing point
>> >
>> > I'm obviously still playing around, trial and error style.  But, I
>> > thought I'd check here in case there is a fundamental flaw I'm missing
>> > (or an easier approach).  Or is all of this complicated setup == a
>> > large thread group + long ramp up period?  I'll try that too...
>> >
>> > will
>> >
>>
>

Re: constant rate testing (again)

Posted by Deepak Goel <de...@gmail.com>.
Hey

Namaskara~Nalama~Guten Tag

Is the load from the other machines reaching the servers? It looks like only
the first machine's load is able to reach the servers.

Deepak


On Tue, Jul 27, 2010 at 2:02 AM, William Oberman <ob...@gmail.com> wrote:

> Well, this is weird/irritating.  No matter what I do, I can't create
> more than a certain amount of load with JMeter.  For example, if I run
> one server at full throttle, I might get 75 req/sec.  If I run two
> servers with the same size thread pool, I then get ~37 req/sec.  If I
> run three servers with the same size thread pool, I get 25 req/sec.
> And so on.
>
> I guess this problem is more complicated than I thought, without JMeter
> having a specific feature to generate constant inbound load (or to drop
> connections slower than X seconds, which I think would also work)
>
> will
>
> On Mon, Jul 26, 2010 at 1:40 PM, William Oberman <ob...@gmail.com>
> wrote:
> > I found an old email thread about doing constant rate testing in the
> > archives.  I wanted to kick the idea up again, but first I'll review
> > the basic situation, and past advice:
> > -I want to simulate a constant inbound rate, so that if the server
> > falls behind the inbound load keeps coming and crushes the server
> > -Jmeter has a fixed pool size + every thread waits for a response, so
> > jmeter will in effect "slow down to accommodate" the load
> > -The advice from the old thread (my interpretation) was basically find
> > a thread pool size large enough to crush the server
> >
> > The "next step" problem the user in the old thread appeared to be
> > having was the inability to scale Jmeter to "crushing load" levels.  I
> > don't have that problem... I can create thread pools that create
> > crushing load.  But, I'd vastly prefer to create the real world use
> > case of constant inbound load.  I was wondering if there is a clever
> > use of the Constant Throughput Timer (CTT) that will help?
> >
> > I haven't used the CTT much before, but based on the description, it
> > seems promising.   My basic idea was going to be:
> > -Have a thread group with size greater than the "crushing level"
> > -Use the CTT (or CTTs) to throttle the huge thread pool to keep most
> > threads idle
> > -As the server experiences load and slows down, the CTTs will continue
> > to let previously idle threads run (to keep the rate at a certain
> > level), hopefully slowly increasing the load to the crushing point
> >
> > I'm obviously still playing around, trial and error style.  But, I
> > thought I'd check here in case there is a fundamental flaw I'm missing
> > (or an easier approach).  Or is all of this complicated setup == a
> > large thread group + long ramp up period?  I'll try that too...
> >
> > will
> >
>

Re: constant rate testing (again)

Posted by William Oberman <ob...@gmail.com>.
Well, this is weird/irritating.  No matter what I do, I can't create
more than a certain amount of load with JMeter.  For example, if I run
one server at full throttle, I might get 75 req/sec.  If I run two
servers with the same size thread pool, I then get ~37 req/sec.  If I
run three servers with the same size thread pool, I get 25 req/sec.
And so on.

I guess this problem is more complicated than I thought, without JMeter
having a specific feature to generate constant inbound load (or to drop
connections slower than X seconds, which I think would also work)

will

On Mon, Jul 26, 2010 at 1:40 PM, William Oberman <ob...@gmail.com> wrote:
> I found an old email thread about doing constant rate testing in the
> archives.  I wanted to kick the idea up again, but first I'll review
> the basic situation, and past advice:
> -I want to simulate a constant inbound rate, so that if the server
> falls behind the inbound load keeps coming and crushes the server
> -Jmeter has a fixed pool size + every thread waits for a response, so
> jmeter will in effect "slow down to accommodate" the load
> -The advice from the old thread (my interpretation) was basically find
> a thread pool size large enough to crush the server
>
> The "next step" problem the user in the old thread appeared to be
> having was the inability to scale Jmeter to "crushing load" levels.  I
> don't have that problem... I can create thread pools that create
> crushing load.  But, I'd vastly prefer to create the real world use
> case of constant inbound load.  I was wondering if there is a clever
> use of the Constant Throughput Timer (CTT) that will help?
>
> I haven't used the CTT much before, but based on the description, it
> seems promising.   My basic idea was going to be:
> -Have a thread group with size greater than the "crushing level"
> -Use the CTT (or CTTs) to throttle the huge thread pool to keep most
> threads idle
> -As the server experiences load and slows down, the CTTs will continue
> to let previously idle threads run (to keep the rate at a certain
> level), hopefully slowly increasing the load to the crushing point
>
> I'm obviously still playing around, trial and error style.  But, I
> thought I'd check here in case there is a fundamental flaw I'm missing
> (or an easier approach).  Or is all of this complicated setup == a
> large thread group + long ramp up period?  I'll try that too...
>
> will
>
