Posted to user@jmeter.apache.org by Iago Toral Quiroga <it...@igalia.com> on 2005/12/09 10:59:22 UTC

Constant Throughput Timer performance

Hi,

I've configured a test with 100 thread groups (one thread per thread
group) and added a Constant Throughput Timer to get a throughput of 10
requests per second. To do so, I set the target throughput to 600
(samples per minute) and selected the option to calculate throughput
based on all active threads.

The average result is as expected, 10 requests per second, but the
requests are not uniform over time. What I get is something like this:

At second 0, JMeter launches 100 requests to the server. By second 4,
JMeter has received all the responses, but because it launched 100
requests at second 0, it must wait until second 10 to start another
batch of 100 requests. What I expect from this kind of test is to get
10 requests per second *each second*.

This behaviour is much more like a repeated peak test than a constant
throughput test. I know I can get a more uniform test by dropping the
thread count, so that JMeter would have to wait less time before
launching the next batch of requests, but that is a workaround that
does not address the underlying problem. Am I missing something? Is
there a way to get more uniform behaviour for this kind of test?
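
To illustrate the effect, here is a rough Python simulation (an
assumption about the scheduling, not JMeter's actual code): each thread
is throttled to one sample every n_threads / target seconds, so whether
traffic is bursty or uniform depends entirely on the threads' start
offsets.

```python
from collections import Counter

def send_times(n_threads, target_rps, duration, stagger=0.0):
    """Timestamps of all requests; each thread fires once per
    n_threads / target_rps seconds, starting at its own offset."""
    interval = n_threads / target_rps          # 100 threads / 10 rps = 10 s
    times = []
    for t in range(n_threads):
        ts = t * stagger                       # 0.0 => all threads start at once
        while ts < duration:
            times.append(ts)
            ts += interval
    return times

def per_second(times):
    return Counter(int(ts) for ts in times)

burst = per_second(send_times(100, 10, 30))               # all start at t=0
even = per_second(send_times(100, 10, 30, stagger=0.1))   # starts 0.1 s apart

print(burst[0], burst[1])   # 100 requests in second 0, none in second 1
print(even[0], even[1])     # 10 requests in each of the first two seconds
```

The average is 10 requests per second in both cases; only the staggered
start spreads them out.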

Thanks in advance for your help!
-- 
Abel Iago Toral Quiroga	
Igalia http://www.igalia.com

---------------------------------------------------------------------
To unsubscribe, e-mail: jmeter-user-unsubscribe@jakarta.apache.org
For additional commands, e-mail: jmeter-user-help@jakarta.apache.org


Re: Constant Throughput Timer performance

Posted by sebb <se...@gmail.com>.
On 09/12/05, Iago Toral Quiroga <it...@igalia.com> wrote:
> Thanks for your comment sebb,
>
> If I have more than one thread in each thread group, my problem is
> ensuring that each thread launches a different request, because every
> thread sends the same sequence of requests under the thread group.
> I've tried using an Interleave Controller, but it distributes the
> requests within each thread, not across all the threads in the thread
> group :(

See my reply to the other thread.

Let's close this one now.

> Iago.
>



Re: Constant Throughput Timer performance

Posted by sebb <se...@gmail.com>.
On 09/12/05, Peter Lin <wo...@gmail.com> wrote:
> Thanks for explaining; that makes sense now. Given that the application
> is caching, having different requests is crucial for a valid
> measurement. Chances are you'll need to use at least 4 clients and
> split the test plan into 4 smaller test plans. That way, the threads
> are more likely to start with a shorter delay between them.
>
> In the past, when I've had to test applications with a cache, we made
> it possible to turn the cache off. That way, we could test the impact
> of concurrent queries versus the web server's ability to handle 100
> concurrent requests. If your application doesn't have that capability,
> it's going to be really hard to test the impact of a traffic spike
> effectively.

Unless you can add some variability to the URL to ensure that the
cache does not contain the request.
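
The idea is that a cache keyed on the full URL never hits if every
request carries a unique query parameter. A minimal sketch of the
principle in Python (the `nocache` parameter name is just an example,
not anything JMeter-specific):

```python
from itertools import count

cache = set()
seq = count()

def fetch(url):
    """Toy origin cache: a URL seen before is a HIT, otherwise a MISS."""
    if url in cache:
        return "HIT"
    cache.add(url)
    return "MISS"

base = "http://example.com/map?layer=roads"

static = [fetch(base) for _ in range(3)]                        # same URL
unique = [fetch(f"{base}&nocache={next(seq)}") for _ in range(3)]

print(static)   # ['MISS', 'HIT', 'HIT']
print(unique)   # ['MISS', 'MISS', 'MISS']
```

In JMeter itself, a counter or thread-number variable appended to the
query string should achieve the same effect, assuming the server ignores
the extra parameter.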

> peter



Re: Constant Throughput Timer performance

Posted by Peter Lin <wo...@gmail.com>.
Thanks for explaining; that makes sense now. Given that the application
is caching, having different requests is crucial for a valid
measurement. Chances are you'll need to use at least 4 clients and
split the test plan into 4 smaller test plans. That way, the threads
are more likely to start with a shorter delay between them.

In the past, when I've had to test applications with a cache, we made
it possible to turn the cache off. That way, we could test the impact
of concurrent queries versus the web server's ability to handle 100
concurrent requests. If your application doesn't have that capability,
it's going to be really hard to test the impact of a traffic spike
effectively.

peter



Re: Constant Throughput Timer performance

Posted by Iago Toral Quiroga <it...@igalia.com>.
On Fri, 2005-12-09 at 18:49, Peter Lin wrote:
> Honestly, I don't understand why the first request needs to be
> different for all threads. If the point is to measure an application's
> ability to handle a sudden spike, it's better to pick a very heavy
> page, set up one thread group with 100 threads, and hit it.

The web server is serving a GIS application that has a caching system,
so I need all the requests to be different in order to avoid cached
responses.

> Using different thread groups just means you have to ramp up for a
> longer period. I can't stress enough how hard it is to really get 100
> concurrent requests. From my experience, what matters more is that the
> system is able to handle a sudden spike gracefully without bringing
> down the website, and return to normal operation once the spike has
> passed. 100 concurrent requests for an average-size web page of 10 KB
> means that, in an ideal situation, one would need a full 100 Mbit of
> bandwidth. On a 10 Mbit connection it's never going to reach that;
> it's physically impossible.
>
> Unless the hosting facility has a dedicated OC12, it won't be able to
> handle 100 concurrent requests. For some perspective, 40 concurrent
> requests for 18 hours a day translates to 10 million page views. I
> know this from first-hand experience working at superpages.com. 98% of
> the sites out there don't get anywhere near this kind of traffic.
>

I'm not talking about being able to serve 100 requests in one second to
the clients. What I want to know is what happens at the server when 100
requests arrive simultaneously. Surely I would need huge bandwidth to
respond to all those requests, but not to receive the requests
themselves, which is the point. An HTTP request is very small, let's
say 500 bytes, so you don't need much bandwidth to receive them.

So, if the server can receive 100 simultaneous requests, what will
happen to it? Will it crash? Will it refuse connections? Will it be
able to keep working, and at what performance? That is what I want to
know.
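
A quick back-of-envelope check of that asymmetry, using the sizes
quoted in this thread (500 bytes per request, 10 KB per response; both
are rough figures, not measurements):

```python
REQUEST_BYTES = 500          # rough size of one HTTP GET, per the message
RESPONSE_BYTES = 10 * 1024   # 10 KB average page, per Peter's figure
N = 100                      # simultaneous requests

# Bits that must cross the wire in each direction for one burst.
inbound_mbit = N * REQUEST_BYTES * 8 / 1_000_000     # 0.40 Mbit
outbound_mbit = N * RESPONSE_BYTES * 8 / 1_000_000   # ~8.19 Mbit

print(f"inbound:  {inbound_mbit:.2f} Mbit")
print(f"outbound: {outbound_mbit:.2f} Mbit")
```

So receiving the burst itself costs only a fraction of a megabit; the
bandwidth pressure is almost entirely on the responses.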

-- 
Abel Iago Toral Quiroga	
Igalia http://www.igalia.com



Re: Constant Throughput Timer performance

Posted by Peter Lin <wo...@gmail.com>.
Honestly, I don't understand why the first request needs to be
different for all threads. If the point is to measure an application's
ability to handle a sudden spike, it's better to pick a very heavy
page, set up one thread group with 100 threads, and hit it.

Using different thread groups just means you have to ramp up for a
longer period. I can't stress enough how hard it is to really get 100
concurrent requests. From my experience, what matters more is that the
system is able to handle a sudden spike gracefully without bringing
down the website, and return to normal operation once the spike has
passed. 100 concurrent requests for an average-size web page of 10 KB
means that, in an ideal situation, one would need a full 100 Mbit of
bandwidth. On a 10 Mbit connection it's never going to reach that; it's
physically impossible.

Unless the hosting facility has a dedicated OC12, it won't be able to
handle 100 concurrent requests. For some perspective, 40 concurrent
requests for 18 hours a day translates to 10 million page views. I know
this from first-hand experience working at superpages.com. 98% of the
sites out there don't get anywhere near this kind of traffic.

peter



Re: Constant Throughput Timer performance

Posted by Iago Toral Quiroga <it...@igalia.com>.
Thanks for your comment sebb,

If I have more than one thread in each thread group, my problem is
ensuring that each thread launches a different request, because every
thread sends the same sequence of requests under the thread group. I've
tried using an Interleave Controller, but it distributes the requests
within each thread, not across all the threads in the thread group :(
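
One way around this (a general pattern, not a specific JMeter feature)
is to draw each request from a single shared, thread-safe sequence, so
that every thread, regardless of its group, gets a different URL:

```python
import itertools
import threading

counter = itertools.count()          # shared across all threads
lock = threading.Lock()
urls = []

def next_url():
    """Each call returns a distinct URL, whichever thread asks."""
    with lock:
        n = next(counter)
    return f"http://example.com/map?tile={n}"

def worker():
    urls.append(next_url())

threads = [threading.Thread(target=worker) for _ in range(10)]
for t in threads:
    t.start()
for t in threads:
    t.join()

print(len(set(urls)))   # 10 - every thread issued a different request
```

In JMeter, a shared counter element or a data file read by all threads
can play the same role.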

Iago.

-- 
Abel Iago Toral Quiroga	
Igalia http://www.igalia.com



Re: Constant Throughput Timer performance

Posted by sebb <se...@gmail.com>.
I suspect part of the problem is that all the threads start at once,
and having 100 thread groups with only one thread in each makes it
tedious to fix: you'd need to add a gradually increasing delay to each
of the thread groups.

What happens if you have fewer thread groups and more threads in each
group? You can set the ramp-up for each thread group to ensure that the
threads start more evenly.
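
The effect of ramp-up can be sketched numerically: with T threads and a
ramp-up of R seconds, JMeter starts roughly one thread every R / T
seconds, so the first wave of requests is staggered instead of arriving
all at once (a simplified model; exact timing also depends on response
times):

```python
def start_offsets(threads, ramp_up):
    """Approximate start time of each thread under a ramp-up period."""
    step = ramp_up / threads
    return [round(i * step, 3) for i in range(threads)]

offsets = start_offsets(threads=100, ramp_up=10)   # 10 s ramp-up
print(offsets[:5])    # [0.0, 0.1, 0.2, 0.3, 0.4]
print(offsets[-1])    # 9.9 - the last thread starts just before 10 s
```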

S.
