Posted to user@jmeter.apache.org by Kirk Pepperdine <ki...@gmail.com> on 2013/09/27 21:06:51 UTC

Re: Coordinated Omission

Hi,

We've had some further discussions about CO and it seems that there are a number of other sources for CO that go beyond what JMeter already contributes to the problem. This means that even threads operating at low levels of activity can suffer from CO. One thing that could make things better is a completely asynchronous testing regime. However, to implement this would require using JMeter in even more twisted ways than I already use, or a fix in JMeter that is even more drastic than the one I originally proposed. In other words, unlikely to happen.

Some of the discussions we had at JavaOne on how to measure focused on how to correct for CO. If we can accept that CO is a normal part of any bench, then a more pragmatic approach would be to correct for it. One of the fixes proposed is to start the latency timing for a request from when it should have been triggered. That would give a better picture of how your app is performing than reporting the realized latency of a transaction. To do this one would have to recalculate the expected trigger times, and I'm currently not sure how to do that. It needs a bit more thought.
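
To make that concrete, here is a rough sketch of the idea in plain Java (not JMeter code; the pacing interval and all names are invented for illustration). The key point is the end of the loop: the next intended start is advanced by the interval rather than reset to "now", so a stall in either the harness or the SUT shows up in the corrected number instead of silently shifting the schedule.

    import java.util.concurrent.locks.LockSupport;

    // Sketch only -- not JMeter code. Measures each request from when it
    // *should* have been triggered, not from when the harness got around to it.
    public class CorrectedLatencySketch {

        static final long INTERVAL_NANOS = 100_000_000L; // intended pace: 1 request / 100 ms (assumed)

        public static void main(String[] args) {
            long intendedStart = System.nanoTime();
            for (int i = 0; i < 1000; i++) {
                long actualStart = System.nanoTime();        // later than intendedStart if the harness stalled
                doSample();                                  // stand-in for the sampler / request
                long end = System.nanoTime();

                long serviceTime      = end - actualStart;   // what is normally reported
                long correctedLatency = end - intendedStart; // also charges the time spent waiting to be sent
                System.out.printf("service=%dus corrected=%dus%n",
                        serviceTime / 1000, correctedLatency / 1000);

                intendedStart += INTERVAL_NANOS;             // next request is due on the original schedule
                long sleep = intendedStart - System.nanoTime();
                if (sleep > 0) LockSupport.parkNanos(sleep);
            }
        }

        static void doSample() {
            LockSupport.parkNanos(5_000_000L); // pretend the SUT took ~5 ms
        }
    }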

I think as a start it would be interesting to experiment with a new listener that was more aware of the contents of a test plan.

Regards,
Kirk


On 2013-08-06, at 5:12 AM, sebb <se...@gmail.com> wrote:

> On 6 August 2013 06:42, Kirk Pepperdine <ki...@gmail.com> wrote:
>> Hi Sebb,
>> 
>> On 2013-08-06, at 2:54 AM, sebb <se...@gmail.com> wrote:
>> 
>>> On 3 August 2013 21:45, Kirk Pepperdine <ki...@gmail.com> wrote:
>>>> Hi all,
>>>> 
>>>> To all who may be interested: Gil Tene (from Azul Systems) has started a thread on Coordinated Omission. It's his name for a problem that I noted in JMeter a while back and for which the current fix is the start-threads-on-demand check box. The problems Gil describes go much deeper than the problems that are due to JMeter's current threading model and its lack of an event-based scheduler (which would provide at least a 10x boost in scalability and make it less likely that JMeter works to hide bottlenecks in your application). The effect of CO is that the test harness becomes like a ref making a bad call at a critical point in the game.
>>> 
>>> This is dependent on the test load; not all tests are affected by the problem.
>> 
>> I would agree that not all load tests are affected by the problem, though IME most are.
>>> 
>>>> Anyways, I shan't repeat the contents of the thread here. You can find it at mechanical-sympathy@googlegroups.com, a mailing list started by Martin Thompson, well known for his work on the Disruptor framework.
>>> 
>>> Interesting thread.
>>> As it points out this affects many testing systems; JMeter is not
>>> alone in being affected.
>> 
>> Absolutely true; I never intended to suggest that it was. Of the load testing tools out there, JMeter continues to rank very highly because of its reachability. I can teach a group of people how to execute rich load tests using JMeter in about an hour. This is not true of any of the other tooling that is out there.
>>> 
>>> As I understand it, the problem occurs when one or more samples are
>>> delayed because the previous sample did not complete in time.
>>> Assuming that the slowdown is caused by the system under test (SUT),
>>> the samples that cannot be sent are also likely to have taken longer
>>> than usual.
>>> So one overlong sample response can hide several others that would
>>> have occurred, thus affecting the statistics, particularly the high
>>> percentile stats.
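>>> 
>>> (A made-up illustration of the arithmetic: at an intended pace of 1 sample per second, a single 10-second response means the 9 samples that should have been issued during the stall are simply never sent. The percentiles are then computed over 1 slow result instead of 10 affected ones, so the 99th percentile can look far better than what a user arriving mid-stall would actually have seen.)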
>> 
>> I think the premise of the discussion is that something happens in the harness that prevents it from firing when it should.
> 
> Yes.
> 
>> With JMeter, a long response time from the app would also contribute to this problem. A change in the threading model would help prevent it and would make JMeter far more scalable. Currently a thread picks up a script (ThreadGroup) and executes it. The problem is that the script is far too coarse-grained a unit of work to give to a thread. That problem becomes magnified when you loop over the script.
> 
> Only if the thread is held up by a long response.
> 
>> This creates the 1 thread == 1 user dependency that I've mentioned in the past. It is this dependency that, IMHO, limits the scalability of JMeter. To break it one would need to offer the thread a much finer-grained task than an entire script. So, instead of getting a script, what if a thread only got a sampler? What if the samplers were held in an event heap sorted by the time at which each sampler was to be triggered? Threads would then pick up an event, fire it, and then calculate and inject into the heap the next event as directed by the script. This way, instead of being tied down, a thread could be made available to trigger the next sample. This would allow the test to maintain a much more consistent load on the server by making it less likely that the test is throttled by side effects in the load injector.
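>> 
>> A rough sketch of the shape I have in mind, in plain Java (invented names, not a patch against JMeter; the per-user pacing is hard-wired just to illustrate):
>> 
>>     import java.util.concurrent.*;
>> 
>>     // Sketch only: samplers become timed events in a shared scheduler, so a small
>>     // worker pool can drive many virtual users instead of one thread per user.
>>     public class EventHeapSketch {
>> 
>>         public static void main(String[] args) throws InterruptedException {
>>             // The "event heap": a scheduler ordered by each event's due time.
>>             ScheduledExecutorService heap = Executors.newScheduledThreadPool(4);
>> 
>>             // Each virtual user schedules its first sampler; when a sampler finishes,
>>             // it injects the next event at the time the script says it is due.
>>             long now = System.nanoTime();
>>             for (int user = 0; user < 1000; user++) {
>>                 scheduleNext(heap, user, 0, now);
>>             }
>> 
>>             Thread.sleep(60_000);   // let the test run for a while
>>             heap.shutdownNow();
>>         }
>> 
>>         static void scheduleNext(ScheduledExecutorService heap, int user, int step, long dueNanos) {
>>             long delay = Math.max(0L, dueNanos - System.nanoTime());
>>             heap.schedule(() -> {
>>                 runSampler(user, step);                      // fire the sampler for this user
>>                 long nextDue = dueNanos + 1_000_000_000L;    // script-defined pacing (1 s here, assumed)
>>                 scheduleNext(heap, user, step + 1, nextDue); // hand the next event back to the heap
>>             }, delay, TimeUnit.NANOSECONDS);
>>         }
>> 
>>         static void runSampler(int user, int step) {
>>             // Stand-in for an HTTP request or whatever the sampler does.
>>         }
>>     }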
> 
> Perhaps, but as already discussed previously, this would mean
> rewriting large parts of JMeter.
> Third party plugins would probably also be affected.
> 
>>> 
>>> To avoid this, we need to ensure that long sample responses do not
>>> prevent the generation of the next sample at the correct time.
>>> 
>>> There are a couple of ways to mitigate this with the current JMeter
>>> design (which waits for a sample to complete before continuing with
>>> the next).
>>> 
>>> 1) Ensure that each thread only needs to send requests at a relatively
>>> low rate, so that slow responses do not use up all the wait time.
>>> This may not be possible with a single JMeter instance, in which case
>>> use one instance to create the bulk of the load (ignoring the issue of
>>> delayed requests), and a second to measure the sample times. This
>>> second instance should have a low transaction rate per thread so slow
>>> responses don't affect the generated load, and should be used to
>>> derive the statistics.
>> 
>> CO can occur under low loads as well as high ones.
> 
> AIUI, it can only occur if the response time is sufficiently long to
> delay the next sample.
> The second thread would have to have plenty of slack wait-time to
> allow for this.
> This implies a low transaction rate per thread.
> 
> E.g. if the maximum response time is 60 seconds, each thread would
> have to issue samples at less than 1 per minute.
> However there can be multiple such threads to increase the measurement
> rate in the second JMeter instance.
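> 
> (To put rough numbers on it: each measuring thread's pacing interval has
> to exceed the worst-case response time, so the number of threads needed
> is about target rate x worst-case response time. For 10 measured samples
> per second against a 60-second worst case that is 10 x 60 = 600 threads
> in the measuring instance.)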
> 
>>> 
>>> 2) Use timeouts to abandon slow responses so that they cannot cause a
>>> slow down (this might not always be suitable).
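>>> 
>>> (In the HTTP Request sampler these are the Connect and Response timeout
>>> fields, set in milliseconds; a response timeout just above the maximum
>>> acceptable response time stops a stuck request from pinning its thread
>>> indefinitely.)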
>> 
>> CO is a reporting error. By omitting the data point you will have only further contributed to CO.
> 
> Yes.
> 
> But if you choose a timeout that corresponds to the maximum allowed
> response time, this might not matter.
> For HTML tests, it's quite likely that users will abandon the request
> if it exceeds a certain threshold.
> 
>>> 
>>> It should be fairly obvious from the test timings (particularly the
>>> transaction rate) whether this has happened.
>> 
>> IME it's not been obvious that this has been happening unless you are inclined to dig deeper into what is happening with your load test. Many of the effects are subtle and often can only be seen when you use a histogram or some other visualization. Using an average will only bury this effect.
> 
> If the required transaction rate is not generated, then obviously
> something has gone wrong which requires further investigation.
> 
>>> 
>>> It should still be possible to draw some useful conclusions about the
>>> behaviour of the SUT, e.g. what load starts to cause response
>>> slowdown; does the SUT start misbehaving in other ways (reporting
>>> errors etc).
>> 
>> I'd agree that you can still draw useful conclusions from some very flawed benchmarks, but the real problem is: how do you know when you can use the results from a bench and when you shouldn't?
> 
> Maybe there are some simple measures that JMeter could implement that
> would help show when the problem has occurred.
> For example, the Constant Throughput Timer could perhaps report if it
> was unable to maintain the required rate?
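> 
> (One possible shape, purely as illustration: the timer already knows when
> each sample is due, so it could record the slip -- max(0, actual start
> minus due time) -- and warn in the log whenever that slip exceeds some
> threshold; a run that failed to hold its intended rate would then
> announce itself rather than hiding in the summary statistics.)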
> 
>> The question has many subtle implications and I've seen teams get it wrong; in a few cases it almost resulted in the project being terminated.
>> 
>> Anyways, ever since I changed all of my testing to never loop, I've been finding it much easier to reliably load an application using JMeter.
>> 
>> Regards,
>> Kirk
>> 


---------------------------------------------------------------------
To unsubscribe, e-mail: user-unsubscribe@jmeter.apache.org
For additional commands, e-mail: user-help@jmeter.apache.org