Posted to user@couchdb.apache.org by Joran Greef <jo...@sexbyfood.com> on 2009/05/18 14:22:49 UTC
Concurrent Requests From Multiple Clients For The Same Resource
Hi everyone,
I opened up several Rhino shells this morning and ran the following
code from each of them at the same time:
var test = function () {
    for (var index = 0; index < 40; index++) {
        var start = new Date().getTime(), options = {output: "", err: ""};
        runCommand("curl", "http://127.0.0.1:5984/tables/_all_docs?include_docs=true", options);
        print((new Date().getTime() - start) + "ms");
    }
};
test();
It causes each Rhino shell to make 40 requests to Couch and for each
displays the time taken to complete the request. I gave the first
shell a head start, then started test() in another and so on, until 4
shells were making requests concurrently.
The first couple of requests in the first shell took 500ms on average to
retrieve 527 KB. But I was surprised to see that as each of the other
shells kicked in and started making requests, the average response
time grew accordingly, from +/-500ms to +/-1000ms to +/-1500ms to
+/-2000ms across all 4 shells, as if Couch were queueing the
concurrent requests and handling them serially.
Couch stats reported an average of 400ms response time (excluding
Mochiweb) for the duration of the test. Could it be that while Couch
can handle concurrent requests in parallel, Mochiweb cannot and blocks?
Thanks,
Joran Greef
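The latency pattern described above is what a simple processor-sharing model predicts for a CPU-bound server: each of N concurrent requests gets roughly 1/N of the CPU, so per-request latency grows about linearly with N. A minimal sketch, using the thread's observed 500ms single-client time (the fair-sharing assumption is mine, not something the thread measures):

```javascript
// Sketch: under processor sharing, N concurrent CPU-bound requests each
// receive 1/N of the CPU, so each one takes roughly N times the solo time.
var soloLatencyMs = 500; // observed single-client response time from the thread
var expected = [];
for (var clients = 1; clients <= 4; clients++) {
    expected.push(clients + " client(s): ~" + (soloLatencyMs * clients) + "ms per request");
}
expected.forEach(function (line) { console.log(line); });
```

This reproduces the +/-500/1000/1500/2000ms progression without any queueing: the requests really are handled in parallel, they just contend for the same CPU.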
Re: Concurrent Requests From Multiple Clients For The Same Resource
Posted by Joran Greef <jo...@sexbyfood.com>.
Hi Adam,
Great, thanks for your quick reply.
I'm running 0.9.0, so that would explain the response times. And it's
good news to hear about the upcoming JSON encoding improvements. Thanks
for the link to Paul's message; I would help out, but I need to get more
familiar with SpiderMonkey first, since I've been using Rhino and only
recently started with that.
Looking forward to 0.9.1.
Thanks, Joran
Re: Concurrent Requests From Multiple Clients For The Same Resource
Posted by Adam Kocoloski <ko...@apache.org>.
Hi Joran, can I ask what version of CouchDB you are running? There's
a bug in 0.9.0 that causes it to report incorrect (too low) response
times with concurrent requests. The bug is fixed in trunk and will
also be fixed in the 0.9.1 release.
When I do this test on trunk I get CouchDB reporting mean response
times in the 1500ms range, in agreement with what you see in Rhino.
Now, as for why CouchDB slows down so much: the request you're
making in this test requires a good bit of JSON marshaling. The BEAM
process on my laptop was using a steady ~140% of the CPU while
handling those 4 simultaneous connections, and it would've taken more
if the clients weren't each grabbing ~10%.
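A rough illustration of why this workload is CPU-bound (synthetic data, not CouchDB's actual encoder): an _all_docs?include_docs=true response is built by JSON-encoding every document in the result set, which is pure computation, so concurrent requests contend for cores rather than waiting on I/O.

```javascript
// Build a synthetic result set shaped loosely like an _all_docs response
// (document count and sizes are made up for illustration).
var docs = [];
for (var i = 0; i < 5000; i++) {
    docs.push({ _id: "doc-" + i, _rev: "1-abc", doc: { value: new Array(101).join("x") } });
}

// Time the JSON encoding step alone: no network, no disk, just CPU.
var start = new Date().getTime();
var body = JSON.stringify({ total_rows: docs.length, rows: docs });
console.log("encoded " + body.length + " bytes in " + (new Date().getTime() - start) + "ms");
```

Running several of these encodings at once would show the same contention: total throughput is capped by available cores, so each request's latency stretches.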
There's definitely some work coming down the pipe to improve JSON
encoding efficiency. In fact, if you feel like getting involved you
could test out Paul Davis' new experimental work on this front:
http://mail-archives.apache.org/mod_mbox/couchdb-dev/200905.mbox/%3ce2111bbb0905171801o249f8b99w4e6f92ae2d18ad54@mail.gmail.com%3e
Cheers, Adam