Posted to dev@couchdb.apache.org by Paul Davis <pa...@gmail.com> on 2011/08/09 08:43:56 UTC

Futon Test Suite

I've been running the Futon test suite quite a bit lately and I've
noticed that they're starting to take quite a bit longer again.

The entire test suite takes about 4 minutes to run on a semi-recent
MBA. Most of the tests are fairly speedy, but four of them stick out
quite drastically:

delayed_commits: 15s
design_docs: 25s
replication: 90s
replicator_db: 60s

I haven't dug into these too much yet. The only thing I've noticed is
that replication and replicator_db seem to spend a lot of their time
polling various URLs waiting for task completion. Perhaps I'm just
being impatient, but it seems that each poll lasts an unnecessarily
long time for a unit test (3-5s), so I was wondering if we were
hitting a timeout or something.
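
Roughly, the pattern in question looks like this (a sketch of the shape
of those wait loops, not actual test code; the helper name is made up):

  // Spin on a synchronous request until a condition holds or a
  // deadline passes. The 3-5s I'm seeing per poll suggests the check
  // itself (or the request behind it) is slower than it needs to be.
  function waitFor(check, timeoutMs) {
    var t0 = new Date();
    while (true) {
      if (check()) return true;                         // condition satisfied
      if ((new Date()) - t0 > timeoutMs) return false;  // give up at deadline
      // couch.js tests are synchronous, so "sleeping" is a busy-wait
      var s0 = new Date();
      while ((new Date()) - s0 < 100) { /* ~100ms between polls */ }
    }
  }

  // e.g. wait up to 3s for the target db to catch up to the source:
  // waitFor(function() {
  //   return targetDb.info().update_seq >= sourceDb.info().update_seq;
  // }, 3000);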

If anyone wants to dig into the test suite and see if they can't speed
these up, or even just split them apart so we know what's taking a
while, that'd be super awesome.

Also, I've been thinking more and more about beefing up the JavaScript
test suite runner and moving more of our browser tests over to
dedicated code in those tests. If anyone's interested in hacking on
some C and JavaScript against an HTTP API, let me know.

Re: Futon Test Suite

Posted by Robert Newson <rn...@apache.org>.
Well, there are some cases where we need to work around browser (I
mean, IE) bugs. This mechanism would be generic, so folks stuck
supporting antique browsers have a way to fix themselves. For example,
caching /_session is apparently the cause of login problems on
antediluvian browsers.

B.

On 11 August 2011 08:37, Paul Davis <pa...@gmail.com> wrote:
> Seems like that'd just end up eating its way into the code base
> special casing all the various places we set headers.
>
> On the other hand if we had a command line test runner and could very
> specifically set headers without the confounding effects of running in
> a browser this wouldn't even be an issue.
>
> On Thu, Aug 11, 2011 at 2:29 AM, Robert Newson <rn...@apache.org> wrote:
>> I wonder if we should add a custom request header like
>> X-CouchDB-NoCache which sets all the cachebusting headers (Expires in
>> the past, etc) rather than that hack.
>>
>> On 11 August 2011 02:15, Filipe David Manana <fd...@apache.org> wrote:
>>> On Wed, Aug 10, 2011 at 5:58 PM, Paul Davis <pa...@gmail.com> wrote:
>>>>
>>>> Since no one seems to have believed me I decided to take a closer look
>>>
>>> I believe you, and on my machine replication.js takes about 120ms.
>>>
>>>> at replication.js tests. And as I pointed out it was just polling a
>>>> URL in a tight loop for 3s at a time. On my machine, this patch drops
>>>> replication.js from 93329ms to 41785ms
>>>
>>> That's awesome. If it doesn't make assertions fail for others, go ahead.
>>>
>>> One thing I noticed in the past is that the browser seems to cache the
>>> results of db.info() call. A solution (that is employed somewhere
>>> else, but for another request) is to add some random parameter to the
>>> query string, like  /db?anticache=Math.random(1000000).
>>>
>>>>. I'll point out that that's
>>>> more than twice as fast. And that was just an obvious optimization
>>>> from watching the log scroll. There are plenty more simple things that
>>>> could be done to speed these up.
>>>>
>>>> Also, this patch makes me think that a _replication/localid -> JSON
>>>> status blob might be useful. Though I dunno how possible that is. I
>>>> reckon if we had that these would be sped up even more.
>>>>
>>>>
>>>> diff --git a/share/www/script/couch.js b/share/www/script/couch.js
>>>> index 304c9c1..792e638 100644
>>>> --- a/share/www/script/couch.js
>>>> +++ b/share/www/script/couch.js
>>>> @@ -40,6 +40,8 @@ function CouchDB(name, httpHeaders) {
>>>>     if (this.last_req.status == 404) {
>>>>       return false;
>>>>     }
>>>> +    var t0 = new Date();
>>>> +    while(true) {if((new Date()) - t0 > 100) break;}
>>>>     CouchDB.maybeThrowError(this.last_req);
>>>>     return JSON.parse(this.last_req.responseText);
>>>>   };
>>>> diff --git a/share/www/script/test/replication.js
>>>> b/share/www/script/test/replication.js
>>>> index 65c5eaa..b82375a 100644
>>>> --- a/share/www/script/test/replication.js
>>>> +++ b/share/www/script/test/replication.js
>>>> @@ -149,24 +149,40 @@ couchTests.replication = function(debug) {
>>>>   }
>>>>
>>>>
>>>> -  function waitForSeq(sourceDb, targetDb) {
>>>> -    var targetSeq,
>>>> -        sourceSeq = sourceDb.info().update_seq,
>>>> +  function waitForSeq(sourceDb, targetDb, rep_id) {
>>>> +    var seq = sourceDb.info().update_seq,
>>>> +        ri = new RegExp(rep_id),
>>>> +        tasks,
>>>>         t0 = new Date(),
>>>>         t1,
>>>>         ms = 3000;
>>>>
>>>>     do {
>>>> -      targetSeq = targetDb.info().update_seq;
>>>> +      tasks = JSON.parse(CouchDB.request("GET",
>>>> "/_active_tasks").responseText);
>>>> +      for(var i = 0; i < tasks.length; i++) {
>>>> +        if(!ri.test(tasks[i].task)) continue;
>>>> +        var captured = /Processed (\d+)/.exec(tasks[i].status);
>>>> +        if(parseInt(captured[1]) >= seq) return;
>>>> +        break;
>>>> +      }
>>>>       t1 = new Date();
>>>> -    } while (((t1 - t0) <= ms) && targetSeq < sourceSeq);
>>>> +    } while ((t1 - t0) <= ms);
>>>>   }
>>>>
>>>> +  function waitForRepEnd(rep_id) {
>>>> +    var ri = new RegExp(rep_id),
>>>> +        tasks,
>>>> +        t0 = new Date(),
>>>> +        t1,
>>>> +        ms = 3000;
>>>>
>>>> -  function wait(ms) {
>>>> -    var t0 = new Date(), t1;
>>>>     do {
>>>> -      CouchDB.request("GET", "/");
>>>> +      tasks = JSON.parse(CouchDB.request("GET",
>>>> "/_active_tasks").responseText);
>>>> +      var found = false;
>>>> +      for(var i = 0; i < tasks.length; i++) {
>>>> +        if(ri.test(tasks[i].task)) found = true;
>>>> +      }
>>>> +      if(!found) return;
>>>>       t1 = new Date();
>>>>     } while ((t1 - t0) <= ms);
>>>>   }
>>>> @@ -1143,7 +1159,7 @@ couchTests.replication = function(debug) {
>>>>
>>>>     var rep_id = repResult._local_id;
>>>>
>>>> -    waitForSeq(sourceDb, targetDb);
>>>> +    waitForSeq(sourceDb, targetDb, rep_id);
>>>>
>>>>     for (j = 0; j < docs.length; j++) {
>>>>       doc = docs[j];
>>>> @@ -1181,7 +1197,7 @@ couchTests.replication = function(debug) {
>>>>     var ddoc = docs[docs.length - 1]; // design doc
>>>>     addAtt(sourceDb, ddoc, "readme.txt", att1_data, "text/plain");
>>>>
>>>> -    waitForSeq(sourceDb, targetDb);
>>>> +    waitForSeq(sourceDb, targetDb, rep_id);
>>>>
>>>>     var modifDocs = docs.slice(10, 15).concat([ddoc]);
>>>>     for (j = 0; j < modifDocs.length; j++) {
>>>> @@ -1226,7 +1242,7 @@ couchTests.replication = function(debug) {
>>>>     // add another attachment to the ddoc on source
>>>>     addAtt(sourceDb, ddoc, "data.dat", att2_data, "application/binary");
>>>>
>>>> -    waitForSeq(sourceDb, targetDb);
>>>> +    waitForSeq(sourceDb, targetDb, rep_id);
>>>>
>>>>     copy = targetDb.open(ddoc._id);
>>>>     var atts = copy._attachments;
>>>> @@ -1263,7 +1279,7 @@ couchTests.replication = function(debug) {
>>>>     var newDocs = makeDocs(25, 35);
>>>>     populateDb(sourceDb, newDocs, true);
>>>>
>>>> -    waitForSeq(sourceDb, targetDb);
>>>> +    waitForSeq(sourceDb, targetDb, rep_id);
>>>>
>>>>     for (j = 0; j < newDocs.length; j++) {
>>>>       doc = newDocs[j];
>>>> @@ -1282,7 +1298,7 @@ couchTests.replication = function(debug) {
>>>>     TEquals(true, sourceDb.deleteDoc(newDocs[0]).ok);
>>>>     TEquals(true, sourceDb.deleteDoc(newDocs[6]).ok);
>>>>
>>>> -    waitForSeq(sourceDb, targetDb);
>>>> +    waitForSeq(sourceDb, targetDb, rep_id);
>>>>
>>>>     copy = targetDb.open(newDocs[0]._id);
>>>>     TEquals(null, copy);
>>>> @@ -1317,7 +1333,7 @@ couchTests.replication = function(debug) {
>>>>     };
>>>>     TEquals(true, sourceDb.save(doc).ok);
>>>>
>>>> -    wait(2000);
>>>> +    waitForRepEnd(rep_id);
>>>>     copy = targetDb.open(doc._id);
>>>>     TEquals(null, copy);
>>>>   }
>>>> @@ -1359,7 +1375,7 @@ couchTests.replication = function(debug) {
>>>>
>>>>   var tasksAfter = JSON.parse(xhr.responseText);
>>>>   TEquals(tasks.length, tasksAfter.length);
>>>> -  waitForSeq(sourceDb, targetDb);
>>>> +  waitForSeq(sourceDb, targetDb, rep_id);
>>>>   T(sourceDb.open("30") !== null);
>>>>
>>>>   // cancel replication
>>>>
>>>
>>>
>>>
>>> --
>>> Filipe David Manana,
>>> fdmanana@gmail.com, fdmanana@apache.org
>>>
>>> "Reasonable men adapt themselves to the world.
>>>  Unreasonable men adapt the world to themselves.
>>>  That's why all progress depends on unreasonable men."
>>>
>>
>

Re: Futon Test Suite

Posted by Paul Davis <pa...@gmail.com>.
Seems like that'd just end up eating its way into the code base,
special-casing all the various places we set headers.

On the other hand, if we had a command-line test runner and could very
specifically set headers, without the confounding effects of running in
a browser, this wouldn't even be an issue.
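
For the sake of argument, the sort of thing I'd want from that runner
looks roughly like this (entirely hypothetical API, nothing like it
exists yet):

  // Hypothetical CLI runner call: send exactly the headers we specify,
  // with no browser cache or XHR restrictions in the way.
  var resp = HTTP.request({
    method: "GET",
    url: "http://127.0.0.1:5984/test_suite_db",
    headers: {
      "Accept": "application/json",
      "Cache-Control": "no-cache"   // set (or omit) whatever we like
    }
  });
  T(resp.status === 200);
  TEquals("test_suite_db", JSON.parse(resp.body).db_name);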

On Thu, Aug 11, 2011 at 2:29 AM, Robert Newson <rn...@apache.org> wrote:
> I wonder if we should add a custom request header like
> X-CouchDB-NoCache which sets all the cachebusting headers (Expires in
> the past, etc) rather than that hack.
>
> On 11 August 2011 02:15, Filipe David Manana <fd...@apache.org> wrote:
>> On Wed, Aug 10, 2011 at 5:58 PM, Paul Davis <pa...@gmail.com> wrote:
>>>
>>> Since no one seems to have believed me I decided to take a closer look
>>
>> I believe you, and on my machine replication.js takes about 120ms.
>>
>>> at replication.js tests. And as I pointed out it was just polling a
>>> URL in a tight loop for 3s at a time. On my machine, this patch drops
>>> replication.js from 93329ms to 41785ms
>>
>> That's awesome. If it doesn't make assertions fail for others, go ahead.
>>
>> One thing I noticed in the past is that the browser seems to cache the
>> results of db.info() call. A solution (that is employed somewhere
>> else, but for another request) is to add some random parameter to the
>> query string, like  /db?anticache=Math.random(1000000).
>>
>>>. I'll point out that that's
>>> more than twice as fast. And that was just an obvious optimization
>>> from watching the log scroll. There are plenty more simple things that
>>> could be done to speed these up.
>>>
>>> Also, this patch makes me think that a _replication/localid -> JSON
>>> status blob might be useful. Though I dunno how possible that is. I
>>> reckon if we had that these would be sped up even more.
>>>
>>>
>>> diff --git a/share/www/script/couch.js b/share/www/script/couch.js
>>> index 304c9c1..792e638 100644
>>> --- a/share/www/script/couch.js
>>> +++ b/share/www/script/couch.js
>>> @@ -40,6 +40,8 @@ function CouchDB(name, httpHeaders) {
>>>     if (this.last_req.status == 404) {
>>>       return false;
>>>     }
>>> +    var t0 = new Date();
>>> +    while(true) {if((new Date()) - t0 > 100) break;}
>>>     CouchDB.maybeThrowError(this.last_req);
>>>     return JSON.parse(this.last_req.responseText);
>>>   };
>>> diff --git a/share/www/script/test/replication.js
>>> b/share/www/script/test/replication.js
>>> index 65c5eaa..b82375a 100644
>>> --- a/share/www/script/test/replication.js
>>> +++ b/share/www/script/test/replication.js
>>> @@ -149,24 +149,40 @@ couchTests.replication = function(debug) {
>>>   }
>>>
>>>
>>> -  function waitForSeq(sourceDb, targetDb) {
>>> -    var targetSeq,
>>> -        sourceSeq = sourceDb.info().update_seq,
>>> +  function waitForSeq(sourceDb, targetDb, rep_id) {
>>> +    var seq = sourceDb.info().update_seq,
>>> +        ri = new RegExp(rep_id),
>>> +        tasks,
>>>         t0 = new Date(),
>>>         t1,
>>>         ms = 3000;
>>>
>>>     do {
>>> -      targetSeq = targetDb.info().update_seq;
>>> +      tasks = JSON.parse(CouchDB.request("GET",
>>> "/_active_tasks").responseText);
>>> +      for(var i = 0; i < tasks.length; i++) {
>>> +        if(!ri.test(tasks[i].task)) continue;
>>> +        var captured = /Processed (\d+)/.exec(tasks[i].status);
>>> +        if(parseInt(captured[1]) >= seq) return;
>>> +        break;
>>> +      }
>>>       t1 = new Date();
>>> -    } while (((t1 - t0) <= ms) && targetSeq < sourceSeq);
>>> +    } while ((t1 - t0) <= ms);
>>>   }
>>>
>>> +  function waitForRepEnd(rep_id) {
>>> +    var ri = new RegExp(rep_id),
>>> +        tasks,
>>> +        t0 = new Date(),
>>> +        t1,
>>> +        ms = 3000;
>>>
>>> -  function wait(ms) {
>>> -    var t0 = new Date(), t1;
>>>     do {
>>> -      CouchDB.request("GET", "/");
>>> +      tasks = JSON.parse(CouchDB.request("GET",
>>> "/_active_tasks").responseText);
>>> +      var found = false;
>>> +      for(var i = 0; i < tasks.length; i++) {
>>> +        if(ri.test(tasks[i].task)) found = true;
>>> +      }
>>> +      if(!found) return;
>>>       t1 = new Date();
>>>     } while ((t1 - t0) <= ms);
>>>   }
>>> @@ -1143,7 +1159,7 @@ couchTests.replication = function(debug) {
>>>
>>>     var rep_id = repResult._local_id;
>>>
>>> -    waitForSeq(sourceDb, targetDb);
>>> +    waitForSeq(sourceDb, targetDb, rep_id);
>>>
>>>     for (j = 0; j < docs.length; j++) {
>>>       doc = docs[j];
>>> @@ -1181,7 +1197,7 @@ couchTests.replication = function(debug) {
>>>     var ddoc = docs[docs.length - 1]; // design doc
>>>     addAtt(sourceDb, ddoc, "readme.txt", att1_data, "text/plain");
>>>
>>> -    waitForSeq(sourceDb, targetDb);
>>> +    waitForSeq(sourceDb, targetDb, rep_id);
>>>
>>>     var modifDocs = docs.slice(10, 15).concat([ddoc]);
>>>     for (j = 0; j < modifDocs.length; j++) {
>>> @@ -1226,7 +1242,7 @@ couchTests.replication = function(debug) {
>>>     // add another attachment to the ddoc on source
>>>     addAtt(sourceDb, ddoc, "data.dat", att2_data, "application/binary");
>>>
>>> -    waitForSeq(sourceDb, targetDb);
>>> +    waitForSeq(sourceDb, targetDb, rep_id);
>>>
>>>     copy = targetDb.open(ddoc._id);
>>>     var atts = copy._attachments;
>>> @@ -1263,7 +1279,7 @@ couchTests.replication = function(debug) {
>>>     var newDocs = makeDocs(25, 35);
>>>     populateDb(sourceDb, newDocs, true);
>>>
>>> -    waitForSeq(sourceDb, targetDb);
>>> +    waitForSeq(sourceDb, targetDb, rep_id);
>>>
>>>     for (j = 0; j < newDocs.length; j++) {
>>>       doc = newDocs[j];
>>> @@ -1282,7 +1298,7 @@ couchTests.replication = function(debug) {
>>>     TEquals(true, sourceDb.deleteDoc(newDocs[0]).ok);
>>>     TEquals(true, sourceDb.deleteDoc(newDocs[6]).ok);
>>>
>>> -    waitForSeq(sourceDb, targetDb);
>>> +    waitForSeq(sourceDb, targetDb, rep_id);
>>>
>>>     copy = targetDb.open(newDocs[0]._id);
>>>     TEquals(null, copy);
>>> @@ -1317,7 +1333,7 @@ couchTests.replication = function(debug) {
>>>     };
>>>     TEquals(true, sourceDb.save(doc).ok);
>>>
>>> -    wait(2000);
>>> +    waitForRepEnd(rep_id);
>>>     copy = targetDb.open(doc._id);
>>>     TEquals(null, copy);
>>>   }
>>> @@ -1359,7 +1375,7 @@ couchTests.replication = function(debug) {
>>>
>>>   var tasksAfter = JSON.parse(xhr.responseText);
>>>   TEquals(tasks.length, tasksAfter.length);
>>> -  waitForSeq(sourceDb, targetDb);
>>> +  waitForSeq(sourceDb, targetDb, rep_id);
>>>   T(sourceDb.open("30") !== null);
>>>
>>>   // cancel replication
>>>
>>
>>
>>
>> --
>> Filipe David Manana,
>> fdmanana@gmail.com, fdmanana@apache.org
>>
>> "Reasonable men adapt themselves to the world.
>>  Unreasonable men adapt the world to themselves.
>>  That's why all progress depends on unreasonable men."
>>
>

Re: Futon Test Suite

Posted by Paul Davis <pa...@gmail.com>.
>> The 'verify my install' on trunk strikes me as a bad idea. If someone
>> reports a bug and says that the verify passes, I'm always going to ask
>> them to run the test suite too.
>
> That is the intended way.

I agree with Jan on this one. Our test suite rarely finds a bug in the
wild compared to the number of times someone asks if the auth_cache
test should be failing. The bugs that it does turn up are quite often
the very same bugs for everyone: permissions problems, couchjs not
built correctly, etc.

Re: Futon Test Suite

Posted by Jan Lehnardt <ja...@apache.org>.
On Aug 11, 2011, at 1:23 PM, Robert Newson wrote:

> Somewhat OT but you reminded me of something.
> 
> The 'verify my install' on trunk strikes me as a bad idea. If someone
> reports a bug and says that the verify passes, I'm always going to ask
> them to run the test suite too.

That is the intended way.

> It exists, it seems, because the test suite is too slow.

It exists because the test suite is too brittle and does way too much to test whether an installation is generally sound.

> As Paul has noted, some tests are slow not because
> they are doing a lot of tests, as some have claimed, but because they
> unilaterally wait for several seconds per iteration. That is, they
> just aren't that well written.
> 
> Can we make the test suite faster, more reliable, and less intrusive
> (i.e, not blow away your admins, etc)?

Yes, but not without significant effort that nobody has shouldered in the past three years, even though we all agreed that the extensive test suite is a bad idea as is. So my hopes are low.

Cheers
Jan
-- 


> 
> B.
> 
> On 11 August 2011 11:57, Jason Smith <jh...@iriscouch.com> wrote:
>> On Thu, Aug 11, 2011 at 4:29 PM, Paul Davis <pa...@gmail.com> wrote:
>>> All very good except this one paragraph. The CouchDB definitely should
>>> not be expected to run with an intermediary server. If an intermediary
>>> is broken, its quite all right that we engineer paths around
>>> brokenness, but that's secondary by many orders of magnitude to
>>> asserting the behavior of CouchDB's API.
>> 
>> I buy that.
>> 
>> What about if and when the test suite splits into "confirm the
>> install" vs. a comprehensive unit tester? I suppose the comprehensive
>> test can demand a direct (or even null?) connection. But can "confirm
>> the install" be so bossy?
>> 
>> I guess the answer is also "yes." You are confirming end-to-end
>> (browser-to-server) functionality. If the proxy is breaking
>> expectations, then indeed you *want* to see a big red warning.
>> 
>> The only problem is this:
>> 
>> CouchDB developers are sitting pretty in the United States, maybe
>> Western Europe: basically the center of the universe. Everything is
>> fast, packets never drop, latency is an afterthought. Meanwhile,
>> across Latin America, Russia, China, and South and Southeast Asia
>> (that I know of, from support tickets), EDGE networking is everywhere.
>> Packets always drop. Non-standard transparent proxies are everywhere.
>> They are built in to ISPs. You cannot avoid them. Unlike the West,
>> HTTPS is not magic. There is huge latency, the handshakes take many
>> seconds to complete.
>> 
>> On stardate 47805.1, Commander Benjamin Sisko famously said:
>>> On Earth, there is no poverty, no crime, no war. You look out the
>>> window of Starfleet Headquarters and you see paradise. Well, it's
>>> easy to be a saint in paradise, but the Maquis do not live in paradise.
>>> Out there in the Demilitarized Zone, all the problems haven't been
>>> solved yet. Out there, there are no saints — just people. Angry, scared,
>>> determined people who are going to do whatever it takes to survive,
>>> whether it meets with Federation approval or not!
>> 
>> It is easy to be a saint in paradise. This applies to CouchDB and
>> Couch apps. On many ISPs, the non-standard, "transparent" proxies are
>> inescapable. But yet, web applications always work. The LA Times
>> works, news.google works, random wordpress blogs work, everything I've
>> ever clicked from Hacker News works. But yet Couch apps are a joke. It
>> seems nothing is ever cached when it should be, and everything is
>> always cached when it shouldn't be. Authentication, in particular,
>> fails because caching proxies don't care about DELETE /_session.
>> 
>> I have no immediately actionable advice here, but my goal is just to
>> point out that, at some point, demanding standards-compliance becomes
>> bigotry, and if we are too ideologically pure, we risk alienating a
>> larger development community. And read that list of countries again.
>> These communities stand to gain the most from CouchDB and the p2p web.
>> 
>> --
>> Iris Couch
>> 


Re: Futon Test Suite

Posted by Robert Newson <rn...@apache.org>.
Somewhat OT but you reminded me of something.

The 'verify my install' on trunk strikes me as a bad idea. If someone
reports a bug and says that the verify passes, I'm always going to ask
them to run the test suite too. It exists, it seems, because the test
suite is too slow. As Paul has noted, some tests are slow not because
they are doing a lot of tests, as some have claimed, but because they
unilaterally wait for several seconds per iteration. That is, they
just aren't that well written.

Can we make the test suite faster, more reliable, and less intrusive
(i.e., not blowing away your admins, etc.)?

B.

On 11 August 2011 11:57, Jason Smith <jh...@iriscouch.com> wrote:
> On Thu, Aug 11, 2011 at 4:29 PM, Paul Davis <pa...@gmail.com> wrote:
>> All very good except this one paragraph. The CouchDB definitely should
>> not be expected to run with an intermediary server. If an intermediary
>> is broken, its quite all right that we engineer paths around
>> brokenness, but that's secondary by many orders of magnitude to
>> asserting the behavior of CouchDB's API.
>
> I buy that.
>
> What about if and when the test suite splits into "confirm the
> install" vs. a comprehensive unit tester? I suppose the comprehensive
> test can demand a direct (or even null?) connection. But can "confirm
> the install" be so bossy?
>
> I guess the answer is also "yes." You are confirming end-to-end
> (browser-to-server) functionality. If the proxy is breaking
> expectations, then indeed you *want* to see a big red warning.
>
> The only problem is this:
>
> CouchDB developers are sitting pretty in the United States, maybe
> Western Europe: basically the center of the universe. Everything is
> fast, packets never drop, latency is an afterthought. Meanwhile,
> across Latin America, Russia, China, and South and Southeast Asia
> (that I know of, from support tickets), EDGE networking is everywhere.
> Packets always drop. Non-standard transparent proxies are everywhere.
> They are built in to ISPs. You cannot avoid them. Unlike the West,
> HTTPS is not magic. There is huge latency, the handshakes take many
> seconds to complete.
>
> On stardate 47805.1, Commander Benjamin Sisko famously said:
>> On Earth, there is no poverty, no crime, no war. You look out the
>> window of Starfleet Headquarters and you see paradise. Well, it's
>> easy to be a saint in paradise, but the Maquis do not live in paradise.
>> Out there in the Demilitarized Zone, all the problems haven't been
>> solved yet. Out there, there are no saints — just people. Angry, scared,
>> determined people who are going to do whatever it takes to survive,
>> whether it meets with Federation approval or not!
>
> It is easy to be a saint in paradise. This applies to CouchDB and
> Couch apps. On many ISPs, the non-standard, "transparent" proxies are
> inescapable. But yet, web applications always work. The LA Times
> works, news.google works, random wordpress blogs work, everything I've
> ever clicked from Hacker News works. But yet Couch apps are a joke. It
> seems nothing is ever cached when it should be, and everything is
> always cached when it shouldn't be. Authentication, in particular,
> fails because caching proxies don't care about DELETE /_session.
>
> I have no immediately actionable advice here, but my goal is just to
> point out that, at some point, demanding standards-compliance becomes
> bigotry, and if we are too ideologically pure, we risk alienating a
> larger development community. And read that list of countries again.
> These communities stand to gain the most from CouchDB and the p2p web.
>
> --
> Iris Couch
>

Re: Futon Test Suite

Posted by Adam Kocoloski <ko...@apache.org>.
Ah, you'd just embed the http-parser itself, reducing dependencies instead of trading one for another.  +1,

Adam

On Aug 15, 2011, at 10:41 AM, Paul Davis wrote:

> Not sure I follow what you mean there. When I mentioned node's HTTP
> parser, I meant, the parser [1]. I'd still have to write my own C
> adaptor for that to Spidermonkey objects. Not entirely certain on the
> REPL bit, but couchjs was basically a hack on top of the Spidermonkey
> js REPL so going back to our roots a bit there shouldn't be too hard.
> 
> [1] https://github.com/ry/http-parser
> 
> On Mon, Aug 15, 2011 at 8:38 AM, Adam Kocoloski <ko...@apache.org> wrote:
>> I thought about suggesting node's parser, especially since you'd get the REPL for free.  I think the downside is that there are roughly 300 versions of node out there, and I'd hate for our tests to keep breaking because of node's development pace.  libcurl is nothing if not stable.
>> 
>> Adam
>> 
>> On Aug 14, 2011, at 12:55 PM, Paul Davis wrote:
>> 
>>> My plan was to rewrite couch.js to use the new request/response
>>> classes internally and then when we need closer HTTP access we'd be
>>> able to have it. Same for T and Tequals. and what not. There is at
>>> least one test that we just can't make work in our current couchjs
>>> based test runner because it needs to use async HTTP requests, so at a
>>> certain point we have to at least add some of this stuff.
>>> 
>>> I quite like using etap over eunit as it seems more better. Also, now
>>> that we're going to a second language for make check tests, it seems
>>> like an even better approach. Though I'm not at all married to it by
>>> any means. Also, I do understand your concerns about moving parts and
> unnecessary dependencies. I should get around to updating the build
>>> system to use the single file etap distribution but its never really
>>> been a concern.
>>> 
>>> Another thing I've been contemplating is if it'd be beneficial to
>>> remove libcurl and replace it with node.js's parser or with the ragel
>>> parser from Mongrel. Anyway, food for thought. I'll be around this
>>> afternoon to hack.
>>> 
>>> On Sun, Aug 14, 2011 at 7:50 AM, Robert Dionne
>>> <di...@dionne-associates.com> wrote:
>>>> Paul,
>>>> 
>>>>  This is interesting, and if you're willing to put together the new infrastructure I can help with writing tests. I would suggest a more incremental approach that's less of a rewrite (rewrites often just get you back to 0 from a user's perspective).
>>>> 
>>>>   The existing CouchDB JS object seems to work ok in terms of the http interface, and the Futon tests more or less all ran using couchjs until very recently. I would suggest getting these all running first, reusing copies of the existing CouchDB objects and such so we can hack them as needed. Then we would review and throw out all the tests that are not part of the core APIs, like the coffee stuff (I don't know why we decided to bundle coffee in there) and any tests that are for specific internals.
>>>> 
>>>>   At some point something like BigCouch is integrated in or MobileCouch we might have different "make" targets for the different deployments. Perhaps in that case we'd have different sets of tests. There needs to be a set of tests that can verify that the semantics of API calls is the same in CouchDB and BigCouch.
>>>> 
>>>>  So I'd say let's work backwards from what we have. Also I'm not a big fan of etap, preferring eunit mainly because it's one less moving part. For JS we already have this T(...) and TEquals(....) funs which seem to do the trick.
>>>> 
>>>>   All that said, I have a few hours today to hack on this today if you want some help just ping me on #couchdb
>>>> 
>>>> Bob
>>>> 
>>>> 
>>>> 
>>>> 
>>>> On Aug 12, 2011, at 11:46 AM, Paul Davis wrote:
>>>> 
>>>>> Here's a bit of a brain dump on the sort of environment I'd like to
>>>>> see our CLI JS tests have. If anyone has any thoughts I'd like to hear
>>>>> them. Otherwise I'll start hacking on this at some point over the
>>>>> weekend.
>>>>> 
>>>>> https://gist.github.com/1142306
>>>> 
>>>> 
>> 
>> 


Re: Futon Test Suite

Posted by Paul Davis <pa...@gmail.com>.
Not sure I follow what you mean there. When I mentioned node's HTTP
parser, I meant, the parser [1]. I'd still have to write my own C
adaptor for that to Spidermonkey objects. Not entirely certain on the
REPL bit, but couchjs was basically a hack on top of the Spidermonkey
js REPL so going back to our roots a bit there shouldn't be too hard.

[1] https://github.com/ry/http-parser

On Mon, Aug 15, 2011 at 8:38 AM, Adam Kocoloski <ko...@apache.org> wrote:
> I thought about suggesting node's parser, especially since you'd get the REPL for free.  I think the downside is that there are roughly 300 versions of node out there, and I'd hate for our tests to keep breaking because of node's development pace.  libcurl is nothing if not stable.
>
> Adam
>
> On Aug 14, 2011, at 12:55 PM, Paul Davis wrote:
>
>> My plan was to rewrite couch.js to use the new request/response
>> classes internally and then when we need closer HTTP access we'd be
>> able to have it. Same for T and Tequals. and what not. There is at
>> least one test that we just can't make work in our current couchjs
>> based test runner because it needs to use async HTTP requests, so at a
>> certain point we have to at least add some of this stuff.
>>
>> I quite like using etap over eunit as it seems more better. Also, now
>> that we're going to a second language for make check tests, it seems
>> like an even better approach. Though I'm not at all married to it by
>> any means. Also, I do understand your concerns about moving parts and
>> unnecessary dependencies. I should get around to updating the build
>> system to use the single file etap distribution but its never really
>> been a concern.
>>
>> Another thing I've been contemplating is if it'd be beneficial to
>> remove libcurl and replace it with node.js's parser or with the ragel
>> parser from Mongrel. Anyway, food for thought. I'll be around this
>> afternoon to hack.
>>
>> On Sun, Aug 14, 2011 at 7:50 AM, Robert Dionne
>> <di...@dionne-associates.com> wrote:
>>> Paul,
>>>
>>>  This is interesting, and if you're willing to put together the new infrastructure I can help with writing tests. I would suggest a more incremental approach that's less of a rewrite (rewrites often just get you back to 0 from a user's perspective).
>>>
>>>   The existing CouchDB JS object seems to work ok in terms of the http interface, and the Futon tests more or less all ran using couchjs until very recently. I would suggest getting these all running first, reusing copies of the existing CouchDB objects and such so we can hack them as needed. Then we would review and throw out all the tests that are not part of the core APIs, like the coffee stuff (I don't know why we decided to bundle coffee in there) and any tests that are for specific internals.
>>>
>>>   At some point something like BigCouch is integrated in or MobileCouch we might have different "make" targets for the different deployments. Perhaps in that case we'd have different sets of tests. There needs to be a set of tests that can verify that the semantics of API calls is the same in CouchDB and BigCouch.
>>>
>>>  So I'd say let's work backwards from what we have. Also I'm not a big fan of etap, preferring eunit mainly because it's one less moving part. For JS we already have this T(...) and TEquals(....) funs which seem to do the trick.
>>>
>>>   All that said, I have a few hours today to hack on this today if you want some help just ping me on #couchdb
>>>
>>> Bob
>>>
>>>
>>>
>>>
>>> On Aug 12, 2011, at 11:46 AM, Paul Davis wrote:
>>>
>>>> Here's a bit of a brain dump on the sort of environment I'd like to
>>>> see our CLI JS tests have. If anyone has any thoughts I'd like to hear
>>>> them. Otherwise I'll start hacking on this at some point over the
>>>> weekend.
>>>>
>>>> https://gist.github.com/1142306
>>>
>>>
>
>

Re: Futon Test Suite

Posted by Adam Kocoloski <ko...@apache.org>.
I thought about suggesting node's parser, especially since you'd get the REPL for free.  I think the downside is that there are roughly 300 versions of node out there, and I'd hate for our tests to keep breaking because of node's development pace.  libcurl is nothing if not stable.

Adam

On Aug 14, 2011, at 12:55 PM, Paul Davis wrote:

> My plan was to rewrite couch.js to use the new request/response
> classes internally and then when we need closer HTTP access we'd be
> able to have it. Same for T and Tequals. and what not. There is at
> least one test that we just can't make work in our current couchjs
> based test runner because it needs to use async HTTP requests, so at a
> certain point we have to at least add some of this stuff.
> 
> I quite like using etap over eunit as it seems more better. Also, now
> that we're going to a second language for make check tests, it seems
> like an even better approach. Though I'm not at all married to it by
> any means. Also, I do understand your concerns about moving parts and
> unnecessary dependencies. I should get around to updating the build
> system to use the single file etap distribution but its never really
> been a concern.
> 
> Another thing I've been contemplating is if it'd be beneficial to
> remove libcurl and replace it with node.js's parser or with the ragel
> parser from Mongrel. Anyway, food for thought. I'll be around this
> afternoon to hack.
> 
> On Sun, Aug 14, 2011 at 7:50 AM, Robert Dionne
> <di...@dionne-associates.com> wrote:
>> Paul,
>> 
>>  This is interesting, and if you're willing to put together the new infrastructure I can help with writing tests. I would suggest a more incremental approach that's less of a rewrite (rewrites often just get you back to 0 from a user's perspective).
>> 
>>   The existing CouchDB JS object seems to work ok in terms of the http interface, and the Futon tests more or less all ran using couchjs until very recently. I would suggest getting these all running first, reusing copies of the existing CouchDB objects and such so we can hack them as needed. Then we would review and throw out all the tests that are not part of the core APIs, like the coffee stuff (I don't know why we decided to bundle coffee in there) and any tests that are for specific internals.
>> 
>>   At some point something like BigCouch is integrated in or MobileCouch we might have different "make" targets for the different deployments. Perhaps in that case we'd have different sets of tests. There needs to be a set of tests that can verify that the semantics of API calls is the same in CouchDB and BigCouch.
>> 
>>  So I'd say let's work backwards from what we have. Also I'm not a big fan of etap, preferring eunit mainly because it's one less moving part. For JS we already have this T(...) and TEquals(....) funs which seem to do the trick.
>> 
>>   All that said, I have a few hours today to hack on this today if you want some help just ping me on #couchdb
>> 
>> Bob
>> 
>> 
>> 
>> 
>> On Aug 12, 2011, at 11:46 AM, Paul Davis wrote:
>> 
>>> Here's a bit of a brain dump on the sort of environment I'd like to
>>> see our CLI JS tests have. If anyone has any thoughts I'd like to hear
>>> them. Otherwise I'll start hacking on this at some point over the
>>> weekend.
>>> 
>>> https://gist.github.com/1142306
>> 
>> 


Re: Futon Test Suite

Posted by Robert Dionne <di...@dionne-associates.com>.



On Aug 14, 2011, at 12:55 PM, Paul Davis wrote:

> My plan was to rewrite couch.js to use the new request/response
> classes internally and then when we need closer HTTP access we'd be
> able to have it. Same for T and Tequals. and what not. There is at
> least one test that we just can't make work in our current couchjs
> based test runner because it needs to use async HTTP requests, so at a
> certain point we have to at least add some of this stuff.
> 
> I quite like using etap over eunit as it seems more better. Also, now
> that we're going to a second language for make check tests, it seems
> like an even better approach. Though I'm not at all married to it by
> any means. Also, I do understand your concerns about moving parts and

That's fine with me; I'm not impressed with etap, but it seems to have worked out well so far. By
moving parts I mean the usual thing: the more moving parts, third-party libs, etc., the more things
to get right and make work on various machines. Eunit comes bundled with OTP.


> unnecessary dependencies. I should get around to updating the build
> system to use the single file etap distribution but its never really
> been a concern.
> 
> Another thing I've been contemplating is if it'd be beneficial to
> remove libcurl and replace it with node.js's parser or with the ragel
> parser from Mongrel. Anyway, food for thought. I'll be around this
> afternoon to hack.
> 
> On Sun, Aug 14, 2011 at 7:50 AM, Robert Dionne
> <di...@dionne-associates.com> wrote:
>> Paul,
>> 
>>  This is interesting, and if you're willing to put together the new infrastructure I can help with writing tests. I would suggest a more incremental approach that's less of a rewrite (rewrites often just get you back to 0 from a user's perspective).
>> 
>>   The existing CouchDB JS object seems to work ok in terms of the http interface, and the Futon tests more or less all ran using couchjs until very recently. I would suggest getting these all running first, reusing copies of the existing CouchDB objects and such so we can hack them as needed. Then we would review and throw out all the tests that are not part of the core APIs, like the coffee stuff (I don't know why we decided to bundle coffee in there) and any tests that are for specific internals.
>> 
>>   At some point something like BigCouch is integrated in or MobileCouch we might have different "make" targets for the different deployments. Perhaps in that case we'd have different sets of tests. There needs to be a set of tests that can verify that the semantics of API calls is the same in CouchDB and BigCouch.
>> 
>>  So I'd say let's work backwards from what we have. Also I'm not a big fan of etap, preferring eunit mainly because it's one less moving part. For JS we already have this T(...) and TEquals(....) funs which seem to do the trick.
>> 
>>   All that said, I have a few hours today to hack on this today if you want some help just ping me on #couchdb
>> 
>> Bob
>> 
>> 
>> 
>> 
>> On Aug 12, 2011, at 11:46 AM, Paul Davis wrote:
>> 
>>> Here's a bit of a brain dump on the sort of environment I'd like to
>>> see our CLI JS tests have. If anyone has any thoughts I'd like to hear
>>> them. Otherwise I'll start hacking on this at some point over the
>>> weekend.
>>> 
>>> https://gist.github.com/1142306
>> 
>> 


Re: Futon Test Suite

Posted by Paul Davis <pa...@gmail.com>.
My plan was to rewrite couch.js to use the new request/response
classes internally, and then when we need closer HTTP access we'd be
able to have it. Same for T and TEquals and what not. There is at
least one test that we just can't make work in our current
couchjs-based test runner because it needs to use async HTTP requests,
so at a certain point we have to at least add some of this stuff.
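
To make that concrete, the kind of thing the new classes should allow
is roughly this (hypothetical shape, the API isn't settled at all):

  // Fire two requests concurrently and assert on both responses. The
  // current browser/couchjs harness only gives us synchronous XHR,
  // which is exactly why that one test can't be written today.
  var r1 = HTTP.request({method: "GET", url: "/_active_tasks", async: true});
  var r2 = HTTP.request({method: "GET",
                         url: "/test_suite_db/_changes?feed=longpoll",
                         async: true});
  r2.on("response", function(resp) {
    TEquals(200, resp.status);
  });
  r1.on("response", function(resp) {
    T(JSON.parse(resp.body) instanceof Array);
  });
  HTTP.wait([r1, r2], 5000);   // block until both complete or time out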

I quite like using etap over eunit as it seems more better. Also, now
that we're going to a second language for make check tests, it seems
like an even better approach. Though I'm not at all married to it by
any means. Also, I do understand your concerns about moving parts and
unnecessary dependencies. I should get around to updating the build
system to use the single-file etap distribution, but it's never really
been a concern.

Another thing I've been contemplating is if it'd be beneficial to
remove libcurl and replace it with node.js's parser or with the ragel
parser from Mongrel. Anyway, food for thought. I'll be around this
afternoon to hack.

On Sun, Aug 14, 2011 at 7:50 AM, Robert Dionne
<di...@dionne-associates.com> wrote:
> Paul,
>
>  This is interesting, and if you're willing to put together the new infrastructure I can help with writing tests. I would suggest a more incremental approach that's less of a rewrite (rewrites often just get you back to 0 from a user's perspective).
>
>   The existing CouchDB JS object seems to work ok in terms of the http interface, and the Futon tests more or less all ran using couchjs until very recently. I would suggest getting these all running first, reusing copies of the existing CouchDB objects and such so we can hack them as needed. Then we would review and throw out all the tests that are not part of the core APIs, like the coffee stuff (I don't know why we decided to bundle coffee in there) and any tests that are for specific internals.
>
>   At some point something like BigCouch is integrated in or MobileCouch we might have different "make" targets for the different deployments. Perhaps in that case we'd have different sets of tests. There needs to be a set of tests that can verify that the semantics of API calls is the same in CouchDB and BigCouch.
>
>  So I'd say let's work backwards from what we have. Also I'm not a big fan of etap, preferring eunit mainly because it's one less moving part. For JS we already have this T(...) and TEquals(....) funs which seem to do the trick.
>
>   All that said, I have a few hours today to hack on this today if you want some help just ping me on #couchdb
>
> Bob
>
>
>
>
> On Aug 12, 2011, at 11:46 AM, Paul Davis wrote:
>
>> Here's a bit of a brain dump on the sort of environment I'd like to
>> see our CLI JS tests have. If anyone has any thoughts I'd like to hear
>> them. Otherwise I'll start hacking on this at some point over the
>> weekend.
>>
>> https://gist.github.com/1142306
>
>

Re: Futon Test Suite

Posted by Robert Dionne <di...@dionne-associates.com>.
Paul,

  This is interesting, and if you're willing to put together the new infrastructure I can help with writing tests. I would suggest a more incremental approach that's less of a rewrite (rewrites often just get you back to 0 from a user's perspective). 

   The existing CouchDB JS object seems to work ok in terms of the http interface, and the Futon tests more or less all ran using couchjs until very recently. I would suggest getting these all running first, reusing copies of the existing CouchDB objects and such so we can hack them as needed. Then we would review and throw out all the tests that are not part of the core APIs, like the coffee stuff (I don't know why we decided to bundle coffee in there) and any tests that are for specific internals.

   At some point, when something like BigCouch or MobileCouch is integrated in, we might have different "make" targets for the different deployments. Perhaps in that case we'd have different sets of tests. There needs to be a set of tests that can verify that the semantics of API calls are the same in CouchDB and BigCouch.
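
   A sketch of what such a parity check could look like, reusing the existing CouchDB test object against two base URLs (the second URL and the use of CouchDB.urlPrefix to retarget requests are assumptions, purely illustrative):

  // Run the same sequence of API calls against each deployment and
  // assert the observable results match.
  var targets = ["http://127.0.0.1:5984", "http://127.0.0.1:5986"];
  for (var i = 0; i < targets.length; i++) {
    CouchDB.urlPrefix = targets[i];
    var db = new CouchDB("parity_check_db");
    db.deleteDb();
    T(db.createDb().ok);
    T(db.save({_id: "doc1", value: 1}).ok);
    var doc = db.open("doc1");
    TEquals(1, doc.value);
    db.deleteDb();
  }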

  So I'd say let's work backwards from what we have. Also I'm not a big fan of etap, preferring eunit mainly because it's one less moving part. For JS we already have this T(...) and TEquals(....) funs which seem to do the trick.

   All that said, I have a few hours today to hack on this today if you want some help just ping me on #couchdb

Bob




On Aug 12, 2011, at 11:46 AM, Paul Davis wrote:

> Here's a bit of a brain dump on the sort of environment I'd like to
> see our CLI JS tests have. If anyone has any thoughts I'd like to hear
> them. Otherwise I'll start hacking on this at some point over the
> weekend.
> 
> https://gist.github.com/1142306


Re: Futon Test Suite

Posted by Paul Davis <pa...@gmail.com>.
Here's a bit of a brain dump on the sort of environment I'd like to
see our CLI JS tests have. If anyone has any thoughts I'd like to hear
them. Otherwise I'll start hacking on this at some point over the
weekend.

https://gist.github.com/1142306

Re: Futon Test Suite

Posted by Jason Smith <jh...@iriscouch.com>.
On Thu, Aug 11, 2011 at 11:46 PM, Paul Davis
<pa...@gmail.com> wrote:
> Your points are definitely well taken, and these are the sorts of
> things we should be engineering for and around. But I'll bring the arc
> back to CouchDB. Our test suite has the specific purpose of asserting
> that CouchDB behaves the way we want. Intermediaries only serve to
> confound the results of these tests, "Did CouchDB fail? Or is it
> something in the middle?" which doesn't do anyone any good.

Quite right. The more I think about it, the more I shift back to a
point I've made often: CouchDB doesn't need core features, it really
needs tooling and development tools built on top of it (relatively
speaking).

So, for example, I'd want a pedantic, fussy, standards-compliant,
reliable foundation (the couch), and then more advanced application
frameworks doing the dirty work like detecting transport trouble,
detecting overzealous caching, etc. No doubt, that is what Google,
Facebook, and other huge sites are doing. (Although they have largely
migrated to https on CDNs to improve consistency among other
benefits.)

I suppose this discussion and the work related to it is going in
exactly this direction, so, yay! Thanks for the feedback, Paul.

-- 
Iris Couch

Re: Futon Test Suite

Posted by Paul Davis <pa...@gmail.com>.
On Thu, Aug 11, 2011 at 5:57 AM, Jason Smith <jh...@iriscouch.com> wrote:
> On Thu, Aug 11, 2011 at 4:29 PM, Paul Davis <pa...@gmail.com> wrote:
>> All very good except this one paragraph. The CouchDB definitely should
>> not be expected to run with an intermediary server. If an intermediary
>> is broken, its quite all right that we engineer paths around
>> brokenness, but that's secondary by many orders of magnitude to
>> asserting the behavior of CouchDB's API.
>
> I buy that.
>
> What about if and when the test suite splits into "confirm the
> install" vs. a comprehensive unit tester? I suppose the comprehensive
> test can demand a direct (or even null?) connection. But can "confirm
> the install" be so bossy?
>
> I guess the answer is also "yes." You are confirming end-to-end
> (browser-to-server) functionality. If the proxy is breaking
> expectations, then indeed you *want* to see a big red warning.
>
> The only problem is this:
>
> CouchDB developers are sitting pretty in the United States, maybe
> Western Europe: basically the center of the universe. Everything is
> fast, packets never drop, latency is an afterthought. Meanwhile,
> across Latin America, Russia, China, and South and Southeast Asia
> (that I know of, from support tickets), EDGE networking is everywhere.
> Packets always drop. Non-standard transparent proxies are everywhere.
> They are built in to ISPs. You cannot avoid them. Unlike the West,
> HTTPS is not magic. There is huge latency, the handshakes take many
> seconds to complete.
>
> On stardate 47805.1, Commander Benjamin Sisko famously said:
>> On Earth, there is no poverty, no crime, no war. You look out the
>> window of Starfleet Headquarters and you see paradise. Well, it's
>> easy to be a saint in paradise, but the Maquis do not live in paradise.
>> Out there in the Demilitarized Zone, all the problems haven't been
>> solved yet. Out there, there are no saints — just people. Angry, scared,
>> determined people who are going to do whatever it takes to survive,
>> whether it meets with Federation approval or not!
>
> It is easy to be a saint in paradise. This applies to CouchDB and
> Couch apps. On many ISPs, the non-standard, "transparent" proxies are
> inescapable. But yet, web applications always work. The LA Times
> works, news.google works, random wordpress blogs work, everything I've
> ever clicked from Hacker News works. But yet Couch apps are a joke. It
> seems nothing is ever cached when it should be, and everything is
> always cached when it shouldn't be. Authentication, in particular,
> fails because caching proxies don't care about DELETE /_session.
>
> I have no immediately actionable advice here, but my goal is just to
> point out that, at some point, demanding standards-compliance becomes
> bigotry, and if we are too ideologically pure, we risk alienating a
> larger development community. And read that list of countries again.
> These communities stand to gain the most from CouchDB and the p2p web.
>
> --
> Iris Couch
>

Your points are definitely well taken, and these are the sorts of
things we should be engineering for and around. But I'll bring the arc
back to CouchDB. Our test suite has the specific purpose of asserting
that CouchDB behaves the way we want. Intermediaries only serve to
confound the results of these tests, "Did CouchDB fail? Or is it
something in the middle?" which doesn't do anyone any good.

On the other hand, I'm all for dealing with intermediaries by making
adjustments to our various API's (and then including tests in the test
suite to assert the different behavior). And even adding pieces to the
Futon installation verification to check deep dark corners of bad
proxy behavior. Or even having an extended set of "proxy tests". But
these sorts of tests aren't testing CouchDB, they're testing how HTTP
proxies traverse bad intermediaries.

I know this might be a subtle point, but I think it's quite important
to make the distinction. The Apache CouchDB Test Suite (proper noun)
should be all about testing Apache CouchDB, as directly as possible.
Specifically,
it shouldn't be testing things that are not Apache CouchDB. Like
broken proxies. On the other hand an "Apache CouchDB Client Access
Test Suite" that checks for all sorts of broken user agents and
intermediaries would be an excellent addition to our tests.

Re: Futon Test Suite

Posted by Jason Smith <jh...@iriscouch.com>.
On Thu, Aug 11, 2011 at 4:29 PM, Paul Davis <pa...@gmail.com> wrote:
> All very good except this one paragraph. The CouchDB definitely should
> not be expected to run with an intermediary server. If an intermediary
> is broken, its quite all right that we engineer paths around
> brokenness, but that's secondary by many orders of magnitude to
> asserting the behavior of CouchDB's API.

I buy that.

What about if and when the test suite splits into "confirm the
install" vs. a comprehensive unit tester? I suppose the comprehensive
test can demand a direct (or even null?) connection. But can "confirm
the install" be so bossy?

I guess the answer is also "yes." You are confirming end-to-end
(browser-to-server) functionality. If the proxy is breaking
expectations, then indeed you *want* to see a big red warning.

The only problem is this:

CouchDB developers are sitting pretty in the United States, maybe
Western Europe: basically the center of the universe. Everything is
fast, packets never drop, latency is an afterthought. Meanwhile,
across Latin America, Russia, China, and South and Southeast Asia
(that I know of, from support tickets), EDGE networking is everywhere.
Packets always drop. Non-standard transparent proxies are everywhere.
They are built in to ISPs. You cannot avoid them. Unlike the West,
HTTPS is not magic. There is huge latency, the handshakes take many
seconds to complete.

On stardate 47805.1, Commander Benjamin Sisko famously said:
> On Earth, there is no poverty, no crime, no war. You look out the
> window of Starfleet Headquarters and you see paradise. Well, it's
> easy to be a saint in paradise, but the Maquis do not live in paradise.
> Out there in the Demilitarized Zone, all the problems haven't been
> solved yet. Out there, there are no saints — just people. Angry, scared,
> determined people who are going to do whatever it takes to survive,
> whether it meets with Federation approval or not!

It is easy to be a saint in paradise. This applies to CouchDB and
Couch apps. On many ISPs, the non-standard, "transparent" proxies are
inescapable. But yet, web applications always work. The LA Times
works, news.google works, random wordpress blogs work, everything I've
ever clicked from Hacker News works. But yet Couch apps are a joke. It
seems nothing is ever cached when it should be, and everything is
always cached when it shouldn't be. Authentication, in particular,
fails because caching proxies don't care about DELETE /_session.

I have no immediately actionable advice here, but my goal is just to
point out that, at some point, demanding standards-compliance becomes
bigotry, and if we are too ideologically pure, we risk alienating a
larger development community. And read that list of countries again.
These communities stand to gain the most from CouchDB and the p2p web.

-- 
Iris Couch

Re: Futon Test Suite

Posted by Paul Davis <pa...@gmail.com>.
On Thu, Aug 11, 2011 at 3:47 AM, Jason Smith <jh...@iriscouch.com> wrote:
> On Thu, Aug 11, 2011 at 2:29 PM, Robert Newson <rn...@apache.org> wrote:
>> I wonder if we should add a custom request header like
>> X-CouchDB-NoCache which sets all the cachebusting headers (Expires in
>> the past, etc) rather than that hack.
>
> There are several reasons for an invalid (cached) response.
>
> * reverse-proxies
> * forward proxies
> * transparent forward proxies
> * Browsers
>
> People are testing couch in these and probably other environments,
> knowingly and unknowingly.
>
> I am not so sure that setting headers into the past would fix things
> in enough cases. The point of the ?_=$RANDOM (used in jQuery and RoR
> also) is because querying a completely different URL is the only way
> to be sure of a cache miss.
>
> Indeed, Varnish's "pain pill" sales pitch is, "put me in front of your
> slow web app and it will become faster." Well, that is because
> Varnish, prudently, disregards all of your server's cache metadata and
> basically caches everything it can. And people love it for that. (Yes,
> you can configure Varnish properly, but I am describing the pain-pill
> scenario where Rails is crashing under load.)
>
> I guess my point is, the test suite should work through caching
> proxies, and cache-busting based on headers is AFAIK unlikely to
> succeed.
>

All very good except this one paragraph. CouchDB definitely should
not be expected to run with an intermediary server. If an intermediary
is broken, it's quite all right that we engineer paths around the
brokenness, but that's secondary by many orders of magnitude to
asserting the behavior of CouchDB's API.

> If, however, CouchDB did support a mechanism to make everything as
> un-cacheable as possible, I would rather see it as a _config setting
> rather than a header. Adding headers to queries is a way to invalidate
> an oauth signature. And, I don't know, for debugging and
> troubleshooting, if you command CouchDB to act funny, I'd rather that
> command to persist there in the config rather than I have to discover
> it myself in Wireshark.
>
> --
> Iris Couch
>

Re: Futon Test Suite

Posted by Jason Smith <jh...@iriscouch.com>.
On Thu, Aug 11, 2011 at 2:29 PM, Robert Newson <rn...@apache.org> wrote:
> I wonder if we should add a custom request header like
> X-CouchDB-NoCache which sets all the cachebusting headers (Expires in
> the past, etc) rather than that hack.

There are several reasons for an invalid (cached) response.

* reverse-proxies
* forward proxies
* transparent forward proxies
* Browsers

People are testing couch in these and probably other environments,
knowingly and unknowingly.

I am not so sure that setting headers into the past would fix things
in enough cases. The point of the ?_=$RANDOM trick (also used in jQuery
and RoR) is that querying a completely different URL is the only way
to be sure of a cache miss.
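
In couch.js terms that is just something like this (a sketch; the
parameter name is arbitrary, it only has to vary per request):

  // Make every poll a unique URL so no cache in the path can answer it.
  function uncachedInfo(db) {
    var bust = "anticache=" + String(Math.random()).slice(2);
    var xhr = CouchDB.request("GET", "/" + db.name + "?" + bust);
    return JSON.parse(xhr.responseText);
  }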

Indeed, Varnish's "pain pill" sales pitch is, "put me in front of your
slow web app and it will become faster." Well, that is because
Varnish, prudently, disregards all of your server's cache metadata and
basically caches everything it can. And people love it for that. (Yes,
you can configure Varnish properly, but I am describing the pain-pill
scenario where Rails is crashing under load.)

I guess my point is, the test suite should work through caching
proxies, and cache-busting based on headers is AFAIK unlikely to
succeed.

If, however, CouchDB did support a mechanism to make everything as
un-cacheable as possible, I would rather see it as a _config setting
rather than a header. Adding headers to queries is a way to invalidate
an oauth signature. And, I don't know, for debugging and
troubleshooting, if you command CouchDB to act funny, I'd rather that
command persist in the config than have to discover it myself in
Wireshark.
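
If it were a config setting, flipping it on for a test run could be as
simple as this (the httpd section and the key name are invented here;
only the /_config API itself is real):

  // Hypothetical knob telling CouchDB to send aggressive no-cache
  // headers on every response. "x_nocache" does not exist today.
  var xhr = CouchDB.request("PUT", "/_config/httpd/x_nocache", {
    body: JSON.stringify("true"),
    headers: {"X-Couch-Persist": "false"}   // don't write it to the .ini
  });
  TEquals(200, xhr.status);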

-- 
Iris Couch

Re: Futon Test Suite

Posted by Robert Newson <rn...@apache.org>.
I wonder if we should add a custom request header like
X-CouchDB-NoCache which sets all the cachebusting headers (Expires in
the past, etc) rather than that hack.
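
From the test suite's side, opting in would look something like this
(the header and the response behaviour are the proposal, not anything
CouchDB does today):

  // Proposed: one request header that makes CouchDB reply with the
  // full set of cache-busting response headers (Expires in the past,
  // Cache-Control: no-cache, must-revalidate, and so on).
  var xhr = CouchDB.request("GET", "/_session", {
    headers: {"X-CouchDB-NoCache": "true"}
  });
  T(JSON.parse(xhr.responseText).ok);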

On 11 August 2011 02:15, Filipe David Manana <fd...@apache.org> wrote:
> On Wed, Aug 10, 2011 at 5:58 PM, Paul Davis <pa...@gmail.com> wrote:
>>
>> Since no one seems to have believed me I decided to take a closer look
>
> I believe you, and in my machine, replication.js, takes about 120ms.
>
>> at replication.js tests. And as I pointed out it was just polling a
>> URL in a tight loop for 3s at a time. On my machine, this patch drops
>> replication.js from 93329ms to 41785ms
>
> That's awesome. If it doesn't make assertions fails for others, go ahead.
>
> One thing I noticed in the past is that the browser seems to cache the
> results of db.info() call. A solution (that is employed somewhere
> else, but for another request) is to add some random parameter to the
> query string, like  /db?anticache=Math.random(1000000).
>
>>. I'll point out that that's
>> more than twice as fast. And that was just an obvious optimization
>> from watching the log scroll. There are plenty more simple things that
>> could be done to speed these up.
>>
>> Also, this patch makes me think that a _replication/localid -> JSON
>> status blob might be useful. Though I dunno how possible that is. I
>> reckon if we had that these would be sped up even more.
>>
>>
>> diff --git a/share/www/script/couch.js b/share/www/script/couch.js
>> index 304c9c1..792e638 100644
>> --- a/share/www/script/couch.js
>> +++ b/share/www/script/couch.js
>> @@ -40,6 +40,8 @@ function CouchDB(name, httpHeaders) {
>>     if (this.last_req.status == 404) {
>>       return false;
>>     }
>> +    var t0 = new Date();
>> +    while(true) {if((new Date()) - t0 > 100) break;}
>>     CouchDB.maybeThrowError(this.last_req);
>>     return JSON.parse(this.last_req.responseText);
>>   };
>> diff --git a/share/www/script/test/replication.js
>> b/share/www/script/test/replication.js
>> index 65c5eaa..b82375a 100644
>> --- a/share/www/script/test/replication.js
>> +++ b/share/www/script/test/replication.js
>> @@ -149,24 +149,40 @@ couchTests.replication = function(debug) {
>>   }
>>
>>
>> -  function waitForSeq(sourceDb, targetDb) {
>> -    var targetSeq,
>> -        sourceSeq = sourceDb.info().update_seq,
>> +  function waitForSeq(sourceDb, targetDb, rep_id) {
>> +    var seq = sourceDb.info().update_seq,
>> +        ri = new RegExp(rep_id),
>> +        tasks,
>>         t0 = new Date(),
>>         t1,
>>         ms = 3000;
>>
>>     do {
>> -      targetSeq = targetDb.info().update_seq;
>> +      tasks = JSON.parse(CouchDB.request("GET",
>> "/_active_tasks").responseText);
>> +      for(var i = 0; i < tasks.length; i++) {
>> +        if(!ri.test(tasks[i].task)) continue;
>> +        var captured = /Processed (\d+)/.exec(tasks[i].status);
>> +        if(parseInt(captured[1]) >= seq) return;
>> +        break;
>> +      }
>>       t1 = new Date();
>> -    } while (((t1 - t0) <= ms) && targetSeq < sourceSeq);
>> +    } while ((t1 - t0) <= ms);
>>   }
>>
>> +  function waitForRepEnd(rep_id) {
>> +    var ri = new RegExp(rep_id),
>> +        tasks,
>> +        t0 = new Date(),
>> +        t1,
>> +        ms = 3000;
>>
>> -  function wait(ms) {
>> -    var t0 = new Date(), t1;
>>     do {
>> -      CouchDB.request("GET", "/");
>> +      tasks = JSON.parse(CouchDB.request("GET",
>> "/_active_tasks").responseText);
>> +      var found = false;
>> +      for(var i = 0; i < tasks.length; i++) {
>> +        if(!ri.test(tasks[i].task)) found = true;
>> +      }
>> +      if(!found) return;
>>       t1 = new Date();
>>     } while ((t1 - t0) <= ms);
>>   }
>> @@ -1143,7 +1159,7 @@ couchTests.replication = function(debug) {
>>
>>     var rep_id = repResult._local_id;
>>
>> -    waitForSeq(sourceDb, targetDb);
>> +    waitForSeq(sourceDb, targetDb, rep_id);
>>
>>     for (j = 0; j < docs.length; j++) {
>>       doc = docs[j];
>> @@ -1181,7 +1197,7 @@ couchTests.replication = function(debug) {
>>     var ddoc = docs[docs.length - 1]; // design doc
>>     addAtt(sourceDb, ddoc, "readme.txt", att1_data, "text/plain");
>>
>> -    waitForSeq(sourceDb, targetDb);
>> +    waitForSeq(sourceDb, targetDb, rep_id);
>>
>>     var modifDocs = docs.slice(10, 15).concat([ddoc]);
>>     for (j = 0; j < modifDocs.length; j++) {
>> @@ -1226,7 +1242,7 @@ couchTests.replication = function(debug) {
>>     // add another attachment to the ddoc on source
>>     addAtt(sourceDb, ddoc, "data.dat", att2_data, "application/binary");
>>
>> -    waitForSeq(sourceDb, targetDb);
>> +    waitForSeq(sourceDb, targetDb, rep_id);
>>
>>     copy = targetDb.open(ddoc._id);
>>     var atts = copy._attachments;
>> @@ -1263,7 +1279,7 @@ couchTests.replication = function(debug) {
>>     var newDocs = makeDocs(25, 35);
>>     populateDb(sourceDb, newDocs, true);
>>
>> -    waitForSeq(sourceDb, targetDb);
>> +    waitForSeq(sourceDb, targetDb, rep_id);
>>
>>     for (j = 0; j < newDocs.length; j++) {
>>       doc = newDocs[j];
>> @@ -1282,7 +1298,7 @@ couchTests.replication = function(debug) {
>>     TEquals(true, sourceDb.deleteDoc(newDocs[0]).ok);
>>     TEquals(true, sourceDb.deleteDoc(newDocs[6]).ok);
>>
>> -    waitForSeq(sourceDb, targetDb);
>> +    waitForSeq(sourceDb, targetDb, rep_id);
>>
>>     copy = targetDb.open(newDocs[0]._id);
>>     TEquals(null, copy);
>> @@ -1317,7 +1333,7 @@ couchTests.replication = function(debug) {
>>     };
>>     TEquals(true, sourceDb.save(doc).ok);
>>
>> -    wait(2000);
>> +    waitForRepEnd(rep_id);
>>     copy = targetDb.open(doc._id);
>>     TEquals(null, copy);
>>   }
>> @@ -1359,7 +1375,7 @@ couchTests.replication = function(debug) {
>>
>>   var tasksAfter = JSON.parse(xhr.responseText);
>>   TEquals(tasks.length, tasksAfter.length);
>> -  waitForSeq(sourceDb, targetDb);
>> +  waitForSeq(sourceDb, targetDb, rep_id);
>>   T(sourceDb.open("30") !== null);
>>
>>   // cancel replication
>>
>
>
>
> --
> Filipe David Manana,
> fdmanana@gmail.com, fdmanana@apache.org
>
> "Reasonable men adapt themselves to the world.
>  Unreasonable men adapt the world to themselves.
>  That's why all progress depends on unreasonable men."
>

Re: Futon Test Suite

Posted by Filipe David Manana <fd...@apache.org>.
On Wed, Aug 10, 2011 at 5:58 PM, Paul Davis <pa...@gmail.com> wrote:
>
> Since no one seems to have believed me I decided to take a closer look

I believe you, and on my machine replication.js takes about 120 seconds.

> at replication.js tests. And as I pointed out it was just polling a
> URL in a tight loop for 3s at a time. On my machine, this patch drops
> replication.js from 93329ms to 41785ms

That's awesome. If it doesn't make assertions fail for others, go ahead.

One thing I noticed in the past is that the browser seems to cache the
results of the db.info() call. A solution (that is employed somewhere
else, but for another request) is to add some random parameter to the
query string, like  /db?anticache=Math.random(1000000).
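
As a rough sketch (db.uri here is an assumption about couch.js, and
note that Math.random() ignores its argument, so scale it instead):

  // Cache-busted variant of db.info(): the random query parameter makes
  // the URL unique so the browser can't serve a stale cached response.
  function freshInfo(db) {
    var anticache = Math.floor(Math.random() * 1000000);
    var xhr = CouchDB.request("GET", db.uri + "?anticache=" + anticache);
    CouchDB.maybeThrowError(xhr);
    return JSON.parse(xhr.responseText);
  }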

>. I'll point out that that's
> more than twice as fast. And that was just an obvious optimization
> from watching the log scroll. There are plenty more simple things that
> could be done to speed these up.
>
> Also, this patch makes me think that a _replication/localid -> JSON
> status blob might be useful. Though I dunno how possible that is. I
> reckon if we had that these would be sped up even more.
>
>
> diff --git a/share/www/script/couch.js b/share/www/script/couch.js
> index 304c9c1..792e638 100644
> --- a/share/www/script/couch.js
> +++ b/share/www/script/couch.js
> @@ -40,6 +40,8 @@ function CouchDB(name, httpHeaders) {
>     if (this.last_req.status == 404) {
>       return false;
>     }
> +    var t0 = new Date();
> +    while(true) {if((new Date()) - t0 > 100) break;}
>     CouchDB.maybeThrowError(this.last_req);
>     return JSON.parse(this.last_req.responseText);
>   };
> diff --git a/share/www/script/test/replication.js
> b/share/www/script/test/replication.js
> index 65c5eaa..b82375a 100644
> --- a/share/www/script/test/replication.js
> +++ b/share/www/script/test/replication.js
> @@ -149,24 +149,40 @@ couchTests.replication = function(debug) {
>   }
>
>
> -  function waitForSeq(sourceDb, targetDb) {
> -    var targetSeq,
> -        sourceSeq = sourceDb.info().update_seq,
> +  function waitForSeq(sourceDb, targetDb, rep_id) {
> +    var seq = sourceDb.info().update_seq,
> +        ri = new RegExp(rep_id),
> +        tasks,
>         t0 = new Date(),
>         t1,
>         ms = 3000;
>
>     do {
> -      targetSeq = targetDb.info().update_seq;
> +      tasks = JSON.parse(CouchDB.request("GET",
> "/_active_tasks").responseText);
> +      for(var i = 0; i < tasks.length; i++) {
> +        if(!ri.test(tasks[i].task)) continue;
> +        var captured = /Processed (\d+)/.exec(tasks[i].status);
> +        if(parseInt(captured[1]) >= seq) return;
> +        break;
> +      }
>       t1 = new Date();
> -    } while (((t1 - t0) <= ms) && targetSeq < sourceSeq);
> +    } while ((t1 - t0) <= ms);
>   }
>
> +  function waitForRepEnd(rep_id) {
> +    var ri = new RegExp(rep_id),
> +        tasks,
> +        t0 = new Date(),
> +        t1,
> +        ms = 3000;
>
> -  function wait(ms) {
> -    var t0 = new Date(), t1;
>     do {
> -      CouchDB.request("GET", "/");
> +      tasks = JSON.parse(CouchDB.request("GET",
> "/_active_tasks").responseText);
> +      var found = false;
> +      for(var i = 0; i < tasks.length; i++) {
> +        if(!ri.test(tasks[i].task)) found = true;
> +      }
> +      if(!found) return;
>       t1 = new Date();
>     } while ((t1 - t0) <= ms);
>   }
> @@ -1143,7 +1159,7 @@ couchTests.replication = function(debug) {
>
>     var rep_id = repResult._local_id;
>
> -    waitForSeq(sourceDb, targetDb);
> +    waitForSeq(sourceDb, targetDb, rep_id);
>
>     for (j = 0; j < docs.length; j++) {
>       doc = docs[j];
> @@ -1181,7 +1197,7 @@ couchTests.replication = function(debug) {
>     var ddoc = docs[docs.length - 1]; // design doc
>     addAtt(sourceDb, ddoc, "readme.txt", att1_data, "text/plain");
>
> -    waitForSeq(sourceDb, targetDb);
> +    waitForSeq(sourceDb, targetDb, rep_id);
>
>     var modifDocs = docs.slice(10, 15).concat([ddoc]);
>     for (j = 0; j < modifDocs.length; j++) {
> @@ -1226,7 +1242,7 @@ couchTests.replication = function(debug) {
>     // add another attachment to the ddoc on source
>     addAtt(sourceDb, ddoc, "data.dat", att2_data, "application/binary");
>
> -    waitForSeq(sourceDb, targetDb);
> +    waitForSeq(sourceDb, targetDb, rep_id);
>
>     copy = targetDb.open(ddoc._id);
>     var atts = copy._attachments;
> @@ -1263,7 +1279,7 @@ couchTests.replication = function(debug) {
>     var newDocs = makeDocs(25, 35);
>     populateDb(sourceDb, newDocs, true);
>
> -    waitForSeq(sourceDb, targetDb);
> +    waitForSeq(sourceDb, targetDb, rep_id);
>
>     for (j = 0; j < newDocs.length; j++) {
>       doc = newDocs[j];
> @@ -1282,7 +1298,7 @@ couchTests.replication = function(debug) {
>     TEquals(true, sourceDb.deleteDoc(newDocs[0]).ok);
>     TEquals(true, sourceDb.deleteDoc(newDocs[6]).ok);
>
> -    waitForSeq(sourceDb, targetDb);
> +    waitForSeq(sourceDb, targetDb, rep_id);
>
>     copy = targetDb.open(newDocs[0]._id);
>     TEquals(null, copy);
> @@ -1317,7 +1333,7 @@ couchTests.replication = function(debug) {
>     };
>     TEquals(true, sourceDb.save(doc).ok);
>
> -    wait(2000);
> +    waitForRepEnd(rep_id);
>     copy = targetDb.open(doc._id);
>     TEquals(null, copy);
>   }
> @@ -1359,7 +1375,7 @@ couchTests.replication = function(debug) {
>
>   var tasksAfter = JSON.parse(xhr.responseText);
>   TEquals(tasks.length, tasksAfter.length);
> -  waitForSeq(sourceDb, targetDb);
> +  waitForSeq(sourceDb, targetDb, rep_id);
>   T(sourceDb.open("30") !== null);
>
>   // cancel replication
>



-- 
Filipe David Manana,
fdmanana@gmail.com, fdmanana@apache.org

"Reasonable men adapt themselves to the world.
 Unreasonable men adapt the world to themselves.
 That's why all progress depends on unreasonable men."

Re: Futon Test Suite

Posted by Paul Davis <pa...@gmail.com>.
On Wed, Aug 10, 2011 at 12:49 AM, Paul Davis
<pa...@gmail.com> wrote:
> On Tue, Aug 9, 2011 at 8:11 PM, Filipe David Manana <fd...@apache.org> wrote:
>> On Mon, Aug 8, 2011 at 11:43 PM, Paul Davis <pa...@gmail.com> wrote:
>>> The entire test suite takes about 4 minutes to run on a semi recent
>>> MBA. Most of the tests are fairly speedy, but four of them stick out
>>> quite drastically:
>>>
>>> delayed_commits: 15s
>>> design_docs: 25s
>>> replication: 90s
>>> replcaitor_db: 60s
>>
>> The replication.js test grew a lot after the new replicator was
>> introduced. Basically it covers a lot more scenarios then the old
>> replication.js test, and tests with larger amounts of documents and
>> continuous replications.
>> I think this is a good thing and inevitable (due to bug fixes, new
>> features, etc).
>>
>> The replicator_db.js does several server restart calls, which are
>> necessary to test this feature.
>>
>> After Jan's patch to add a "verify installation" feature to Futon, I
>> don't think individual tests taking 1, 2 or 5 minutes are an issue, as
>> long as they succeed.
>> For a database management system, having much more comprehensive tests
>> (which mean that they take longer to run) is a good thing.
>>
>> I agree with everything said in this thread.
>>
>
> I only mention the replication tests specifically because it seems
> like they spend a lot of time polling database info objects and the
> logs fly by without any other log messages. I was mostly wondering if
> this was related to a gen_server timeout or commit_after message. On
> the other hand, we should also probably start thinking about
> hierarchical testing schemes. replication.js is over 1.5K loc which
> seems awfully heaving for a single test. These sorts of things will
> help when we want to run certain parts of the suite continuously while
> hacking and then run the full thing before committing.
>
> Also, I think you're spot on about Jan's patch. We should turn Futon's
> tests into a "Your node is functioning suite" and move the test suite
> to the CLI so we can be much more specific in our testing.
>
> And randomly it occurs to me that maybe we should re-evaluate our use
> of init:restart during testing. I know it gives us a clean slate, but
> perhaps having a "randomize test order" would be more useful for
> detecting failures that are non-obvious. Granted that introduces
> obvious difficulties with incompatible tests (ie things that test
> behavior for multiple values of a specific config setting).
>
>>>
>>> I haven't dug into these too much yet. The only thing I've noticed is
>>> that replication and relplicator_db seem to spend a lot of their time
>>> polling various URLs waiting for task completion. Perhaps I'm just
>>> being impatient but it seems that each poll lasts an uncessarily long
>>> time for a unit tests (3-5s) so I was wondering if we were hitting a
>>> timeout or something.
>>>
>>> If anyone wants to dig into the test suite and see if they can't speed
>>> these up or even just split them apart so we know what's taking awhile
>>> that'd be super awesome.
>>>
>>> Also, I've been thinking more and more about beefing up the JavaScript
>>> test suite runner and moving more of our browser tests over to
>>> dedicated code in those tests. If anyone's interested in hacking on
>>> some C and JavaScript against an HTTP API, let me know.
>>>
>>
>>
>>
>> --
>> Filipe David Manana,
>> fdmanana@gmail.com, fdmanana@apache.org
>>
>> "Reasonable men adapt themselves to the world.
>>  Unreasonable men adapt the world to themselves.
>>  That's why all progress depends on unreasonable men."
>>
>

Since no one seems to have believed me I decided to take a closer look
at the replication.js tests. And as I pointed out, it was just polling a
URL in a tight loop for 3s at a time. On my machine, this patch drops
replication.js from 93329ms to 41785ms. I'll point out that that's
more than twice as fast. And that was just an obvious optimization
from watching the log scroll. There are plenty more simple things that
could be done to speed these up.

Also, this patch makes me think that a _replication/localid -> JSON
status blob might be useful. Though I dunno how possible that is. I
reckon if we had that these would be sped up even more.
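
Purely hypothetical, but if such an endpoint existed the wait helpers
could collapse into something like this (the /_replicate/<rep_id> path
and the checkpointed_source_seq field are made up for illustration):

  function waitForCheckpoint(rep_id, seq, ms) {
    var t0 = new Date();
    do {
      var xhr = CouchDB.request("GET", "/_replicate/" + rep_id);
      if (xhr.status === 200) {
        var status = JSON.parse(xhr.responseText);
        // Return as soon as the replication has checkpointed past seq.
        if (status.checkpointed_source_seq >= seq) return true;
      }
    } while ((new Date()) - t0 <= ms);
    return false;
  }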


diff --git a/share/www/script/couch.js b/share/www/script/couch.js
index 304c9c1..792e638 100644
--- a/share/www/script/couch.js
+++ b/share/www/script/couch.js
@@ -40,6 +40,8 @@ function CouchDB(name, httpHeaders) {
     if (this.last_req.status == 404) {
       return false;
     }
+    var t0 = new Date();
+    while(true) {if((new Date()) - t0 > 100) break;}
     CouchDB.maybeThrowError(this.last_req);
     return JSON.parse(this.last_req.responseText);
   };
diff --git a/share/www/script/test/replication.js
b/share/www/script/test/replication.js
index 65c5eaa..b82375a 100644
--- a/share/www/script/test/replication.js
+++ b/share/www/script/test/replication.js
@@ -149,24 +149,40 @@ couchTests.replication = function(debug) {
   }


-  function waitForSeq(sourceDb, targetDb) {
-    var targetSeq,
-        sourceSeq = sourceDb.info().update_seq,
+  function waitForSeq(sourceDb, targetDb, rep_id) {
+    var seq = sourceDb.info().update_seq,
+        ri = new RegExp(rep_id),
+        tasks,
         t0 = new Date(),
         t1,
         ms = 3000;

     do {
-      targetSeq = targetDb.info().update_seq;
+      tasks = JSON.parse(CouchDB.request("GET",
"/_active_tasks").responseText);
+      for(var i = 0; i < tasks.length; i++) {
+        if(!ri.test(tasks[i].task)) continue;
+        var captured = /Processed (\d+)/.exec(tasks[i].status);
+        if(parseInt(captured[1]) >= seq) return;
+        break;
+      }
       t1 = new Date();
-    } while (((t1 - t0) <= ms) && targetSeq < sourceSeq);
+    } while ((t1 - t0) <= ms);
   }

+  function waitForRepEnd(rep_id) {
+    var ri = new RegExp(rep_id),
+        tasks,
+        t0 = new Date(),
+        t1,
+        ms = 3000;

-  function wait(ms) {
-    var t0 = new Date(), t1;
     do {
-      CouchDB.request("GET", "/");
+      tasks = JSON.parse(CouchDB.request("GET",
"/_active_tasks").responseText);
+      var found = false;
+      for(var i = 0; i < tasks.length; i++) {
+        if(!ri.test(tasks[i].task)) found = true;
+      }
+      if(!found) return;
       t1 = new Date();
     } while ((t1 - t0) <= ms);
   }
@@ -1143,7 +1159,7 @@ couchTests.replication = function(debug) {

     var rep_id = repResult._local_id;

-    waitForSeq(sourceDb, targetDb);
+    waitForSeq(sourceDb, targetDb, rep_id);

     for (j = 0; j < docs.length; j++) {
       doc = docs[j];
@@ -1181,7 +1197,7 @@ couchTests.replication = function(debug) {
     var ddoc = docs[docs.length - 1]; // design doc
     addAtt(sourceDb, ddoc, "readme.txt", att1_data, "text/plain");

-    waitForSeq(sourceDb, targetDb);
+    waitForSeq(sourceDb, targetDb, rep_id);

     var modifDocs = docs.slice(10, 15).concat([ddoc]);
     for (j = 0; j < modifDocs.length; j++) {
@@ -1226,7 +1242,7 @@ couchTests.replication = function(debug) {
     // add another attachment to the ddoc on source
     addAtt(sourceDb, ddoc, "data.dat", att2_data, "application/binary");

-    waitForSeq(sourceDb, targetDb);
+    waitForSeq(sourceDb, targetDb, rep_id);

     copy = targetDb.open(ddoc._id);
     var atts = copy._attachments;
@@ -1263,7 +1279,7 @@ couchTests.replication = function(debug) {
     var newDocs = makeDocs(25, 35);
     populateDb(sourceDb, newDocs, true);

-    waitForSeq(sourceDb, targetDb);
+    waitForSeq(sourceDb, targetDb, rep_id);

     for (j = 0; j < newDocs.length; j++) {
       doc = newDocs[j];
@@ -1282,7 +1298,7 @@ couchTests.replication = function(debug) {
     TEquals(true, sourceDb.deleteDoc(newDocs[0]).ok);
     TEquals(true, sourceDb.deleteDoc(newDocs[6]).ok);

-    waitForSeq(sourceDb, targetDb);
+    waitForSeq(sourceDb, targetDb, rep_id);

     copy = targetDb.open(newDocs[0]._id);
     TEquals(null, copy);
@@ -1317,7 +1333,7 @@ couchTests.replication = function(debug) {
     };
     TEquals(true, sourceDb.save(doc).ok);

-    wait(2000);
+    waitForRepEnd(rep_id);
     copy = targetDb.open(doc._id);
     TEquals(null, copy);
   }
@@ -1359,7 +1375,7 @@ couchTests.replication = function(debug) {

   var tasksAfter = JSON.parse(xhr.responseText);
   TEquals(tasks.length, tasksAfter.length);
-  waitForSeq(sourceDb, targetDb);
+  waitForSeq(sourceDb, targetDb, rep_id);
   T(sourceDb.open("30") !== null);

   // cancel replication

Re: Futon Test Suite

Posted by Paul Davis <pa...@gmail.com>.
On Tue, Aug 9, 2011 at 8:11 PM, Filipe David Manana <fd...@apache.org> wrote:
> On Mon, Aug 8, 2011 at 11:43 PM, Paul Davis <pa...@gmail.com> wrote:
>> The entire test suite takes about 4 minutes to run on a semi recent
>> MBA. Most of the tests are fairly speedy, but four of them stick out
>> quite drastically:
>>
>> delayed_commits: 15s
>> design_docs: 25s
>> replication: 90s
>> replcaitor_db: 60s
>
> The replication.js test grew a lot after the new replicator was
> introduced. Basically it covers a lot more scenarios then the old
> replication.js test, and tests with larger amounts of documents and
> continuous replications.
> I think this is a good thing and inevitable (due to bug fixes, new
> features, etc).
>
> The replicator_db.js does several server restart calls, which are
> necessary to test this feature.
>
> After Jan's patch to add a "verify installation" feature to Futon, I
> don't think individual tests taking 1, 2 or 5 minutes are an issue, as
> long as they succeed.
> For a database management system, having much more comprehensive tests
> (which mean that they take longer to run) is a good thing.
>
> I agree with everything said in this thread.
>

I only mention the replication tests specifically because it seems
like they spend a lot of time polling database info objects and the
logs fly by without any other log messages. I was mostly wondering if
this was related to a gen_server timeout or commit_after message. On
the other hand, we should also probably start thinking about
hierarchical testing schemes. replication.js is over 1.5K loc which
seems awfully heavy for a single test. These sorts of things will
help when we want to run certain parts of the suite continuously while
hacking and then run the full thing before committing.

Also, I think you're spot on about Jan's patch. We should turn Futon's
tests into a "Your node is functioning suite" and move the test suite
to the CLI so we can be much more specific in our testing.

And randomly it occurs to me that maybe we should re-evaluate our use
of init:restart during testing. I know it gives us a clean slate, but
perhaps having a "randomize test order" would be more useful for
detecting failures that are non-obvious. Granted that introduces
obvious difficulties with incompatible tests (ie things that test
behavior for multiple values of a specific config setting).
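
Randomizing the order would be a small change in the runner; roughly
(sketch, assuming the usual couchTests name -> function map):

  // Fisher-Yates shuffle of the test names before running them.
  function shuffledTestNames() {
    var names = [];
    for (var name in couchTests) names.push(name);
    for (var i = names.length - 1; i > 0; i--) {
      var j = Math.floor(Math.random() * (i + 1));
      var tmp = names[i]; names[i] = names[j]; names[j] = tmp;
    }
    return names;
  }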

>>
>> I haven't dug into these too much yet. The only thing I've noticed is
>> that replication and relplicator_db seem to spend a lot of their time
>> polling various URLs waiting for task completion. Perhaps I'm just
>> being impatient but it seems that each poll lasts an uncessarily long
>> time for a unit tests (3-5s) so I was wondering if we were hitting a
>> timeout or something.
>>
>> If anyone wants to dig into the test suite and see if they can't speed
>> these up or even just split them apart so we know what's taking awhile
>> that'd be super awesome.
>>
>> Also, I've been thinking more and more about beefing up the JavaScript
>> test suite runner and moving more of our browser tests over to
>> dedicated code in those tests. If anyone's interested in hacking on
>> some C and JavaScript against an HTTP API, let me know.
>>
>
>
>
> --
> Filipe David Manana,
> fdmanana@gmail.com, fdmanana@apache.org
>
> "Reasonable men adapt themselves to the world.
>  Unreasonable men adapt the world to themselves.
>  That's why all progress depends on unreasonable men."
>

Re: Futon Test Suite

Posted by Filipe David Manana <fd...@apache.org>.
On Mon, Aug 8, 2011 at 11:43 PM, Paul Davis <pa...@gmail.com> wrote:
> The entire test suite takes about 4 minutes to run on a semi recent
> MBA. Most of the tests are fairly speedy, but four of them stick out
> quite drastically:
>
> delayed_commits: 15s
> design_docs: 25s
> replication: 90s
> replcaitor_db: 60s

The replication.js test grew a lot after the new replicator was
introduced. Basically it covers a lot more scenarios than the old
replication.js test, and tests with larger amounts of documents and
continuous replications.
I think this is a good thing and inevitable (due to bug fixes, new
features, etc).

The replicator_db.js does several server restart calls, which are
necessary to test this feature.

After Jan's patch to add a "verify installation" feature to Futon, I
don't think individual tests taking 1, 2 or 5 minutes are an issue, as
long as they succeed.
For a database management system, having much more comprehensive tests
(which means they take longer to run) is a good thing.

I agree with everything said in this thread.

>
> I haven't dug into these too much yet. The only thing I've noticed is
> that replication and relplicator_db seem to spend a lot of their time
> polling various URLs waiting for task completion. Perhaps I'm just
> being impatient but it seems that each poll lasts an uncessarily long
> time for a unit tests (3-5s) so I was wondering if we were hitting a
> timeout or something.
>
> If anyone wants to dig into the test suite and see if they can't speed
> these up or even just split them apart so we know what's taking awhile
> that'd be super awesome.
>
> Also, I've been thinking more and more about beefing up the JavaScript
> test suite runner and moving more of our browser tests over to
> dedicated code in those tests. If anyone's interested in hacking on
> some C and JavaScript against an HTTP API, let me know.
>



-- 
Filipe David Manana,
fdmanana@gmail.com, fdmanana@apache.org

"Reasonable men adapt themselves to the world.
 Unreasonable men adapt the world to themselves.
 That's why all progress depends on unreasonable men."

Re: Futon Test Suite

Posted by Jan Lehnardt <ja...@apache.org>.
On Aug 10, 2011, at 12:30 AM, Randall Leeds wrote:

> On Tue, Aug 9, 2011 at 14:48, Paul Davis <pa...@gmail.com>wrote:
> 
>> On Tue, Aug 9, 2011 at 4:04 PM, Dave Cottlehuber <da...@muse.net.nz> wrote:
>>> On 10 August 2011 03:19, Paul Davis <pa...@gmail.com> wrote:
>>> 
>>>> Also, yes. I've finally become irritated enough with clearing the
>>>> browser cache between every test that I feel its time to do something
>>>> productive about it.
>>>> 
>>> 
>>> Use private mode /incognito / pr0n mode & just start a new session. I
>>> know thats not helping the root cause!
>>> 
>>> A+
>>> Dave
>>> 
>> 
>> Does that clear the cache? Or use a temporary one for that session? If
>> so that's a much better solution than what i had going.
>> 
>> 
>> Also random side note, moving official JS tests to CLI means that
>> integrating tests from plugin packages will be much easier. Just
>> started hacking on GeoCouch and I'm already wishing we had more
>> tooling here.
>> 
> 
> I'm 100% for running them from the CLI and as part of make check.
> Related: We could make cURL a hard dependency again, but I'd prefer to see
> it stay optional but be required for make check and release verification.

I'm way in favour as well and happy to help if needed.

Cheers
Jan
-- 



Re: Futon Test Suite

Posted by Paul Davis <pa...@gmail.com>.
On Tue, Aug 9, 2011 at 5:30 PM, Randall Leeds <ra...@gmail.com> wrote:
> On Tue, Aug 9, 2011 at 14:48, Paul Davis <pa...@gmail.com>wrote:
>
>> On Tue, Aug 9, 2011 at 4:04 PM, Dave Cottlehuber <da...@muse.net.nz> wrote:
>> > On 10 August 2011 03:19, Paul Davis <pa...@gmail.com> wrote:
>> >
>> >> Also, yes. I've finally become irritated enough with clearing the
>> >> browser cache between every test that I feel its time to do something
>> >> productive about it.
>> >>
>> >
>> > Use private mode /incognito / pr0n mode & just start a new session. I
>> > know thats not helping the root cause!
>> >
>> > A+
>> > Dave
>> >
>>
>> Does that clear the cache? Or use a temporary one for that session? If
>> so that's a much better solution than what i had going.
>>
>>
>> Also random side note, moving official JS tests to CLI means that
>> integrating tests from plugin packages will be much easier. Just
>> started hacking on GeoCouch and I'm already wishing we had more
>> tooling here.
>>
>
> I'm 100% for running them from the CLI and as part of make check.
> Related: We could make cURL a hard dependency again, but I'd prefer to see
> it stay optional but be required for make check and release verification.
>

I'm not in favor of making curl a hard dependency again. I think a big
ass warning of "YOU CAN'T RUN OUR TESTS" is all that'd be needed at the
end of ./configure to indicate the issue. The hard part about curl is
supporting really old versions on RHEL or CentOS, which I don't think
any of our devs use.

Bottom line, I think it should be optional to build, but required for
us to make a release. This way it only affects the core group and not
random people trying to build on a version of AIX from 2003.

Re: Futon Test Suite

Posted by Randall Leeds <ra...@gmail.com>.
On Tue, Aug 9, 2011 at 14:48, Paul Davis <pa...@gmail.com>wrote:

> On Tue, Aug 9, 2011 at 4:04 PM, Dave Cottlehuber <da...@muse.net.nz> wrote:
> > On 10 August 2011 03:19, Paul Davis <pa...@gmail.com> wrote:
> >
> >> Also, yes. I've finally become irritated enough with clearing the
> >> browser cache between every test that I feel its time to do something
> >> productive about it.
> >>
> >
> > Use private mode /incognito / pr0n mode & just start a new session. I
> > know thats not helping the root cause!
> >
> > A+
> > Dave
> >
>
> Does that clear the cache? Or use a temporary one for that session? If
> so that's a much better solution than what i had going.
>
>
> Also random side note, moving official JS tests to CLI means that
> integrating tests from plugin packages will be much easier. Just
> started hacking on GeoCouch and I'm already wishing we had more
> tooling here.
>

I'm 100% for running them from the CLI and as part of make check.
Related: We could make cURL a hard dependency again, but I'd prefer to see
it stay optional, though required for make check and release verification.

Re: Futon Test Suite

Posted by Dave Cottlehuber <da...@muse.net.nz>.
On 10 August 2011 09:48, Paul Davis <pa...@gmail.com> wrote:
> On Tue, Aug 9, 2011 at 4:04 PM, Dave Cottlehuber <da...@muse.net.nz> wrote:
>> On 10 August 2011 03:19, Paul Davis <pa...@gmail.com> wrote:
>>
>>> Also, yes. I've finally become irritated enough with clearing the
>>> browser cache between every test that I feel its time to do something
>>> productive about it.
>>>
>>
>> Use private mode /incognito / pr0n mode & just start a new session. I
>> know thats not helping the root cause!
>>
>> A+
>> Dave
>>
>
> Does that clear the cache? Or use a temporary one for that session? If
> so that's a much better solution than what i had going.

It seems to use a temp one at least in FF and Safari on Mac. I just
close the window, open a new one & re-run.

A+
Dave

Re: Futon Test Suite

Posted by Paul Davis <pa...@gmail.com>.
On Tue, Aug 9, 2011 at 4:04 PM, Dave Cottlehuber <da...@muse.net.nz> wrote:
> On 10 August 2011 03:19, Paul Davis <pa...@gmail.com> wrote:
>
>> Also, yes. I've finally become irritated enough with clearing the
>> browser cache between every test that I feel its time to do something
>> productive about it.
>>
>
> Use private mode /incognito / pr0n mode & just start a new session. I
> know thats not helping the root cause!
>
> A+
> Dave
>

Does that clear the cache? Or use a temporary one for that session? If
so that's a much better solution than what I had going.


Also random side note, moving official JS tests to CLI means that
integrating tests from plugin packages will be much easier. Just
started hacking on GeoCouch and I'm already wishing we had more
tooling here.

Re: Futon Test Suite

Posted by Dave Cottlehuber <da...@muse.net.nz>.
On 10 August 2011 03:19, Paul Davis <pa...@gmail.com> wrote:

> Also, yes. I've finally become irritated enough with clearing the
> browser cache between every test that I feel its time to do something
> productive about it.
>

Use private mode /incognito / pr0n mode & just start a new session. I
know that's not helping the root cause!

A+
Dave

Re: Futon Test Suite

Posted by Paul Davis <pa...@gmail.com>.
On Tue, Aug 9, 2011 at 9:38 AM, Adam Kocoloski <ko...@apache.org> wrote:
> On Aug 9, 2011, at 4:48 AM, Paul Davis wrote:
>
>> On Tue, Aug 9, 2011 at 2:40 AM, Robert Dionne
>> <di...@dionne-associates.com> wrote:
>>>>
>>>>
>>>> Also, I've been thinking more and more about beefing up the JavaScript
>>>> test suite runner and moving more of our browser tests over to
>>>> dedicated code in those tests. If anyone's interested in hacking on
>>>> some C and JavaScript against an HTTP API, let me know.
>>>
>>>
>>> Paul,
>>>
>>>  Jan and I talked about this a few times and I started a branch[1] along that idea. So far all I did was make a copy of the then current Futon tests into
>>> test/javascript/test  and started looking at the small handful that fail.
>>>
>>>   The browser tests are great (any test is good) but they have too many browser  dependent quirks, or at least I assume that because of the pleasant surprise
>>> one gets when they all run. So I think the goal of these runner tests would be some sort of official HTTP API suite that's part of "make check". Would you agree?
>>> If so I'm happy to take this on.
>>>
>>>   Also I've found eunit to be helpful in BigCouch and wondering how hard it would be to support eunit in couchdb. Having tests in the modules is very good for not
>>> only testing but also to help with reading and understanding what the code does.
>>>
>>> Bob
>>>
>>>
>>> [1] https://github.com/bdionne/couchdb/tree/cli-tests
>>>
>>>
>>
>> Bob,
>>
>> Exactly what I was thinking. By having our test suite have as little
>> code between the socket and the the test as possible we can ensure
>> that the tests are testing CouchDB and not some random change in the
>> behavior of our favorite web browser. I would definitely expect these
>> tests to be part of make check and hence part of the release
>> procedure.
>>
>> My current half formed thoughts are to basically split our test suite
>> into two halves. Tests that are in Erlang and are testing internals,
>> and tests that go through the HTTP interface. I love me some Erlang,
>> but I've not been think of an elegant way to make it easy to run lots
>> of HTTP tests.
>>
>> As to eunit, I'm not sure. I'm really not a huge fan of it, especially
>> mixing implementation and test code. I know rebar can separate them,
>> so its at least possible to get around that. I'd like to have a
>> unified environment for Erlang tests though. And TAP at seems like
>> it'd be easier to interface with non-Erlang tooling if we ever get
>> around to build matrices and what not. But I'm not opposed to it on
>> religious grounds if that's what people want to contribute.
>
> OTP ships with eunit_surefire[1] so interfacing with general test infrastructures isn't really an issue.  I had to dig a little deeper to find a decent TAP consumer for Jenkins and ended up executing the tests from a Perl script using TAP::Harness::JUnit.  I'll grant that the TAP output is easier for a human to digest than the eunit output.
>
> I imagine testing the private module functions is a topic for a good flamewar.  I find it useful, but maybe that's a sign I've got too much logic in a particular module.  Either way,  a viable alternative to writing unit tests that execute in the browser is something this project really needs.  Cheers,
>
> Adam
>
> [1]: http://www.erlang.org/doc/man/eunit_surefire.html

Way to totally destroy one of my last hopes of not using eunit. Thanks for that.

Also, yes. I've finally become irritated enough with clearing the
browser cache between every test that I feel its time to do something
productive about it.

Re: Futon Test Suite

Posted by Adam Kocoloski <ko...@apache.org>.
On Aug 9, 2011, at 4:48 AM, Paul Davis wrote:

> On Tue, Aug 9, 2011 at 2:40 AM, Robert Dionne
> <di...@dionne-associates.com> wrote:
>>> 
>>> 
>>> Also, I've been thinking more and more about beefing up the JavaScript
>>> test suite runner and moving more of our browser tests over to
>>> dedicated code in those tests. If anyone's interested in hacking on
>>> some C and JavaScript against an HTTP API, let me know.
>> 
>> 
>> Paul,
>> 
>>  Jan and I talked about this a few times and I started a branch[1] along that idea. So far all I did was make a copy of the then current Futon tests into
>> test/javascript/test  and started looking at the small handful that fail.
>> 
>>   The browser tests are great (any test is good) but they have too many browser  dependent quirks, or at least I assume that because of the pleasant surprise
>> one gets when they all run. So I think the goal of these runner tests would be some sort of official HTTP API suite that's part of "make check". Would you agree?
>> If so I'm happy to take this on.
>> 
>>   Also I've found eunit to be helpful in BigCouch and wondering how hard it would be to support eunit in couchdb. Having tests in the modules is very good for not
>> only testing but also to help with reading and understanding what the code does.
>> 
>> Bob
>> 
>> 
>> [1] https://github.com/bdionne/couchdb/tree/cli-tests
>> 
>> 
> 
> Bob,
> 
> Exactly what I was thinking. By having our test suite have as little
> code between the socket and the the test as possible we can ensure
> that the tests are testing CouchDB and not some random change in the
> behavior of our favorite web browser. I would definitely expect these
> tests to be part of make check and hence part of the release
> procedure.
> 
> My current half formed thoughts are to basically split our test suite
> into two halves. Tests that are in Erlang and are testing internals,
> and tests that go through the HTTP interface. I love me some Erlang,
> but I've not been think of an elegant way to make it easy to run lots
> of HTTP tests.
> 
> As to eunit, I'm not sure. I'm really not a huge fan of it, especially
> mixing implementation and test code. I know rebar can separate them,
> so its at least possible to get around that. I'd like to have a
> unified environment for Erlang tests though. And TAP at seems like
> it'd be easier to interface with non-Erlang tooling if we ever get
> around to build matrices and what not. But I'm not opposed to it on
> religious grounds if that's what people want to contribute.

OTP ships with eunit_surefire[1] so interfacing with general test infrastructures isn't really an issue.  I had to dig a little deeper to find a decent TAP consumer for Jenkins and ended up executing the tests from a Perl script using TAP::Harness::JUnit.  I'll grant that the TAP output is easier for a human to digest than the eunit output.

I imagine testing the private module functions is a topic for a good flamewar.  I find it useful, but maybe that's a sign I've got too much logic in a particular module.  Either way,  a viable alternative to writing unit tests that execute in the browser is something this project really needs.  Cheers,

Adam

[1]: http://www.erlang.org/doc/man/eunit_surefire.html

Re: Futon Test Suite

Posted by Paul Davis <pa...@gmail.com>.
On Tue, Aug 9, 2011 at 4:11 AM, Jens Rantil <je...@telavox.se> wrote:
> Hi,
>
> Does JS engine come bundled with HTTP request implementation or is that browser specific? If they do, you could execute the browser tests by running them through a JavaScript engine directly? That way you can preserve the JavaScript code, still run the tests and skip your favorite browser.
>
> My five cents,
>
> /Jens
>
> Sent from my cellphone
>

This is precisely how our current JS CLI test runner works. There's a
directory ./test/javascript/ in the tarballs and in SVN that contains
the code to turn CouchJS into a minimal JS environment with a fake XHR
class. The idea currently floating in my head involves abandoning the
"pretend to be a browser" approach and is more focused on creating a
useful HTTP testing environment in JS and then refactoring our test
code to use that instead.
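
To give a flavour of what I mean by a useful HTTP testing environment
(everything below is invented; httpNative() just stands in for whatever
raw request primitive the shell ends up exposing):

  // A thin request wrapper plus a tiny assertion, no browser emulation.
  function GET(path) {
    var resp = httpNative("GET", "http://127.0.0.1:5984" + path, null);
    return {status: resp.status, body: JSON.parse(resp.body)};
  }

  function assertStatus(expected, resp, msg) {
    if (resp.status !== expected) {
      throw new Error((msg || "unexpected status") + ": " + resp.status);
    }
  }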

> On 9 aug 2011, at 10:49, "Paul Davis" <pa...@gmail.com> wrote:
>
>> On Tue, Aug 9, 2011 at 2:40 AM, Robert Dionne
>> <di...@dionne-associates.com> wrote:
>>>>
>>>>
>>>> Also, I've been thinking more and more about beefing up the JavaScript
>>>> test suite runner and moving more of our browser tests over to
>>>> dedicated code in those tests. If anyone's interested in hacking on
>>>> some C and JavaScript against an HTTP API, let me know.
>>>
>>>
>>> Paul,
>>>
>>>  Jan and I talked about this a few times and I started a branch[1] along that idea. So far all I did was make a copy of the then current Futon tests into
>>> test/javascript/test  and started looking at the small handful that fail.
>>>
>>>   The browser tests are great (any test is good) but they have too many browser  dependent quirks, or at least I assume that because of the pleasant surprise
>>> one gets when they all run. So I think the goal of these runner tests would be some sort of official HTTP API suite that's part of "make check". Would you agree?
>>> If so I'm happy to take this on.
>>>
>>>   Also I've found eunit to be helpful in BigCouch and wondering how hard it would be to support eunit in couchdb. Having tests in the modules is very good for not
>>> only testing but also to help with reading and understanding what the code does.
>>>
>>> Bob
>>>
>>>
>>> [1] https://github.com/bdionne/couchdb/tree/cli-tests
>>>
>>>
>>
>> Bob,
>>
>> Exactly what I was thinking. By having our test suite have as little
>> code between the socket and the the test as possible we can ensure
>> that the tests are testing CouchDB and not some random change in the
>> behavior of our favorite web browser. I would definitely expect these
>> tests to be part of make check and hence part of the release
>> procedure.
>>
>> My current half formed thoughts are to basically split our test suite
>> into two halves. Tests that are in Erlang and are testing internals,
>> and tests that go through the HTTP interface. I love me some Erlang,
>> but I've not been think of an elegant way to make it easy to run lots
>> of HTTP tests.
>>
>> As to eunit, I'm not sure. I'm really not a huge fan of it, especially
>> mixing implementation and test code. I know rebar can separate them,
>> so its at least possible to get around that. I'd like to have a
>> unified environment for Erlang tests though. And TAP at seems like
>> it'd be easier to interface with non-Erlang tooling if we ever get
>> around to build matrices and what not. But I'm not opposed to it on
>> religious grounds if that's what people want to contribute.
>

Re: Futon Test Suite

Posted by Jens Rantil <je...@telavox.se>.
Hi,

Does the JS engine come bundled with an HTTP request implementation, or is that browser-specific? If it does, you could execute the browser tests by running them through a JavaScript engine directly. That way you can preserve the JavaScript code, still run the tests, and skip your favorite browser.

My five cents,

/Jens

Sent from my cellphone

On 9 aug 2011, at 10:49, "Paul Davis" <pa...@gmail.com> wrote:

> On Tue, Aug 9, 2011 at 2:40 AM, Robert Dionne
> <di...@dionne-associates.com> wrote:
>>> 
>>> 
>>> Also, I've been thinking more and more about beefing up the JavaScript
>>> test suite runner and moving more of our browser tests over to
>>> dedicated code in those tests. If anyone's interested in hacking on
>>> some C and JavaScript against an HTTP API, let me know.
>> 
>> 
>> Paul,
>> 
>>  Jan and I talked about this a few times and I started a branch[1] along that idea. So far all I did was make a copy of the then current Futon tests into
>> test/javascript/test  and started looking at the small handful that fail.
>> 
>>   The browser tests are great (any test is good) but they have too many browser  dependent quirks, or at least I assume that because of the pleasant surprise
>> one gets when they all run. So I think the goal of these runner tests would be some sort of official HTTP API suite that's part of "make check". Would you agree?
>> If so I'm happy to take this on.
>> 
>>   Also I've found eunit to be helpful in BigCouch and wondering how hard it would be to support eunit in couchdb. Having tests in the modules is very good for not
>> only testing but also to help with reading and understanding what the code does.
>> 
>> Bob
>> 
>> 
>> [1] https://github.com/bdionne/couchdb/tree/cli-tests
>> 
>> 
> 
> Bob,
> 
> Exactly what I was thinking. By having our test suite have as little
> code between the socket and the the test as possible we can ensure
> that the tests are testing CouchDB and not some random change in the
> behavior of our favorite web browser. I would definitely expect these
> tests to be part of make check and hence part of the release
> procedure.
> 
> My current half formed thoughts are to basically split our test suite
> into two halves. Tests that are in Erlang and are testing internals,
> and tests that go through the HTTP interface. I love me some Erlang,
> but I've not been think of an elegant way to make it easy to run lots
> of HTTP tests.
> 
> As to eunit, I'm not sure. I'm really not a huge fan of it, especially
> mixing implementation and test code. I know rebar can separate them,
> so its at least possible to get around that. I'd like to have a
> unified environment for Erlang tests though. And TAP at seems like
> it'd be easier to interface with non-Erlang tooling if we ever get
> around to build matrices and what not. But I'm not opposed to it on
> religious grounds if that's what people want to contribute.

Re: Futon Test Suite

Posted by Paul Davis <pa...@gmail.com>.
On Tue, Aug 9, 2011 at 2:40 AM, Robert Dionne
<di...@dionne-associates.com> wrote:
>>
>>
>> Also, I've been thinking more and more about beefing up the JavaScript
>> test suite runner and moving more of our browser tests over to
>> dedicated code in those tests. If anyone's interested in hacking on
>> some C and JavaScript against an HTTP API, let me know.
>
>
> Paul,
>
>  Jan and I talked about this a few times and I started a branch[1] along that idea. So far all I did was make a copy of the then current Futon tests into
> test/javascript/test  and started looking at the small handful that fail.
>
>   The browser tests are great (any test is good) but they have too many browser  dependent quirks, or at least I assume that because of the pleasant surprise
> one gets when they all run. So I think the goal of these runner tests would be some sort of official HTTP API suite that's part of "make check". Would you agree?
> If so I'm happy to take this on.
>
>   Also I've found eunit to be helpful in BigCouch and wondering how hard it would be to support eunit in couchdb. Having tests in the modules is very good for not
> only testing but also to help with reading and understanding what the code does.
>
> Bob
>
>
> [1] https://github.com/bdionne/couchdb/tree/cli-tests
>
>

Bob,

Exactly what I was thinking. By having our test suite have as little
code between the socket and the test as possible we can ensure
that the tests are testing CouchDB and not some random change in the
behavior of our favorite web browser. I would definitely expect these
tests to be part of make check and hence part of the release
procedure.

My current half-formed thoughts are to basically split our test suite
into two halves. Tests that are in Erlang and are testing internals,
and tests that go through the HTTP interface. I love me some Erlang,
but I've not been able to think of an elegant way to make it easy to run lots
of HTTP tests.

As to eunit, I'm not sure. I'm really not a huge fan of it, especially
mixing implementation and test code. I know rebar can separate them,
so its at least possible to get around that. I'd like to have a
unified environment for Erlang tests though. And TAP at least seems like
it'd be easier to interface with non-Erlang tooling if we ever get
around to build matrices and what not. But I'm not opposed to it on
religious grounds if that's what people want to contribute.
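
For what it's worth, emitting TAP from the JS runner is only a handful
of lines; a sketch, assuming the shell's print() and the couchTests map:

  // Minimal TAP-style reporter: "1..N", then "ok N - name" per test.
  function runWithTap(names) {
    print("1.." + names.length);
    for (var i = 0; i < names.length; i++) {
      try {
        couchTests[names[i]](false);
        print("ok " + (i + 1) + " - " + names[i]);
      } catch (e) {
        print("not ok " + (i + 1) + " - " + names[i] + " # " + (e.message || e));
      }
    }
  }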

Re: Futon Test Suite

Posted by Robert Dionne <di...@dionne-associates.com>.
> 
> 
> Also, I've been thinking more and more about beefing up the JavaScript
> test suite runner and moving more of our browser tests over to
> dedicated code in those tests. If anyone's interested in hacking on
> some C and JavaScript against an HTTP API, let me know.


Paul,

  Jan and I talked about this a few times and I started a branch[1] along that idea. So far all I did was make a copy of the then current Futon tests into
test/javascript/test  and started looking at the small handful that fail. 

   The browser tests are great (any test is good) but they have too many browser-dependent quirks, or at least I assume so because of the pleasant surprise
one gets when they all run. So I think the goal of these runner tests would be some sort of official HTTP API suite that's part of "make check". Would you agree?
If so I'm happy to take this on.

   Also I've found eunit to be helpful in BigCouch and am wondering how hard it would be to support eunit in CouchDB. Having tests in the modules is very good not
only for testing but also for reading and understanding what the code does.

Bob   


[1] https://github.com/bdionne/couchdb/tree/cli-tests