Posted to dev@couchdb.apache.org by Damien Katz <da...@apache.org> on 2008/07/23 21:01:28 UTC

Need help debugging mochiweb/Safari HTTP problems

Right now we are having a major problem with HTTP requests being
retried. This problem is responsible for the test suite failures seen
constantly in Safari (though others report similar failures in
Firefox, I haven't seen those myself). And it's not just test suite
failures; some people are seeing the same behavior in production.

The major symptoms of this problem:
1. Mysterious conflict - You get a conflict error saving a document to  
the db. When you examine the existing db document, it's already got  
your changes.
2. Duplicate document - When creating a new document via POST, you  
occasionally get 2 new documents created instead of one.

#1 is annoying but not too serious; no data is lost or corrupted. #2
is a bit more dangerous, because you could consider the database
corrupted by the duplicate document (depending on what problems it
would cause for your app).

What is happening in both cases is that the HTTP request is getting
sent and processed twice. The first request is given to CouchDB and
handled, but when CouchDB attempts to send the response, the
connection is (apparently) reset. Then another identical HTTP request
comes in and is processed again.

I am not a TCP expert, but from watching the traffic with tcpdump it
is obvious that the request packets (one header packet and one body
packet) are getting resent from the client to the server. I do not
know whether the packets are being resent at the TCP level, or
whether the HTTP client in Safari is retrying the request after
getting a TCP error.

I do not know why the network error or the subsequent resend is
happening; I can only confirm that it *is* happening. If this is at
the TCP level, then we definitely need to do away with the
non-idempotent POST for creating new documents.

I think we should do that anyway. While this network error should not
be happening, it did expose an interesting problem with our use of
POST for document creation. The problem is that the document id is a
UUID generated server side, so the server has no way to distinguish a
new request from a resend of an already processed request; it
generates another UUID and thus creates another new document. But if
the UUID is generated by the client, then the resend will cause a
conflict error (that UUID already exists in the DB), eliminating the
duplicate data.
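
To make that concrete, here is a rough sketch (not actual CouchDB
client or test-suite code) of creating a document with a
client-chosen id via PUT, using the stock OTP inets client (httpc in
current releases; older releases expose a similar call in the http
module). The db name "testdb" is just an example:

%% Sketch: create a document at a client-chosen, URL-safe id. If this
%% request is resent, the second attempt gets a 409 conflict instead
%% of silently creating a second document, which is what happens with
%% POST plus a server-generated UUID.
-module(idempotent_put_sketch).
-export([create_doc/2]).

create_doc(Id, JsonBody) ->
    inets:start(),
    Url = "http://127.0.0.1:5984/testdb/" ++ Id,
    {ok, {{_Vsn, Code, _Reason}, _Headers, RespBody}} =
        httpc:request(put, {Url, [], "application/json", JsonBody},
                      [], []),
    {Code, RespBody}.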

However, we still need to figure out why this is happening in the  
first place. Why is the connection being reset and why is the request  
being retried?

If anyone wants to try to debug this, here is what I've been doing:
1. Run a packet sniffer on local port 5984 and start couchdb
2. Go to http://127.0.0.1/_utils/ and click the "Test Suite" link
3. Run the "basics" test manually until you see a "conflict error"
exception in the test results. (This exception stops the test from
executing further. I don't try to debug other test failures, since
the test keeps running after the failure.)
4. The last few requests in the capture will be the duplicated
requests. There is information about the packets, but I don't know
how to interpret it.

Any help and input appreciated.

-Damien

Re: Need help debugging mochiweb/Safari HTTP problems

Posted by Christopher Lenz <cm...@gmx.de>.
On 25.07.2008, at 10:21, Christopher Lenz wrote:
> On 25.07.2008, at 10:17, Christopher Lenz wrote:
>> On 25.07.2008, at 03:18, Damien Katz wrote:
>>> The fix is to put the whole chunk in one gen_tcp:send/2 call,
>>> which I think forces it into a single TCP packet and therefore the
>>> CRLF is always available immediately. The fix is simple, and I
>>> think it will also be more efficient for most use cases. However,
>>> I think there might still be a flaw in Safari here that could
>>> bite. I also think the idempotence work for document creation is
>>> still necessary.
>>>
>>> I want to take this fix along with the recent replication and  
>>> compaction bug fixes and create a 0.8.1.
>>
>> So this is a patch to the mochiweb source, right? Like this:
>
> Doh, I missed that you had already committed the change, never mind.
>
> I'll try to get this change accepted upstream.

Done:

   <http://code.google.com/p/mochiweb/issues/detail?id=16>

Cheers,
--
Christopher Lenz
   cmlenz at gmx.de
   http://www.cmlenz.net/


Re: Need help debugging mochiweb/Safari HTTP problems

Posted by Christopher Lenz <cm...@gmx.de>.
On 25.07.2008, at 10:17, Christopher Lenz wrote:
> On 25.07.2008, at 03:18, Damien Katz wrote:
>> The fix is to put the whole chunk in one gen_tcp:send/2 call,
>> which I think forces it into a single TCP packet and therefore the
>> CRLF is always available immediately. The fix is simple, and I
>> think it will also be more efficient for most use cases. However, I
>> think there might still be a flaw in Safari here that could bite. I
>> also think the idempotence work for document creation is still
>> necessary.
>>
>> I want to take this fix along with the recent replication and  
>> compaction bug fixes and create a 0.8.1.
>
> So this is a patch to the mochiweb source, right? Like this:

Doh, I missed that you had already committed the change, never mind.

I'll try to get this change accepted upstream.

Cheers,
--
Christopher Lenz
   cmlenz at gmx.de
   http://www.cmlenz.net/


Re: Need help debugging mochiweb/Safari HTTP problems

Posted by Christopher Lenz <cm...@gmx.de>.
On 25.07.2008, at 03:18, Damien Katz wrote:
> Looks like I found a fix for the bug, though I'm not 100% sure what
> the actual bug is. The fix was to change mochiweb to send the HTTP
> chunk in a single gen_tcp:send/2 call. Previously it sent the length
> in one call, then the data followed by the trailing CRLF in another
> call.
>
> My theory of the bug is that the Safari HTTP client is getting the
> chunked end marker in two packets. It gets the 0-length + CRLF line
> in one packet, and when it asks for the final CRLF in the next
> packet it's not there yet, so it just skips it (for reasons yet
> unknown). But that CRLF is still coming, and when the client goes
> ahead, makes the next request, and tries to read the next response,
> it instead gets the previous CRLF it had skipped. Because it sees a
> weird, unexpected response, it retries the request.
>
> The fix is to put the whole chunk in one gen_tcp:send/2 call, which
> I think forces it into a single TCP packet and therefore the CRLF is
> always available immediately. The fix is simple, and I think it will
> also be more efficient for most use cases. However, I think there
> might still be a flaw in Safari here that could bite. I also think
> the idempotence work for document creation is still necessary.
>
> I want to take this fix along with the recent replication and  
> compaction bug fixes and create a 0.8.1.

So this is a patch to the mochiweb source, right? Like this:

Index: src/mochiweb/mochiweb_response.erl
===================================================================
--- src/mochiweb/mochiweb_response.erl	(revision 678916)
+++ src/mochiweb/mochiweb_response.erl	(working copy)
@@ -50,8 +50,7 @@
      case Request:get(version) of
          Version when Version >= {1, 1} ->
              Length = iolist_size(Data),
-            send(io_lib:format("~.16b\r\n", [Length])),
-            send([Data, <<"\r\n">>]);
+            send([io_lib:format("~.16b\r\n", [Length]), Data, <<"\r\n">>]);
          _ ->
              send(Data)
      end.

?

Cheers,
--
Christopher Lenz
   cmlenz at gmx.de
   http://www.cmlenz.net/


Re: Need help debugging mochiweb/Safari HTTP problems

Posted by Damien Katz <da...@apache.org>.
Looks like I found a fix for the bug, though I'm not 100% sure what
the actual bug is. The fix was to change mochiweb to send the HTTP
chunk in a single gen_tcp:send/2 call. Previously it sent the length
in one call, then the data followed by the trailing CRLF in another
call.

My theory of the bug is that the Safari HTTP client is getting the
chunked end marker in two packets. It gets the 0-length + CRLF line
in one packet, and when it asks for the final CRLF in the next packet
it's not there yet, so it just skips it (for reasons yet unknown).
But that CRLF is still coming, and when the client goes ahead, makes
the next request, and tries to read the next response, it instead
gets the previous CRLF it had skipped. Because it sees a weird,
unexpected response, it retries the request.
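
(For reference, the chunked-encoding terminator is the byte sequence
"0" CRLF CRLF, and each earlier chunk is the length in hex, CRLF, the
data, CRLF. So a one-chunk body "hello" goes out on the wire as

    5\r\nhello\r\n0\r\n\r\n

and the theory above is that the final "0\r\n" and the closing "\r\n"
were landing in separate TCP packets.)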

The fix is to put the whole chunk in one gen_tcp:send/2 call, which I
think forces it into a single TCP packet and therefore the CRLF is
always available immediately. The fix is simple, and I think it will
also be more efficient for most use cases. However, I think there
might still be a flaw in Safari here that could bite. I also think
the idempotence work for document creation is still necessary.
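
Concretely, the change amounts to building the whole chunk as one
iolist and handing it to gen_tcp:send/2 in a single call, roughly (a
sketch distilled from the mochiweb patch quoted elsewhere in this
thread):

%% Emit the chunk length, the data, and the trailing CRLF in a single
%% gen_tcp:send/2 call instead of two separate sends.
send_chunk(Socket, Data) ->
    Length = iolist_size(Data),
    Chunk = [io_lib:format("~.16b\r\n", [Length]), Data, <<"\r\n">>],
    ok = gen_tcp:send(Socket, Chunk).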

I want to take this fix along with the recent replication and  
compaction bug fixes and create a 0.8.1.

-Damien


On Jul 23, 2008, at 3:01 PM, Damien Katz wrote:

> Right now we are having a major problem with HTTP requests being
> retried. [...]


Re: Need help debugging mochiweb/Safari HTTP problems

Posted by Michael Hendricks <mi...@ndrix.org>.
On Thu, Jul 24, 2008 at 05:09:59PM -0400, Damien Katz wrote:
> With Safari running the CouchDB 0.7.2 test suite, everything worked 
> perfectly. That version of CouchDB uses the Inets HTTP library.

Has anyone tried running with different versions of the mochiweb
library? Perhaps we'll get lucky and the problem can be bisected to a
specific commit in mochiweb. If nobody's tried this and there's no
obvious reason to avoid the effort, I may give it a try tomorrow.

-- 
Michael

Re: Need help debugging mochiweb/Safari HTTP problems

Posted by Damien Katz <da...@apache.org>.
I still haven't made any progress on this issue.

From debugging the raw packets, it appears Safari is resetting the
connection after sending the complete request but before accepting
the response, and then it resends the request. On Linux, other users
report the problem with Firefox, but I don't think it happens with
Firefox on OS X for anyone.

With Safari running the CouchDB 0.7.2 test suite, everything worked  
perfectly. That version of CouchDB uses the Inets HTTP library.

-Damien


On Jul 23, 2008, at 3:01 PM, Damien Katz wrote:

> Right now we are having a major problem with HTTP requests being
> retried. [...]


Fwd: Need help debugging mochiweb/Safari HTTP problems

Posted by Christopher Lenz <cm...@gmx.de>.
Hey all,

Since we moved CouchDB to MochiWeb a couple of months ago, we've been
getting intermittent errors that seem to be at the TCP or low-level
HTTP layer. We've not had much luck so far figuring out the cause of
the problem, but maybe someone here has an idea; please see Damien's
description below...

Begin forwarded message:
> From: Damien Katz <da...@apache.org>
> Date: 23. Juli 2008 21:01:28 MESZ
> To: couchdb-dev@incubator.apache.org
> Subject: Need help debugging mochiweb/Safari HTTP problems
> Reply-To: couchdb-dev@incubator.apache.org
>
> Right now we are having a major problem with HTTP requests being
> retried. [...]


Thanks,
--
Christopher Lenz
   cmlenz at gmx.de
   http://www.cmlenz.net/



Re: Need help debugging mochiweb/Safari HTTP problems

Posted by David King <dk...@ketralnis.com>.
> I'm also thinking we'll keep the current POST behavior too, since  
> it's already there and works. We'll just document that it should not  
> be used generally. Or maybe we should just rip it out. I can't  
> decide just yet :)

I'm all for leaving it in with a "buyer beware" label

Re: Need help debugging mochiweb/Safari HTTP problems

Posted by Damien Katz <da...@apache.org>.
On Jul 23, 2008, at 5:05 PM, Randall Leeds wrote:

> Maybe the answer is to allow both?
> If accessing from a language which can generate a sufficiently good  
> UUID,
> the application could generate it.
> Also add a mechanism to request a UUID from CouchDB in cases where  
> it's not
> possible to generate one.

Yes, asking CouchDB for the UUID is completely optional. Clients can
give docs any ID they want.

I'm also thinking we'll keep the current POST behavior too, since it's  
already there and works. We'll just document that it should not be  
used generally. Or maybe we should just rip it out. I can't decide  
just yet :)

-Damien

> On Wed, Jul 23, 2008 at 4:57 PM, Damien Katz <da...@apache.org> wrote:
> [...]

Re: Need help debugging mochiweb/Safari HTTP problems

Posted by Randall Leeds <ra...@gmail.com>.
Maybe the answer is to allow both?
If accessing from a language which can generate a sufficiently good UUID,
the application could generate it.
Also add a mechanism to request a UUID from CouchDB in cases where it's not
possible to generate one.

On Wed, Jul 23, 2008 at 4:57 PM, Damien Katz <da...@apache.org> wrote:

> [...]
>

Re: Need help debugging mochiweb/Safari HTTP problems

Posted by Damien Katz <da...@apache.org>.
On Jul 23, 2008, at 3:24 PM, Randall Leeds wrote:

> On Wed, Jul 23, 2008 at 3:01 PM, Damien Katz <da...@apache.org>  
> wrote:
>
>> document creation. The problem is the generated id for the document  
>> is a
>> UUID generated server side, so the server has no way to distinguish  
>> if a
>> request is a new request or a resend of an already processed  
>> request, and so
>> generates another UUID and thus creates another new document. But  
>> if the
>> UUID is generated by the client, then the resend will cause a  
>> conflict
>> error, that UUID already exists in the DB, thus eliminating the  
>> duplicate
>> data.
>>
>
> It seems to me the easiest solution is that the client should  
> probably be
> responsible for generating UUIDs.
> Is there a counter-argument that indicates CouchDB being responsible  
> for
> this? The only one I come up with quickly is that it puts an extra  
> burden on
> the client. Not such a huge burden though. As far as the server  
> goes, client
> generation seems to adhere to the wonderful tenet K.I.S.S.
>

The only problem with this approach is that there is no standard way
of generating a UUID in a browser. CouchDB uses a crypto-level RNG to
create the UUIDs, which is pretty much mandatory to minimize spurious
conflicts. But generating true UUIDs isn't possible in most browsers
(the entropy of the browser's PRNG is usually significantly below the
128 bits possible for a UUID).
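
For what it's worth, on the server side this is just 128 bits from
the crypto RNG formatted as hex. A sketch of the idea (not the actual
CouchDB code; crypto:strong_rand_bytes/1 is the call in current OTP
releases, older ones used crypto:rand_bytes/1):

%% A random 128-bit id rendered as a 32-character, zero-padded hex
%% binary.
new_uuid() ->
    <<Int:128>> = crypto:strong_rand_bytes(16),
    list_to_binary(io_lib:format("~32.16.0b", [Int])).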

One idea is that CouchDB can still generate the UUID, just in a
separate step. The client first asks the server to generate a UUID,
then the client uses that UUID to save the document. It's inefficient
in that it requires two round trips, but we can make it more
efficient if the client library (e.g. couch.js) pre-requests UUIDs in
bulk and then keeps the unused ones around in memory until more are
needed.
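
A bulk-UUID handler on the mochiweb side could be as small as the
following sketch. This is purely hypothetical (it is not an existing
endpoint); it assumes the new_uuid/0 helper above, mochiweb's
Req:ok/1, and mochijson2 for the JSON encoding:

%% Hypothetical handler: return Count fresh UUIDs as a JSON array so a
%% client library can fetch them in bulk and cache the unused ones.
handle_uuids(Req, Count) when is_integer(Count), Count > 0 ->
    UUIDs = [new_uuid() || _ <- lists:seq(1, Count)],
    Json = mochijson2:encode({struct, [{uuids, UUIDs}]}),
    Req:ok({"application/json", Json}).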

-Damien

Re: Need help debugging mochiweb/Safari HTTP problems

Posted by Randall Leeds <ra...@gmail.com>.
On Wed, Jul 23, 2008 at 3:01 PM, Damien Katz <da...@apache.org> wrote:

> document creation. The problem is the generated id for the document is a
> UUID generated server side, so the server has no way to distinguish if a
> request is a new request or a resend of an already processed request, and so
> generates another UUID and thus creates another new document. But if the
> UUID is generated by the client, then the resend will cause a conflict
> error, that UUID already exists in the DB, thus eliminating the duplicate
> data.
>

It seems to me the easiest solution is that the client should probably be
responsible for generating UUIDs.
Is there a counter-argument that indicates CouchDB being responsible for
this? The only one I come up with quickly is that it puts an extra burden on
the client. Not such a huge burden though. As far as the server goes, client
generation seems to adhere to the wonderful tenet K.I.S.S.

-R