Posted to user@couchdb.apache.org by Jeff Hinrichs - DM&T <je...@dundeemt.com> on 2009/02/21 11:48:03 UTC

[<0.111.0>] Uncaught error in HTTP request: {exit,{body_too_large,content_length}}

While trying to upload a document with attachments -- a load operation

[Sat, 21 Feb 2009 10:41:26 GMT] [debug] [<0.111.0>] 'PUT' /eli/INV00541303
{1,1}
Headers: [{'Accept',"application/json"},
          {'Accept-Encoding',"identity"},
          {'Content-Length',"1806030"},
          {'Content-Type',"application/json"},
          {'Host',"localhost:5984"},
          {'User-Agent',"couchdb-python 0.5"}]

[Sat, 21 Feb 2009 10:41:26 GMT] [error] [<0.111.0>] Uncaught error in HTTP
request: {exit,{body_too_large,content_length}}

[Sat, 21 Feb 2009 10:41:26 GMT] [debug] [<0.111.0>] Stacktrace:
[{mochiweb_request,stream_body,5},
             {mochiweb_request,recv_body,2},
             {couch_httpd,json_body,1},
             {couch_httpd_db,db_doc_req,3},
             {couch_httpd_db,do_db_req,2},
             {couch_httpd,handle_request,3},
             {mochiweb_http,headers,4},
             {proc_lib,init_p,5}]

[Sat, 21 Feb 2009 10:41:26 GMT] [debug] [<0.111.0>] HTTPd 500 error
response:
 {"error":"body_too_large","reason":"content_length"}

[Sat, 21 Feb 2009 10:41:26 GMT] [info] [<0.111.0>] 127.0.0.1 - - 'PUT'
/eli/INV00541303 500


local.ini
=====================
[couchdb]
max_document_size = 17179869184 ; bytes
max_attachment_chunk_size = 17179869184 ; bytes


Any insights or workarounds would be appreciated.

Regards,

Jeff

Re: [<0.111.0>] Uncaught error in HTTP request: {exit,{body_too_large,content_length}}

Posted by Jeff Hinrichs - DM&T <je...@dundeemt.com>.
On Sat, Feb 21, 2009 at 12:45 PM, Chris Anderson <jc...@apache.org> wrote:

> On Sat, Feb 21, 2009 at 10:10 AM, Jeff Hinrichs - DM&T
> <je...@dundeemt.com> wrote:
> >
> > Google can't seem to help me locate an example of doing a PUT w/ chunked
> > transfer for  httplib2 -- does anyone have any pointers?
> >
>
> Any luck PUTing attachments with chunked encoding? You'll need to
> create the document first, with a PUT, and then create the attachment,
> with a chunked PUT.

No, not yet. I have limited experience with HTTP underpinnings, so I am
looking for examples on the interweb.  Creating the document and then
putting the attachments would create an unwanted(?) revision during a
dump/load cycle -- I am not sure whether that is a bad thing -- since I
was hoping to get it loaded back into the db in one shot.

>
> It should be possible to avoid buffering non-chunked standalone
> attachment PUTs now as well, based on my recent patch to Mochiweb.
> Implementing that would be pretty simple, just a matter of adapting
> the attachment-writing code to handle it.

I haven't experienced any difficulties with put_attachment(), and some of
my attachments are over 8MB.  I am still trying to figure out how
put_attachment(), which uses self.resource.put(), differs from the normal
__setitem__, which also uses self.resource.put(), since put_attachment()
can deal with Mochiweb's default limit of 1MB while __setitem__ cannot.
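The two call paths look roughly like this.  (A sketch, not the library
internals: put_attachment()'s argument order may differ between
couchdb-python versions, and the file names are made up.)

import base64
import couchdb  # couchdb-python

server = couchdb.Server("http://localhost:5984/")
db = server["eli"]

# Path 1: standalone attachment PUT -- the JSON document and the
# attachment bytes travel in separate requests, so no single request
# body has to exceed Mochiweb's 1MB cap.
doc = {"_id": "INV00541303"}
db[doc["_id"]] = doc  # __setitem__ fills in _id/_rev on success
with open("scan.pdf", "rb") as f:
    db.put_attachment(doc, f, "scan.pdf", content_type="application/pdf")

# Path 2: inline base64 attachment -- the whole document, attachment
# included, is one JSON request body, which is what trips body_too_large.
with open("scan.pdf", "rb") as f:
    data = base64.b64encode(f.read()).decode("ascii")
db["INV00541304"] = {
    "_attachments": {
        "scan.pdf": {"content_type": "application/pdf", "data": data}
    }
}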

>
>
> There's not much that can be done about memory usage for inlined
> base64 attachments. It sounds like that is what you are using for
> load.

Yep.  I am trying to modify the existing couchdb-dump/load code to output
each document as an individual MIME object, and then I am marshalling the
individual objects into a zip file -- that part is working.  The problem
is when I try to db[docid] = doc with a doc that exceeds Mochiweb's body
limit.
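The dump side of that, simplified to one JSON member per document instead
of MIME objects (a sketch; the file names are made up):

import json
import zipfile

import couchdb

db = couchdb.Server("http://localhost:5984/")["eli"]

# One archive member per document; iterating a Database yields doc ids.
with zipfile.ZipFile("dump.zip", "w", zipfile.ZIP_DEFLATED) as zf:
    for doc_id in db:
        zf.writestr("%s.json" % doc_id, json.dumps(dict(db[doc_id])))

Loading is the reverse -- json.loads() each member and assign it with
db[docid] = doc -- and that assignment is exactly what hits the limit.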

>
> In the long run, I'm hoping that a form of the all_docs view can be
> useful for dump and load. Eg, something like
>
> curl http://localhost:5984/olddb/_all_docs?include_everything=true >
> backup.json
>
> followed by
>
> curl -X POST -T backup.json http://localhost:5984/newdb/_bulk_docs
>
> Would be all that is needed for dump and load. I'm not sure how close
> we are to this yet, but it sure seems like the simplest way.

That would be outstanding.  I can't +1 it enough.  I would suggest that it
should be available before the next release that forces a dump/load cycle
due to a change in _rev or other db-incompatible changes.  I am kind of
surprised that there isn't something like this already.  Otherwise there
is no way to migrate an existing database with attachments of more than
modest size between machines -- replication of documents with large
attachments fails.

I am sure that the new _rev format is going to make it into 0.9, which is
going to require a dump/load.  Kind of a rock/hard-place situation for me.
Currently, it looks as if hacking Mochiweb's default body limit is the
only viable solution until I can get chunked transfer working in the
couchdb-python module. ;(  yick.

I am confused about the CouchDB settings max_document_size and
max_attachment_chunk_size -- those take effect after Mochiweb has received
the data, correct?  So neither is a solution for the dilemma at hand.  Is
there any way to make -define(MAX_RECV_BODY, (1024*1024)) in
src/mochiweb/mochiweb_request.erl modifiable via Futon's configuration
panel?  I don't know if this is a good idea -- chances are someone has
thought hard about it and it is already a good setting.
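As far as I can tell, Futon's configuration panel is a front end to the
/_config HTTP API, so only values the Erlang code reads through
couch_config at runtime can appear there; a compile-time macro like
MAX_RECV_BODY cannot, short of patching Mochiweb to consult the config.
Inspecting the runtime values, as a sketch against the 0.9-era /_config
API (which may differ in other builds):

import json
import urllib.request

cfg = json.load(urllib.request.urlopen(
    "http://localhost:5984/_config/couchdb"))
print(cfg.get("max_document_size"))  # the local.ini value shown earlier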

Hopefully this will interest cmlenz, and he will be able to add to the
conversation ;)



>
> Chris
>
> --
> Chris Anderson
> http://jchris.mfdz.com
>

Re: [<0.111.0>] Uncaught error in HTTP request: {exit,{body_too_large,content_length}}

Posted by Chris Anderson <jc...@apache.org>.
On Sat, Feb 21, 2009 at 10:10 AM, Jeff Hinrichs - DM&T
<je...@dundeemt.com> wrote:
>
> Google can't seem to help me locate an example of doing a PUT w/ chunked
> transfer for  httplib2 -- does anyone have any pointers?
>

Any luck PUTing attachments with chunked encoding? You'll need to
create the document first, with a PUT, and then create the attachment,
with a chunked PUT.
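As far as I know httplib2 cannot stream a chunked request body, but the
framing is easy to do by hand with the standard library (http.client in
Python 3, httplib in the Python 2 of this thread).  A minimal sketch,
with made-up helper names and an illustrative rev:

import http.client

def chunked_put(host, port, path, chunks, content_type):
    # Stream an iterable of byte chunks as an HTTP/1.1 chunked-transfer
    # PUT; the server reads chunk by chunk, so no Content-Length is sent.
    conn = http.client.HTTPConnection(host, port)
    conn.putrequest("PUT", path)
    conn.putheader("Transfer-Encoding", "chunked")
    conn.putheader("Content-Type", content_type)
    conn.endheaders()
    for chunk in chunks:
        if chunk:
            conn.send(b"%x\r\n" % len(chunk))  # chunk size in hex
            conn.send(chunk)
            conn.send(b"\r\n")
    conn.send(b"0\r\n\r\n")  # zero-size chunk terminates the body
    return conn.getresponse()

def file_chunks(path, size=64 * 1024):
    # Yield a large file in pieces so it never sits in memory at once.
    with open(path, "rb") as f:
        while True:
            chunk = f.read(size)
            if not chunk:
                return
            yield chunk

resp = chunked_put("localhost", 5984,
                   "/eli/INV00541303/scan.pdf?rev=1-abc123",
                   file_chunks("scan.pdf"), "application/pdf")
print(resp.status, resp.read())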

It should be possible to avoid buffering non-chunked standalone
attachment PUTs now as well, based on my recent patch to Mochiweb.
Implementing that would be pretty simple, just a matter of adapting
the attachment-writing code to handle it.

There's not much that can be done about memory usage for inlined
base64 attachments. It sounds like that is what you are using for
load.

In the long run, I'm hoping that a form of the all_docs view can be
useful for dump and load. Eg, something like

curl http://localhost:5984/olddb/_all_docs?include_everything=true > backup.json

followed by

curl -X POST -T backup.json http://localhost:5984/newdb/_bulk_docs

Would be all that is needed for dump and load. I'm not sure how close
we are to this yet, but it sure seems like the simplest way.
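In the meantime something close is possible, modulo the large-attachment
gap this thread is about: _all_docs?include_docs=true (the
include_everything flag above is hypothetical) reshaped into the
{"docs": [...]} body that _bulk_docs expects.  A sketch, assuming a build
with include_docs support:

import json
import urllib.request

rows = json.load(urllib.request.urlopen(
    "http://localhost:5984/olddb/_all_docs?include_docs=true"))["rows"]
docs = [row["doc"] for row in rows]

# When loading into a fresh database, drop the old revisions so
# _bulk_docs does not report update conflicts.
for doc in docs:
    doc.pop("_rev", None)

req = urllib.request.Request(
    "http://localhost:5984/newdb/_bulk_docs",
    data=json.dumps({"docs": docs}).encode("utf-8"),
    headers={"Content-Type": "application/json"})
print(urllib.request.urlopen(req).read())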

Chris

-- 
Chris Anderson
http://jchris.mfdz.com

Re: [<0.111.0>] Uncaught error in HTTP request: {exit,{body_too_large,content_length}}

Posted by Jeff Hinrichs - DM&T <je...@dundeemt.com>.
On Sat, Feb 21, 2009 at 5:49 AM, Jason Davies <ja...@jasondavies.com> wrote:

> Hi Jeff,
>
> On 21 Feb 2009, at 11:13, Jeff Hinrichs - DM&T wrote:
>
>  ended up modifying src/mochiweb/mochiweb_request.erl >
>> -define(MAX_RECV_BODY, (1024*1024)) to -define(MAX_RECV_BODY,
>> (1024*1024*16))
>>
>> Going forward, what is the best way to handle this situation -- w.r.t.
>> python-couchdb?
>>
>
>
> This has come up in a thread before with the solution being the same as
> what you have done: http://markmail.org/thread/tljsgr4g6eelgq7m
>
> MochiWeb's default request size is 1MB at the moment, but perhaps this
> doesn't matter if chunked transfer coding is used.  If this is the case, I
> think couchdb-python should be updated to use chunked transfer coding.  I
> would open an issue on the couchdb-python bug tracker if there isn't already
> one.
>
> --
> Jason Davies
>
> www.jasondavies.com
>
>
The thing I can't seem to grok is how the Database put_attachment() method
succeeds with attachments clearly larger than 1MB when it uses the
resource.put() method, while the Database object's __setitem__ method,
which also uses the resource.put() method, fails.  Wouldn't that imply
that chunked transfer is working for one and not the other?  Clearly there
is some magic involved that I am currently blind to.

Google can't seem to help me locate an example of doing a PUT with chunked
transfer for httplib2 -- does anyone have any pointers?

Regards,

Jeff

Re: [<0.111.0>] Uncaught error in HTTP request: {exit,{body_too_large,content_length}}

Posted by Jason Davies <ja...@jasondavies.com>.
Hi Jeff,

On 21 Feb 2009, at 11:13, Jeff Hinrichs - DM&T wrote:

> ended up modifying src/mochiweb/mochiweb_request.erl >
> -define(MAX_RECV_BODY, (1024*1024)) to -define(MAX_RECV_BODY,
> (1024*1024*16))
>
> Going forward, what is the best way to handle this situation -- w.r.t.
> python-couchdb?


This has come up in a thread before with the solution being the same  
as what you have done: http://markmail.org/thread/tljsgr4g6eelgq7m

MochiWeb's default request size is 1MB at the moment, but perhaps this  
doesn't matter if chunked transfer coding is used.  If this is the  
case, I think couchdb-python should be updated to use chunked transfer  
coding.  I would open an issue on the couchdb-python bug tracker if  
there isn't already one.

--
Jason Davies

www.jasondavies.com


Re: [<0.111.0>] Uncaught error in HTTP request: {exit,{body_too_large,content_length}}

Posted by Jeff Hinrichs - DM&T <je...@dundeemt.com>.
On Sat, Feb 21, 2009 at 4:48 AM, Jeff Hinrichs - DM&T <je...@dundeemt.com> wrote:

> While trying to upload a document with attachments -- a load operation
>
> [Sat, 21 Feb 2009 10:41:26 GMT] [debug] [<0.111.0>] 'PUT' /eli/INV00541303
> {1,1}
> Headers: [{'Accept',"application/json"},
>           {'Accept-Encoding',"identity"},
>           {'Content-Length',"1806030"},
>           {'Content-Type',"application/json"},
>           {'Host',"localhost:5984"},
>           {'User-Agent',"couchdb-python 0.5"}]
>
> [Sat, 21 Feb 2009 10:41:26 GMT] [error] [<0.111.0>] Uncaught error in HTTP
> request: {exit,{body_too_large,content_length}}
>
> [Sat, 21 Feb 2009 10:41:26 GMT] [debug] [<0.111.0>] Stacktrace:
> [{mochiweb_request,stream_body,5},
>              {mochiweb_request,recv_body,2},
>              {couch_httpd,json_body,1},
>              {couch_httpd_db,db_doc_req,3},
>              {couch_httpd_db,do_db_req,2},
>              {couch_httpd,handle_request,3},
>              {mochiweb_http,headers,4},
>              {proc_lib,init_p,5}]
>
> [Sat, 21 Feb 2009 10:41:26 GMT] [debug] [<0.111.0>] HTTPd 500 error
> response:
>  {"error":"body_too_large","reason":"content_length"}
>
> [Sat, 21 Feb 2009 10:41:26 GMT] [info] [<0.111.0>] 127.0.0.1 - - 'PUT'
> /eli/INV00541303 500
>
>
> local.ini
> =====================
> [couchdb]
> max_document_size = 17179869184 ; bytes
> max_attachment_chunk_size = 17179869184 ; bytes
>
>
> Any  insights or work-arounds would be appreciated.
>
> Regards,
>
> Jeff
>
I ended up modifying src/mochiweb/mochiweb_request.erl, changing
-define(MAX_RECV_BODY, (1024*1024)) to -define(MAX_RECV_BODY,
(1024*1024*16)).

Going forward, what is the best way to handle this situation -- w.r.t.
python-couchdb?

Regards,

Jeff