Posted to user@couchdb.apache.org by Raja <ra...@gmail.com> on 2018/02/07 18:40:07 UTC

CouchDB 1.6.1 returning empty reply

Hi
We are trying to put nginx (or HAProxy) in front of a CouchDB server to
see if we can load-balance some of our databases across multiple machines.
We are currently on 1.6.1 and cannot move up to 2.x to take advantage of
the newer features.

The problem is that the _changes URLs work fine (through nginx or
HAProxy) as long as the query string is shorter than 8192 bytes. We have
some filtered replications that pass UUIDs as query parameters, and when
they exceed 8192 bytes we get "no reply from server" with HAProxy and
"Connection reset by peer" with nginx fronting CouchDB.

The format of the query is something like:

curl -vvvv -XGET
'http://username:password@url:5984/<database>/_changes?feed=normal&heartbeat=300000&style=all_docs&filter=filtername&docIds=<list
of ids>'

Sometimes we have so many ids that the query string exceeds the 8192-byte
limit. If we trim the list to stay under the limit, the request returns
the values properly, but beyond 8192 bytes the request seems to get
truncated and gives an error.

Please note that none of this happens if we go directly to CouchDB; it is
only a problem when we go through nginx or HAProxy. The nginx config is as
described here (
https://cwiki.apache.org/confluence/display/COUCHDB/Nginx+as+a+proxy) and
the HAProxy config is quite straightforward: all requests to the frontend
are sent to a couchdb server.

Also, we cannot use POST for _changes, as there is an issue where the
filter parameters are expected to be in the URL even for a POST (
https://github.com/couchbase/couchbase-lite-ios/issues/1139).
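For reference, stock CouchDB does accept the id list in a POST body through its built-in _doc_ids filter, which keeps the query string tiny; it is the Couchbase Lite client issue above that rules it out here. A sketch, with placeholder database name, credentials, and ids:

```shell
# Hedged sketch: CouchDB's built-in _doc_ids filter takes the id list in a
# POST body, so the query string stays short regardless of how many ids
# there are. The ids and database name below are placeholders, not values
# from this thread.
body='{"doc_ids": ["id-0001", "id-0002"]}'
printf '%s' "$body" > /tmp/changes_body.json
# Replay against a live server (left commented out here):
# curl -X POST 'http://username:password@url:5984/<database>/_changes?filter=_doc_ids' \
#      -H 'Content-Type: application/json' \
#      -d @/tmp/changes_body.json
```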

Any suggestions or workarounds to solve this would be greatly appreciated.

Thanks
Raja

 --
Raja
rajasaur at gmail.com

Re: Views questions in 2.1.1

Posted by "Narepalepu, Vimal Abhishek" <VN...@mgh.harvard.edu>.
Hi Dr. Helmer,

Let me check and get back to you on this.

Best,
Vimal
________________________________
From: Karl Helmer <he...@nmr.mgh.harvard.edu>
Sent: Wednesday, February 7, 2018 2:29:45 PM
To: user@couchdb.apache.org
Subject: Views questions in 2.1.1

Hi,

   I'm switching over to 2.1.1 from 1.7.1 for a new project. Is there a
way to do a bulk ingest of views from one database to another?  Also,
where do views live in the file system?

Also, in 1.7.1, when you created a view you were prompted for Design
Document: _design/<name> and View Name: <view-name>. But in 2.1.1, it asks
for "Design Document" and "Index Name". Is Index Name the same as View Name?

thanks,
Karl

--
Karl Helmer, PhD
Athinoula A Martinos Center for Biomedical Imaging
Massachusetts General Hospital
149 - 13th St Room 2301
Charlestown, MA 02129
(p) 617.726.8636
(f) 617.726.7422
helmer@nmr.mgh.harvard.edu
http://www.martinos.org/user/6787




The information in this e-mail is intended only for the person to whom it is
addressed. If you believe this e-mail was sent to you in error and the e-mail
contains patient information, please contact the Partners Compliance HelpLine at
http://www.partners.org/complianceline . If the e-mail was sent to you in error
but does not contain patient information, please contact the sender and properly
dispose of the e-mail.


Re: Views questions in 2.1.1

Posted by Karl Helmer <he...@nmr.mgh.harvard.edu>.
Thanks Sebastian.  What I was hoping to get was a renamed database with
just the views in it. It looks as though I can do this through replication
(thanks Jan!): change the name of the target, then just delete all of the
documents, and rebuild the views after the new data goes in.
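One hedged alternative, assuming the replicator's doc_ids option (the database and design-document names below are illustrative, not from this thread): replicate only the design documents by id, so no data documents come across and nothing has to be deleted afterwards. The view indexes themselves still rebuild on the target.

```shell
# Hypothetical sketch: copy just the design doc(s) between databases by
# listing their ids in a replication request. "olddb", "newdb", and
# "_design/myviews" are placeholder names.
body='{"source": "olddb", "target": "newdb", "create_target": true, "doc_ids": ["_design/myviews"]}'
printf '%s' "$body" > /tmp/replicate_views.json
# Against a live server (left commented out here):
# curl -X POST http://localhost:5984/_replicate \
#      -H 'Content-Type: application/json' -d @/tmp/replicate_views.json
```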

thanks,
Karl


> Hi Karl,
>
> I'm unsure about the 2nd q - but on the first part: I'm pretty certain you
> have to effectively re-index / re-build the view. Creating the same design
> document on a clustered database should produce a similar view though.
> This
> is what gets integration-tested and I'm very confident this is the
> intention as well.
>
> I hope this helps at least a little...
>
> Best
>    Sebastian
>
>
>
> On Wed, Feb 7, 2018 at 8:29 PM, Karl Helmer <he...@nmr.mgh.harvard.edu>
> wrote:
>
>> Hi,
>>
>>    I'm switching over to 2.1.1 from 1.7.1 for a new project. Is there a
>> way to do a bulk ingest of views from one database to another?  Also,
>> where do views live in the file system?
>>
>> Also, in 1.7.1 when you created a view you were prompted for Design
>> Document: /design/<name> and View Name: <view-name>. But in 2.1.1, it
>> asks
>> for "Design Document and Index Name.  Is Index Name = View Name?
>>
>> thanks,
>> Karl
>>
>> --
>> Karl Helmer, PhD
>> Athinoula A Martinos Center for Biomedical Imaging
>> Massachusetts General Hospital
>> 149 - 13th St Room 2301
>> Charlestown, MA 02129
>> (p) 617.726.8636
>> (f) 617.726.7422
>> helmer@nmr.mgh.harvard.edu
>> http://www.martinos.org/user/6787
>>
>>
>>
>>
>>
>>
>


-- 
Karl Helmer, PhD
Athinoula A Martinos Center for Biomedical Imaging
Massachusetts General Hospital
149 - 13th St Room 2301
Charlestown, MA 02129
(p) 617.726.8636
(f) 617.726.7422
helmer@nmr.mgh.harvard.edu
http://www.martinos.org/user/6787



Re: Views questions in 2.1.1

Posted by Sebastian Rothbucher <se...@googlemail.com>.
Hi Karl,

I'm unsure about the 2nd q - but on the first part: I'm pretty certain you
have to effectively re-index / re-build the view. Creating the same design
document on a clustered database should produce a similar view though. This
is what gets integration-tested and I'm very confident this is the
intention as well.

I hope this helps at least a little...

Best
   Sebastian



On Wed, Feb 7, 2018 at 8:29 PM, Karl Helmer <he...@nmr.mgh.harvard.edu>
wrote:

> Hi,
>
>    I'm switching over to 2.1.1 from 1.7.1 for a new project. Is there a
> way to do a bulk ingest of views from one database to another?  Also,
> where do views live in the file system?
>
> Also, in 1.7.1 when you created a view you were prompted for Design
> Document: /design/<name> and View Name: <view-name>. But in 2.1.1, it asks
> for "Design Document and Index Name.  Is Index Name = View Name?
>
> thanks,
> Karl
>
> --
> Karl Helmer, PhD
> Athinoula A Martinos Center for Biomedical Imaging
> Massachusetts General Hospital
> 149 - 13th St Room 2301
> Charlestown, MA 02129
> (p) 617.726.8636
> (f) 617.726.7422
> helmer@nmr.mgh.harvard.edu
> http://www.martinos.org/user/6787
>
>
>
>
>
>

Views questions in 2.1.1

Posted by Karl Helmer <he...@nmr.mgh.harvard.edu>.
Hi,

   I'm switching over to 2.1.1 from 1.7.1 for a new project. Is there a
way to do a bulk ingest of views from one database to another?  Also,
where do views live in the file system?

Also, in 1.7.1, when you created a view you were prompted for Design
Document: _design/<name> and View Name: <view-name>. But in 2.1.1, it asks
for "Design Document" and "Index Name". Is Index Name the same as View Name?

thanks,
Karl

-- 
Karl Helmer, PhD
Athinoula A Martinos Center for Biomedical Imaging
Massachusetts General Hospital
149 - 13th St Room 2301
Charlestown, MA 02129
(p) 617.726.8636
(f) 617.726.7422
helmer@nmr.mgh.harvard.edu
http://www.martinos.org/user/6787






Re: CouchDB 1.6.1 returning empty reply

Posted by Raja <ra...@gmail.com>.
Got it working by modifying mochiweb_http.erl. For some reason, recbuf
was not being set in the socket options when receiving the request. I had
it set in local.ini and thought it would be propagated, but
mochiweb_http.erl's request(Socket, Body) function sets up the socket
options like this:


ok = mochiweb_socket:setopts(Socket, [{active, once}]),

It does not set any other socket options, which is probably why the
request buffer was cut off at 8k bytes and the response was terminated
abruptly. I changed the above to

ok = mochiweb_socket:setopts(Socket, [{active, once},{recbuf, 60000}]),

Once I included recbuf, my original request, with its huge query string,
worked properly. For now the value is hardcoded and I'll have to make it
read from local.ini, but at least the request now works properly through
nginx / haproxy
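To reproduce the failing case end to end, something like the following sketch (the id format and count are made up) generates a query string well past the 8192-byte mark for replaying through the proxy:

```shell
# Build a docIds list long enough to push the query string past 8192
# bytes; each synthetic id is "doc-" plus 32 digits plus a comma (37
# bytes), so 300 of them gives roughly 11 KB. The curl replay is left
# commented out since it needs a live proxy and CouchDB behind it.
qs="feed=normal&style=all_docs&filter=filtername&docIds="
for i in $(seq 1 300); do
  qs="${qs}doc-$(printf '%032d' "$i"),"
done
echo "query string length: ${#qs}"
# curl -sv "http://localhost:5984/<database>/_changes?${qs}"
```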

Thanks
Raja



On Thu, Feb 8, 2018 at 11:16 AM, Raja <ra...@gmail.com> wrote:

> Hi Nick
> We have those headers setup for nginx and haproxy uses tune.maxrewrite and
> tune.bufsize. Both of them have been setup to be 32k.
>
> Also, to ensure that the problem is not with nginx or haproxy, I setup
> netcat on the box listening on port 5984 and proxied to it, which worked
> just fine. So it seems to be something to do with couchdb/erlang buffer
> lengths.
>
> Ill post my findings after adding some debug logs on mochiweb to see if we
> can resolve this.
>
> Thanks
> Raja
>
>
> On Thu, Feb 8, 2018 at 10:28 AM, Nick Vatamaniuc <va...@gmail.com>
> wrote:
>
>> Hi Raja,
>>
>> It seems that nginx or haproxy has limits on request line lengths.
>>
>> Take a look at:
>> http://nginx.org/en/docs/http/ngx_http_core_module.html#client_header_buffer_size
>> and
>> http://nginx.org/en/docs/http/ngx_http_core_module.html#client_header_buffer_size
>> for nginx. Not sure which settings apply to haproxy.
>>
>> Also consider that in CouchDB 2.x you also have an option of filtering
>> replication docs using selector objects instead of filter functions. Those
>> requests would then be sent using POST requests.
>>
>> http://docs.couchdb.org/en/master/replication/replicator.html#selectorobj
>> also https://blog.couchdb.org/2016/08/15/feature-replication/ (under "A
>> New
>> Way to Filter" section).
>>
>> Regards,
>> -Nick
>>
>>
>> On Wed, Feb 7, 2018 at 11:01 PM, Raja <ra...@gmail.com> wrote:
>>
>> > Thanks Nick. I did try setting recbuf earlier, but dint get much luck.
>> This
>> > is what I had:
>> >
>> > socket_options = [{recbuf,64000},{nodelay,true}]
>> >
>> > I set the same in the [replicator] section as well to see if _changes
>> would
>> > pickup the values in [replicator] section. This is how the [replicator]
>> > socket_options looks like
>> >
>> > socket_options = [{keepalive, true}, {nodelay, false},{recbuf,64000}]
>> >
>> > I have very little experience with erlang other than the replication
>> > filters that I have written, but can we enable any debugging to see the
>> > http errors that it may be throwing. We do build couchdb from source, so
>> > not sure if we can change any parameters to allow for increased buffer
>> > size.
>> >
>> > I see in mochiweb_http.erl (line 69 for request body handling and line
>> 102
>> > for header handling) that if the size is > emsgsize, the code is like
>> >
>> > >            % R15B02 returns this then closes the socket, so close and
>> > exit
>> > >             mochiweb_socket:close(Socket),
>> >
>> >
>> > but Im not sure why it wouldnt pick up the socket_options which has
>> recbuf
>> > set to 64k. Ill keep debugging (build again from source and see what
>> value
>> > of recbuf its taking and logging it), but if you have any other
>> pointers,
>> > please let know.
>> >
>> > Thanks again
>> > Raja
>> >
>> >
>> > On Thu, Feb 8, 2018 at 12:49 AM, Nick Vatamaniuc <va...@gmail.com>
>> > wrote:
>> >
>> > > Hi Raja,
>> > >
>> > > This sounds like this issue:
>> > > https://issues.apache.org/jira/browse/COUCHDB-3293
>> > >
>> > > It stems from a bug in http parser
>> > > http://erlang.org/pipermail/erlang-questions/2011-June/059567.html
>> and
>> > > perhaps mochiweb not knowing how to handle a "message too large
>> error".
>> > >
>> > > One way to work around it is to increase the recbuf, say something
>> like
>> > > this (in 2.0):
>> > >
>> > > [chttpd]
>> > > server_options = [{recbuf, 65536}]
>> > >
>> > > In 1.6 the corresponding option I think is in httpd section:
>> > >
>> > > [httpd]
>> > > socket_options = ...
>> > >
>> > > See if that helps at all.
>> > >
>> > > And btw, that was the reason for introducing these two configuration
>> > > parameters:
>> > >
>> > > couchdb.max_document_id_length = infinity | Integer
>> > >
>> > > replicator.max_document_id_length = infinity | Integer
>> > >
>> > > Basically allowing another way to "avoid" the bug by limiting the
>> size of
>> > > document ids accepted in the system.
>> > >
>> > > Also it seems the behavior in mochiweb was fixed as well to send 413
>> as
>> > > opposed to timing out or closing teh connection as before. But the
>> > problem
>> > > with the Erlang http parser might still be there:
>> > >
>> > > https://github.com/mochi/mochiweb/commit/a6fdb9a3af1301c8be68cd1f85a87ce3028da07a
>> > >
>> > > Cheers,
>> > > -Nick
>> > >
>> > >
>> > > On Wed, Feb 7, 2018 at 1:40 PM, Raja <ra...@gmail.com> wrote:
>> > >
>> > > > Hi
>> > > > We are trying to put an nginx (or haproxy) in front of a CouchDB
>> server
>> > > to
>> > > > see if we can load balance some of our databases between multiple
>> > > machines.
>> > > > We are currently on 1.6.1 and cannot move upto 2.x to take
>> advantage of
>> > > the
>> > > > newer features.
>> > > >
>> > > > The problem is that the _changes urls are working pretty
>> nice(through
>> > > nginx
>> > > > or haproxy) as long as the query string length is < 8192 bytes. We
>> do
>> > > have
>> > > > some filtered replications that take the UUIDs as query parameters,
>> and
>> > > if
>> > > > they are exceeding 8192 bytes, then we get a "no reply from server"
>> in
>> > > the
>> > > > case of HAProxy and a "Connection reset by peer" in the case of
>> Nginx
>> > > > fronting CouchDB.
>> > > >
>> > > > The format of the query is something like :
>> > > >
>> > > > curl -vvvv -XGET
>> > > 'http://username:password@url:5984/<database>/_changes?feed=normal&heartbeat=300000&style=all_docs&filter=filtername&docIds=<list
>> > > of ids>
>> > > >
>> > > > Sometimes, we do have a lot of ids in that it exceeds the limit of
>> 8192
>> > > and
>> > > > when we try to limit it, it returns the values properly, but if we
>> go
>> > > > beyond the 8192 limit, it seems to be truncated and gives an error.
>> > > >
>> > > > Please note that none of these happen if we go directly to CouchDB.
>> > This
>> > > is
>> > > > only a problem if we go through Nginx or HAProxy. The nginx config
>> is
>> > as
>> > > > mentioned here (
>> > > https://cwiki.apache.org/confluence/display/COUCHDB/Nginx+as+a+proxy)
>> > > and
>> > > > HAProxy is quite straightforward where all requests to the frontend
>> are
>> > > > sent to a couchdb server.
>> > > >
>> > > > Also, we cannot use POST for the _changes as there is a issue with
>> > > > filterParameters expected to be in the URL even if its POST (
>> > > > https://github.com/couchbase/couchbase-lite-ios/issues/1139).
>> > > >
>> > > > Any suggestions/workaround to solve this will be greatly helpful.
>> > > >
>> > > > Thanks
>> > > > Raja
>> > > >
>> > > >  --
>> > > > Raja
>> > > > rajasaur at gmail.com
>> > > >
>> > >
>> >
>> >
>> >
>> > --
>> > Raja
>> > rajasaur at gmail.com
>> >
>>
>
>
>
> --
> Raja
> rajasaur at gmail.com
>



-- 
Raja
rajasaur at gmail.com

Re: CouchDB 1.6.1 returning empty reply

Posted by Raja <ra...@gmail.com>.
Hi Nick
We have those headers set up for nginx, and HAProxy uses tune.maxrewrite
and tune.bufsize; both have been set to 32k.

Also, to ensure that the problem is not with nginx or haproxy, I set up
netcat on the box listening on port 5984 and proxied to it, which worked
just fine. So it seems to be something to do with couchdb/erlang buffer
lengths.

I'll post my findings after adding some debug logs to mochiweb to see if
we can resolve this.

Thanks
Raja


On Thu, Feb 8, 2018 at 10:28 AM, Nick Vatamaniuc <va...@gmail.com> wrote:

> Hi Raja,
>
> It seems that nginx or haproxy has limits on request line lengths.
>
> Take a look at:
> http://nginx.org/en/docs/http/ngx_http_core_module.html#client_header_buffer_size
> and
> http://nginx.org/en/docs/http/ngx_http_core_module.html#client_header_buffer_size
> for nginx. Not sure which settings apply to haproxy.
>
> Also consider that in CouchDB 2.x you also have an option of filtering
> replication docs using selector objects instead of filter functions. Those
> requests would then be sent using POST requests.
>
> http://docs.couchdb.org/en/master/replication/replicator.html#selectorobj
> also https://blog.couchdb.org/2016/08/15/feature-replication/ (under "A
> New
> Way to Filter" section).
>
> Regards,
> -Nick
>
>
> On Wed, Feb 7, 2018 at 11:01 PM, Raja <ra...@gmail.com> wrote:
>
> > Thanks Nick. I did try setting recbuf earlier, but dint get much luck.
> This
> > is what I had:
> >
> > socket_options = [{recbuf,64000},{nodelay,true}]
> >
> > I set the same in the [replicator] section as well to see if _changes
> would
> > pickup the values in [replicator] section. This is how the [replicator]
> > socket_options looks like
> >
> > socket_options = [{keepalive, true}, {nodelay, false},{recbuf,64000}]
> >
> > I have very little experience with erlang other than the replication
> > filters that I have written, but can we enable any debugging to see the
> > http errors that it may be throwing. We do build couchdb from source, so
> > not sure if we can change any parameters to allow for increased buffer
> > size.
> >
> > I see in mochiweb_http.erl (line 69 for request body handling and line
> 102
> > for header handling) that if the size is > emsgsize, the code is like
> >
> > >            % R15B02 returns this then closes the socket, so close and
> > exit
> > >             mochiweb_socket:close(Socket),
> >
> >
> > but Im not sure why it wouldnt pick up the socket_options which has
> recbuf
> > set to 64k. Ill keep debugging (build again from source and see what
> value
> > of recbuf its taking and logging it), but if you have any other pointers,
> > please let know.
> >
> > Thanks again
> > Raja
> >
> >
> > On Thu, Feb 8, 2018 at 12:49 AM, Nick Vatamaniuc <va...@gmail.com>
> > wrote:
> >
> > > Hi Raja,
> > >
> > > This sounds like this issue:
> > > https://issues.apache.org/jira/browse/COUCHDB-3293
> > >
> > > It stems from a bug in http parser
> > > http://erlang.org/pipermail/erlang-questions/2011-June/059567.html and
> > > perhaps mochiweb not knowing how to handle a "message too large error".
> > >
> > > One way to work around it is to increase the recbuf, say something like
> > > this (in 2.0):
> > >
> > > [chttpd]
> > > server_options = [{recbuf, 65536}]
> > >
> > > In 1.6 the corresponding option I think is in httpd section:
> > >
> > > [httpd]
> > > socket_options = ...
> > >
> > > See if that helps at all.
> > >
> > > And btw, that was the reason for introducing these two configuration
> > > parameters:
> > >
> > > couchdb.max_document_id_length = infinity | Integer
> > >
> > > replicator.max_document_id_length = infinity | Integer
> > >
> > > Basically allowing another way to "avoid" the bug by limiting the size
> of
> > > document ids accepted in the system.
> > >
> > > Also it seems the behavior in mochiweb was fixed as well to send 413 as
> > > opposed to timing out or closing teh connection as before. But the
> > problem
> > > with the Erlang http parser might still be there:
> > >
> > > https://github.com/mochi/mochiweb/commit/a6fdb9a3af1301c8be68cd1f85a87ce3028da07a
> > >
> > > Cheers,
> > > -Nick
> > >
> > >
> > > On Wed, Feb 7, 2018 at 1:40 PM, Raja <ra...@gmail.com> wrote:
> > >
> > > > Hi
> > > > We are trying to put an nginx (or haproxy) in front of a CouchDB
> server
> > > to
> > > > see if we can load balance some of our databases between multiple
> > > machines.
> > > > We are currently on 1.6.1 and cannot move upto 2.x to take advantage
> of
> > > the
> > > > newer features.
> > > >
> > > > The problem is that the _changes urls are working pretty nice(through
> > > nginx
> > > > or haproxy) as long as the query string length is < 8192 bytes. We do
> > > have
> > > > some filtered replications that take the UUIDs as query parameters,
> and
> > > if
> > > > they are exceeding 8192 bytes, then we get a "no reply from server"
> in
> > > the
> > > > case of HAProxy and a "Connection reset by peer" in the case of Nginx
> > > > fronting CouchDB.
> > > >
> > > > The format of the query is something like :
> > > >
> > > > curl -vvvv -XGET
> > > > 'http://username:password@url:5984/<database>/_changes?feed=normal&heartbeat=300000&style=all_docs&filter=filtername&docIds=<list
> > > > of ids>
> > > >
> > > > Sometimes, we do have a lot of ids in that it exceeds the limit of
> 8192
> > > and
> > > > when we try to limit it, it returns the values properly, but if we go
> > > > beyond the 8192 limit, it seems to be truncated and gives an error.
> > > >
> > > > Please note that none of these happen if we go directly to CouchDB.
> > This
> > > is
> > > > only a problem if we go through Nginx or HAProxy. The nginx config is
> > as
> > > > mentioned here (
> > > > https://cwiki.apache.org/confluence/display/COUCHDB/Nginx+as+a+proxy)
> > > and
> > > > HAProxy is quite straightforward where all requests to the frontend
> are
> > > > sent to a couchdb server.
> > > >
> > > > Also, we cannot use POST for the _changes as there is a issue with
> > > > filterParameters expected to be in the URL even if its POST (
> > > > https://github.com/couchbase/couchbase-lite-ios/issues/1139).
> > > >
> > > > Any suggestions/workaround to solve this will be greatly helpful.
> > > >
> > > > Thanks
> > > > Raja
> > > >
> > > >  --
> > > > Raja
> > > > rajasaur at gmail.com
> > > >
> > >
> >
> >
> >
> > --
> > Raja
> > rajasaur at gmail.com
> >
>



-- 
Raja
rajasaur at gmail.com

Re: CouchDB 1.6.1 returning empty reply

Posted by Nick Vatamaniuc <va...@gmail.com>.
Hi Raja,

It seems that nginx or haproxy has limits on request line lengths.

Take a look at:
http://nginx.org/en/docs/http/ngx_http_core_module.html#client_header_buffer_size
and
http://nginx.org/en/docs/http/ngx_http_core_module.html#client_header_buffer_size
for nginx. Not sure which settings apply to haproxy.
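For nginx, the setting that usually governs long request lines is large_client_header_buffers; a hedged fragment with an illustrative size (HAProxy's rough counterparts, tune.bufsize and tune.maxrewrite in the global section, come up later in this thread):

```nginx
# Allow request lines/headers of up to 32k each (illustrative size,
# not a recommendation from this thread):
large_client_header_buffers 4 32k;
```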

Also consider that in CouchDB 2.x you have the option of filtering
replication docs using selector objects instead of filter functions;
those requests are then sent as POSTs.

http://docs.couchdb.org/en/master/replication/replicator.html#selectorobj
also https://blog.couchdb.org/2016/08/15/feature-replication/ (under "A New
Way to Filter" section).

Regards,
-Nick


On Wed, Feb 7, 2018 at 11:01 PM, Raja <ra...@gmail.com> wrote:

> Thanks Nick. I did try setting recbuf earlier, but dint get much luck. This
> is what I had:
>
> socket_options = [{recbuf,64000},{nodelay,true}]
>
> I set the same in the [replicator] section as well to see if _changes would
> pickup the values in [replicator] section. This is how the [replicator]
> socket_options looks like
>
> socket_options = [{keepalive, true}, {nodelay, false},{recbuf,64000}]
>
> I have very little experience with erlang other than the replication
> filters that I have written, but can we enable any debugging to see the
> http errors that it may be throwing. We do build couchdb from source, so
> not sure if we can change any parameters to allow for increased buffer
> size.
>
> I see in mochiweb_http.erl (line 69 for request body handling and line 102
> for header handling) that if the size is > emsgsize, the code is like
>
> >            % R15B02 returns this then closes the socket, so close and
> exit
> >             mochiweb_socket:close(Socket),
>
>
> but Im not sure why it wouldnt pick up the socket_options which has recbuf
> set to 64k. Ill keep debugging (build again from source and see what value
> of recbuf its taking and logging it), but if you have any other pointers,
> please let know.
>
> Thanks again
> Raja
>
>
> On Thu, Feb 8, 2018 at 12:49 AM, Nick Vatamaniuc <va...@gmail.com>
> wrote:
>
> > Hi Raja,
> >
> > This sounds like this issue:
> > https://issues.apache.org/jira/browse/COUCHDB-3293
> >
> > It stems from a bug in http parser
> > http://erlang.org/pipermail/erlang-questions/2011-June/059567.html and
> > perhaps mochiweb not knowing how to handle a "message too large error".
> >
> > One way to work around it is to increase the recbuf, say something like
> > this (in 2.0):
> >
> > [chttpd]
> > server_options = [{recbuf, 65536}]
> >
> > In 1.6 the corresponding option I think is in httpd section:
> >
> > [httpd]
> > socket_options = ...
> >
> > See if that helps at all.
> >
> > And btw, that was the reason for introducing these two configuration
> > parameters:
> >
> > couchdb.max_document_id_length = infinity | Integer
> >
> > replicator.max_document_id_length = infinity | Integer
> >
> > Basically allowing another way to "avoid" the bug by limiting the size of
> > document ids accepted in the system.
> >
> > Also it seems the behavior in mochiweb was fixed as well to send 413 as
> > opposed to timing out or closing teh connection as before. But the
> problem
> > with the Erlang http parser might still be there:
> >
> > https://github.com/mochi/mochiweb/commit/a6fdb9a3af1301c8be68cd1f85a87ce3028da07a
> >
> > Cheers,
> > -Nick
> >
> >
> > On Wed, Feb 7, 2018 at 1:40 PM, Raja <ra...@gmail.com> wrote:
> >
> > > Hi
> > > We are trying to put an nginx (or haproxy) in front of a CouchDB server
> > to
> > > see if we can load balance some of our databases between multiple
> > machines.
> > > We are currently on 1.6.1 and cannot move upto 2.x to take advantage of
> > the
> > > newer features.
> > >
> > > The problem is that the _changes urls are working pretty nice(through
> > nginx
> > > or haproxy) as long as the query string length is < 8192 bytes. We do
> > have
> > > some filtered replications that take the UUIDs as query parameters, and
> > if
> > > they are exceeding 8192 bytes, then we get a "no reply from server" in
> > the
> > > case of HAProxy and a "Connection reset by peer" in the case of Nginx
> > > fronting CouchDB.
> > >
> > > The format of the query is something like :
> > >
> > > curl -vvvv -XGET
> > > 'http://username:password@url:5984/<database>/_changes?feed=normal&heartbeat=300000&style=all_docs&filter=filtername&docIds=<list
> > > of ids>
> > >
> > > Sometimes, we do have a lot of ids in that it exceeds the limit of 8192
> > and
> > > when we try to limit it, it returns the values properly, but if we go
> > > beyond the 8192 limit, it seems to be truncated and gives an error.
> > >
> > > Please note that none of these happen if we go directly to CouchDB.
> This
> > is
> > > only a problem if we go through Nginx or HAProxy. The nginx config is
> as
> > > mentioned here (
> > > https://cwiki.apache.org/confluence/display/COUCHDB/Nginx+as+a+proxy)
> > and
> > > HAProxy is quite straightforward where all requests to the frontend are
> > > sent to a couchdb server.
> > >
> > > Also, we cannot use POST for the _changes as there is a issue with
> > > filterParameters expected to be in the URL even if its POST (
> > > https://github.com/couchbase/couchbase-lite-ios/issues/1139).
> > >
> > > Any suggestions/workaround to solve this will be greatly helpful.
> > >
> > > Thanks
> > > Raja
> > >
> > >  --
> > > Raja
> > > rajasaur at gmail.com
> > >
> >
>
>
>
> --
> Raja
> rajasaur at gmail.com
>

Re: CouchDB 1.6.1 returning empty reply

Posted by Raja <ra...@gmail.com>.
Thanks Nick. I did try setting recbuf earlier, but didn't have much luck.
This is what I had:

socket_options = [{recbuf,64000},{nodelay,true}]

I set the same in the [replicator] section as well, to see if _changes
would pick up the values there. This is how the [replicator]
socket_options looks:

socket_options = [{keepalive, true}, {nodelay, false},{recbuf,64000}]

I have very little experience with Erlang other than the replication
filters that I have written, but can we enable any debugging to see the
http errors that it may be throwing? We do build CouchDB from source, so
I'm not sure if we can change any parameters to allow for an increased
buffer size.

I see in mochiweb_http.erl (line 69 for request body handling and line 102
for header handling) that if the size is > emsgsize, the code is like

>            % R15B02 returns this then closes the socket, so close and exit
>             mochiweb_socket:close(Socket),


but I'm not sure why it wouldn't pick up the socket_options which has
recbuf set to 64k. I'll keep debugging (build again from source and log
what value of recbuf it is taking), but if you have any other pointers,
please let me know.

Thanks again
Raja


On Thu, Feb 8, 2018 at 12:49 AM, Nick Vatamaniuc <va...@gmail.com> wrote:

> Hi Raja,
>
> This sounds like this issue:
> https://issues.apache.org/jira/browse/COUCHDB-3293
>
> It stems from a bug in http parser
> http://erlang.org/pipermail/erlang-questions/2011-June/059567.html and
> perhaps mochiweb not knowing how to handle a "message too large error".
>
> One way to work around it is to increase the recbuf, say something like
> this (in 2.0):
>
> [chttpd]
> server_options = [{recbuf, 65536}]
>
> In 1.6 the corresponding option I think is in httpd section:
>
> [httpd]
> socket_options = ...
>
> See if that helps at all.
>
> And btw, that was the reason for introducing these two configuration
> parameters:
>
> couchdb.max_document_id_length = infinity | Integer
>
> replicator.max_document_id_length = infinity | Integer
>
> Basically allowing another way to "avoid" the bug by limiting the size of
> document ids accepted in the system.
>
> Also it seems the behavior in mochiweb was fixed as well to send 413 as
> opposed to timing out or closing teh connection as before. But the problem
> with the Erlang http parser might still be there:
>
> https://github.com/mochi/mochiweb/commit/a6fdb9a3af1301c8be68cd1f85a87ce3028da07a
>
> Cheers,
> -Nick
>
>
> On Wed, Feb 7, 2018 at 1:40 PM, Raja <ra...@gmail.com> wrote:
>
> > Hi
> > We are trying to put an nginx (or haproxy) in front of a CouchDB server
> to
> > see if we can load balance some of our databases between multiple
> machines.
> > We are currently on 1.6.1 and cannot move upto 2.x to take advantage of
> the
> > newer features.
> >
> > The problem is that the _changes urls are working pretty nice(through
> nginx
> > or haproxy) as long as the query string length is < 8192 bytes. We do
> have
> > some filtered replications that take the UUIDs as query parameters, and
> if
> > they are exceeding 8192 bytes, then we get a "no reply from server" in
> the
> > case of HAProxy and a "Connection reset by peer" in the case of Nginx
> > fronting CouchDB.
> >
> > The format of the query is something like :
> >
> > curl -vvvv -XGET
> > 'http://username:password@url:5984/<database>/_changes?feed=normal&heartbeat=300000&style=all_docs&filter=filtername&docIds=<list
> > of ids>
> >
> > Sometimes, we do have a lot of ids in that it exceeds the limit of 8192
> and
> > when we try to limit it, it returns the values properly, but if we go
> > beyond the 8192 limit, it seems to be truncated and gives an error.
> >
> > Please note that none of these happen if we go directly to CouchDB. This
> is
> > only a problem if we go through Nginx or HAProxy. The nginx config is as
> > mentioned here (
> > https://cwiki.apache.org/confluence/display/COUCHDB/Nginx+as+a+proxy)
> and
> > HAProxy is quite straightforward where all requests to the frontend are
> > sent to a couchdb server.
> >
> > Also, we cannot use POST for the _changes as there is a issue with
> > filterParameters expected to be in the URL even if its POST (
> > https://github.com/couchbase/couchbase-lite-ios/issues/1139).
> >
> > Any suggestions/workaround to solve this will be greatly helpful.
> >
> > Thanks
> > Raja
> >
> >  --
> > Raja
> > rajasaur at gmail.com
> >
>



-- 
Raja
rajasaur at gmail.com

Re: CouchDB 1.6.1 returning empty reply

Posted by Nick Vatamaniuc <va...@gmail.com>.
Hi Raja,

This sounds like this issue:
https://issues.apache.org/jira/browse/COUCHDB-3293

It stems from a bug in the Erlang http parser
http://erlang.org/pipermail/erlang-questions/2011-June/059567.html and
perhaps mochiweb not knowing how to handle a "message too large" error.

One way to work around it is to increase the recbuf, say something like
this (in 2.0):

[chttpd]
server_options = [{recbuf, 65536}]

In 1.6, the corresponding option is, I think, in the httpd section:

[httpd]
socket_options = ...
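The exact value isn't spelled out above; as a sketch only, assuming socket_options in 1.6 accepts the same Erlang inet option and mirroring the 2.0 value, it might look like:

```ini
; Hypothetical 1.6 equivalent of the 2.0 [chttpd] server_options setting.
; The 65536 value simply mirrors the example above; tune as needed.
[httpd]
socket_options = [{recbuf, 65536}]
```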

See if that helps at all.
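Since requests under the limit reportedly work, another angle, purely a client-side sketch rather than anything in CouchDB itself, is to split the docIds list across several _changes requests and merge the results. The batch_doc_ids helper and the BASE query prefix below are hypothetical, for illustration only:

```python
# Sketch of a client-side workaround (hypothetical helper, not a CouchDB
# API): split a long list of doc ids into batches so that each _changes
# query string stays under the 8192-byte limit seen at the proxy.

BASE = "/db/_changes?feed=normal&style=all_docs&filter=f&docIds="
LIMIT = 8192

def batch_doc_ids(doc_ids, base=BASE, limit=LIMIT):
    """Group doc_ids so base + the comma-joined batch fits in limit bytes."""
    batches, current, length = [], [], len(base)
    for doc_id in doc_ids:
        extra = len(doc_id) + 1  # +1 for the comma separator
        if current and length + extra > limit:
            batches.append(current)
            current, length = [], len(base)
        current.append(doc_id)
        length += extra
    if current:
        batches.append(current)
    return batches

ids = ["doc-%032d" % i for i in range(1000)]
batches = batch_doc_ids(ids)
# Every batch's full query string fits under the limit:
assert all(len(BASE + ",".join(b)) <= LIMIT for b in batches)
```

You would then issue one GET per batch and concatenate the change rows client-side (deduplicating by seq if needed).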

And btw, that was the reason for introducing these two configuration
parameters:

couchdb.max_document_id_length = infinity | Integer

replicator.max_document_id_length = infinity | Integer

Basically, they allow another way to "avoid" the bug by limiting the size
of document ids accepted by the system.
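As a sketch, with 512 being an arbitrary illustrative cap rather than a recommendation, setting both limits might look like:

```ini
; Illustrative only: reject document ids longer than 512 bytes
; both in the database layer and in the replicator.
[couchdb]
max_document_id_length = 512

[replicator]
max_document_id_length = 512
```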

Also, it seems the behavior in mochiweb was fixed as well to send a 413 as
opposed to timing out or closing the connection as before. But the problem
with the Erlang http parser might still be there:

https://github.com/mochi/mochiweb/commit/a6fdb9a3af1301c8be68cd1f85a87ce3028da07a

Cheers,
-Nick


On Wed, Feb 7, 2018 at 1:40 PM, Raja <ra...@gmail.com> wrote:

> Hi
> We are trying to put an nginx (or haproxy) in front of a CouchDB server to
> see if we can load balance some of our databases between multiple machines.
> We are currently on 1.6.1 and cannot move up to 2.x to take advantage of the
> newer features.
>
> The problem is that the _changes URLs are working nicely (through nginx
> or haproxy) as long as the query string length is < 8192 bytes. We do have
> some filtered replications that take the UUIDs as query parameters, and if
> they exceed 8192 bytes, then we get a "no reply from server" in the
> case of HAProxy and a "Connection reset by peer" in the case of Nginx
> fronting CouchDB.
>
> The format of the query is something like:
>
> curl -vvvv -XGET
> 'http://username:password@url:5984/<database>/_changes?feed=normal&heartbeat=300000&style=all_docs&filter=filtername&docIds=<list of ids>'
>
> Sometimes we have so many ids that the query exceeds the 8192 limit; when
> we limit it, it returns the values properly, but if we go beyond the 8192
> limit, the request seems to be truncated and gives an error.
>
> Please note that none of these happen if we go directly to CouchDB. This is
> only a problem if we go through Nginx or HAProxy. The nginx config is as
> mentioned here (
> https://cwiki.apache.org/confluence/display/COUCHDB/Nginx+as+a+proxy) and
> HAProxy is quite straightforward: all requests to the frontend are
> sent to a couchdb server.
>
> Also, we cannot use POST for the _changes as there is an issue with
> filterParameters expected to be in the URL even if it's a POST (
> https://github.com/couchbase/couchbase-lite-ios/issues/1139).
>
> Any suggestions/workarounds to solve this will be greatly helpful.
>
> Thanks
> Raja
>
>  --
> Raja
> rajasaur at gmail.com
>