Posted to user@couchdb.apache.org by Wordit <wo...@gmail.com> on 2012/09/06 18:28:36 UTC

Limiting doc size to prevent malicious use

How do you best limit the size of docs? The default setting is 4GB
(and currently I can't change max_document_size on iriscouch.com). But
maybe that's the wrong place anyway?

I want to prevent malicious persons from dumping huge amounts of data
into the couch. Should it be done in an update function? If so, I'm not
sure how to enforce that for the whole doc size.

Thanks for any help,

Marcus

Re: Limiting doc size to prevent malicious use

Posted by Robert Newson <rn...@apache.org>.
Cloudant has a 64MB limit and we can control connection count on a per-user basis. Lowering max_document_size is a pretty good first step, but obviously not the only hardening mechanism that should be employed for production use. This applies far more widely than CouchDB, of course.

B.

On 6 Sep 2012, at 21:12, Dan Santner wrote:

> Another vote for doing this at the web server, where it belongs, closest to the edge of your network.  If it gets all the way into an app server of any sort, it's usually already burned through your routers etc...
> 
> Now that doesn't help someone who is on iris or cloudant, but I suppose it would be in those companies' best interest to provide some sort of mechanism to throttle incoming POST/PUT traffic. I suppose they can charge you higher usage fees for the weakness, but in the end it's never a good thing to let someone firehose your sockets (intentionally or not). 
> 
> Sorry for the soapbox on my first post to this group, but real-life production questions like these are the main reason I can't run just a couch app and be done with it.  I treat couch like a database in a three-tier model.  It's brilliant for just that purpose alone.
> 
> On Sep 6, 2012, at 2:31 PM, Mark Hahn wrote:
> 
>> I am.  I couldn't live without nginx.  (And node and couchdb).
>> 
>> On Thu, Sep 6, 2012 at 12:27 PM, Dave Cottlehuber <dc...@jsonified.com> wrote:
>> 
>>> On 6 September 2012 20:50, Robert Newson <rn...@apache.org> wrote:
>>>> function(doc) {
>>>>   if (JSON.stringify(doc).length > limit) {
>>>>     throw({forbidden : "doc too big"});
>>>>   }
>>>> }
>>>> 
>>>> With the caveat that this is inefficient and horrible.
>>>> 
>>>> B.
>>> 
>>> And from a network-based (D)DoS standpoint, the damage is already done
>>> because the payload was sent & parsed, muahahaha. But at least you'll
>>> not be storing it in the DB.
>>> 
>>> Is anybody using nginx or apache to enforce a hard limit? e.g.
>>> http://wiki.nginx.org/HttpCoreModule#client_max_body_size
>>> 
>>> A+
>>> Dave
>>> 
> 


Re: Limiting doc size to prevent malicious use

Posted by Dan Santner <da...@me.com>.
Another vote for doing this at the web server, where it belongs, closest to the edge of your network.  If it gets all the way into an app server of any sort, it's usually already burned through your routers etc...

Now that doesn't help someone who is on iris or cloudant, but I suppose it would be in those companies' best interest to provide some sort of mechanism to throttle incoming POST/PUT traffic. I suppose they can charge you higher usage fees for the weakness, but in the end it's never a good thing to let someone firehose your sockets (intentionally or not). 

Sorry for the soapbox on my first post to this group, but real-life production questions like these are the main reason I can't run just a couch app and be done with it.  I treat couch like a database in a three-tier model.  It's brilliant for just that purpose alone.
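To illustrate the kind of throttling I mean, an nginx sketch in front of couch might look like this (the zone name, rate, body-size cap, and upstream address are all made-up example values):

```nginx
# Rate-limit incoming requests per client IP; all values are examples.
limit_req_zone $binary_remote_addr zone=couch_writes:10m rate=5r/s;

server {
    listen 80;
    location / {
        limit_req zone=couch_writes burst=10;  # queue up to 10 extra requests
        client_max_body_size 1m;               # also cap request bodies at the edge
        proxy_pass http://127.0.0.1:5984;
    }
}
```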

On Sep 6, 2012, at 2:31 PM, Mark Hahn wrote:

> I am.  I couldn't live without nginx.  (And node and couchdb).
> 
> On Thu, Sep 6, 2012 at 12:27 PM, Dave Cottlehuber <dc...@jsonified.com> wrote:
> 
>> On 6 September 2012 20:50, Robert Newson <rn...@apache.org> wrote:
>>> function(doc) {
>>>   if (JSON.stringify(doc).length > limit) {
>>>     throw({forbidden : "doc too big"});
>>>   }
>>> }
>>> 
>>> With the caveat that this is inefficient and horrible.
>>> 
>>> B.
>> 
>> And from a network-based (D)DoS standpoint, the damage is already done
>> because the payload was sent & parsed, muahahaha. But at least you'll
>> not be storing it in the DB.
>> 
>> Is anybody using nginx or apache to enforce a hard limit? e.g.
>> http://wiki.nginx.org/HttpCoreModule#client_max_body_size
>> 
>> A+
>> Dave
>> 


Re: Limiting doc size to prevent malicious use

Posted by Mark Hahn <ma...@hahnca.com>.
I am.  I couldn't live without nginx.  (And node and couchdb).

On Thu, Sep 6, 2012 at 12:27 PM, Dave Cottlehuber <dc...@jsonified.com> wrote:

> On 6 September 2012 20:50, Robert Newson <rn...@apache.org> wrote:
> > function(doc) {
> >   if (JSON.stringify(doc).length > limit) {
> >     throw({forbidden : "doc too big"});
> >   }
> > }
> >
> > With the caveat that this is inefficient and horrible.
> >
> > B.
>
> And from a network-based (D)DoS standpoint, the damage is already done
> because the payload was sent & parsed, muahahaha. But at least you'll
> not be storing it in the DB.
> 
> Is anybody using nginx or apache to enforce a hard limit? e.g.
> http://wiki.nginx.org/HttpCoreModule#client_max_body_size
>
> A+
> Dave
>

Re: Limiting doc size to prevent malicious use

Posted by Dave Cottlehuber <dc...@jsonified.com>.
On 6 September 2012 20:50, Robert Newson <rn...@apache.org> wrote:
> function(doc) {
>   if (JSON.stringify(doc).length > limit) {
>     throw({forbidden : "doc too big"});
>   }
> }
>
> With the caveat that this is inefficient and horrible.
>
> B.

And from a network-based (D)DoS standpoint, the damage is already done
because the payload was sent & parsed, muahahaha. But at least you'll
not be storing it in the DB.

Is anybody using nginx or apache to enforce a hard limit? e.g.
http://wiki.nginx.org/HttpCoreModule#client_max_body_size
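For the record, a minimal sketch of what I have in mind (the upstream address and size limit are example values):

```nginx
# Reject request bodies over 1 MB before they ever reach CouchDB.
# nginx answers oversized requests with 413 Request Entity Too Large.
server {
    listen 80;
    location / {
        client_max_body_size 1m;           # example limit
        proxy_pass http://127.0.0.1:5984;  # example upstream couch
    }
}
```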

A+
Dave

Re: Limiting doc size to prevent malicious use

Posted by Robert Newson <rn...@apache.org>.
function(doc) {
  if (JSON.stringify(doc).length > limit) {
    throw({forbidden : "doc too big"});
  }
}

With the caveat that this is inefficient and horrible.
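Spelled out as a complete validate_doc_update with the full signature (the 1 MB limit here is just an example value):

```javascript
// validate_doc_update as it would appear in a design document.
// Rejects any document whose serialized JSON exceeds the limit.
var validate_doc_update = function (newDoc, oldDoc, userCtx, secObj) {
  var limit = 1048576; // 1 MB; example value
  if (JSON.stringify(newDoc).length > limit) {
    throw({forbidden: "doc too big"});
  }
};
```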

B.

On 6 Sep 2012, at 18:50, Wordit wrote:

> On Thu, Sep 6, 2012 at 7:35 PM, Robert Newson <rn...@apache.org> wrote:
>> 
>> validate_doc_update is your only other option. It won't stop the attempt, but at least you can reject the write itself.
> 
> Thanks, I've been wondering how to achieve this. I can test the size
> of each field, but a malicious user could create a new field to dump
> data into, right?
> 
> A require function ensures certain fields exist, but can you restrict
> a doc to specific field names? That way, you know which fields to check
> the string lengths of.
> 
> Thanks,
> 
> Marcus


Re: Limiting doc size to prevent malicious use

Posted by Wordit <wo...@gmail.com>.
On Thu, Sep 6, 2012 at 7:35 PM, Robert Newson <rn...@apache.org> wrote:
>
> validate_doc_update is your only other option. It won't stop the attempt, but at least you can reject the write itself.

Thanks, I've been wondering how to achieve this. I can test the size
of each field, but a malicious user could create a new field to dump
data into, right?

A require function ensures certain fields exist, but can you restrict
a doc to specific field names? That way, you know which fields to check
the string lengths of.
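Roughly, something like this is what I'm hoping is possible (the field names and length limit below are just placeholders):

```javascript
// Sketch: reject any top-level field outside an allowed set,
// then cap the length of the allowed string fields.
// Field names and the limit are placeholder values.
var validate_doc_update = function (newDoc, oldDoc, userCtx, secObj) {
  var allowed = {_id: true, _rev: true, title: true, body: true};
  var maxLen = 10000; // per-field cap; placeholder value
  for (var key in newDoc) {
    if (!allowed[key]) {
      throw({forbidden: "unknown field: " + key});
    }
    if (typeof newDoc[key] === "string" && newDoc[key].length > maxLen) {
      throw({forbidden: "field too long: " + key});
    }
  }
};
```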

Thanks,

Marcus

Re: Limiting doc size to prevent malicious use

Posted by Robert Newson <rn...@apache.org>.
validate_doc_update is your only other option. It won't stop the attempt, but at least you can reject the write itself.

Perhaps Jason can be persuaded that sending a 4GB JSON body, which will be decoded internally into a form occupying around twice that, is a feature worth disabling.

Finally, I'll note that max_document_size was broken a while back (and no one noticed) and will be restored in CouchDB 1.3.
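For reference, once restored the setting lives in local.ini (the value below is an example):

```ini
[couchdb]
; maximum document size in bytes; 64 MB here is an example value
max_document_size = 67108864
```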

B.

On 6 Sep 2012, at 17:28, Wordit wrote:

> How do you best limit the size of docs? The default setting is 4GB
> (and currently I can't change max_document_size on iriscouch.com). But
> maybe that's the wrong place anyway?
> 
> I want to prevent malicious persons from dumping huge amounts of data
> into the couch. Should it be done in an update function? If so, I'm not
> sure how to enforce that for the whole doc size.
> 
> Thanks for any help,
> 
> Marcus