Posted to commits@cassandra.apache.org by "Antoine Blanchet (JIRA)" <ji...@apache.org> on 2015/05/18 18:04:00 UTC

[jira] [Created] (CASSANDRA-9413) Add a default limit size (in bytes) for requests

Antoine Blanchet created CASSANDRA-9413:
-------------------------------------------

             Summary: Add a default limit size (in bytes) for requests
                 Key: CASSANDRA-9413
                 URL: https://issues.apache.org/jira/browse/CASSANDRA-9413
             Project: Cassandra
          Issue Type: Improvement
          Components: Core
         Environment: Cassandra 2.0.10, queried over Thrift
            Reporter: Antoine Blanchet


We experienced a crash on our production cluster following a massive wide row read over Thrift.

A client tried to read a wide row (~4 GB of raw data) without specifying any
slice condition, which resulted in the crash of multiple nodes (as many as
the replication factor) after long garbage collection pauses.
We know that wide rows should not grow that big, but that is not the topic here.
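
On the client side, the standard way to avoid materializing an entire row is
to page through it with a bounded slice. The following is a minimal sketch
over the Thrift API; the column family name "Timeline", the page size, and
the already-open connection are assumptions for illustration only:

import java.nio.ByteBuffer;
import java.util.List;
import org.apache.cassandra.thrift.*;

public class BoundedSliceRead {
    // Hypothetical cap on columns returned per call; it bounds each
    // response instead of letting one request pull back the whole row.
    static final int PAGE_SIZE = 1000;

    // Assumes `client` is an open Cassandra.Client with the keyspace
    // already selected via set_keyspace(). Pass an empty buffer as
    // `startColumn` for the first page, then the last column name seen
    // (skipping the duplicate) for each subsequent page.
    static List<ColumnOrSuperColumn> readPage(Cassandra.Client client,
                                              ByteBuffer rowKey,
                                              ByteBuffer startColumn)
            throws Exception {
        SliceRange range = new SliceRange(
                startColumn,                  // resume point
                ByteBuffer.wrap(new byte[0]), // open-ended finish
                false,                        // not reversed
                PAGE_SIZE);                   // the crucial bound
        SlicePredicate predicate = new SlicePredicate();
        predicate.setSlice_range(range);
        ColumnParent parent = new ColumnParent("Timeline");
        return client.get_slice(rowKey, parent, predicate,
                                ConsistencyLevel.QUORUM);
    }
}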

My question is the following: is it possible to prevent Cassandra from
OOMing when a client issues this kind of request? I'd rather have an error
returned to the client than a multi-node crash.
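
For reference, the behaviour this ticket asks for could look roughly like
the guard below: accumulate the serialized size of a response as it is
built and abort with a client-visible error once a configurable byte limit
is crossed. This is an illustrative sketch only, not actual Cassandra
internals; the class name and limit are hypothetical.

// Illustrative only, not Cassandra code: sketches the proposed
// "default limit size (in bytes) for requests".
public class SizeBoundedResponse {
    private final long maxBytes; // e.g. a hypothetical cassandra.yaml default
    private long accumulated = 0;

    public SizeBoundedResponse(long maxBytes) {
        this.maxBytes = maxBytes;
    }

    // Called for every column appended to the response; fails fast
    // instead of letting the heap fill up.
    public void add(long serializedSizeInBytes) {
        accumulated += serializedSizeInBytes;
        if (accumulated > maxBytes) {
            throw new IllegalStateException(
                "Response exceeds the request size limit of " + maxBytes
                + " bytes; narrow the query with a slice condition");
        }
    }
}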

The issue has already been discussed on the user mailing list; the thread is here: https://www.mail-archive.com/user@cassandra.apache.org/msg42340.html



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)