Posted to user@cassandra.apache.org by Shaun Cutts <sh...@cuttshome.net> on 2011/03/01 22:36:15 UTC

Re: limit on rows in a cf

This isn't quite true, I think. RandomPartitioner uses MD5. So if you had on the order of 10^16 rows (about 2.6 * 10^16), you would have roughly a 10^-6 chance of a collision, according to the table at http://en.wikipedia.org/wiki/Birthday_attack ... and apparently MD5 isn't perfectly balanced, so your actual odds of a collision are somewhat worse (though I'm not familiar with the literature).

10^16 is very large... but conceivable, I guess.
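As a back-of-the-envelope check of that figure, here is a minimal sketch in plain Python using the standard birthday-bound approximation (nothing Cassandra-specific; the 2.6e16 input is just the n that makes the bound come out near 10^-6 for a 128-bit hash):

    import math

    def collision_probability(n, bits=128):
        # Birthday-bound approximation: p ~= 1 - exp(-n^2 / 2^(bits + 1)).
        return 1.0 - math.exp(-(n * n) / float(2 ** (bits + 1)))

    print(collision_probability(2.6e16))  # ~1e-6 for a 128-bit hash such as MD5
    print(collision_probability(1e16))    # ~1.5e-7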

-- Shaun


On Feb 16, 2011, at 4:05 AM, Sylvain Lebresne wrote:

> Sky is the limit.
> 
> Columns in a row are limited to 2 billion because the size of a row is recorded in a Java int (max value 2^31 - 1, about 2.1 billion). A row must also fit on one node, so this also limits the size of a row in practice (if you have large values, you could hit that factor well before reaching 2 billion columns).
> 
> The number of rows is never recorded anywhere (no data type limit). And rows are balanced over the cluster. So there is no real limit beyond what your cluster can handle (that is, the number of machines you can afford is probably the limit).
> 
> Now, if a single node holds a huge number of rows, the only factor that comes to mind is that the sparse index kept in memory for each SSTable can start to take too much memory (depending on how much memory you have). In that case you can have a look at index_interval in cassandra.yaml (a rough sizing sketch follows after this quoted message). But as long as you don't see nodes OOM for no reason, this should not be a concern.
> 
> --
> Sylvain
> 
> On Wed, Feb 16, 2011 at 9:36 AM, Sasha Dolgy <sd...@gmail.com> wrote:
>  
> is there a limit or a factor to take into account when the number of rows in a CF exceeds a certain number?  i see the columns for a row can get upwards of 2 billion ... can i have 2 billion rows without much issue?  
> 
> -- 
> Sasha Dolgy
> sasha.dolgy@gmail.com
> 
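Following up on the index_interval point quoted above, a rough sizing sketch (minimal Python; the ~32-byte per-entry cost and the interval of 128 are assumptions for illustration, not measured Cassandra figures):

    def sparse_index_bytes(rows_per_node, index_interval=128, bytes_per_entry=32):
        # One index entry is sampled every index_interval rows; bytes_per_entry
        # is a rough guess covering a short key plus offsets and object overhead.
        return rows_per_node // index_interval * bytes_per_entry

    # 2 billion rows on a single node at the assumed interval of 128:
    print(sparse_index_bytes(2_000_000_000))  # ~500 MB of heap -> worth tuning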


Re: limit on rows in a cf

Posted by Sylvain Lebresne <sy...@datastax.com>.
On Tue, Mar 1, 2011 at 10:36 PM, Shaun Cutts <sh...@cuttshome.net> wrote:

> This isn't quite true, I think. RandomPartitioner uses MD5. So if you had
> on the order of 10^16 rows (about 2.6 * 10^16), you would have roughly a
> 10^-6 chance of a collision, according to the table at
> http://en.wikipedia.org/wiki/Birthday_attack ... and apparently MD5 isn't
> perfectly balanced, so your actual odds of a collision are somewhat worse
> (though I'm not familiar with the literature).
>
> 10^16 is very large... but conceivable, I guess.
>

MD5s are used for the distribution of keys to nodes. So in theory you can
have multiple keys with the same token (MD5). That only means they are
guaranteed to end up on the same node. But in all fairness, Cassandra
doesn't live up to the theory quite yet: although you can have multiple keys
with the same MD5, some read operations (range_slice) are buggy when that
happens; see https://issues.apache.org/jira/browse/CASSANDRA-1034, which
should (hopefully) be fixed soon.
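To illustrate the idea, a minimal sketch in Python (the ring walk below is a simplified stand-in for Cassandra's actual replica placement, not its real implementation):

    import hashlib

    def token(key):
        # RandomPartitioner-style token: the key's MD5 digest as a 128-bit integer.
        return int(hashlib.md5(key.encode()).hexdigest(), 16)

    def owning_token(key, node_tokens):
        # Simplified ring walk: the first node token at or above the key's
        # token owns the key, wrapping around to the lowest token.
        t = token(key)
        candidates = [nt for nt in sorted(node_tokens) if nt >= t]
        return candidates[0] if candidates else min(node_tokens)

    # Two distinct keys that happened to share an MD5 would get the same token
    # and land on the same node, but they would still be two distinct rows.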

What is true, however, is that you can't have more than 2^128 nodes (about
3.4 * 10^38, one for each possible MD5 token) with RandomPartitioner. But
I'm really curious to see someone hit that limit.
Btw, I'm not pretending Cassandra has no limits or anything that bold, merely
saying that I'm pretty sure the number of rows is not a concern.

--
Sylvain


