Posted to user@cassandra.apache.org by Sylvain Lebresne <sy...@yakaz.com> on 2010/04/30 14:53:12 UTC

Re: why the sum of all the nodes' loads is much bigger than the size of the inserted data?

I believe one of the reasons is all the metadata. As far as I understand what you said,
you have 500 million rows, each having only one column. The problem is that a row
carries a bunch of metadata: a bloom filter, a column index, plus a few other bytes to
store the number of columns, whether the row is marked for deletion, and so on.
In your case, the index will have one entry, but that entry includes the name of the
column twice plus two other longs.
As for the column itself, you said it is 110 bytes, but maybe you haven't counted the
timestamp, and each column also has a flag saying whether it is a tombstone or not.

In the end, I don't know how your column size splits between the column name and
the column value, but I wouldn't be surprised if the math adds up.
Note that if you had, say, 5 million rows each having 100 columns, you would have
much less metadata, and I bet you would end up with much less disk used.
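
To make the arithmetic concrete, here is a quick back-of-envelope sketch in plain Python.
It uses only the figures quoted below (500M rows, 20-byte keys, 110-byte columns,
replication factor 3, 443 GB observed) and asks what per-row overhead the observed load
implies; the byte count it prints is derived from those numbers, not read from Cassandra's
actual on-disk format.

    # Back-of-envelope math for the figures in this thread.
    # Inputs come from the post quoted below; nothing is measured from disk.

    ROWS = 500_000_000            # rows inserted
    REPLICATION_FACTOR = 3
    KEY_BYTES = 20                # row key size from the post
    COLUMN_BYTES = 110            # column (name + value) size from the post
    OBSERVED_BYTES = 443 * 10**9  # load reported by "nodetool ring"

    payload_per_row = KEY_BYTES + COLUMN_BYTES              # 130 bytes
    expected = ROWS * payload_per_row * REPLICATION_FACTOR  # payload only

    # How many extra bytes per stored copy would account for the gap?
    stored_copies = ROWS * REPLICATION_FACTOR
    implied_overhead = OBSERVED_BYTES / stored_copies - payload_per_row

    print(f"expected (payload only): {expected / 10**9:.0f} GB")       # ~195 GB
    print(f"observed:                {OBSERVED_BYTES / 10**9:.0f} GB")  # 443 GB
    print(f"implied overhead/copy:   {implied_overhead:.0f} bytes")     # ~165 bytes

Roughly 165 bytes of bookkeeping per stored row could plausibly come from the items
described above: the row bloom filter, the column index entry (which repeats the column
name and adds two longs), the row length and column count, and the per-column timestamp
and tombstone flag.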

On Fri, Apr 30, 2010 at 9:24 AM, Bingbing Liu <ru...@gmail.com> wrote:
> I inserted 500,000,000 rows, each of which has a key of 20 bytes and a column of 110 bytes.
>
> The replication factor is set to 3, so I expect the load of the cluster to be 0.5 billion * 130 * 3 = 195 GB.
>
> But in fact the load I get through "nodetool -h localhost ring" is about 443 GB.
>
> I think some other additional data, such as indexes, checksums, and the column names, is also stored.
>
> But am I right? Is that all? Why is the difference so big?
>
> I hope I have explained my problem clearly.
>
>
>
> 2010-04-30
> ________________________________
> Bingbing Liu

Re: Re: why the sum of all the nodes' loads is much bigger than the size of the inserted data?

Posted by Bingbing Liu <ru...@gmail.com>.
Thanks.

According to your explanation, the result sounds reasonable.

Thanks again!


2010-04-30 



Bingbing Liu 



From: Sylvain Lebresne
Sent: 2010-04-30 20:54:04
To: user
Cc:
Subject: Re: why the sum of all the nodes' loads is much bigger than the size of the inserted data?
 