Posted to user@cassandra.apache.org by Yang <te...@gmail.com> on 2011/10/11 19:33:12 UTC

different sstable sizes on different nodes?

after I did a major compaction on both nodes in my test cluster,
I found that for the same CF, one node has a 100MB sstable file, while
the other has a 1GB one.

since GC_grace is set in the schema, and both nodes have the same
config, how could this happen?

I'm still going through sstable2json output to figure this out; just want
to see if there is anything apparent that I missed
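
(for reference, this is roughly the command I'm running -- the data
directory and keyspace/CF names below are just placeholders for my setup:)

# placeholder path; substitute the real keyspace/CF sstable file
$ bin/sstable2json /var/lib/cassandra/data/MyKeyspace/MyCF-hc-1-Data.db > mycf.json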

thanks
Yang

Re: different sstable sizes on different nodes?

Posted by Yang <te...@gmail.com>.
"46e70d80": [["00000132f3726cbb303030303030303030303030303030303030303030303030303030303030303030303164316366666633","4e945b0e",1318344486784,"d"]

for the timestamp (dropping the last three digits to get whole seconds):
$ perl -e 'print gmtime(1318344486)."\n" '
Tue Oct 11 14:48:06 2011

$ TZ=GMT date
Tue Oct 11 17:40:31 GMT 2011


so the column is almost 3 hours old, but I just finished running the
compaction, and gc_grace_seconds is 7200 (set short for testing purposes),
so this deleted column (tombstone) should have been thrown away during
the compaction.
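
as a quick sanity check, taking "now" from the date output above
(17:40:31 GMT, i.e. epoch 1318354831):

$ perl -e 'print 1318354831 - 1318344486, "\n"'
10345

i.e. the column is roughly 10345 seconds old, well past the 7200-second
grace period.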

On Tue, Oct 11, 2011 at 10:33 AM, Yang <te...@gmail.com> wrote:
> after I did a major compaction on both nodes in my test cluster,
> I found that for the same CF, one node has a 100MB sstable file, while
> the other has a 1GB one.
>
> since GC_grace is set into schema, and both nodes have the same
> config, how could this happen?
>
> I'm still going through sstable2json to figure out, just want to see
> if there are any
> apparent things I missed
>
> thanks
> Yang
>
