Posted to commits@cassandra.apache.org by "Jianwei Zhang (JIRA)" <ji...@apache.org> on 2014/05/07 09:42:41 UTC

[jira] [Created] (CASSANDRA-7184) Improvement of SizeTieredCompaction

Jianwei Zhang created CASSANDRA-7184:
----------------------------------------

             Summary: Improvement of SizeTieredCompaction
                 Key: CASSANDRA-7184
                 URL: https://issues.apache.org/jira/browse/CASSANDRA-7184
             Project: Cassandra
          Issue Type: Improvement
          Components: Core
            Reporter: Jianwei Zhang
            Assignee: Jianwei Zhang
            Priority: Minor


1.  In our usage scenario, there are no duplicate inserts and no deletes. The data only grows over time, and some huge sstables are generated (100GB, for example). We don't want these sstables to participate in SizeTieredCompaction any more, so we added a max size threshold, which we set to 100GB; sstables larger than the threshold are no longer compacted. Can this strategy be added to trunk?
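
As a rough illustration only, here is a minimal, self-contained sketch of the filtering idea. All names are hypothetical (a plain SSTable stand-in carrying just an on-disk size), not Cassandra's actual internal API; in the real strategy the filter would run where compaction candidates are gathered, before bucketing by size:

    import java.util.ArrayList;
    import java.util.List;

    // Hypothetical stand-in for an sstable: only its on-disk size matters here.
    class SSTable {
        final String name;
        final long bytesOnDisk;
        SSTable(String name, long bytesOnDisk) { this.name = name; this.bytesOnDisk = bytesOnDisk; }
    }

    public class MaxSizeFilter {
        // Assumed config knob: sstables above this size never compact again (100GB).
        static final long MAX_COMPACTION_BYTES = 100L << 30;

        // Drop oversized sstables before they are bucketed for size-tiered compaction.
        static List<SSTable> filterCandidates(List<SSTable> all) {
            List<SSTable> candidates = new ArrayList<>();
            for (SSTable t : all)
                if (t.bytesOnDisk <= MAX_COMPACTION_BYTES)
                    candidates.add(t);
            return candidates;
        }

        public static void main(String[] args) {
            List<SSTable> tables = List.of(new SSTable("a", 5L << 30),     // 5GB: kept
                                           new SSTable("b", 150L << 30));  // 150GB: excluded
            for (SSTable t : filterCandidates(tables))
                System.out.println(t.name + " remains eligible for compaction");
        }
    }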

2.  In our usage scenario, hundreds of sstables may need to be compacted in a major compaction, with a total size that can exceed 5TB. So during the compaction, when the bytes written reach a configured threshold (200GB, for example), the compaction switches to writing a new sstable. In this way we avoid generating overly large sstables, which have some bad effects:
 (1) A single sstable can be larger than the capacity of one disk;
 (2) If the sstable is corrupted, a large amount of data is affected.
Can this strategy be added to trunk?
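
Again only as a hedged sketch, under the assumption that a merged row arrives as a serialized byte array (real sstable writing involves far more than a byte stream, and all names here are hypothetical), the rollover logic could look like this:

    import java.io.BufferedOutputStream;
    import java.io.FileOutputStream;
    import java.io.IOException;
    import java.io.OutputStream;

    // Sketch: during one big compaction, roll over to a fresh output file
    // whenever the bytes written so far pass a configured cap, so a 5TB major
    // compaction yields many ~200GB sstables instead of one giant one.
    public class RollingCompactionWriter implements AutoCloseable {
        // Assumed config knob: cap on the size of a single output sstable (200GB).
        static final long MAX_SSTABLE_BYTES = 200L << 30;

        private final String prefix;
        private int generation = 0;
        private long written = 0;
        private OutputStream out;

        RollingCompactionWriter(String prefix) throws IOException {
            this.prefix = prefix;
            this.out = open();
        }

        private OutputStream open() throws IOException {
            return new BufferedOutputStream(new FileOutputStream(prefix + "-" + (generation++) + ".db"));
        }

        // Append one merged row; switch to a new file once the cap is reached.
        void append(byte[] serializedRow) throws IOException {
            if (written + serializedRow.length > MAX_SSTABLE_BYTES) {
                out.close();   // finish the current sstable
                out = open();  // start the next one
                written = 0;
            }
            out.write(serializedRow);
            written += serializedRow.length;
        }

        public void close() throws IOException { out.close(); }
    }

With a 200GB cap, a 5TB major compaction would produce about 25 files instead of one huge sstable, so no single output has to fit on one disk and corruption is contained to one slice of the data.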



--
This message was sent by Atlassian JIRA
(v6.2#6252)