Posted to commits@cassandra.apache.org by "Jianwei Zhang (JIRA)" <ji...@apache.org> on 2014/05/07 09:41:26 UTC
[jira] [Updated] (CASSANDRA-7184) improvement of SizeTieredCompaction
[ https://issues.apache.org/jira/browse/CASSANDRA-7184?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]
Jianwei Zhang updated CASSANDRA-7184:
-------------------------------------
Description:
1. In our usage scenario there are no duplicate inserts and no deletes. The data only grows, and some large sstables are generated (100 GB, for example). We don't want these sstables to participate in SizeTieredCompaction any more, so we added a max threshold, set to 100 GB; sstables larger than the threshold are no longer compacted. Should this strategy be added to trunk?
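A minimal sketch of what such a max-size filter might look like before size-tiered bucketing. All names here are hypothetical illustrations, not the actual Cassandra API; sstables are represented simply by their on-disk size in bytes.

```java
import java.util.List;
import java.util.stream.Collectors;

public class MaxSSTableSizeFilter {
    // Hypothetical threshold, mirroring the 100 GB example from the ticket.
    static final long MAX_COMPACTION_BYTES = 100L * 1024 * 1024 * 1024;

    // Drop any sstable whose on-disk size exceeds the threshold so that
    // oversized sstables never enter size-tiered bucketing again.
    static List<Long> filterCandidates(List<Long> sstableSizes) {
        return sstableSizes.stream()
                .filter(size -> size <= MAX_COMPACTION_BYTES)
                .collect(Collectors.toList());
    }

    public static void main(String[] args) {
        // 5 GiB, 150 GiB, 80 GiB: the 150 GiB sstable is excluded.
        List<Long> sizes = List.of(5L << 30, 150L << 30, 80L << 30);
        System.out.println(filterCandidates(sizes).size()); // prints 2
    }
}
```

In the append-only, no-delete workload described above, compacting such a large sstable again would reclaim essentially no space, which is why simply excluding it is attractive.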
2. In our usage scenario, hundreds of sstables may need to be compacted in a major compaction, with a total size above 5 TB. So during the compaction, when the bytes written reach a configured threshold (200 GB, for example), the compaction switches to writing a new sstable. This way we avoid generating overly large sstables, which have some drawbacks:
(1) An sstable can grow larger than the capacity of a disk;
(2) If an sstable is corrupted, many rows are affected.
Should this strategy be added to trunk?
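The switch-at-threshold behavior can be sketched as a writer that rolls over to a new output once the configured cap is reached. Again, these names are hypothetical, not the real Cassandra writer API; rows are modeled only by their serialized size.

```java
import java.util.ArrayList;
import java.util.List;

public class RollingCompactionWriter {
    // Hypothetical cap, mirroring the 200 GB example; units are bytes.
    final long maxOutputBytes;
    // Sizes of the sstables finished so far.
    final List<Long> finishedSSTables = new ArrayList<>();
    long currentBytes = 0;

    RollingCompactionWriter(long maxOutputBytes) {
        this.maxOutputBytes = maxOutputBytes;
    }

    // Append a row of the given serialized size; if it would push the
    // current output past the cap, close it and start a new sstable.
    void append(long rowBytes) {
        if (currentBytes > 0 && currentBytes + rowBytes > maxOutputBytes) {
            finishedSSTables.add(currentBytes);
            currentBytes = 0;
        }
        currentBytes += rowBytes;
    }

    // Flush whatever remains as the final sstable.
    void close() {
        if (currentBytes > 0) {
            finishedSSTables.add(currentBytes);
            currentBytes = 0;
        }
    }

    public static void main(String[] args) {
        // Tiny numbers for illustration: cap of 100 bytes, 25 rows of 10 bytes.
        RollingCompactionWriter w = new RollingCompactionWriter(100);
        for (int i = 0; i < 25; i++) w.append(10);
        w.close();
        System.out.println(w.finishedSSTables.size()); // prints 3
    }
}
```

With a 200 GB cap, a 5 TB major compaction would emit roughly 25 sstables instead of one, so no single output can outgrow a disk, and a corrupted sstable affects only a bounded slice of the data.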
was:
1. In our usage scenario there are no duplicate inserts and no deletes. The data only grows, and some large sstables are generated (100 GB, for example). We don't want these sstables to participate in SizeTieredCompaction any more, so we added a max threshold, set to 100 GB; sstables larger than the threshold are no longer compacted. Can this strategy be added to trunk?
2. In our usage scenario, hundreds of sstables may need to be compacted in a major compaction, with a total size above 5 TB. So during the compaction, when the bytes written reach a configured threshold (200 GB, for example), the compaction switches to writing a new sstable. This way we avoid generating overly large sstables, which have some drawbacks:
(1) An sstable can grow larger than the capacity of a disk;
(2) If an sstable is corrupted, many rows are affected.
Can this strategy be added to trunk?
> improvement of SizeTieredCompaction
> -------------------------------------
>
> Key: CASSANDRA-7184
> URL: https://issues.apache.org/jira/browse/CASSANDRA-7184
> Project: Cassandra
> Issue Type: Improvement
> Components: Core
> Reporter: Jianwei Zhang
> Assignee: Jianwei Zhang
> Priority: Minor
> Labels: compaction
> Original Estimate: 48h
> Remaining Estimate: 48h
>
--
This message was sent by Atlassian JIRA
(v6.2#6252)