Posted to commits@cassandra.apache.org by "Jeff Jirsa (JIRA)" <ji...@apache.org> on 2015/06/15 09:17:00 UTC
[jira] [Commented] (CASSANDRA-9597) DTCS should consider file SIZE in addition to time windowing
[ https://issues.apache.org/jira/browse/CASSANDRA-9597?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14585527#comment-14585527 ]
Jeff Jirsa commented on CASSANDRA-9597:
---------------------------------------
You can understand why this happens when you realize that the sstables are filtered by max timestamp:
https://github.com/apache/cassandra/blob/trunk/src/java/org/apache/cassandra/db/compaction/DateTieredCompactionStrategy.java#L178
And then the resulting list is sorted by min timestamp:
https://github.com/apache/cassandra/blob/trunk/src/java/org/apache/cassandra/db/compaction/DateTieredCompactionStrategy.java#L357-L367
The result is that for roughly evenly distributed time periods (file size proportional to sstable maxTimestamp - sstable minTimestamp, which is likely true for most DTCS workloads), larger files will always be at the front of {{trimToThreshold}}, which virtually guarantees we'll re-compact a very large sstable over and over if any other sstables are in its window.
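A minimal sketch of that front-loading effect (not the actual Cassandra code; the SSTable stand-in, field layout, and method names here are hypothetical): if a window's candidates are ordered by ascending min timestamp and then trimmed to max_threshold, a large sstable spanning the whole window sorts first and is selected every time.

```java
import java.util.*;

// Hypothetical stand-in for an sstable: {minTimestamp, maxTimestamp, sizeBytes}.
public class DtcsOrderSketch {
    // Order a window's candidates by ascending min timestamp, then keep only
    // the first maxThreshold - a sketch of the bucket sort + trimToThreshold step.
    static List<long[]> selectForWindow(List<long[]> candidates, int maxThreshold) {
        List<long[]> sorted = new ArrayList<>(candidates);
        sorted.sort(Comparator.comparingLong(s -> s[0])); // ascending min timestamp
        return sorted.subList(0, Math.min(sorted.size(), maxThreshold));
    }

    public static void main(String[] args) {
        // One large sstable spanning the whole window, plus many small recent flushes.
        List<long[]> window = new ArrayList<>();
        window.add(new long[]{0, 1000, 10_000_000}); // large file, earliest min timestamp
        for (long t = 900; t < 1000; t += 10)
            window.add(new long[]{t, t + 10, 1_000}); // small sstables
        List<long[]> picked = selectForWindow(window, 4);
        // The large file sorts to the front, so every compaction in this
        // window re-includes it alongside a few small files.
        System.out.println("first pick size = " + picked.get(0)[2]);
    }
}
```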
> DTCS should consider file SIZE in addition to time windowing
> ------------------------------------------------------------
>
> Key: CASSANDRA-9597
> URL: https://issues.apache.org/jira/browse/CASSANDRA-9597
> Project: Cassandra
> Issue Type: Improvement
> Reporter: Jeff Jirsa
> Priority: Minor
> Labels: dtcs
>
> DTCS seems to work well for the typical use case - writing data in perfect time order, compacting recent files, and ignoring older files.
> However, there are "normal" operational actions where DTCS will fall behind and is unlikely to recover.
> An example of this is streaming operations (for example, bootstrap or loading data into a cluster using sstableloader), where lots (tens of thousands) of very small sstables can be created spanning multiple time buckets. In these cases, even if max_sstable_age_days is extended to allow the older incoming files to be compacted, the selection logic is likely to re-compact large files with a few small files over and over, rather than prioritizing selection of the max_threshold smallest files to decrease the number of candidate sstables as quickly as possible.
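One way the smallest-first selection described above could look, as a hypothetical sketch (the class and method names are illustrative, not Cassandra's API): sort a window's candidates by ascending size and take the first max_threshold, so tiny streamed sstables collapse together before any large file is touched again.

```java
import java.util.*;

// Hypothetical stand-in for an sstable: {minTimestamp, maxTimestamp, sizeBytes}.
public class SmallestFirstSketch {
    // Within a time window, pick the maxThreshold *smallest* candidates so the
    // total sstable count shrinks as quickly as possible.
    static List<long[]> smallestFirst(List<long[]> candidates, int maxThreshold) {
        List<long[]> sorted = new ArrayList<>(candidates);
        sorted.sort(Comparator.comparingLong(s -> s[2])); // ascending size
        return sorted.subList(0, Math.min(sorted.size(), maxThreshold));
    }

    public static void main(String[] args) {
        // One large existing sstable plus many tiny streamed-in sstables.
        List<long[]> window = new ArrayList<>();
        window.add(new long[]{0, 1000, 10_000_000});
        for (long t = 0; t < 320; t += 10)
            window.add(new long[]{t, t + 10, 1_000});
        List<long[]> picked = smallestFirst(window, 32);
        // All 32 selections are small files; the large one is left alone.
        long largest = picked.stream().mapToLong(s -> s[2]).max().getAsLong();
        System.out.println("largest selected = " + largest);
    }
}
```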
--
This message was sent by Atlassian JIRA
(v6.3.4#6332)