Posted to commits@cassandra.apache.org by "Marcus Eriksson (JIRA)" <ji...@apache.org> on 2015/09/04 09:06:46 UTC
[jira] [Commented] (CASSANDRA-10253) Incremental repairs not working as expected with DTCS
[ https://issues.apache.org/jira/browse/CASSANDRA-10253?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14730412#comment-14730412 ]
Marcus Eriksson commented on CASSANDRA-10253:
---------------------------------------------
The sstablemetadata output actually looks good from an incremental repair standpoint:
||repaired || unrepaired ||
|2737|471|
|3052|437|
|2796|450|
|2746|456|
|3273|317|
|2572|384|
This means that instead of repairing ~3000 sstables per node per repair session, you are only repairing ~500, so the impact of each repair is much smaller with incremental repair. There are general problems with vnodes and repair, though (CASSANDRA-5220), and those problems are aggravated with DTCS (CASSANDRA-9644).
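For reference, these per-node counts can be reproduced by running sstablemetadata over the table's data files and tallying the "Repaired at" field (0 means unrepaired). A minimal sketch, assuming a 2.1-style data directory layout; the keyspace/table path below is a placeholder, not taken from the attached logs:
{code}
#!/usr/bin/env python
# Tally repaired vs unrepaired sstables by parsing sstablemetadata output.
# Assumption: sstablemetadata is on PATH and prints a "Repaired at:" line
# per sstable (0 for unrepaired, a timestamp for repaired).
import glob
import subprocess

# Placeholder path -- point this at your keyspace/table data directory.
DATA_FILES = "/var/lib/cassandra/data/myks/mytable-*/*-Data.db"

repaired = unrepaired = 0
for data_file in glob.glob(DATA_FILES):
    out = subprocess.check_output(["sstablemetadata", data_file])
    for line in out.decode().splitlines():
        if line.strip().startswith("Repaired at:"):
            if line.split(":", 1)[1].strip() == "0":
                unrepaired += 1
            else:
                repaired += 1
            break

print("repaired: %d, unrepaired: %d" % (repaired, unrepaired))
{code}
Run once per node; the six rows in the table above would correspond to six nodes.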
I have to say I don't understand what the problem is in your issue 1 above, though. Could you elaborate?
> Incremental repairs not working as expected with DTCS
> -----------------------------------------------------
>
> Key: CASSANDRA-10253
> URL: https://issues.apache.org/jira/browse/CASSANDRA-10253
> Project: Cassandra
> Issue Type: Bug
> Components: Core
> Environment: Pre-prod
> Reporter: vijay
> Assignee: Marcus Eriksson
> Fix For: 2.1.x
>
> Attachments: sstablemetadata-cluster-logs.zip, systemfiles 2.zip
>
>
> Hi,
> We are ingesting 6 million records every 15 minutes into one DTCS table and relying on Cassandra to purge the data. The table schema is given below.
> Issue 1: we expect that an sstable created on day d1 will not be compacted after d1, but we are not seeing this; however, I do see some data being purged at random intervals (see the expiry arithmetic after the quoted description below).
> Issue 2: when we run an incremental repair with "nodetool repair keyspace table -inc -pr", each sstable is split into multiple smaller sstables, increasing total storage. This behavior is the same when running repairs on any node, any number of times.
> There are mutation drops in the cluster.
> Table:
> {code}
> CREATE TABLE TableA (
> F1 text,
> F2 int,
> createts bigint,
> stats blob,
> PRIMARY KEY ((F1,F2), createts)
> ) WITH CLUSTERING ORDER BY (createts DESC)
> AND bloom_filter_fp_chance = 0.01
> AND caching = '{"keys":"ALL", "rows_per_partition":"NONE"}'
> AND comment = ''
> AND compaction = {'min_threshold': '12', 'max_sstable_age_days': '1', 'base_time_seconds': '50', 'class': 'org.apache.cassandra.db.compaction.DateTieredCompactionStrategy'}
> AND compression = {'sstable_compression': 'org.apache.cassandra.io.compress.LZ4Compressor'}
> AND dclocal_read_repair_chance = 0.0
> AND default_time_to_live = 93600
> AND gc_grace_seconds = 3600
> AND max_index_interval = 2048
> AND memtable_flush_period_in_ms = 0
> AND min_index_interval = 128
> AND read_repair_chance = 0.0
> AND speculative_retry = '99.0PERCENTILE';
> {code}
> Thanks
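A note on the expiry arithmetic implied by the schema above (hedged, since the exact drop timing depends on when compaction checks run): with default_time_to_live = 93600 s (26 hours) and gc_grace_seconds = 3600 s (1 hour), every cell in an sstable is fully expired 93600 + 3600 = 97200 s (27 hours) after the newest write it contains. At that point the compaction strategy may drop the whole sstable during one of its periodic checks rather than at the exact TTL boundary, which would account for data appearing to be purged at random intervals (issue 1).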
--
This message was sent by Atlassian JIRA
(v6.3.4#6332)