Posted to commits@cassandra.apache.org by "Yuki Morishita (JIRA)" <ji...@apache.org> on 2015/05/07 21:16:00 UTC

[jira] [Commented] (CASSANDRA-9111) SSTables originated from the same incremental repair session have different repairedAt timestamps

    [ https://issues.apache.org/jira/browse/CASSANDRA-9111?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14533228#comment-14533228 ] 

Yuki Morishita commented on CASSANDRA-9111:
-------------------------------------------

Sorry for the late reply.
Can you create the patch against trunk so it can be released in 3.x?
Since the message format would change, repair can hang if nodes in the cluster are on different versions.
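
To make that concern concrete, here is a minimal, hypothetical sketch of gating a new field in a repair message on the peer's messaging version; the names {{RepairPrepare}} and {{MIN_VERSION_WITH_REPAIRED_AT}} are invented for illustration and are not Cassandra's actual classes or constants.

{code:java}
import java.io.ByteArrayOutputStream;
import java.io.DataOutputStream;
import java.io.IOException;
import java.util.UUID;

// Hypothetical, simplified prepare-style repair message for illustration only.
final class RepairPrepare
{
    // Peers below this (placeholder) messaging version get the legacy layout,
    // so repair does not hang in a mixed-version cluster.
    static final int MIN_VERSION_WITH_REPAIRED_AT = 10;

    final UUID parentSessionId;
    final long repairedAt; // coordinator-chosen timestamp shared by all replicas

    RepairPrepare(UUID parentSessionId, long repairedAt)
    {
        this.parentSessionId = parentSessionId;
        this.repairedAt = repairedAt;
    }

    byte[] serialize(int peerMessagingVersion) throws IOException
    {
        ByteArrayOutputStream bytes = new ByteArrayOutputStream();
        try (DataOutputStream out = new DataOutputStream(bytes))
        {
            out.writeLong(parentSessionId.getMostSignificantBits());
            out.writeLong(parentSessionId.getLeastSignificantBits());
            if (peerMessagingVersion >= MIN_VERSION_WITH_REPAIRED_AT)
                out.writeLong(repairedAt); // new field, guarded by the version check
        }
        return bytes.toByteArray();
    }
}
{code}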

> SSTables originated from the same incremental repair session have different repairedAt timestamps
> -------------------------------------------------------------------------------------------------
>
>                 Key: CASSANDRA-9111
>                 URL: https://issues.apache.org/jira/browse/CASSANDRA-9111
>             Project: Cassandra
>          Issue Type: Bug
>          Components: Core
>            Reporter: prmg
>         Attachments: CASSANDRA-9111-v0.txt, CASSANDRA-9111-v1.txt
>
>
> CASSANDRA-7168 optimizes QUORUM reads by skipping, on other replicas, incrementally repaired SSTables that were repaired on or before the maximum repairedAt timestamp of the coordinating replica's SSTables for the query partition.
> One assumption of that optimization is that SSTables originating from the same repair session on different nodes will have the same repairedAt timestamp, since the objective is to skip reading SSTables that originated in the same repair session (or before).
> However, each node currently timestamps SSTables from the same repair session independently, so they almost never share the same timestamp (a sketch of the resulting skip-condition failure follows this quoted report).
> Steps to reproduce the problem:
> {code}
> ccm create test
> ccm populate -n 3
> ccm start
> ccm node1 cqlsh;
> {code}
> {code:sql}
> CREATE KEYSPACE foo WITH replication = {'class': 'SimpleStrategy', 'replication_factor': 3};
> CREATE TABLE foo.bar ( key int, col int, PRIMARY KEY (key) ) ;
> INSERT INTO foo.bar (key, col) VALUES (1, 1);
> exit;
> {code}
> {code}
> ccm node1 flush;
> ccm node2 flush;
> ccm node3 flush;
> nodetool -h 127.0.0.1 -p 7100 repair -par -inc foo bar
> [2015-04-02 21:56:07,726] Starting repair command #1, repairing 3 ranges for keyspace foo (parallelism=PARALLEL, full=false)
> [2015-04-02 21:56:07,816] Repair session 3655b670-d99c-11e4-b250-9107aba35569 for range (3074457345618258602,-9223372036854775808] finished
> [2015-04-02 21:56:07,816] Repair session 365a4a50-d99c-11e4-b250-9107aba35569 for range (-9223372036854775808,-3074457345618258603] finished
> [2015-04-02 21:56:07,818] Repair session 365bf800-d99c-11e4-b250-9107aba35569 for range (-3074457345618258603,3074457345618258602] finished
> [2015-04-02 21:56:07,995] Repair command #1 finished
> sstablemetadata ~/.ccm/test/node1/data/foo/bar-377b5540d99d11e49cc09107aba35569/foo-bar-ka-1-Statistics.db ~/.ccm/test/node2/data/foo/bar-377b5540d99d11e49cc09107aba35569/foo-bar-ka-1-Statistics.db ~/.ccm/test/node3/data/foo/bar-377b5540d99d11e49cc09107aba35569/foo-bar-ka-1-Statistics.db | grep Repaired
> Repaired at: 1428023050318
> Repaired at: 1428023050322
> Repaired at: 1428023050340
> {code}
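
To illustrate the skip condition described in the quoted report, here is a minimal sketch using the repairedAt values from the reproduction above; {{SSTableInfo}}, {{maxRepairedAt}} and {{canSkip}} are invented names for illustration, not Cassandra's actual API.

{code:java}
import java.util.List;

// Hypothetical sketch of the CASSANDRA-7168 skip condition.
final class RepairedSkipCheck
{
    record SSTableInfo(String name, long repairedAt) {}

    // Maximum repairedAt among the coordinating replica's repaired SSTables for the partition.
    static long maxRepairedAt(List<SSTableInfo> coordinatorSSTables)
    {
        return coordinatorSSTables.stream()
                                  .mapToLong(SSTableInfo::repairedAt)
                                  .max()
                                  .orElse(Long.MIN_VALUE);
    }

    // Another replica may skip an SSTable only if it was repaired at or before that maximum.
    // When the same repair session stamps each node with a slightly different repairedAt,
    // the SSTable on the node with the larger timestamp is never skipped,
    // which defeats the optimization.
    static boolean canSkip(SSTableInfo other, long coordinatorMaxRepairedAt)
    {
        return other.repairedAt() <= coordinatorMaxRepairedAt;
    }

    public static void main(String[] args)
    {
        long coordinatorMax = 1428023050318L; // node1's repairedAt from the reproduction
        SSTableInfo node3 = new SSTableInfo("foo-bar-ka-1", 1428023050340L); // node3's repairedAt
        System.out.println(canSkip(node3, coordinatorMax)); // false: same session, yet not skipped
    }
}
{code}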



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)