Posted to commits@cassandra.apache.org by "Jonathan Ellis (JIRA)" <ji...@apache.org> on 2015/03/13 00:04:40 UTC
[jira] [Commented] (CASSANDRA-7168) Add repair aware consistency levels
[ https://issues.apache.org/jira/browse/CASSANDRA-7168?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14359581#comment-14359581 ]
Jonathan Ellis commented on CASSANDRA-7168:
-------------------------------------------
Do we actually need to add a special ConsistencyLevel? I'd rather just apply this as an optimization to all CL > ONE, replacing the data/digest split that is almost certainly less useful.
> Add repair aware consistency levels
> -----------------------------------
>
> Key: CASSANDRA-7168
> URL: https://issues.apache.org/jira/browse/CASSANDRA-7168
> Project: Cassandra
> Issue Type: Improvement
> Components: Core
> Reporter: T Jake Luciani
> Labels: performance
> Fix For: 3.0
>
>
> With CASSANDRA-5351 and CASSANDRA-2424 I think there is an opportunity to avoid a lot of extra disk I/O when running queries with higher consistency levels.
> Since repaired data is by definition consistent and we know which sstables are repaired, we can optimize the read path by having a REPAIRED_QUORUM which breaks reads into two phases:
>
> 1) Read the result from the repaired sstables from a single replica.
> 2) Read only the un-repaired data from a quorum of replicas.
> For the node performing 1) we can pipeline the call so it's a single hop.
> In the long run (assuming data is repaired regularly) we will end up with performance much closer to CL.ONE while maintaining consistency.
> Some things to figure out:
> - If repairs fail on some nodes, we can end up without a consistent repaired state across the replicas.
>
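The two-phase read described in the ticket can be sketched roughly as follows. This is an illustrative model only, not Cassandra code: the `Replica` class, the per-key `(timestamp, value)` maps, and the `repaired_quorum_read` function are hypothetical stand-ins for Cassandra's sstable and read-path internals, and the merge is a simplified last-write-wins.

```python
# Illustrative sketch (NOT Cassandra code) of the proposed REPAIRED_QUORUM
# two-phase read. Data layout and merge rule are simplified assumptions.

class Replica:
    """A replica holding repaired data (identical across replicas after a
    successful repair) and unrepaired data (recent writes, may diverge)."""
    def __init__(self, repaired, unrepaired):
        self.repaired = repaired      # {key: (timestamp, value)}
        self.unrepaired = unrepaired  # {key: (timestamp, value)}

def repaired_quorum_read(key, replicas):
    """Phase 1: read the repaired result from a single replica.
    Phase 2: read only the un-repaired data from a quorum.
    Merge results by latest write timestamp (last-write-wins)."""
    quorum = len(replicas) // 2 + 1
    # Phase 1: repaired sstables are consistent by definition,
    # so reading one replica is enough.
    candidates = [replicas[0].repaired.get(key)]
    # Phase 2: unrepaired data may differ between replicas,
    # so a quorum must be consulted.
    for replica in replicas[:quorum]:
        candidates.append(replica.unrepaired.get(key))
    # Merge: pick the candidate with the highest timestamp.
    candidates = [c for c in candidates if c is not None]
    return max(candidates, default=None)

# Three replicas: repaired state agrees everywhere; only one replica has
# seen a newer, not-yet-repaired write.
replicas = [
    Replica({"k": (10, "old")}, {"k": (20, "new")}),
    Replica({"k": (10, "old")}, {}),
    Replica({"k": (10, "old")}, {}),
]
print(repaired_quorum_read("k", replicas))  # (20, 'new')
```

Note that only one replica does the (heavier) repaired-sstable read, while the quorum round-trip touches just the small unrepaired set, which is where the hoped-for I/O savings over a plain QUORUM read would come from.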
--
This message was sent by Atlassian JIRA
(v6.3.4#6332)