Posted to commits@cassandra.apache.org by "Jonathan Ellis (JIRA)" <ji...@apache.org> on 2014/04/22 14:54:19 UTC
[jira] [Comment Edited] (CASSANDRA-6696) Drive replacement in JBOD can cause data to reappear.
[ https://issues.apache.org/jira/browse/CASSANDRA-6696?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13976712#comment-13976712 ]
Jonathan Ellis edited comment on CASSANDRA-6696 at 4/22/14 12:53 PM:
---------------------------------------------------------------------
bq. doing per-vnode sstables could enable some nice benefits, like turning off the exact vnodes that are affected by a disk failure or a mini auto-repair on corrupt sstables perhaps?
CASSANDRA-4784 lists some other benefits, the strongest of which I think are
# on disk failure, we can invalidate the affected vnodes and repair them, rather than continuing to serve incomplete data or halting the entire node [similar to what you are saying here] (see the sketch below)
# we can deduplicate ranges for bulk load into another cluster (CASSANDRA-4756)
/cc [~kohlisankalp]
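
To make the first benefit concrete: a minimal, purely hypothetical sketch (the class, method, and directory names are made up and this is not existing Cassandra code) of how a token-to-vnode-range mapping could pick a per-vnode data subdirectory, so that losing one disk translates directly into a known set of vnode ranges to invalidate and repair:

{code:java}
import java.util.Arrays;

/**
 * Hypothetical sketch only, not Cassandra code. Illustrates how grouping
 * sstables by vnode range would let a disk failure be translated into a
 * known set of vnode ranges to invalidate and repair.
 */
public class PerVnodeDirectorySketch
{
    private final long[] vnodeUpperBounds; // sorted upper bounds of the local vnode ranges (assumed)

    public PerVnodeDirectorySketch(long[] vnodeUpperBounds)
    {
        this.vnodeUpperBounds = vnodeUpperBounds.clone();
        Arrays.sort(this.vnodeUpperBounds);
    }

    /** Index of the vnode range that owns the given partition token. */
    public int vnodeFor(long token)
    {
        int idx = Arrays.binarySearch(vnodeUpperBounds, token);
        if (idx < 0)
            idx = -idx - 1;                   // first upper bound >= token
        return idx % vnodeUpperBounds.length; // wrap tokens past the last bound into range 0
    }

    /** Directory an sstable covering this token would be flushed into. */
    public String directoryFor(String dataRoot, long token)
    {
        return dataRoot + "/vnode-" + vnodeFor(token);
    }

    public static void main(String[] args)
    {
        PerVnodeDirectorySketch sketch =
                new PerVnodeDirectorySketch(new long[]{ -3_000_000_000L, 0L, 3_000_000_000L });
        System.out.println(sketch.directoryFor("/data1", -5_000_000_000L)); // /data1/vnode-0
        System.out.println(sketch.directoryFor("/data1", 1_000_000_000L));  // /data1/vnode-2
    }
}
{code}

The only point of the sketch is that a per-vnode layout gives us the reverse mapping: "drive N died" becomes "these vnode ranges are suspect", which is what makes invalidate-and-repair possible.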
> Drive replacement in JBOD can cause data to reappear.
> ------------------------------------------------------
>
> Key: CASSANDRA-6696
> URL: https://issues.apache.org/jira/browse/CASSANDRA-6696
> Project: Cassandra
> Issue Type: Improvement
> Components: Core
> Reporter: sankalp kohli
> Assignee: Marcus Eriksson
> Fix For: 3.0
>
>
> In JBOD, when a drive goes bad, the bad drive is replaced with a new empty one and repair is run.
> This can cause deleted data to come back in some cases. The same is true for corrupt sstables, where we delete the corrupt sstable and run repair.
> Here is an example:
> Say we have 3 nodes A, B, and C, with RF=3 and GC grace=10 days.
> row=sankalp col=sankalp was written 20 days ago and successfully reached all three nodes.
> Then a delete/tombstone was successfully written for the same row and column 15 days ago.
> Since this tombstone is older than GC grace, it was compacted away on nodes A and B together with the actual data, so there is no trace of this row/column on nodes A and B.
> Now on node C, say the original data is on drive1 and the tombstone is on drive2. Compaction has not yet reclaimed the data and tombstone.
> Drive2 becomes corrupt and is replaced with a new empty drive.
> Due to the replacement, the tombstone is now gone and row=sankalp col=sankalp has come back to life.
> Now, after replacing the drive, we run repair, and this data is propagated to all nodes.
> Note: this is still a problem even if we run repair every GC grace period.
>
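Restating the timeline above as a toy calculation (illustration only; the class and the booleans below just mirror the description and are not Cassandra code):

{code:java}
import java.time.Duration;

/** Toy model of the timeline: why A/B purge the tombstone while C resurrects the data. */
public class TombstoneResurrectionSketch
{
    public static void main(String[] args)
    {
        Duration gcGrace = Duration.ofDays(10);      // GC grace = 10 days
        Duration dataAge = Duration.ofDays(20);      // row=sankalp col=sankalp written 20 days ago
        Duration tombstoneAge = Duration.ofDays(15); // delete issued 15 days ago

        // The delete came after the write, so the tombstone shadows the data.
        boolean tombstoneShadowsData = dataAge.compareTo(tombstoneAge) > 0;

        // Nodes A and B: compaction sees data and tombstone together, and the
        // tombstone is past GC grace, so both are dropped permanently.
        boolean purgedOnAB = tombstoneShadowsData && tombstoneAge.compareTo(gcGrace) > 0;
        System.out.println("A/B drop data + tombstone at compaction: " + purgedOnAB);

        // Node C: data on drive1, tombstone on drive2, never compacted together.
        // Replacing drive2 with an empty drive loses only the tombstone.
        boolean tombstoneSurvivesOnC = false; // drive2 was replaced
        boolean dataSurvivesOnC = true;       // drive1 untouched

        // Repair now sees live data on C and nothing on A/B, so it streams the
        // 20-day-old value back to every replica: the delete is undone.
        boolean resurrected = purgedOnAB && dataSurvivesOnC && !tombstoneSurvivesOnC;
        System.out.println("Deleted value reappears cluster-wide after repair: " + resurrected);
    }
}
{code}

As the note in the description says, scheduling repair every GC grace period does not close this window: the resurrection becomes possible the moment the replica holding the tombstone is lost, and the next repair is what propagates the stale data.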
--
This message was sent by Atlassian JIRA
(v6.2#6252)