Posted to commits@cassandra.apache.org by "Marcus Eriksson (JIRA)" <ji...@apache.org> on 2014/04/01 09:56:17 UTC
[jira] [Commented] (CASSANDRA-6696) Drive replacement in JBOD can cause data to reappear.
[ https://issues.apache.org/jira/browse/CASSANDRA-6696?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13956205#comment-13956205 ]
Marcus Eriksson commented on CASSANDRA-6696:
--------------------------------------------
pushed a new version to https://github.com/krummas/cassandra/commits/marcuse/6696-2
* removed SSTWInterface, instead created a helper class that is reused in most places
* multithreaded flush, one thread per disk
* support multiple flush dirs
* sort compaction/flush dirs lexicographically to make sure we always put the same tokens on the same disks (even if you rearrange dirs in config etc)
* avoid compaction loops by making sure we never start STCS compactions with any sstables that don't intersect (which sstables on different disks won't)
* RandomPartitioner and Murmur3Partitioner are supported; the rest will dump data on the first disk for now
TODO:
* ask user@ for feedback on removing OPP/BOP; otherwise make them work with JBOD the old way
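The lexicographic-sort point above can be sketched as follows. This is an illustrative helper, not the patch's actual API (the name diskForToken and the even-split scheme are assumptions); it shows how sorting the configured data directories makes the token-to-disk mapping stable even if the directories are rearranged in the config, assuming the Murmur3 token range (Long.MIN_VALUE..Long.MAX_VALUE) is split evenly across the sorted directories:

```java
import java.util.Arrays;

public class DiskBoundaries {
    // Hypothetical sketch: map a Murmur3 token to a data directory by
    // sorting the directories and splitting the token range evenly.
    public static String diskForToken(long token, String[] dataDirs) {
        String[] dirs = dataDirs.clone();
        Arrays.sort(dirs); // lexicographic order => stable regardless of config order
        // Position of the token in the full 2^64-wide range, in floating point
        // to avoid signed-overflow headaches in a short example.
        double position = (double) token - (double) Long.MIN_VALUE; // 0 .. 2^64
        double span = Math.pow(2, 64) / dirs.length;                // width per disk
        int index = (int) Math.min(dirs.length - 1, (long) (position / span));
        return dirs[index];
    }

    public static void main(String[] args) {
        String[] dirs = {"/data2", "/data1", "/data3"};
        // Same token always lands on the same directory, whatever the config order:
        System.out.println(diskForToken(Long.MIN_VALUE, dirs)); // /data1
        System.out.println(diskForToken(0L, dirs));             // /data2
        System.out.println(diskForToken(Long.MAX_VALUE, dirs)); // /data3
    }
}
```

The point of the sort is determinism: two nodes (or one node before and after a config edit) listing the same directories in different order still assign every token range to the same disk.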
> Drive replacement in JBOD can cause data to reappear.
> ------------------------------------------------------
>
> Key: CASSANDRA-6696
> URL: https://issues.apache.org/jira/browse/CASSANDRA-6696
> Project: Cassandra
> Issue Type: Improvement
> Components: Core
> Reporter: sankalp kohli
> Assignee: Marcus Eriksson
> Fix For: 3.0
>
>
> In JBOD, when someone gets a bad drive, the bad drive is replaced with a new empty one and repair is run.
> This can cause deleted data to come back in some cases. The same is true for corrupt sstables, where we delete the corrupt sstable and run repair.
> Here is an example:
> Say we have 3 nodes A,B and C and RF=3 and GC grace=10days.
> row=sankalp col=sankalp is written 20 days back and successfully went to all three nodes.
> Then a delete/tombstone was written successfully for the same row column 15 days back.
> Since the tombstone is older than gc grace, it was purged on nodes A and B when it was compacted together with the actual data. So there is no trace of this row column on nodes A and B.
> Now in node C, say the original data is in drive1 and tombstone is in drive2. Compaction has not yet reclaimed the data and tombstone.
> Drive2 becomes corrupt and was replaced with new empty drive.
> Due to the replacement, the tombstone is now gone and row=sankalp col=sankalp has come back to life.
> Now after replacing the drive we run repair. This data will be propagated to all nodes.
> Note: This is still a problem even if we run repair every gc grace.
>
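The quoted timeline boils down to a simple purge rule. Here is a minimal sketch (illustrative names, not Cassandra's real API), assuming a tombstone becomes purgeable by compaction once it is older than gc_grace_seconds:

```java
public class TombstonePurge {
    // 10 days, as in the quoted example's gc grace setting.
    static final int GC_GRACE_SECONDS = 10 * 24 * 3600;

    // Hypothetical helper: a tombstone may be dropped during compaction
    // once more than gc_grace has passed since it was written.
    static boolean purgeable(long tombstoneWrittenAtSeconds, long nowSeconds) {
        return nowSeconds - tombstoneWrittenAtSeconds > GC_GRACE_SECONDS;
    }

    public static void main(String[] args) {
        long now = 1_700_000_000L;
        long fifteenDaysAgo = now - 15 * 24 * 3600;
        // Written 15 days ago > 10-day gc grace: nodes A and B may purge it
        // along with the shadowed data, while node C still holds the data and
        // the tombstone on separate drives.
        System.out.println(purgeable(fifteenDaysAgo, now)); // true
    }
}
```

This is why losing drive2 on node C is fatal to the delete: the only surviving copy of the tombstone was past gc_grace everywhere else, so repair has nothing to compare against and happily re-propagates the old data.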
--
This message was sent by Atlassian JIRA
(v6.2#6252)