Posted to commits@cassandra.apache.org by "Benedict (JIRA)" <ji...@apache.org> on 2014/02/12 21:05:19 UTC

[jira] [Comment Edited] (CASSANDRA-6696) Drive replacement in JBOD can cause data to reappear.

    [ https://issues.apache.org/jira/browse/CASSANDRA-6696?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13899519#comment-13899519 ] 

Benedict edited comment on CASSANDRA-6696 at 2/12/14 8:04 PM:
--------------------------------------------------------------

bq. if you lose "data" then you scrub/repair; if you lose metadata you rebuild it from data.

You'd always have to do both with any single disk failure. I agree it isn't optimal, but it is cost-free to maintain, so it is essentially just an optimisation plus an automated process for downgrading the node in the event of a failure without having to rebuild it manually. 

Simply writing the metadata out redundantly would change it to a more uniform process, and make it tolerant to more than one failure, but at increased cost; at which point you might as well write the tombstones out redundantly - either as a bloom filter or as an extra sstable. The latter could be complicated to maintain cheaply and safely, though. For multiple disk failures I'd say that, if auto-downgrading has been configured, the node should just trash everything it has and (optionally) repair.
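To make the bloom filter variant concrete, here is a minimal sketch in plain Java (not Cassandra code; RedundantTombstoneFilter, markDeleted and mustRepair are hypothetical names): whenever a tombstone lands on one data directory, its partition key is also recorded in a small filter duplicated on the other directories, so a single disk failure never loses the only hint that a deletion existed.

{code:java}
import java.nio.charset.StandardCharsets;
import java.util.BitSet;
import java.util.List;

// Hypothetical per-disk bloom filter of deleted partition keys, duplicated on
// every *other* data directory whenever a tombstone is flushed.
final class RedundantTombstoneFilter
{
    private final BitSet bits;
    private final int numBits;
    private final int numHashes;

    RedundantTombstoneFilter(int numBits, int numHashes)
    {
        this.bits = new BitSet(numBits);
        this.numBits = numBits;
        this.numHashes = numHashes;
    }

    // Record that a tombstone exists for this partition key.
    void markDeleted(String partitionKey)
    {
        for (int i = 0; i < numHashes; i++)
            bits.set(indexFor(partitionKey, i));
    }

    // May return false positives, never false negatives.
    boolean maybeDeleted(String partitionKey)
    {
        for (int i = 0; i < numHashes; i++)
            if (!bits.get(indexFor(partitionKey, i)))
                return false;
        return true;
    }

    private int indexFor(String key, int seed)
    {
        int h = 0x9E3779B1 * (seed + 1);
        for (byte b : key.getBytes(StandardCharsets.UTF_8))
            h = (h ^ b) * 0x01000193; // FNV-style mixing, seeded per hash function
        return Math.floorMod(h, numBits);
    }
}

final class DiskFailureHandler
{
    // After a failed disk is replaced with an empty one, consult the filters that
    // survive on the other disks: any key that hits a filter may have lost its only
    // tombstone and should be repaired (or discarded) rather than served as live data.
    static boolean mustRepair(String partitionKey, List<RedundantTombstoneFilter> survivingFilters)
    {
        for (RedundantTombstoneFilter f : survivingFilters)
            if (f.maybeDeleted(partitionKey))
                return true;
        return false;
    }
}
{code}

Since the filter only yields false positives, the worst case is some unnecessary repair work after a disk replacement, never silent resurrection.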



was (Author: benedict):
bq. if you lose "data" then you scrub/repair; if you lose metadata you rebuild it from data.

You'd always have to do both with any single disk failure. I agree it isn't optimal, but it is cost-free to maintain. Simply writing the metadata out redundantly would change it to a more uniform process, and make it tolerant to more than one failure, but at increased cost; at which point you might as well write the tombstones out redundantly - either as a bloom filter or as an extra sstable. The latter could be complicated to maintain cheaply and safely, though.


> Drive replacement in JBOD can cause data to reappear. 
> ------------------------------------------------------
>
>                 Key: CASSANDRA-6696
>                 URL: https://issues.apache.org/jira/browse/CASSANDRA-6696
>             Project: Cassandra
>          Issue Type: Improvement
>          Components: Core
>            Reporter: sankalp kohli
>             Fix For: 3.0
>
>
> In JBOD, when someone gets a bad drive, the bad drive is replaced with a new empty one and repair is run. 
> This can cause deleted data to come back in some cases. The same is true for corrupt sstables, where we delete the corrupt sstable and run repair. 
> Here is an example:
> Say we have 3 nodes A, B and C, with RF=3 and GC grace = 10 days. 
> row=sankalp col=sankalp was written 20 days ago and successfully went to all three nodes. 
> Then a delete/tombstone was written successfully for the same row and column 15 days ago. 
> Since this tombstone is older than gc grace, it was compacted away on nodes A and B together with the actual data, so there is no trace of this row/column on nodes A and B (a short sketch of this gc grace check follows the description).
> Now on node C, say the original data is on drive1 and the tombstone is on drive2. Compaction has not yet reclaimed either of them.  
> Drive2 becomes corrupt and is replaced with a new empty drive. 
> Due to the replacement, the tombstone is now gone and row=sankalp col=sankalp has come back to life. 
> Now, after replacing the drive, we run repair and this data is propagated to all nodes. 
> Note: This is still a problem even if we run repair every gc grace. 
>  
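For illustration, a minimal sketch of the gc grace check referenced in the description above; the names are hypothetical and this is not Cassandra's actual compaction code, which also checks that no overlapping sstable still shadows the tombstone before purging it.

{code:java}
import java.util.concurrent.TimeUnit;

// Hypothetical illustration of the timeline above (gc grace = 10 days, data written
// 20 days ago, tombstone written 15 days ago).
final class GcGraceExample
{
    static final long GC_GRACE_SECONDS = TimeUnit.DAYS.toSeconds(10);

    // A tombstone becomes a candidate for purging once it is older than gc grace.
    static boolean purgeable(long localDeletionTimeSeconds, long nowSeconds)
    {
        return localDeletionTimeSeconds + GC_GRACE_SECONDS < nowSeconds;
    }

    public static void main(String[] args)
    {
        long now = System.currentTimeMillis() / 1000;
        long tombstoneWritten15DaysAgo = now - TimeUnit.DAYS.toSeconds(15);

        // On nodes A and B the tombstone compacts together with the 20-day-old data,
        // and since it is past gc grace both are dropped: prints "true".
        System.out.println(purgeable(tombstoneWritten15DaysAgo, now));

        // On node C the data (drive1) and tombstone (drive2) were never compacted
        // together; replacing drive2 removes the tombstone, so the copy on drive1
        // looks live again and repair spreads it back to A and B.
    }
}
{code}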



--
This message was sent by Atlassian JIRA
(v6.1.5#6160)