Posted to commits@cassandra.apache.org by "Yuki Morishita (JIRA)" <ji...@apache.org> on 2015/03/18 17:45:39 UTC

[jira] [Commented] (CASSANDRA-8979) MerkleTree mismatch for deleted and non-existing rows

    [ https://issues.apache.org/jira/browse/CASSANDRA-8979?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14367434#comment-14367434 ] 

Yuki Morishita commented on CASSANDRA-8979:
-------------------------------------------

For 2.0 and PrecompactedRow, the patch works as described.
Though I think we need to do the same for LazilyCompactedRow as well. (For 2.1+ we don't have PrecompactedRow anymore.)

LazilyCompactedRow does not have a null ColumnFamily even when all of its cells have been removed. So for wide rows we still get a hash mismatch between empty rows and removed rows.
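
A quick, self-contained illustration of why this matters (plain JDK, no Cassandra types): even when a fully purged row writes no bytes into the digest, digest() still returns SHA-256 of empty input, a fixed non-zero value, so folding that hash into the Merkle tree gives a different result than never hashing the token at all.

    import java.security.MessageDigest;
    import java.security.NoSuchAlgorithmException;

    // Digesting zero bytes is not a no-op: SHA-256("") is a fixed,
    // non-zero value, so XORing it into a Merkle tree leaf changes the
    // leaf, while a node that never saw the row leaves the leaf untouched.
    public class EmptyDigestDemo
    {
        public static void main(String[] args) throws NoSuchAlgorithmException
        {
            MessageDigest digest = MessageDigest.getInstance("SHA-256");
            // A purged row contributes no bytes, yet digest() is non-trivial:
            byte[] emptyHash = digest.digest();
            System.out.printf("SHA-256 of zero bytes: %064x%n",
                              new java.math.BigInteger(1, emptyHash));
        }
    }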


> MerkleTree mismatch for deleted and non-existing rows
> -----------------------------------------------------
>
>                 Key: CASSANDRA-8979
>                 URL: https://issues.apache.org/jira/browse/CASSANDRA-8979
>             Project: Cassandra
>          Issue Type: Bug
>          Components: Core
>            Reporter: Stefan Podkowinski
>            Assignee: Yuki Morishita
>         Attachments: cassandra-2.0-8979-test.txt, cassandra-2.0-8979-validator_patch.txt
>
>
> Validation compaction currently produces different hashes on a node where a row has been deleted than on nodes that have never seen the row at all or have already compacted it away.
> In case this sounds familiar to you, see CASSANDRA-4905, which was supposed to prevent hashing of expired tombstones. That fix still seems to be in place, but it does not address the issue completely; or a change in 2.0 rendered the patch ineffective.
> The problem is that rowHash() in the Validator will return a new hash in any case, whether or not the PrecompactedRow actually updated the digest. This leads to the situation where a purged PrecompactedRow does not change the digest, yet we end up with a different tree compared to rowHash() not being called at all (as when the row doesn't exist in the first place).
> As an implication, repair jobs will constantly detect mismatches between older sstables containing purgeable rows and nodes that have already compacted these rows away. After transferring the reported ranges, the newly created sstables will immediately get deleted again during the following compaction. This will happen on every repair run until the sstable with the purgeable row finally gets compacted.
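
A minimal sketch of one way to close this gap, assuming (hypothetically) that the validator can count the bytes a row actually contributes and skip the Merkle tree update when nothing was written; the class and method names below are illustrative, not the actual Cassandra API:

    import java.security.MessageDigest;
    import java.security.NoSuchAlgorithmException;

    public class CountingDigestDemo
    {
        // Hypothetical wrapper: tracks how many bytes the row wrote.
        static final class CountingDigest
        {
            final MessageDigest wrapped;
            long count;

            CountingDigest(MessageDigest wrapped) { this.wrapped = wrapped; }

            void update(byte[] bytes)
            {
                wrapped.update(bytes);
                count += bytes.length;
            }
        }

        // Stand-in for the rowHash() described above: null means "the row
        // contributed nothing", so the caller leaves the tree untouched.
        static byte[] rowHash(byte[] cellsAfterPurge) throws NoSuchAlgorithmException
        {
            CountingDigest digest = new CountingDigest(MessageDigest.getInstance("SHA-256"));
            if (cellsAfterPurge.length > 0)  // a fully purged row writes no bytes
                digest.update(cellsAfterPurge);
            return digest.count > 0 ? digest.wrapped.digest() : null;
        }

        public static void main(String[] args) throws NoSuchAlgorithmException
        {
            System.out.println(rowHash(new byte[0]) == null
                               ? "purged row: skipped, tree unchanged"
                               : "purged row: hashed (the mismatch)");
            System.out.println(rowHash("live cell".getBytes()) != null
                               ? "live row: hashed into the tree"
                               : "live row: skipped (wrong)");
        }
    }

With a guard like this, an empty row and a never-seen row both leave the Merkle tree leaf untouched, which would avoid the endless re-streaming described above.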


