Posted to commits@cassandra.apache.org by "Roland Gude (JIRA)" <ji...@apache.org> on 2012/09/20 10:59:07 UTC

[jira] [Commented] (CASSANDRA-4670) LeveledCompaction destroys secondary indexes

    [ https://issues.apache.org/jira/browse/CASSANDRA-4670?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13459463#comment-13459463 ] 

Roland Gude commented on CASSANDRA-4670:
----------------------------------------

After restarting the cluster in debug mode, the issue seems to have vanished. Unfortunately, I have witnessed the same issue on another cluster using SizeTieredCompaction (it worked for roughly two days until it showed the issue).

> LeveledCompaction destroys secondary indexes
> --------------------------------------------
>
>                 Key: CASSANDRA-4670
>                 URL: https://issues.apache.org/jira/browse/CASSANDRA-4670
>             Project: Cassandra
>          Issue Type: Bug
>    Affects Versions: 1.1.4, 1.1.5
>            Reporter: Roland Gude
>
> When LeveledCompactionStrategy is active on a ColumnFamily with an index on TTL (expiring) columns, the index does not work correctly, because compaction throws away index data very aggressively.
> Steps to reproduce:
> create a cluster with a column family that has an indexed column and leveled compaction:
> create column family CorruptIndex
>   with column_type = 'Standard'
>   and comparator = 'TimeUUIDType'
>   and default_validation_class = 'BytesType'
>   and key_validation_class = 'BytesType'
>   and read_repair_chance = 0.5
>   and dclocal_read_repair_chance = 0.0
>   and gc_grace = 864000
>   and min_compaction_threshold = 4
>   and max_compaction_threshold = 32
>   and replicate_on_write = true
>   and compaction_strategy = 'org.apache.cassandra.db.compaction.LeveledCompactionStrategy'
>   and caching = 'NONE'
>   and column_metadata = [
>     {column_name : '00000003-0000-1000-0000-000000000000',
>     validation_class : BytesType,
>     index_name : 'idx_corrupt',
>     index_type : 0}];
> in that column family, insert expiring data (the expiration date should be far in the future for the sake of this test)
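> A minimal sketch of such an insert in cassandra-cli (the row key 'row1' and the one-year TTL are illustrative assumptions, not from the original report):
> set CorruptIndex[utf8('row1')]['00000003-0000-1000-0000-000000000000'] = utf8('value') with ttl = 31536000;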
> query the data by index:
> get CorruptIndex where 00000003-0000-1000-0000-000000000000=utf8('value')
> see results (should be correct for some time)
> wait for leveled compaction to compact the index
> query the data by index:
> get CorruptIndex where 00000003-0000-1000-0000-000000000000=utf8('value')
> see results (are empty)
> trigger an index rebuild via nodetool
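> A sketch of the nodetool invocation (the host and the keyspace name 'MyKeyspace' are placeholders; the index is addressed as columnfamily.index_name, a form that may vary by version):
> nodetool -h localhost rebuild_index MyKeyspace CorruptIndex CorruptIndex.idx_corrupt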
> query the data by index:
> get CorruptIndex where 00000003-0000-1000-0000-000000000000=utf8('value')
> should be correct again
> wait for leveled compaction to compact the index
> query the data by index:
> get CorruptIndex where 00000003-0000-1000-0000-000000000000=utf8('value')
> see results (are empty)
> repeat until bored
