Posted to commits@cassandra.apache.org by "Jonathan Ellis (JIRA)" <ji...@apache.org> on 2010/06/14 20:07:17 UTC

[jira] Resolved: (CASSANDRA-1130) Row iteration can stomp start-of-row mark

     [ https://issues.apache.org/jira/browse/CASSANDRA-1130?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Jonathan Ellis resolved CASSANDRA-1130.
---------------------------------------

    Resolution: Fixed

committed, thanks!

> Row iteration can stomp start-of-row mark
> -----------------------------------------
>
>                 Key: CASSANDRA-1130
>                 URL: https://issues.apache.org/jira/browse/CASSANDRA-1130
>             Project: Cassandra
>          Issue Type: Bug
>          Components: Core
>    Affects Versions: 0.7
>            Reporter: Jignesh Dhruv
>            Assignee: Sylvain Lebresne
>             Fix For: 0.7
>
>         Attachments: 0001-Allow-for-multiple-mark-on-a-file.patch, 0002-Unit-test-for-row-iteration.patch, cassandra_0.6-Allow_multiple_mark_on_file.diff, TestSuperColumnTTL.java, TestSuperColumnTTL.java
>
>
> Hello,
> I am trying to use the TTL (time-to-live) feature with SuperColumns.
> My use case is:
> - I have a SuperColumn and 3 subcolumns.
> - I try to expire data after 60 seconds.
> While Cassandra is up and running, I am able to push and read data without any problems, and compaction and so on run fine. After inserting, say, about 100,000 records, I stop Cassandra while data is still coming in.
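> For reference, the insert path looks roughly like the sketch below (raw Thrift client against the 0.7 API; the keyspace, column family, row key, and column names are just placeholders, and the exact generated constructors/setters may differ slightly in your build):
>
> import java.nio.ByteBuffer;
> import org.apache.cassandra.thrift.Cassandra;
> import org.apache.cassandra.thrift.Column;
> import org.apache.cassandra.thrift.ColumnParent;
> import org.apache.cassandra.thrift.ConsistencyLevel;
> import org.apache.thrift.protocol.TBinaryProtocol;
> import org.apache.thrift.transport.TFramedTransport;
> import org.apache.thrift.transport.TSocket;
>
> public class SuperColumnTTLInsert
> {
>     public static void main(String[] args) throws Exception
>     {
>         TFramedTransport transport = new TFramedTransport(new TSocket("localhost", 9160));
>         Cassandra.Client client = new Cassandra.Client(new TBinaryProtocol(transport));
>         transport.open();
>         client.set_keyspace("Keyspace1");
>
>         // parent points at super column "sc1" inside super column family "Super1"
>         ColumnParent parent = new ColumnParent("Super1");
>         parent.setSuper_column(ByteBuffer.wrap("sc1".getBytes("UTF-8")));
>
>         // three subcolumns, each with a 60-second TTL
>         for (String name : new String[] { "sub1", "sub2", "sub3" })
>         {
>             Column column = new Column(ByteBuffer.wrap(name.getBytes("UTF-8")),
>                                        ByteBuffer.wrap("value".getBytes("UTF-8")),
>                                        System.currentTimeMillis() * 1000);
>             column.setTtl(60);
>             client.insert(ByteBuffer.wrap("row1".getBytes("UTF-8")), parent, column, ConsistencyLevel.ONE);
>         }
>
>         transport.close();
>     }
> }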
> On restart, Cassandra throws an exception and won't start up (this happens about 1 in every 3 times). The exception varies, for example:
> - EOFException while reading data
> - a "Corrupt (negative) value length encountered" exception
> - Heap Space Exception
> Cassandra simply won't start up.
> Again, I get this problem only when I use TTL with SuperColumns; there are no issues using TTL with regular Columns.
> I tried to diagnose the problem, and it seems to happen on startup when Cassandra sees a Column that is marked deleted and tries to read its data. The read is off by some bytes, hence all these exceptions:
> Caused by: java.io.IOException: Corrupt (negative) value length encountered
>         at org.apache.cassandra.utils.FBUtilities.readByteArray(FBUtilities.java:317)
>         at org.apache.cassandra.db.ColumnSerializer.deserialize(ColumnSerializer.java:84)
>         at org.apache.cassandra.db.SuperColumnSerializer.deserialize(SuperColumn.java:336)
>         at org.apache.cassandra.db.SuperColumnSerializer.deserialize(SuperColumn.java:285)
>         at org.apache.cassandra.db.filter.SSTableSliceIterator$ColumnGroupReader.getNextBlock(SSTableSliceIterator.java:235)
>         at org.apache.cassandra.db.filter.SSTableSliceIterator$ColumnGroupReader.pollColumn(SSTableSliceIterator.java:195)
>         ... 18 more
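> (The "negative value length" seems to be just what you get when the 4-byte length prefix is read from the wrong offset; a stripped-down illustration of the effect, not Cassandra's actual code:)
>
> import java.io.ByteArrayInputStream;
> import java.io.DataInputStream;
> import java.io.IOException;
>
> public class OffByBytesDemo
> {
>     public static void main(String[] args) throws IOException
>     {
>         // a serialized value: two unrelated bytes, then a 4-byte length prefix, then the bytes
>         byte[] onDisk = { (byte) 0xFF, (byte) 0xFE, 0, 0, 0, 5, 'h', 'e', 'l', 'l', 'o' };
>
>         // correctly positioned reader: skips the two leading bytes and reads length 5
>         DataInputStream in = new DataInputStream(new ByteArrayInputStream(onDisk));
>         in.skipBytes(2);
>         System.out.println(in.readInt()); // 5
>
>         // reader that is off by two bytes: the prefix now starts with 0xFF,
>         // so the "length" comes out negative -- the same symptom as the trace above
>         DataInputStream off = new DataInputStream(new ByteArrayInputStream(onDisk));
>         int badLength = off.readInt();
>         if (badLength < 0)
>             throw new IOException("Corrupt (negative) value length encountered: " + badLength);
>     }
> }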
> Let me know if you need more information.
> Thanks,
> Jignesh

-- 
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.