Posted to commits@cassandra.apache.org by "Jason Harvey (JIRA)" <ji...@apache.org> on 2011/03/09 02:41:59 UTC

[jira] Created: (CASSANDRA-2296) Scrub resulting in "bloom filter claims to be longer than entire row size" error

Scrub resulting in "bloom filter claims to be longer than entire row size" error
--------------------------------------------------------------------------------

                 Key: CASSANDRA-2296
                 URL: https://issues.apache.org/jira/browse/CASSANDRA-2296
             Project: Cassandra
          Issue Type: Bug
            Reporter: Jason Harvey


I'm running a scrub on a node that I upgraded from 0.7.1 (previously 0.6.8) to 0.7.3, and I'm getting this error multiple times:
{code}
 WARN [CompactionExecutor:1] 2011-03-08 18:33:52,513 CompactionManager.java (line 625) Row is unreadable; skipping to next
 WARN [CompactionExecutor:1] 2011-03-08 18:33:52,514 CompactionManager.java (line 599) Non-fatal error reading row (stacktrace follows)
java.io.IOError: java.io.EOFException: bloom filter claims to be longer than entire row size
        at org.apache.cassandra.io.sstable.SSTableIdentityIterator.<init>(SSTableIdentityIterator.java:117)
        at org.apache.cassandra.db.CompactionManager.doScrub(CompactionManager.java:590)
        at org.apache.cassandra.db.CompactionManager.access$600(CompactionManager.java:56)
        at org.apache.cassandra.db.CompactionManager$3.call(CompactionManager.java:195)
        at java.util.concurrent.FutureTask$Sync.innerRun(FutureTask.java:303)
        at java.util.concurrent.FutureTask.run(FutureTask.java:138)
        at java.util.concurrent.ThreadPoolExecutor$Worker.runTask(ThreadPoolExecutor.java:886)
        at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:908)
        at java.lang.Thread.run(Thread.java:662)
Caused by: java.io.EOFException: bloom filter claims to be longer than entire row size
        at org.apache.cassandra.io.sstable.IndexHelper.defreezeBloomFilter(IndexHelper.java:113)
        at org.apache.cassandra.io.sstable.SSTableIdentityIterator.<init>(SSTableIdentityIterator.java:87)
        ... 8 more
 WARN [CompactionExecutor:1] 2011-03-08 18:33:52,515 CompactionManager.java (line 625) Row is unreadable; skipping to next
 INFO [CompactionExecutor:1] 2011-03-08 18:33:53,777 CompactionManager.java (line 637) Scrub of SSTableReader(path='/cassandra/data/reddit/Hide-f-671-Data.db') complete: 254709 rows in new sstable
 WARN [CompactionExecutor:1] 2011-03-08 18:33:53,777 CompactionManager.java (line 639) Unable to recover 1630 that were skipped.  You can attempt manual recovery from the pre-scrub snapshot.  You can also run nodetool repair to transfer the data from a healthy replica, if any
{code}

--
This message is automatically generated by JIRA.
For more information on JIRA, see: http://www.atlassian.com/software/jira

[jira] Commented: (CASSANDRA-2296) Scrub resulting in "bloom filter claims to be longer than entire row size" error

Posted by "Jonathan Ellis (JIRA)" <ji...@apache.org>.
    [ https://issues.apache.org/jira/browse/CASSANDRA-2296?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13004350#comment-13004350 ] 

Jonathan Ellis commented on CASSANDRA-2296:
-------------------------------------------

Scrub writes a zero-length row when tombstones expire and there is nothing left, instead of writing no row at all.  So, as the clock rolls forward and more tombstones expire, you will usually get a few more zero-length rows written, which will be cleaned out by the next scrub.
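
To illustrate, a minimal sketch with hypothetical Row and removeDeleted stand-ins (not the actual CompactionManager code): the bug is appending the purged row unconditionally, and the fix is to skip rows that come back empty once their tombstones are dropped.

{code}
import java.util.ArrayList;
import java.util.List;

public class ScrubEmptyRowSketch {
    /** Stand-in for a row: a key plus however many live columns survived GC. */
    static class Row {
        final String key;
        final int liveColumns;
        Row(String key, int liveColumns) { this.key = key; this.liveColumns = liveColumns; }
        boolean isEmpty() { return liveColumns == 0; }
    }

    /** Stand-in for tombstone GC: pretend one expired column is purged per row. */
    static Row removeDeleted(Row row) {
        return new Row(row.key, Math.max(0, row.liveColumns - 1));
    }

    public static void main(String[] args) {
        List<Row> input = List.of(new Row("a", 2), new Row("b", 1), new Row("c", 1));
        List<Row> output = new ArrayList<>();
        for (Row row : input) {
            Row purged = removeDeleted(row);
            // Buggy version: output.add(purged) unconditionally, so "b" and "c"
            // become zero-length stubs. Fixed version: skip fully-purged rows.
            if (!purged.isEmpty())
                output.add(purged);
        }
        System.out.println(output.size() + " rows written");  // prints "1 rows written"
    }
}
{code}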


[jira] Commented: (CASSANDRA-2296) Scrub resulting in "bloom filter claims to be longer than entire row size" error

Posted by "Sylvain Lebresne (JIRA)" <ji...@apache.org>.
    [ https://issues.apache.org/jira/browse/CASSANDRA-2296?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13004444#comment-13004444 ] 

Sylvain Lebresne commented on CASSANDRA-2296:
---------------------------------------------

In the retry part, the goodRows++ after the if should be removed to avoid counting rows twice.

Other than this, +1
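
To make the double count concrete, a hypothetical sketch (the real code is in the attached 2296.txt; only the goodRows name is taken from the comment above):

{code}
public class RetryCountSketch {
    static boolean retryFromIndex() { return true; }  // pretend the index-based retry recovered the row

    public static void main(String[] args) {
        int goodRows = 0;
        boolean readFailed = true;  // pretend the direct read hit the bloom-filter EOF
        if (readFailed && retryFromIndex())
            goodRows++;  // the recovered row is counted once here...
        goodRows++;      // ...and again here; this is the increment to remove
        System.out.println("goodRows = " + goodRows);  // prints 2 for a single row
    }
}
{code}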


[jira] Commented: (CASSANDRA-2296) Scrub resulting in "bloom filter claims to be longer than entire row size" error

Posted by "Jonathan Ellis (JIRA)" <ji...@apache.org>.
    [ https://issues.apache.org/jira/browse/CASSANDRA-2296?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13004343#comment-13004343 ] 

Jonathan Ellis commented on CASSANDRA-2296:
-------------------------------------------

With debug logging turned on it looks like this:

{noformat}
[lots of rows around 119 bytes long]
DEBUG [CompactionExecutor:1] 2011-03-08 20:34:12,251 CompactionManager.java (line 559) row 337a306f615f666c38756a is 119 bytes
DEBUG [CompactionExecutor:1] 2011-03-08 20:34:12,251 CompactionManager.java (line 588) Index doublecheck: row 337a306f615f666c38756a is 119 bytes
DEBUG [CompactionExecutor:1] 2011-03-08 20:34:12,251 CompactionManager.java (line 550) Reading row at 44385
DEBUG [CompactionExecutor:1] 2011-03-08 20:34:12,251 CompactionManager.java (line 559) row 34306536785f666f666b65 is 0 bytes
DEBUG [CompactionExecutor:1] 2011-03-08 20:34:12,252 CompactionManager.java (line 588) Index doublecheck: row 34306536785f666f666b65 is 0 bytes
 WARN [CompactionExecutor:1] 2011-03-08 20:34:12,253 CompactionManager.java (line 606) Non-fatal error reading row (stacktrace follows)
java.io.IOError: java.io.EOFException: bloom filter claims to be 734305 bytes, longer than entire row size 0
        at org.apache.cassandra.io.sstable.SSTableIdentityIterator.<init>(SSTableIdentityIterator.java:125)
        at org.apache.cassandra.db.CompactionManager.doScrub(CompactionManager.java:597)
        at org.apache.cassandra.db.CompactionManager.access$600(CompactionManager.java:57)
        at org.apache.cassandra.db.CompactionManager$3.call(CompactionManager.java:196)
        at java.util.concurrent.FutureTask$Sync.innerRun(FutureTask.java:303)
        at java.util.concurrent.FutureTask.run(FutureTask.java:138)
        at java.util.concurrent.ThreadPoolExecutor$Worker.runTask(ThreadPoolExecutor.java:886)
        at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:908)
        at java.lang.Thread.run(Thread.java:680)
Caused by: java.io.EOFException: bloom filter claims to be 734305 bytes, longer than entire row size 0
        at org.apache.cassandra.io.sstable.IndexHelper.defreezeBloomFilter(IndexHelper.java:113)
        at org.apache.cassandra.io.sstable.SSTableIdentityIterator.<init>(SSTableIdentityIterator.java:95)
        ... 8 more
 WARN [CompactionExecutor:1] 2011-03-08 20:34:12,255 CompactionManager.java (line 632) Row is unreadable; skipping to next
DEBUG [CompactionExecutor:1] 2011-03-08 20:34:12,255 CompactionManager.java (line 550) Reading row at 44406
DEBUG [CompactionExecutor:1] 2011-03-08 20:34:12,255 CompactionManager.java (line 559) row 34616465655f66707a6178 is 119 bytes
DEBUG [CompactionExecutor:1] 2011-03-08 20:34:12,255 CompactionManager.java (line 588) Index doublecheck: row 34616465655f66707a6178 is 119 bytes
[lots more rows around 119 bytes]
{noformat}

In other words: there's a row that's empty except for the key, which causes the problem because we're not supposed to write rows like that.  I checked with a hex editor and that's what it looks like.

The good news is that scrub is correctly skipping it and recovering everything else fine.

The bad news is we have (or possibly had) a bug that was causing those empty rows to be written.
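
For anyone following the stack trace, a hypothetical sketch of the sanity check that fires (modeled on, but not copied from, IndexHelper.defreezeBloomFilter): with a zero-length row, whatever value gets read as the serialized filter length cannot fit in the bytes remaining, so the EOFException above is the expected symptom.

{code}
import java.io.EOFException;

public class BloomFilterCheckSketch {
    /** Refuse to deserialize a bloom filter that claims more bytes than the row has left. */
    static void defreezeBloomFilter(long claimedSize, long bytesRemaining) throws EOFException {
        if (claimedSize > bytesRemaining)
            throw new EOFException("bloom filter claims to be " + claimedSize
                    + " bytes, longer than entire row size " + bytesRemaining);
        // ...otherwise read claimedSize bytes and rebuild the filter
    }

    public static void main(String[] args) {
        try {
            defreezeBloomFilter(734305, 0);  // the values from the debug log above
        } catch (EOFException e) {
            System.out.println(e.getMessage());
        }
    }
}
{code}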


[jira] Commented: (CASSANDRA-2296) Scrub resulting in "bloom filter claims to be longer than entire row size" error

Posted by "Jonathan Ellis (JIRA)" <ji...@apache.org>.
    [ https://issues.apache.org/jira/browse/CASSANDRA-2296?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13004928#comment-13004928 ] 

Jonathan Ellis commented on CASSANDRA-2296:
-------------------------------------------

Looks like a different problem. What is the context at debug level?


[jira] Commented: (CASSANDRA-2296) Scrub resulting in "bloom filter claims to be longer than entire row size" error

Posted by "Jason Harvey (JIRA)" <ji...@apache.org>.
    [ https://issues.apache.org/jira/browse/CASSANDRA-2296?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13004939#comment-13004939 ] 

Jason Harvey commented on CASSANDRA-2296:
-----------------------------------------

I'll check the debug output on it tomorrow. I should note that I ran a scrub on this same set of data yesterday on 0.7.3. I got two errors regarding another CF, but nothing for the CF which is now complaining.


[jira] Updated: (CASSANDRA-2296) Scrub resulting in "bloom filter claims to be longer than entire row size" error

Posted by "Jason Harvey (JIRA)" <ji...@apache.org>.
     [ https://issues.apache.org/jira/browse/CASSANDRA-2296?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Jason Harvey updated CASSANDRA-2296:
------------------------------------

    Attachment: sstable_part2.tar.bz2
                sstable_part1.tar.bz2


[jira] Commented: (CASSANDRA-2296) Scrub resulting in "bloom filter claims to be longer than entire row size" error

Posted by "Jason Harvey (JIRA)" <ji...@apache.org>.
    [ https://issues.apache.org/jira/browse/CASSANDRA-2296?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13004897#comment-13004897 ] 

Jason Harvey commented on CASSANDRA-2296:
-----------------------------------------

I applied the patch and retried, and I'm getting a new exception. Thousands of these:

{code}
 WARN [CompactionExecutor:1] 2011-03-09 17:29:59,752 CompactionManager.java (line 641) Row at 517805025 is unreadable; skipping to next
 INFO [CompactionExecutor:1] 2011-03-09 17:29:59,752 SSTableWriter.java (line 108) Last written key : DecoratedKey(125686934811414729670440675125192621396, 627975726c2833626333626339353363353762313133373331336461303233396438303534312c66692e676f73757065726d6f64656c2e636f6d2f70726f66696c65732f2f6170706c65747265713d3132373333393332313937363529)
 INFO [CompactionExecutor:1] 2011-03-09 17:29:59,752 SSTableWriter.java (line 109) Current key : DecoratedKey(11081980355438931816706032048128862258, 30303063623061323633313463653465376663333561303531326333653737363333663065646134)
 INFO [CompactionExecutor:1] 2011-03-09 17:29:59,752 SSTableWriter.java (line 110) Writing into file /var/lib/cassandra/data/reddit/permacache-tmp-f-168615-Data.db
 WARN [CompactionExecutor:1] 2011-03-09 17:29:59,752 CompactionManager.java (line 607) Non-fatal error reading row (stacktrace follows)
java.io.IOException: Keys must be written in ascending order.
        at org.apache.cassandra.io.sstable.SSTableWriter.beforeAppend(SSTableWriter.java:111)
        at org.apache.cassandra.io.sstable.SSTableWriter.append(SSTableWriter.java:128)
        at org.apache.cassandra.db.CompactionManager.doScrub(CompactionManager.java:598)
        at org.apache.cassandra.db.CompactionManager.access$600(CompactionManager.java:56)
        at org.apache.cassandra.db.CompactionManager$3.call(CompactionManager.java:195)
        at java.util.concurrent.FutureTask$Sync.innerRun(FutureTask.java:334)
        at java.util.concurrent.FutureTask.run(FutureTask.java:166)
        at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1110)
        at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:603)
        at java.lang.Thread.run(Thread.java:636)
{code}

Keys are getting written back improperly?
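
For context, a hypothetical sketch of the guard that is throwing (the real SSTableWriter.beforeAppend compares DecoratedKeys; plain strings stand in here): sstables must be written in sorted key order, so an append whose key sorts before the last written key is rejected.

{code}
import java.io.IOException;

public class AscendingKeySketch {
    static String lastWrittenKey = null;

    /** Reject appends that would break the sstable's sorted-by-key invariant. */
    static void beforeAppend(String key) throws IOException {
        if (lastWrittenKey != null && key.compareTo(lastWrittenKey) <= 0)
            throw new IOException("Keys must be written in ascending order.");
        lastWrittenKey = key;
    }

    public static void main(String[] args) {
        try {
            beforeAppend("125686934811414729670440675125192621396");
            beforeAppend("11081980355438931816706032048128862258");  // sorts lower, so it throws
        } catch (IOException e) {
            System.out.println(e.getMessage());
        }
    }
}
{code}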


[jira] Commented: (CASSANDRA-2296) Scrub resulting in "bloom filter claims to be longer than entire row size" error

Posted by "Jonathan Ellis (JIRA)" <ji...@apache.org>.
    [ https://issues.apache.org/jira/browse/CASSANDRA-2296?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13004648#comment-13004648 ] 

Jonathan Ellis commented on CASSANDRA-2296:
-------------------------------------------

also added test in r1079882


[jira] Commented: (CASSANDRA-2296) Scrub resulting in "bloom filter claims to be longer than entire row size" error

Posted by "Jason Harvey (JIRA)" <ji...@apache.org>.
    [ https://issues.apache.org/jira/browse/CASSANDRA-2296?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13005253#comment-13005253 ] 

Jason Harvey commented on CASSANDRA-2296:
-----------------------------------------

Disregard. Getting the same thing on unpatched 0.7.3. I'll create a separate bug report.


[jira] Commented: (CASSANDRA-2296) Scrub resulting in "bloom filter claims to be longer than entire row size" error

Posted by "Hudson (JIRA)" <ji...@apache.org>.
    [ https://issues.apache.org/jira/browse/CASSANDRA-2296?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13004585#comment-13004585 ] 

Hudson commented on CASSANDRA-2296:
-----------------------------------

Integrated in Cassandra-0.7 #365 (See [https://hudson.apache.org/hudson/job/Cassandra-0.7/365/])
    avoid writing empty rows when scrubbing tombstoned rows
patch by jbellis; reviewed by slebresne for CASSANDRA-2296



[jira] Commented: (CASSANDRA-2296) Scrub resulting in "bloom filter claims to be longer than entire row size" error

Posted by "Jonathan Ellis (JIRA)" <ji...@apache.org>.
    [ https://issues.apache.org/jira/browse/CASSANDRA-2296?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13004345#comment-13004345 ] 

Jonathan Ellis commented on CASSANDRA-2296:
-------------------------------------------

added asserts to catch zero-length rows in r1079650
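
A guess at the rough shape of such an assert (the commit itself isn't reproduced here; all names below are hypothetical):

{code}
public class ZeroLengthRowAssertSketch {
    /** Refuse to serialize a row that has a key but no data. */
    static void append(String key, byte[] rowData) {
        assert rowData.length > 0 : "attempted to write a zero-length row for key " + key;
        // ...serialize key, row length, bloom filter, columns...
    }

    public static void main(String[] args) {
        append("34306536785f666f666b65", new byte[0]);  // fails when run with -ea
    }
}
{code}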


[jira] Commented: (CASSANDRA-2296) Scrub resulting in "bloom filter claims to be longer than entire row size" error

Posted by "Hudson (JIRA)" <ji...@apache.org>.
    [ https://issues.apache.org/jira/browse/CASSANDRA-2296?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13004351#comment-13004351 ] 

Hudson commented on CASSANDRA-2296:
-----------------------------------

Integrated in Cassandra-0.7 #362 (See [https://hudson.apache.org/hudson/job/Cassandra-0.7/362/])
    add asserts to make sure we don't write zero-length rows; see CASSANDRA-2296



[jira] Commented: (CASSANDRA-2296) Scrub resulting in "bloom filter claims to be longer than entire row size" error

Posted by "Jonathan Ellis (JIRA)" <ji...@apache.org>.
    [ https://issues.apache.org/jira/browse/CASSANDRA-2296?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13004576#comment-13004576 ] 

Jonathan Ellis commented on CASSANDRA-2296:
-------------------------------------------

Committed w/ the goodRows++ fix.


[jira] Commented: (CASSANDRA-2296) Scrub resulting in "bloom filter claims to be longer than entire row size" error

Posted by "Jonathan Ellis (JIRA)" <ji...@apache.org>.
    [ https://issues.apache.org/jira/browse/CASSANDRA-2296?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13004348#comment-13004348 ] 

Jonathan Ellis commented on CASSANDRA-2296:
-------------------------------------------

The first scrub ended with

{noformat}
 INFO [CompactionExecutor:1] 2011-03-08 20:45:39,174 CompactionManager.java (line 644) Scrub of SSTableReader(path='/var/lib/cassandra/data/KS1/Hide-f-671-Data.db') complete: 254709 rows in new sstable
 WARN [CompactionExecutor:1] 2011-03-08 20:45:39,174 CompactionManager.java (line 646) Unable to recover 1630 rows that were skipped.  You can attempt manual recovery from the pre-scrub snapshot.  You can also run nodetool repair to transfer the data from a healthy replica, if any
{noformat}

Scrubbing the scrubbed version again ended with

{noformat}
 INFO 21:11:01,349 Scrub of SSTableReader(path='/var/lib/cassandra/data/KS1/Hide-f-672-Data.db') complete: 253308 rows in new sstable
 WARN 21:11:01,349 Unable to recover 1401 rows that were skipped.  You can attempt manual recovery from the pre-scrub snapshot.  You can also run nodetool repair to transfer the data from a healthy replica, if any
{noformat}

Scrub is eating rows.  (Note the arithmetic: 254709 - 1401 = 253308, so the second pass's output is exactly the first pass's output minus what it skipped; the 1401 unreadable rows the second scrub hit were written by the first scrub.)


[jira] Resolved: (CASSANDRA-2296) Scrub resulting in "bloom filter claims to be longer than entire row size" error

Posted by "Jonathan Ellis (JIRA)" <ji...@apache.org>.
     [ https://issues.apache.org/jira/browse/CASSANDRA-2296?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Jonathan Ellis resolved CASSANDRA-2296.
---------------------------------------

    Resolution: Fixed


[jira] Updated: (CASSANDRA-2296) Scrub resulting in "bloom filter claims to be longer than entire row size" error

Posted by "Jonathan Ellis (JIRA)" <ji...@apache.org>.
     [ https://issues.apache.org/jira/browse/CASSANDRA-2296?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Jonathan Ellis updated CASSANDRA-2296:
--------------------------------------

    Attachment: 2296.txt

Fix attached. Scrub now skips tombstoned rows properly, without leaving stubs in the new sstable.
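
For readers following the patch, a minimal sketch of the behaviour described: a row whose columns have all been garbage-collected is skipped outright rather than appended as a zero-length stub. The Row and TableWriter interfaces below are hypothetical stand-ins for illustration, not Cassandra's real classes.

{code}
import java.io.IOException;

// Minimal sketch with hypothetical types: skip fully garbage-collected rows
// instead of appending empty stub rows to the new sstable.
final class ScrubAppendSketch
{
    interface Row         { boolean isEmptyAfterGC(); byte[] key(); }              // hypothetical
    interface TableWriter { void append(byte[] key, Row row) throws IOException; } // hypothetical

    /** @return true if the row was written, false if it was skipped entirely */
    static boolean maybeAppend(TableWriter writer, Row row) throws IOException
    {
        if (row.isEmptyAfterGC())
            return false;               // nothing live left: write no row at all
        writer.append(row.key(), row);
        return true;
    }
}
{code}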

[jira] Updated: (CASSANDRA-2296) Scrub resulting in "bloom filter claims to be longer than entire row size" error

Posted by "Jonathan Ellis (JIRA)" <ji...@apache.org>.
     [ https://issues.apache.org/jira/browse/CASSANDRA-2296?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Jonathan Ellis updated CASSANDRA-2296:
--------------------------------------

             Reviewer: slebresne
          Component/s: Tools
    Affects Version/s: 0.7.3
        Fix Version/s: 0.7.4
             Assignee: Jonathan Ellis

[jira] Updated: (CASSANDRA-2296) Scrub resulting in "bloom filter claims to be longer than entire row size" error

Posted by "Jonathan Ellis (JIRA)" <ji...@apache.org>.
     [ https://issues.apache.org/jira/browse/CASSANDRA-2296?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Jonathan Ellis updated CASSANDRA-2296:
--------------------------------------

    Remaining Estimate: 2h
     Original Estimate: 2h

[jira] Commented: (CASSANDRA-2296) Scrub resulting in "bloom filter claims to be longer than entire row size" error

Posted by "Jason Harvey (JIRA)" <ji...@apache.org>.
    [ https://issues.apache.org/jira/browse/CASSANDRA-2296?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13004347#comment-13004347 ] 

Jason Harvey commented on CASSANDRA-2296:
-----------------------------------------

Got the following error while restarting *after* I ran the scrub on that same node:

{code}
ERROR [CompactionExecutor:1] 2011-03-08 19:54:48,023 AbstractCassandraDaemon.java (line 114) Fatal exception in thread Thread[CompactionExecutor:1,1,main]
java.io.IOError: java.io.EOFException
        at org.apache.cassandra.io.sstable.SSTableIdentityIterator.<init>(SSTableIdentityIterator.java:117)
        at org.apache.cassandra.io.sstable.SSTableIdentityIterator.<init>(SSTableIdentityIterator.java:67)
        at org.apache.cassandra.io.sstable.SSTableScanner$KeyScanningIterator.next(SSTableScanner.java:179)
        at org.apache.cassandra.io.sstable.SSTableScanner$KeyScanningIterator.next(SSTableScanner.java:144)
        at org.apache.cassandra.io.sstable.SSTableScanner.next(SSTableScanner.java:136)
        at org.apache.cassandra.io.sstable.SSTableScanner.next(SSTableScanner.java:39)
        at org.apache.commons.collections.iterators.CollatingIterator.set(CollatingIterator.java:284)
        at org.apache.commons.collections.iterators.CollatingIterator.least(CollatingIterator.java:326)
        at org.apache.commons.collections.iterators.CollatingIterator.next(CollatingIterator.java:230)
        at org.apache.cassandra.utils.ReducingIterator.computeNext(ReducingIterator.java:68)
        at com.google.common.collect.AbstractIterator.tryToComputeNext(AbstractIterator.java:136)
        at com.google.common.collect.AbstractIterator.hasNext(AbstractIterator.java:131)
        at org.apache.commons.collections.iterators.FilterIterator.setNextObject(FilterIterator.java:183)
        at org.apache.commons.collections.iterators.FilterIterator.hasNext(FilterIterator.java:94)
        at org.apache.cassandra.db.CompactionManager.doCompaction(CompactionManager.java:449)
        at org.apache.cassandra.db.CompactionManager$1.call(CompactionManager.java:124)
        at org.apache.cassandra.db.CompactionManager$1.call(CompactionManager.java:94)
        at java.util.concurrent.FutureTask$Sync.innerRun(FutureTask.java:303)
        at java.util.concurrent.FutureTask.run(FutureTask.java:138)
        at java.util.concurrent.ThreadPoolExecutor$Worker.runTask(ThreadPoolExecutor.java:886)
        at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:908)
        at java.lang.Thread.run(Thread.java:662)
{code}
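
For context on the exception: the failure surfaces in the same SSTableIdentityIterator constructor (line 117) that raises the "bloom filter claims to be longer than entire row size" EOFException elsewhere in this ticket. A minimal sketch of that sanity check follows; the helper name is hypothetical (the real check lives in IndexHelper.defreezeBloomFilter).

{code}
import java.io.DataInput;
import java.io.EOFException;
import java.io.IOException;

// Minimal sketch of the length sanity check: a per-row bloom filter whose
// claimed serialized size exceeds the bytes left in the row indicates a
// corrupt row header, so fail fast instead of reading past the row.
final class BloomFilterLengthSketch
{
    static int readFilterSize(DataInput in, long bytesRemainingInRow) throws IOException
    {
        int claimedSize = in.readInt();   // serialized filter length from the row
        if (claimedSize > bytesRemainingInRow)
            throw new EOFException("bloom filter claims to be longer than entire row size");
        return claimedSize;               // caller can now safely read this many bytes
    }
}
{code}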

[jira] Commented: (CASSANDRA-2296) Scrub resulting in "bloom filter claims to be longer than entire row size" error

Posted by "Jason Harvey (JIRA)" <ji...@apache.org>.
    [ https://issues.apache.org/jira/browse/CASSANDRA-2296?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13005239#comment-13005239 ] 

Jason Harvey commented on CASSANDRA-2296:
-----------------------------------------

Here is the debug output. I'm going to run a comparison against unpatched 0.7.3 to see if there is any difference.

{code}
DEBUG 11:50:52,510 Reading row at 504216964
DEBUG 11:50:52,510 row 636f6d6d656e74735f706172656e74735f3233383135363235 is 66 bytes
DEBUG 11:50:52,510 Index doublecheck: row 636f6d6d656e74735f706172656e74735f3233383135363235 is 66 bytes
 INFO 11:50:52,511 Last written key : DecoratedKey(125686934811414729670440675125192621396, 627975726c2833626333626339353363353762313133373331336461303233396438303534312c66692e676f73757065726d6f64656c2e636f6d2f70726f66696c65732f2f6170706c65747265713d3132373333393332313937363529)
 INFO 11:50:52,511 Current key : DecoratedKey(11047858886149374835950241979723972473, 636f6d6d656e74735f706172656e74735f3233383135363235)
 INFO 11:50:52,511 Writing into file /var/lib/cassandra/data/reddit/permacache-tmp-f-168492-Data.db
 WARN 11:50:52,511 Non-fatal error reading row (stacktrace follows)
java.io.IOException: Keys must be written in ascending order.
        at org.apache.cassandra.io.sstable.SSTableWriter.beforeAppend(SSTableWriter.java:111)
        at org.apache.cassandra.io.sstable.SSTableWriter.append(SSTableWriter.java:128)
        at org.apache.cassandra.db.CompactionManager.doScrub(CompactionManager.java:598)
        at org.apache.cassandra.db.CompactionManager.access$600(CompactionManager.java:56)
        at org.apache.cassandra.db.CompactionManager$3.call(CompactionManager.java:195)
        at java.util.concurrent.FutureTask$Sync.innerRun(FutureTask.java:334)
        at java.util.concurrent.FutureTask.run(FutureTask.java:166)
        at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1110)
        at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:603)
        at java.lang.Thread.run(Thread.java:636)
{code}
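
Note what the two DecoratedKey lines show: the token of the last written key (1256869348...) is larger than the token of the current key (1104785888...), which is exactly the condition SSTableWriter.beforeAppend rejects, since rows in an sstable must be ordered by decorated key. A minimal sketch of that invariant; the generic key type is a hypothetical stand-in for DecoratedKey.

{code}
import java.io.IOException;

// Minimal sketch: each appended key must compare strictly greater than the
// previous one, or the sstable's sorted-order invariant is broken.
final class AscendingKeySketch
{
    static <K extends Comparable<K>> K beforeAppend(K lastWrittenKey, K candidate) throws IOException
    {
        if (lastWrittenKey != null && candidate.compareTo(lastWrittenKey) <= 0)
            throw new IOException("Keys must be written in ascending order.");
        return candidate;   // caller records this as the new lastWrittenKey
    }
}
{code}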