Posted to user@cassandra.apache.org by Karl Hiramoto <ka...@hiramoto.org> on 2011/03/08 12:34:47 UTC

0.7.3 nodetool scrub exceptions

I have thousands of these in the log. Is this normal?

java.io.IOError: java.io.EOFException: bloom filter claims to be longer than entire row size
         at org.apache.cassandra.io.sstable.SSTableIdentityIterator.<init>(SSTableIdentityIterator.java:117)
         at org.apache.cassandra.db.CompactionManager.doScrub(CompactionManager.java:590)
         at org.apache.cassandra.db.CompactionManager.access$600(CompactionManager.java:56)
         at org.apache.cassandra.db.CompactionManager$3.call(CompactionManager.java:195)
         at java.util.concurrent.FutureTask$Sync.innerRun(FutureTask.java:334)
         at java.util.concurrent.FutureTask.run(FutureTask.java:166)
         at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1110)
         at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:603)
         at java.lang.Thread.run(Thread.java:636)
Caused by: java.io.EOFException: bloom filter claims to be longer than entire row size
         at org.apache.cassandra.io.sstable.IndexHelper.defreezeBloomFilter(IndexHelper.java:113)
         at org.apache.cassandra.io.sstable.SSTableIdentityIterator.<init>(SSTableIdentityIterator.java:87)
         ... 8 more
  WARN [CompactionExecutor:1] 2011-03-08 11:32:35,615 CompactionManager.java (line 625) Row is unreadable; skipping to next
  WARN [CompactionExecutor:1] 2011-03-08 11:32:35,615 CompactionManager.java (line 599) Non-fatal error reading row (stacktrace follows)
[the same stack trace and WARN pair repeat from here; snipped]
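For context, the EOFException above is a corruption guard: when a row is deserialized, the length recorded for the row-level bloom filter is compared against the bytes remaining in the row, and the row is rejected if the filter claims more than the row can contain. A minimal sketch of that kind of guard, with illustrative names (not Cassandra's actual code):

```java
// Hypothetical sketch of the sanity check behind the
// "bloom filter claims to be longer than entire row size" EOFException.
// Names here are illustrative, not Cassandra's real implementation.
public class BloomFilterGuard {
    /** True when the serialized filter's declared length cannot fit in the row. */
    public static boolean claimsTooLong(long claimedFilterBytes, long remainingRowBytes) {
        return claimedFilterBytes > remainingRowBytes;
    }

    /** Abort deserialization when the length field (or the row data) is corrupt. */
    public static void checkFilterLength(long claimedFilterBytes, long remainingRowBytes)
            throws java.io.EOFException {
        if (claimsTooLong(claimedFilterBytes, remainingRowBytes))
            throw new java.io.EOFException(
                    "bloom filter claims to be longer than entire row size");
    }
}
```

Tripping this check means the on-disk length field or the row data itself is corrupt, so the reader skips the row rather than trust the filter.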

Re: 0.7.3 nodetool scrub exceptions

Posted by Terje Marthinussen <tm...@gmail.com>.
I had similar errors with late 0.7.3 builds, related to testing I did for the
mails with subject "Argh: Data Corruption (LOST DATA) (0.7.0)".

I no longer see these corruptions, or the above error, with the 0.7.3 release,
as long as the dataset is created from scratch. The patch (2104) mentioned
in the "Argh" mail was already in the code I used, though, so I'm not entirely
sure what has fixed it, if it is fixed at all.

We made one change to our data at the same time, though: we broke up a
very long row into smaller rows. This could be related as well.
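The change Terje describes, splitting one very long row into smaller ones, is commonly done by adding a time bucket to the row key, which suits TTL'd time-series data like Karl's. A hypothetical sketch (key format and bucket size are assumptions, not from the thread):

```java
import java.util.concurrent.TimeUnit;

// Illustrative sketch (not from the thread) of breaking one very long row
// into smaller rows by bucketing the row key, e.g. one row per hour.
public class RowBucketing {
    /** Derive a bucketed row key: base key plus an hour-bucket suffix. */
    public static String bucketedKey(String baseKey, long timestampMillis) {
        long hourBucket = TimeUnit.MILLISECONDS.toHours(timestampMillis);
        return baseKey + ":" + hourBucket;
    }
}
```

Each bucket row stays small, and with a 24-hour TTL whole bucket rows expire together instead of one row growing without bound.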

Terje


On Wed, Mar 9, 2011 at 5:45 AM, Sylvain Lebresne <sy...@datastax.com>wrote:

> Did you run scrub as soon as you updated to 0.7.3?
>
> And did you have problems/exceptions before running scrub?
> If yes, did you have problems only with 0.7.3, or also with 0.7.2?
>
> If the problems started with running scrub: since scrub takes a snapshot
> before running, can you try restarting a test cluster from that snapshot
> and see if a simple compaction works, for instance?
>
> --
> Sylvain
>
>
> On Tue, Mar 8, 2011 at 5:31 PM, Karl Hiramoto <ka...@hiramoto.org> wrote:
>
>> On 08/03/2011 17:09, Jonathan Ellis wrote:
>>
>>> No.
>>>
>>> What is the history of your cluster?
>>>
>> It started out as 0.7.0-RC3, and I've upgraded through 0.7.0, 0.7.1, 0.7.2,
>> and 0.7.3 within a few days after each was released.
>>
>> I have 6 nodes with about 10GB of data each, RF=2. There is only one CF;
>> every row/column has a TTL of 24 hours.
>> I do a staggered repair/compact/cleanup across every node in a cronjob.
>>
>>
>> After upgrading to 0.7.3 I had a lot of nodes crashing due to OOM. I
>> reduced the key cache from the default 200000 to 1000 and increased the
>> heap size from 8GB to 12GB, and the OOM crashes went away.
>>
>>
>> Any way to fix this without throwing away all the data?
>>
>> Since I only keep data for 24 hours, I could insert into two CFs for the
>> next 24 hours, then once only valid data is in the new CF, remove the old CF.
>>
>>
>>
>>
>>  On Tue, Mar 8, 2011 at 5:34 AM, Karl Hiramoto<ka...@hiramoto.org>  wrote:
>>>
>>>> I have thousands of these in the log. Is this normal?
>>>>
>>>> [stack trace snipped]
>>>>
>>>
>>>
>>
>

Re: 0.7.3 nodetool scrub exceptions

Posted by Karl Hiramoto <ka...@hiramoto.org>.
On 03/08/11 21:45, Sylvain Lebresne wrote:
> Did you run scrub as soon as you updated to 0.7.3?
>
Yes, within a few minutes of starting up 0.7.3 on the node.

> And did you have problems/exceptions before running scrub?
Not sure.
> If yes, did you have problems only with 0.7.3, or also with 0.7.2?
>
Had problems with both, but the exceptions in 0.7.2 were rare. In 0.7.3
they're happening pretty regularly, and OOM errors are occurring now.



> If the problems started with running scrub: since scrub takes a snapshot
> before running, can you try restarting a test cluster from that snapshot
> and see if a simple compaction works, for instance?
>

Since the snapshot is stale, I can't do that. The reads/writes seem to
work fine.
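Karl's proposed workaround from earlier in the thread (write to two CFs for one full TTL period, then drop the old CF once only valid data is in the new one) can be sketched as follows; the ColumnFamilyWriter interface and all names are hypothetical, not a real client API:

```java
// Sketch of a dual-write migration: write every insert to both the old and
// the new CF for one full TTL period (24h). After the cutover, everything
// still live exists in the new CF, so the old CF can be dropped.
public class DualWriteMigration {
    public interface ColumnFamilyWriter {
        void insert(String key, String column, byte[] value, int ttlSeconds);
    }

    private static final int TTL_SECONDS = 24 * 60 * 60;

    private final ColumnFamilyWriter oldCf;
    private final ColumnFamilyWriter newCf;
    private final long cutoverMillis;  // migration start + one TTL period

    public DualWriteMigration(ColumnFamilyWriter oldCf, ColumnFamilyWriter newCf,
                              long startMillis) {
        this.oldCf = oldCf;
        this.newCf = newCf;
        this.cutoverMillis = startMillis + TTL_SECONDS * 1000L;
    }

    /** Double-write until the cutover; after that only the new CF is written. */
    public void insert(String key, String column, byte[] value, long nowMillis) {
        newCf.insert(key, column, value, TTL_SECONDS);
        if (nowMillis < cutoverMillis)
            oldCf.insert(key, column, value, TTL_SECONDS);
    }

    /** The old CF holds only expired data once a full TTL period has passed. */
    public boolean oldCfDroppable(long nowMillis) {
        return nowMillis >= cutoverMillis;
    }
}
```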



--
Karl

Re: 0.7.3 nodetool scrub exceptions

Posted by Sylvain Lebresne <sy...@datastax.com>.
Did you run scrub as soon as you updated to 0.7.3?

And did you have problems/exceptions before running scrub?
If yes, did you have problems only with 0.7.3, or also with 0.7.2?

If the problems started with running scrub: since scrub takes a snapshot
before running, can you try restarting a test cluster from that snapshot
and see if a simple compaction works, for instance?

--
Sylvain


On Tue, Mar 8, 2011 at 5:31 PM, Karl Hiramoto <ka...@hiramoto.org> wrote:

> On 08/03/2011 17:09, Jonathan Ellis wrote:
>
>> No.
>>
>> What is the history of your cluster?
>>
> It started out as 0.7.0-RC3, and I've upgraded through 0.7.0, 0.7.1, 0.7.2,
> and 0.7.3 within a few days after each was released.
>
> I have 6 nodes with about 10GB of data each, RF=2. There is only one CF;
> every row/column has a TTL of 24 hours.
> I do a staggered repair/compact/cleanup across every node in a cronjob.
>
>
> After upgrading to 0.7.3 I had a lot of nodes crashing due to OOM. I
> reduced the key cache from the default 200000 to 1000 and increased the
> heap size from 8GB to 12GB, and the OOM crashes went away.
>
>
> Any way to fix this without throwing away all the data?
>
> Since I only keep data for 24 hours, I could insert into two CFs for the
> next 24 hours, then once only valid data is in the new CF, remove the old CF.
>
>
>
>
>  On Tue, Mar 8, 2011 at 5:34 AM, Karl Hiramoto<ka...@hiramoto.org>  wrote:
>>
>>> I have thousands of these in the log. Is this normal?
>>>
>>> [stack trace snipped]
>>>
>>
>>
>

Re: 0.7.3 nodetool scrub exceptions

Posted by Jonathan Ellis <jb...@gmail.com>.
Looks like it is harmless: scrub would write a zero-length row when
tombstones expire and there is nothing left, instead of writing no row
at all. A fix is attached to the JIRA ticket.
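The guard Jonathan describes amounts to refusing to serialize a row that has neither live columns nor a row tombstone, and skipping it instead. A hedged sketch with illustrative names (not the actual 0.7 patch):

```java
// Illustrative sketch of an assert against writing zero-length rows during
// compaction/scrub: if all tombstones have expired and nothing is left,
// the row should be skipped entirely rather than written empty.
public class RowWriteGuard {
    /** A row is worth writing only if it still carries columns or a tombstone. */
    public static boolean shouldWriteRow(int liveColumns, boolean hasRowTombstone) {
        return liveColumns > 0 || hasRowTombstone;
    }

    public static void writeRow(int liveColumns, boolean hasRowTombstone) {
        assert shouldWriteRow(liveColumns, hasRowTombstone)
                : "attempted to write a zero-length row";
        // ... serialize the row here ...
    }
}
```

Run with `-ea` (assertions enabled), an attempt to emit an empty row fails fast at the writer instead of producing an sstable that later trips the bloom filter length check.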

On Tue, Mar 8, 2011 at 8:58 PM, Jonathan Ellis <jb...@gmail.com> wrote:
> It *may* be harmless depending on where those zero-length rows are
> coming from.  I've added asserts to 0.7 branch that fire if we attempt
> to write a zero-length row, so if the bug is still present in 0.7.3+
> that should catch it.
>
> On Tue, Mar 8, 2011 at 7:31 PM, Jonathan Ellis <jb...@gmail.com> wrote:
>> alienth on irc is reporting the same error.  His path was 0.6.8 to
>> 0.7.1 to 0.7.3.
>>
>> It's probably a bug in scrub.  If we can get an sstable exhibiting the
>> problem posted here or on Jira that would help troubleshoot.
>>
>> On Tue, Mar 8, 2011 at 10:31 AM, Karl Hiramoto <ka...@hiramoto.org> wrote:
>>> On 08/03/2011 17:09, Jonathan Ellis wrote:
>>>>
>>>> No.
>>>>
>>>> What is the history of your cluster?
>>>
>>> It started out as 0.7.0-RC3, and I've upgraded through 0.7.0, 0.7.1, 0.7.2,
>>> and 0.7.3 within a few days after each was released.
>>>
>>> I have 6 nodes with about 10GB of data each, RF=2. There is only one CF;
>>> every row/column has a TTL of 24 hours.
>>> I do a staggered repair/compact/cleanup across every node in a cronjob.
>>>
>>>
>>> After upgrading to 0.7.3 I had a lot of nodes crashing due to OOM. I
>>> reduced the key cache from the default 200000 to 1000 and increased the
>>> heap size from 8GB to 12GB, and the OOM crashes went away.
>>>
>>>
>>> Any way to fix this without throwing away all the data?
>>>
>>> Since I only keep data for 24 hours, I could insert into two CFs for the
>>> next 24 hours, then once only valid data is in the new CF, remove the old CF.
>>>
>>>
>>>
>>>> On Tue, Mar 8, 2011 at 5:34 AM, Karl Hiramoto<ka...@hiramoto.org>  wrote:
>>>>>
>>>>> I have thousands of these in the log. Is this normal?
>>>>>
>>>>> [stack trace snipped]
>>>>>
>>>>
>>>>
>>>
>>>
>>
>>
>>
>> --
>> Jonathan Ellis
>> Project Chair, Apache Cassandra
>> co-founder of DataStax, the source for professional Cassandra support
>> http://www.datastax.com
>>
>
>
>
> --
> Jonathan Ellis
> Project Chair, Apache Cassandra
> co-founder of DataStax, the source for professional Cassandra support
> http://www.datastax.com
>



-- 
Jonathan Ellis
Project Chair, Apache Cassandra
co-founder of DataStax, the source for professional Cassandra support
http://www.datastax.com

Re: 0.7.3 nodetool scrub exceptions

Posted by Jonathan Ellis <jb...@gmail.com>.
Turn on debug logging and see if the output looks like what I posted
to https://issues.apache.org/jira/browse/CASSANDRA-2296

It *may* be harmless depending on where those zero-length rows are
coming from.  I've added asserts to 0.7 branch that fire if we attempt
to write a zero-length row, so if the bug is still present in 0.7.3+
that should catch it.
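In 0.7, logging is configured through log4j; enabling the DEBUG output Jonathan asks for might look like the following (file location and appender names depend on the install, so treat this as a sketch):

```properties
# conf/log4j-server.properties (path and appender names vary by install)

# Either raise everything to DEBUG:
log4j.rootLogger=DEBUG,stdout,R

# ...or keep the root at INFO and enable DEBUG only for the compaction path:
log4j.logger.org.apache.cassandra.db.CompactionManager=DEBUG
```

Restart the node (or use a JMX log-level tool if available) for the change to take effect, then re-run scrub and compare the output with the ticket.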

On Tue, Mar 8, 2011 at 7:31 PM, Jonathan Ellis <jb...@gmail.com> wrote:
> alienth on irc is reporting the same error.  His path was 0.6.8 to
> 0.7.1 to 0.7.3.
>
> It's probably a bug in scrub.  If we can get an sstable exhibiting the
> problem posted here or on Jira that would help troubleshoot.
>
> On Tue, Mar 8, 2011 at 10:31 AM, Karl Hiramoto <ka...@hiramoto.org> wrote:
>> On 08/03/2011 17:09, Jonathan Ellis wrote:
>>>
>>> No.
>>>
>>> What is the history of your cluster?
>>
>> It started out as 0.7.0-RC3, and I've upgraded through 0.7.0, 0.7.1, 0.7.2,
>> and 0.7.3 within a few days after each was released.
>>
>> I have 6 nodes with about 10GB of data each, RF=2. There is only one CF;
>> every row/column has a TTL of 24 hours.
>> I do a staggered repair/compact/cleanup across every node in a cronjob.
>>
>>
>> After upgrading to 0.7.3 I had a lot of nodes crashing due to OOM. I
>> reduced the key cache from the default 200000 to 1000 and increased the
>> heap size from 8GB to 12GB, and the OOM crashes went away.
>>
>>
>> Any way to fix this without throwing away all the data?
>>
>> Since I only keep data for 24 hours, I could insert into two CFs for the
>> next 24 hours, then once only valid data is in the new CF, remove the old CF.
>>
>>
>>
>>> On Tue, Mar 8, 2011 at 5:34 AM, Karl Hiramoto<ka...@hiramoto.org>  wrote:
>>>>
>>>> I have thousands of these in the log. Is this normal?
>>>>
>>>> [stack trace snipped]
>>>>
>>>
>>>
>>
>>
>
>
>
> --
> Jonathan Ellis
> Project Chair, Apache Cassandra
> co-founder of DataStax, the source for professional Cassandra support
> http://www.datastax.com
>



-- 
Jonathan Ellis
Project Chair, Apache Cassandra
co-founder of DataStax, the source for professional Cassandra support
http://www.datastax.com

Re: 0.7.3 nodetool scrub exceptions

Posted by Jonathan Ellis <jb...@gmail.com>.
alienth on irc is reporting the same error.  His path was 0.6.8 to
0.7.1 to 0.7.3.

It's probably a bug in scrub.  If we can get an sstable exhibiting the
problem posted here or on Jira that would help troubleshoot.

On Tue, Mar 8, 2011 at 10:31 AM, Karl Hiramoto <ka...@hiramoto.org> wrote:
> On 08/03/2011 17:09, Jonathan Ellis wrote:
>>
>> No.
>>
>> What is the history of your cluster?
>
> It started out as 0.7.0-RC3, and I've upgraded through 0.7.0, 0.7.1, 0.7.2,
> and 0.7.3 within a few days after each was released.
>
> I have 6 nodes with about 10GB of data each, RF=2. There is only one CF;
> every row/column has a TTL of 24 hours.
> I do a staggered repair/compact/cleanup across every node in a cronjob.
>
>
> After upgrading to 0.7.3 I had a lot of nodes crashing due to OOM. I
> reduced the key cache from the default 200000 to 1000 and increased the
> heap size from 8GB to 12GB, and the OOM crashes went away.
>
>
> Any way to fix this without throwing away all the data?
>
> Since I only keep data for 24 hours, I could insert into two CFs for the
> next 24 hours, then once only valid data is in the new CF, remove the old CF.
>
>
>
>> On Tue, Mar 8, 2011 at 5:34 AM, Karl Hiramoto<ka...@hiramoto.org>  wrote:
>>>
>>> I have 1000's of these in the log  is this normal?
>>>
>>> [stack trace snipped]
>>
>>
>
>



-- 
Jonathan Ellis
Project Chair, Apache Cassandra
co-founder of DataStax, the source for professional Cassandra support
http://www.datastax.com

Re: 0.7.3 nodetool scrub exceptions

Posted by Karl Hiramoto <ka...@hiramoto.org>.
On 08/03/2011 17:09, Jonathan Ellis wrote:
> No.
>
> What is the history of your cluster?
It started out as 0.7.0-rc3, and I've upgraded to 0.7.0, 0.7.1, 0.7.2, and
0.7.3 within a few days after each was released.

I have 6 nodes with about 10GB of data each at RF=2. There is only one CF,
and every row/column has a TTL of 24 hours.
I do a staggered repair/compact/cleanup across every node in a cronjob.
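(For reference, that staggered maintenance can be as simple as cron entries whose start hours are offset on each node; the times and install path below are illustrative, not the actual job:)

```
# illustrative crontab entries -- stagger the hours per node
0 2 * * 0  /opt/cassandra/bin/nodetool -h localhost repair
0 5 * * *  /opt/cassandra/bin/nodetool -h localhost compact
0 6 * * *  /opt/cassandra/bin/nodetool -h localhost cleanup
```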


After upgrading to 0.7.3 I had a lot of nodes crashing due to OOM. I
reduced the key cache from the default 200000 to 1000 and increased the
heap size from 8GB to 12GB, and the OOM crashes went away.
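(For anyone reproducing that tuning on 0.7: the two knobs live in different places. The heap is set in conf/cassandra-env.sh, while the key cache is a per-CF attribute changed through cassandra-cli. The values below mirror the ones above; the CF name is a placeholder:)

```
# conf/cassandra-env.sh -- raise the JVM heap
MAX_HEAP_SIZE="12G"

# cassandra-cli -- shrink the per-CF key cache
update column family MyCF with keys_cached = 1000;
```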


Is there any way to fix this without throwing away all the data?

Since I only keep data for 24 hours, I could insert into two CFs for the
next 24 hours, and then, once only valid data remains in the new CF, drop
the old CF.
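(That rotation amounts to a cutover window one TTL long: write to both CFs until every row written only to the old CF has expired, then switch reads to the new CF and drop the old one. A sketch of the switching logic -- the CF names are placeholders, not from this thread:)

```python
import time

TTL_SECONDS = 24 * 60 * 60   # every row/column carries this TTL
CUTOVER_START = time.time()  # the moment dual-writing begins

def target_cfs(now, cutover_start=CUTOVER_START):
    """Return which CFs to write to and which one to read from.

    For one full TTL after the cutover starts, rows written only to the
    old CF may still be live, so writes go to both CFs and reads stay on
    the old one.  After that, everything unexpired exists in the new CF,
    and the old CF can be dropped.
    """
    if now - cutover_start < TTL_SECONDS:
        return {"write": ["events_old", "events_new"], "read": "events_old"}
    return {"write": ["events_new"], "read": "events_new"}
```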



> On Tue, Mar 8, 2011 at 5:34 AM, Karl Hiramoto<ka...@hiramoto.org>  wrote:
>> I have 1000's of these in the log  is this normal?
>>
>> [stack trace snipped]
>
>


Re: 0.7.3 nodetool scrub exceptions

Posted by Jonathan Ellis <jb...@gmail.com>.
No.

What is the history of your cluster?

On Tue, Mar 8, 2011 at 5:34 AM, Karl Hiramoto <ka...@hiramoto.org> wrote:
> I have 1000's of these in the log  is this normal?
>
> [stack trace snipped]
>



-- 
Jonathan Ellis
Project Chair, Apache Cassandra
co-founder of DataStax, the source for professional Cassandra support
http://www.datastax.com