Posted to user@cassandra.apache.org by Gianluca Borello <gi...@draios.com> on 2015/02/28 03:01:22 UTC

Error on nodetool cleanup

Hello,

I have a four-node cluster running Cassandra 2.0.12. I added a fifth node and
then ran nodetool cleanup on the original four nodes, but it fails with this
error (the same error on each node):

 INFO [CompactionExecutor:10] 2015-02-28 01:55:15,097 CompactionManager.java (line 619) Cleaned up to /raid0/cassandra/data/draios/protobuf86400/draios-protobuf86400-tmp-jb-432-Data.db.  8,253,257 to 8,253,257 (~100% of original) bytes for 5 keys.  Time: 304ms.
 INFO [CompactionExecutor:10] 2015-02-28 01:55:15,100 CompactionManager.java (line 563) Cleaning up SSTableReader(path='/raid0/cassandra/data/draios/protobuf86400/draios-protobuf86400-jb-431-Data.db')
ERROR [CompactionExecutor:10] 2015-02-28 01:55:15,102 CassandraDaemon.java (line 199) Exception in thread Thread[CompactionExecutor:10,1,main]
java.lang.AssertionError: Memory was freed
        at org.apache.cassandra.io.util.Memory.checkPosition(Memory.java:259)
        at org.apache.cassandra.io.util.Memory.getInt(Memory.java:211)
        at org.apache.cassandra.io.sstable.IndexSummary.getIndex(IndexSummary.java:79)
        at org.apache.cassandra.io.sstable.IndexSummary.getKey(IndexSummary.java:84)
        at org.apache.cassandra.io.sstable.IndexSummary.binarySearch(IndexSummary.java:58)
        at org.apache.cassandra.io.sstable.SSTableReader.getIndexScanPosition(SSTableReader.java:602)
        at org.apache.cassandra.io.sstable.SSTableReader.getPosition(SSTableReader.java:947)
        at org.apache.cassandra.io.sstable.SSTableReader.getPosition(SSTableReader.java:910)
        at org.apache.cassandra.io.sstable.SSTableReader.getPositionsForRanges(SSTableReader.java:819)
        at org.apache.cassandra.db.ColumnFamilyStore.getExpectedCompactedFileSize(ColumnFamilyStore.java:1088)
        at org.apache.cassandra.db.compaction.CompactionManager.doCleanupCompaction(CompactionManager.java:564)
        at org.apache.cassandra.db.compaction.CompactionManager.access$400(CompactionManager.java:63)
        at org.apache.cassandra.db.compaction.CompactionManager$5.perform(CompactionManager.java:281)
        at org.apache.cassandra.db.compaction.CompactionManager$2.call(CompactionManager.java:225)
        at java.util.concurrent.FutureTask.run(FutureTask.java:262)
        at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
        at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
        at java.lang.Thread.run(Thread.java:744)
 INFO [FlushWriter:1] 2015-02-28 01:55:15,111 Memtable.java (line 398) Completed flushing /raid0/cassandra/data/draios/mounted_fs_by_agent1/draios-mounted_fs_by_agent1-jb-132895-Data.db (2513856 bytes) for commitlog position ReplayPosition(segmentId=1425088070445, position=2041)
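
For context on what the assertion means: the trace shows cleanup's size
estimate (getExpectedCompactedFileSize -> getPositionsForRanges) reading an
SSTable's index summary after its backing off-heap memory was released.
Below is a minimal sketch, in plain Java and explicitly not Cassandra's
actual code, of the guarded-wrapper pattern that produces this kind of
failure:

    // Minimal sketch (not Cassandra's actual code) of an assertion-guarded
    // off-heap memory wrapper: once free() has run, any later read trips
    // the same "Memory was freed" check seen in the trace above.
    final class GuardedMemory {
        private java.nio.ByteBuffer buf; // stand-in for a raw native allocation

        GuardedMemory(int size) {
            buf = java.nio.ByteBuffer.allocateDirect(size);
        }

        int getInt(int offset) {
            checkPosition(offset);
            return buf.getInt(offset);
        }

        private void checkPosition(int offset) {
            assert buf != null : "Memory was freed";
            assert offset >= 0 && offset + 4 <= buf.capacity();
        }

        void free() {
            buf = null; // any getInt() after this point fails the assertion
        }
    }

Calling getInt() after free() with assertions enabled (java -ea) raises the
same AssertionError message as in the log.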

This happens with all column families, and none of them is particularly big,
if that matters.

How can I reclaim the free space that I expanded the cluster for in the
first place?

Thank you

Re: Error on nodetool cleanup

Posted by Gianluca Borello <gi...@draios.com>.
Thanks a lot for pointing this out! A workaround would be very much
appreciated, or even an ETA for 2.0.13, so that I can decide whether to
attempt an officially unsupported downgrade from 2.0.12 to 2.0.11, since I
really need that cleanup.

Thanks
On Feb 27, 2015 10:53 PM, "Jeff Wehrwein" <je...@refresh.io> wrote:

> We had the exact same problem, and found this bug:
> https://issues.apache.org/jira/browse/CASSANDRA-8716.  It's fixed in
> 2.0.13 (unreleased), but we haven't found a workaround in the interim.
> Please share if you find one!
>
> Thanks,
> Jeff
>
> On Fri, Feb 27, 2015 at 6:01 PM, Gianluca Borello <gi...@draios.com>
> wrote:
>
>> [snip: original message quoted in full above]

Re: Error on nodetool cleanup

Posted by Jeff Wehrwein <je...@refresh.io>.
We had the exact same problem, and found this bug:
https://issues.apache.org/jira/browse/CASSANDRA-8716.  It's fixed in 2.0.13
(unreleased), but we haven't found a workaround in the interim.  Please
share if you find one!

Thanks,
Jeff
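
For background, the stack trace above suggests the race behind
CASSANDRA-8716: cleanup iterates a snapshot of sstables to estimate sizes
while a concurrent compaction replaces one of them and frees its off-heap
index summary. The usual shape of a fix for that class of bug is reference
counting; the sketch below is an assumed illustration of the pattern, not
the actual 2.0.13 patch:

    // Assumed illustration of a reference-counting pattern (not the actual
    // CASSANDRA-8716 patch): acquire a reference before touching an
    // sstable's summary, and skip it if it was already released.
    final class RefCountedSSTable {
        private final java.util.concurrent.atomic.AtomicInteger refs =
                new java.util.concurrent.atomic.AtomicInteger(1);

        // Returns false when the sstable is already gone, so callers
        // skip it instead of dereferencing freed memory.
        boolean tryRef() {
            while (true) {
                int n = refs.get();
                if (n == 0)
                    return false;
                if (refs.compareAndSet(n, n + 1))
                    return true;
            }
        }

        void unref() {
            if (refs.decrementAndGet() == 0)
                releaseNativeMemory(); // runs exactly once, after the last user
        }

        private void releaseNativeMemory() {
            // placeholder: this is where the off-heap summary would be freed
        }
    }

Under this pattern, the cleanup thread would call tryRef() before
getPositionsForRanges() and unref() afterwards, and a thread that loses the
race simply skips that sstable.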

On Fri, Feb 27, 2015 at 6:01 PM, Gianluca Borello <gi...@draios.com>
wrote:

> [snip: original message quoted in full above]