Posted to user@cassandra.apache.org by Gabriel Giussi <ga...@gmail.com> on 2019/08/12 16:12:38 UTC

How to delete huge partition in cassandra 3.0.13

I've found a huge partition (~9 GB) in my Cassandra cluster while
investigating why I'm recurrently losing 3 nodes to OutOfMemoryError:

> ERROR [SharedPool-Worker-12] 2019-08-12 11:07:45,735 JVMStabilityInspector.java:140 - JVM state determined to be unstable. Exiting forcefully due to:
> java.lang.OutOfMemoryError: Java heap space
>     at java.nio.HeapByteBuffer.<init>(HeapByteBuffer.java:57) ~[na:1.8.0_151]
>     at java.nio.ByteBuffer.allocate(ByteBuffer.java:335) ~[na:1.8.0_151]
>     at org.apache.cassandra.io.util.DataOutputBuffer.reallocate(DataOutputBuffer.java:126) ~[apache-cassandra-3.0.13.jar:3.0.13]
>     at org.apache.cassandra.io.util.DataOutputBuffer.doFlush(DataOutputBuffer.java:86) ~[apache-cassandra-3.0.13.jar:3.0.13]
>     at org.apache.cassandra.io.util.BufferedDataOutputStreamPlus.write(BufferedDataOutputStreamPlus.java:132) ~[apache-cassandra-3.0.13.jar:3.0.13]
>     at org.apache.cassandra.io.util.BufferedDataOutputStreamPlus.write(BufferedDataOutputStreamPlus.java:151) ~[apache-cassandra-3.0.13.jar:3.0.13]
>     at org.apache.cassandra.utils.ByteBufferUtil.writeWithVIntLength(ByteBufferUtil.java:297) ~[apache-cassandra-3.0.13.jar:3.0.13]
>     at org.apache.cassandra.db.marshal.AbstractType.writeValue(AbstractType.java:373) ~[apache-cassandra-3.0.13.jar:3.0.13]
>     at org.apache.cassandra.db.rows.BufferCell$Serializer.serialize(BufferCell.java:267) ~[apache-cassandra-3.0.13.jar:3.0.13]
>     at org.apache.cassandra.db.rows.UnfilteredSerializer.serialize(UnfilteredSerializer.java:193) ~[apache-cassandra-3.0.13.jar:3.0.13]
>     at org.apache.cassandra.db.rows.UnfilteredSerializer.serialize(UnfilteredSerializer.java:109) ~[apache-cassandra-3.0.13.jar:3.0.13]
>     at org.apache.cassandra.db.rows.UnfilteredSerializer.serialize(UnfilteredSerializer.java:97) ~[apache-cassandra-3.0.13.jar:3.0.13]
>     at org.apache.cassandra.db.rows.UnfilteredRowIteratorSerializer.serialize(UnfilteredRowIteratorSerializer.java:132) ~[apache-cassandra-3.0.13.jar:3.0.13]
>     at org.apache.cassandra.db.rows.UnfilteredRowIteratorSerializer.serialize(UnfilteredRowIteratorSerializer.java:87) ~[apache-cassandra-3.0.13.jar:3.0.13]
>     at org.apache.cassandra.db.rows.UnfilteredRowIteratorSerializer.serialize(UnfilteredRowIteratorSerializer.java:77) ~[apache-cassandra-3.0.13.jar:3.0.13]
>     at org.apache.cassandra.db.partitions.UnfilteredPartitionIterators$Serializer.serialize(UnfilteredPartitionIterators.java:301) ~[apache-cassandra-3.0.13.jar:3.0.13]
>     at org.apache.cassandra.db.ReadResponse$LocalDataResponse.build(ReadResponse.java:145) ~[apache-cassandra-3.0.13.jar:3.0.13]
>     at org.apache.cassandra.db.ReadResponse$LocalDataResponse.<init>(ReadResponse.java:138) ~[apache-cassandra-3.0.13.jar:3.0.13]
>     at org.apache.cassandra.db.ReadResponse$LocalDataResponse.<init>(ReadResponse.java:134) ~[apache-cassandra-3.0.13.jar:3.0.13]
>     at org.apache.cassandra.db.ReadResponse.createDataResponse(ReadResponse.java:76) ~[apache-cassandra-3.0.13.jar:3.0.13]
>     at org.apache.cassandra.db.ReadCommand.createResponse(ReadCommand.java:321) ~[apache-cassandra-3.0.13.jar:3.0.13]
>     at org.apache.cassandra.db.ReadCommandVerbHandler.doVerb(ReadCommandVerbHandler.java:47) ~[apache-cassandra-3.0.13.jar:3.0.13]
>     at org.apache.cassandra.net.MessageDeliveryTask.run(MessageDeliveryTask.java:67) ~[apache-cassandra-3.0.13.jar:3.0.13]
>     at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511) ~[na:1.8.0_151]
>     at org.apache.cassandra.concurrent.AbstractLocalAwareExecutorService$FutureTask.run(AbstractLocalAwareExecutorService.java:164) ~[apache-cassandra-3.0.13.jar:3.0.13]
>     at org.apache.cassandra.concurrent.AbstractLocalAwareExecutorService$LocalSessionFutureTask.run(AbstractLocalAwareExecutorService.java:136) [apache-cassandra-3.0.13.jar:3.0.13]
>     at org.apache.cassandra.concurrent.SEPWorker.run(SEPWorker.java:105) [apache-cassandra-3.0.13.jar:3.0.13]
>     at java.lang.Thread.run(Thread.java:748) [na:1.8.0_151]

From the stack trace I assume that some client is trying to read that
partition (ReadResponse), so as a quick fix I could filter out requests to
this specific partition, but I think compaction will never be able to
remove the partition (I have already executed a DELETE).
What can I do to delete this partition? May I delete the SSTable directly?
Or should I upgrade the node and give Cassandra more heap?

Thanks.
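For anyone hitting the same symptom: the table holding an oversized
partition can usually be identified from compaction's large-partition
warnings and from table stats. A sketch, where the log path and the
keyspace/table names are placeholders:

```shell
# Compaction logs a warning when it writes a partition larger than
# compaction_large_partition_warning_threshold_mb (100 MB by default);
# the log path below is a typical package-install location:
grep -h "large partition" /var/log/cassandra/system.log 2>/dev/null || true

# "Compacted partition maximum bytes" in tablestats shows the largest
# partition that compaction has seen for a given table:
command -v nodetool >/dev/null \
  && nodetool tablestats my_keyspace.my_table | grep "maximum bytes" \
  || echo "nodetool not on PATH; run this on a Cassandra node"
```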

Re: How to delete huge partition in cassandra 3.0.13

Posted by Léo FERLIN SUTTON <lf...@mailjet.com.INVALID>.
So you have deleted the partition. Do not delete the SSTables directly.

By default Cassandra keeps tombstones for gc_grace_seconds, which defaults
to 10 days. Once those 10 days have passed (they should have by now, since
your message was on August 12), a compaction is needed to actually reclaim
the space.

You could force a compaction manually, but before advising you to do so,
could you tell us what compaction strategy you are using for this table?
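To make the timing and the manual step concrete, here is a sketch; the
keyspace and table names are placeholders, and note that nodetool compact
triggers a major compaction, which under SizeTieredCompactionStrategy
rewrites the table into a single large SSTable, so it is not free:

```shell
# Tombstones only become purgeable once they are older than
# gc_grace_seconds (default 864000 seconds):
GC_GRACE_DEFAULT=864000
echo "$((GC_GRACE_DEFAULT / 86400)) days"   # prints "10 days"

# Placeholder keyspace/table names. A manual (major) compaction lets the
# partition tombstone supersede the deleted data and reclaim disk space:
command -v nodetool >/dev/null \
  && nodetool compact my_keyspace my_table \
  || echo "nodetool not on PATH; run this on a Cassandra node"
```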

Regards,

Leo

Re: How to delete huge partition in cassandra 3.0.13

Posted by Elliott Sims <el...@backblaze.com>.
It may also be worth upgrading to Cassandra 3.11.4. There are changes in
3.6+ that significantly reduce heap pressure from very large partitions.
