Posted to user@cassandra.apache.org by dlu66061 <dl...@yahoo.com> on 2014/10/03 21:24:33 UTC

CQL query throws TombstoneOverwhelmingException against a LeveledCompactionStrategy table

I have two tables. Table “event” stores data with
SizeTieredCompactionStrategy, and table “event_index” acts as an index table
with LeveledCompactionStrategy, with TimeUUID as its clustering column.
For each record in the event table, there are 4 index rows in the
event_index table, and the overall size of those 4 rows is about the same as
the record itself. All columns have a TTL set, so a record and its
corresponding index rows expire at the same time.
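Roughly, the schema looks like this (an illustrative sketch only; the real
column names and types differ):

```
-- Sketch of the two tables described above; names/types are examples.
CREATE TABLE event (
    event_id uuid PRIMARY KEY,
    payload  text
) WITH compaction = { 'class': 'SizeTieredCompactionStrategy' };

CREATE TABLE event_index (
    index_key  text,
    event_time timeuuid,
    event_id   uuid,
    PRIMARY KEY (index_key, event_time)
) WITH compaction = { 'class': 'LeveledCompactionStrategy' };
```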

About a week ago, I load tested it for 3 days with millions of records. The
majority of the records have a 7-day TTL, while a small number of records
have a 30-day TTL.

Now a simple CQL query like “select * from event_index limit 1” won’t run,
and the Cassandra log says:

ERROR [ReadStage:68] 2014-10-01 15:40:14,751 SliceQueryFilter.java (line
200) Scanned over 100000 tombstones in event_index; query aborted (see
tombstone_fail_threshold)
ERROR [ReadStage:68] 2014-10-01 15:40:14,753 CassandraDaemon.java (line 196)
Exception in thread Thread[ReadStage:68,5,main]
java.lang.RuntimeException:
org.apache.cassandra.db.filter.TombstoneOverwhelmingException
        at
org.apache.cassandra.service.StorageProxy$DroppableRunnable.run(StorageProxy.java:1900)
        at java.util.concurrent.ThreadPoolExecutor.runWorker(Unknown Source)
        at java.util.concurrent.ThreadPoolExecutor$Worker.run(Unknown
Source)
        at java.lang.Thread.run(Unknown Source)

I can understand why there could be that many tombstones, but shouldn’t they
be removed by compaction?
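For context, I believe the thresholds the error refers to live in
cassandra.yaml with the following 2.0.x defaults (to my understanding;
raising them would only mask the underlying problem):

```
# cassandra.yaml -- 2.0.x defaults, to my understanding
tombstone_warn_threshold: 1000
tombstone_failure_threshold: 100000
```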

I went to the corresponding keyspace folder. To make sure I only counted
actual table files, I deleted everything under the tablename/backups and
tablename/snapshots folders. Then, in the keyspace folder, I ran the “du”
command to estimate file space usage.

$ du -sk *
239872  event
248092  event_index

Then I ran “nodetool compact” against the keyspace and ran “du” again:

$ du -sk *
80048   event
248092  event_index

Well, while event got compacted and its size dropped to 1/3 of the original,
event_index did not budge. I remember reading somewhere that “nodetool
compact” is a no-op for LeveledCompactionStrategy tables, which would
explain why the event_index size did not change.

I then altered the table with gc_grace_seconds=600, hoping it would speed up
tombstone removal. However, it has been two days and I don’t see any change.
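For reference, the statement I ran was along these lines:

```
ALTER TABLE event_index WITH gc_grace_seconds = 600;
```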

What can I do at this point to get the event_index table compacted and those
tombstones removed?
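While digging through the docs I noticed that compaction accepts per-table
tombstone sub-properties. Assuming my version honors them, would lowering
these force single-SSTable tombstone compactions on event_index? Something
like:

```
-- Assumption: these compaction sub-properties are honored in my version.
ALTER TABLE event_index WITH compaction = {
    'class': 'LeveledCompactionStrategy',
    'tombstone_threshold': '0.2',
    'tombstone_compaction_interval': '86400'
};
```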

Another question: should I avoid LeveledCompactionStrategy for this type of
time-series table?





--
View this message in context: http://cassandra-user-incubator-apache-org.3065146.n2.nabble.com/CQL-query-throws-TombstoneOverwhelmingException-against-a-LeveledCompactionStrategy-table-tp7597077.html
Sent from the cassandra-user@incubator.apache.org mailing list archive at Nabble.com.

Re: CQL query throws TombstoneOverwhelmingException against a LeveledCompactionStrategy table

Posted by dlu66061 <dl...@yahoo.com>.
BTW, I am using Cassandra 2.0.6.

Is this the same issue as CASSANDRA-6654 (Droppable tombstones are not being
removed from LCS table despite being above 20%)
<https://issues.apache.org/jira/browse/CASSANDRA-6654>? I checked my table
in JConsole and the droppable tombstone ratio is over 60%.

If it is the same cause, does that mean I should switch to
SizeTieredCompactionStrategy?
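If switching is indeed the workaround, I assume it would just be:

```
-- Switch the index table back to size-tiered compaction.
ALTER TABLE event_index
    WITH compaction = { 'class': 'SizeTieredCompactionStrategy' };
```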



--
View this message in context: http://cassandra-user-incubator-apache-org.3065146.n2.nabble.com/CQL-query-throws-TombstoneOverwhelmingException-against-a-LeveledCompactionStrategy-table-tp7597077p7597091.html
Sent from the cassandra-user@incubator.apache.org mailing list archive at Nabble.com.