Posted to commits@cassandra.apache.org by "Nikolai Grigoriev (JIRA)" <ji...@apache.org> on 2013/12/26 18:03:50 UTC

[jira] [Created] (CASSANDRA-6528) TombstoneOverwhelmingException is thrown while populating data in recently truncated CF

Nikolai Grigoriev created CASSANDRA-6528:
--------------------------------------------

             Summary: TombstoneOverwhelmingException is thrown while populating data in recently truncated CF
                 Key: CASSANDRA-6528
                 URL: https://issues.apache.org/jira/browse/CASSANDRA-6528
             Project: Cassandra
          Issue Type: Bug
          Components: Core
          Environment: Cassandra 2.0.3, Linux, 6 nodes
            Reporter: Nikolai Grigoriev
            Priority: Minor


I am running some performance tests, and recently I had to clear the data from one of the tables and repopulate it. I have about 30M rows with a few columns each, about 5KB per row in total. To repopulate the data I run "truncate <table>" from CQLSH and then relaunch the test. The test simply inserts data into the table and does not read anything. Shortly after restarting the data generator I see this on one of the nodes:

{code}
 INFO [HintedHandoff:655] 2013-12-26 16:45:42,185 HintedHandOffManager.java (line 323) Started hinted handoff for host: 985c8a08-3d92-4fad-a1d1-7135b2b9774a with IP: /10.5.45.158
ERROR [HintedHandoff:655] 2013-12-26 16:45:42,680 SliceQueryFilter.java (line 200) Scanned over 100000 tombstones; query aborted (see tombstone_fail_threshold)
ERROR [HintedHandoff:655] 2013-12-26 16:45:42,680 CassandraDaemon.java (line 187) Exception in thread Thread[HintedHandoff:655,1,main]
org.apache.cassandra.db.filter.TombstoneOverwhelmingException
        at org.apache.cassandra.db.filter.SliceQueryFilter.collectReducedColumns(SliceQueryFilter.java:201)
        at org.apache.cassandra.db.filter.QueryFilter.collateColumns(QueryFilter.java:122)
        at org.apache.cassandra.db.filter.QueryFilter.collateOnDiskAtom(QueryFilter.java:80)
        at org.apache.cassandra.db.filter.QueryFilter.collateOnDiskAtom(QueryFilter.java:72)
        at org.apache.cassandra.db.CollationController.collectAllData(CollationController.java:297)
        at org.apache.cassandra.db.CollationController.getTopLevelColumns(CollationController.java:56)
        at org.apache.cassandra.db.ColumnFamilyStore.getTopLevelColumns(ColumnFamilyStore.java:1487)
        at org.apache.cassandra.db.ColumnFamilyStore.getColumnFamily(ColumnFamilyStore.java:1306)
        at org.apache.cassandra.db.HintedHandOffManager.doDeliverHintsToEndpoint(HintedHandOffManager.java:351)
        at org.apache.cassandra.db.HintedHandOffManager.deliverHintsToEndpoint(HintedHandOffManager.java:309)
        at org.apache.cassandra.db.HintedHandOffManager.access$4(HintedHandOffManager.java:281)
        at org.apache.cassandra.db.HintedHandOffManager$4.run(HintedHandOffManager.java:530)
        at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
        at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
        at java.lang.Thread.run(Thread.java:724)
 INFO [OptionalTasks:1] 2013-12-26 16:45:53,946 MeteredFlusher.java (line 63) flushing high-traffic column family CFS(Keyspace='test_jmeter', ColumnFamily='test_profiles') (estimated 192717267 bytes)
{code}

I am inserting the data with CL=1.
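
For illustration, the truncate-and-repopulate flow described above boils down to roughly the following. This is only a minimal sketch, not the actual test code: it assumes the DataStax Java driver, the keyspace and table names are taken from the log above, and the column layout ("id", "payload"), payload size and contact point are placeholders.

{code}
// Sketch of the workflow: truncate the table, then reload it at CL=ONE.
// Schema and helper below are hypothetical; only keyspace/table names are real.
import com.datastax.driver.core.Cluster;
import com.datastax.driver.core.ConsistencyLevel;
import com.datastax.driver.core.PreparedStatement;
import com.datastax.driver.core.Session;

public class TruncateAndRepopulate {
    public static void main(String[] args) {
        Cluster cluster = Cluster.builder().addContactPoint("10.5.45.158").build();
        Session session = cluster.connect("test_jmeter");

        // Step 1: clear the table (same effect as "truncate test_profiles" in CQLSH).
        session.execute("TRUNCATE test_profiles");

        // Step 2: relaunch the generator, writing at CL=ONE, no reads.
        PreparedStatement insert = session.prepare(
                "INSERT INTO test_profiles (id, payload) VALUES (?, ?)");
        String payload = buildRowOfAboutFiveKb();   // ~5KB per row, as in the test
        for (int i = 0; i < 30_000_000; i++) {
            session.execute(insert.bind(i, payload)
                    .setConsistencyLevel(ConsistencyLevel.ONE));
        }

        cluster.close();
    }

    // Placeholder for whatever the real generator produces per row.
    private static String buildRowOfAboutFiveKb() {
        StringBuilder sb = new StringBuilder(5 * 1024);
        for (int i = 0; i < 5 * 1024; i++) {
            sb.append('x');
        }
        return sb.toString();
    }
}
{code}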

This seems to happen every time I follow that procedure. However, I do not see any errors on the client side and the node appears to keep operating normally, which is why I think it is not a major issue. It may not be an issue at all, but the message is logged at ERROR level.
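
For reference, I believe the threshold mentioned in that message corresponds to these settings in cassandra.yaml (the values shown are what I understand to be the 2.0.x defaults; worth double-checking against your version):

{code}
# Tombstone scan limits as I understand them (2.0.x defaults).
# Reads touching more tombstones than tombstone_warn_threshold log a warning;
# crossing tombstone_failure_threshold aborts the read with
# TombstoneOverwhelmingException, which is what the hint delivery above hit.
tombstone_warn_threshold: 1000
tombstone_failure_threshold: 100000
{code}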



--
This message was sent by Atlassian JIRA
(v6.1.5#6160)