Posted to commits@cassandra.apache.org by "Machiel Groeneveld (JIRA)" <ji...@apache.org> on 2014/02/09 11:17:19 UTC
[jira] [Issue Comment Deleted] (CASSANDRA-6674)
TombstoneOverwhelmingException during/after batch insert
[ https://issues.apache.org/jira/browse/CASSANDRA-6674?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]
Machiel Groeneveld updated CASSANDRA-6674:
------------------------------------------
Comment: was deleted
(was: Is there a way to make the tombstones go away, can I force a cleanup for instance?)
> TombstoneOverwhelmingException during/after batch insert
> --------------------------------------------------------
>
> Key: CASSANDRA-6674
> URL: https://issues.apache.org/jira/browse/CASSANDRA-6674
> Project: Cassandra
> Issue Type: Bug
> Environment: 2.0.4; 2.0.5
> Mac OS X
> Reporter: Machiel Groeneveld
> Priority: Critical
>
> A SELECT query on a table I am inserting into fails with a tombstone exception. The database is clean/empty before the inserts; the first query runs after a few thousand records have been inserted. I don't understand where the tombstones are coming from, as I'm not doing any deletes.
> ERROR [ReadStage:41] 2014-02-07 12:16:42,169 SliceQueryFilter.java (line 200) Scanned over 100000 tombstones in visits.visits; query aborted (see tombstone_fail_threshold)
> ERROR [ReadStage:41] 2014-02-07 12:16:42,171 CassandraDaemon.java (line 192) Exception in thread Thread[ReadStage:41,5,main]
> java.lang.RuntimeException: org.apache.cassandra.db.filter.TombstoneOverwhelmingException
> at org.apache.cassandra.service.StorageProxy$DroppableRunnable.run(StorageProxy.java:1935)
> at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
> at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
> at java.lang.Thread.run(Thread.java:724)
> Caused by: org.apache.cassandra.db.filter.TombstoneOverwhelmingException
> at org.apache.cassandra.db.filter.SliceQueryFilter.collectReducedColumns(SliceQueryFilter.java:202)
> at org.apache.cassandra.db.filter.QueryFilter.collateColumns(QueryFilter.java:122)
> at org.apache.cassandra.db.filter.QueryFilter.collateOnDiskAtom(QueryFilter.java:80)
> at org.apache.cassandra.db.RowIteratorFactory$2.getReduced(RowIteratorFactory.java:101)
> at org.apache.cassandra.db.RowIteratorFactory$2.getReduced(RowIteratorFactory.java:75)
> at org.apache.cassandra.utils.MergeIterator$ManyToOne.consume(MergeIterator.java:115)
> at org.apache.cassandra.utils.MergeIterator$ManyToOne.computeNext(MergeIterator.java:98)
> at com.google.common.collect.AbstractIterator.tryToComputeNext(AbstractIterator.java:143)
> at com.google.common.collect.AbstractIterator.hasNext(AbstractIterator.java:138)
> at org.apache.cassandra.db.ColumnFamilyStore$9.computeNext(ColumnFamilyStore.java:1607)
> at org.apache.cassandra.db.ColumnFamilyStore$9.computeNext(ColumnFamilyStore.java:1603)
> at com.google.common.collect.AbstractIterator.tryToComputeNext(AbstractIterator.java:143)
> at com.google.common.collect.AbstractIterator.hasNext(AbstractIterator.java:138)
> at org.apache.cassandra.db.ColumnFamilyStore.filter(ColumnFamilyStore.java:1754)
> at org.apache.cassandra.db.ColumnFamilyStore.getRangeSlice(ColumnFamilyStore.java:1718)
> at org.apache.cassandra.db.RangeSliceCommand.executeLocally(RangeSliceCommand.java:137)
> at org.apache.cassandra.service.StorageProxy$LocalRangeSliceRunnable.runMayThrow(StorageProxy.java:1418)
> at org.apache.cassandra.service.StorageProxy$DroppableRunnable.run(StorageProxy.java:1931)
> ... 3 more
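For context, the guard referenced in the log line ("Scanned over 100000 tombstones ... query aborted (see tombstone_fail_threshold)") can be sketched as a toy model. This is a minimal illustration, not Cassandra's actual implementation; the function and class names below are hypothetical stand-ins for the real logic in org.apache.cassandra.db.filter.SliceQueryFilter.

```python
# Toy model of the tombstone_fail_threshold guard seen in the log above.
# Illustration only; the real check lives in SliceQueryFilter.

TOMBSTONE_FAIL_THRESHOLD = 100_000  # default threshold in the 2.0.x line

class TombstoneOverwhelmingException(Exception):
    pass

def scan_partition(cells, threshold=TOMBSTONE_FAIL_THRESHOLD):
    """Collect live cells, aborting once too many tombstones are scanned.

    `cells` is an iterable of (value, is_tombstone) pairs. Note that in
    Cassandra, a CQL INSERT that binds a column to null writes a tombstone
    for that column, which is one common way inserts alone (with no
    DELETE ever issued) can trip this guard.
    """
    live, tombstones = [], 0
    for value, is_tombstone in cells:
        if is_tombstone:
            tombstones += 1
            if tombstones > threshold:
                raise TombstoneOverwhelmingException(
                    f"Scanned over {threshold} tombstones; query aborted")
        else:
            live.append(value)
    return live

# A few thousand inserts with null-bound columns can accumulate
# tombstones even though no DELETE was ever issued.
cells = [(None, True)] * 150_000 + [("row", False)]
try:
    scan_partition(cells)
except TombstoneOverwhelmingException as e:
    print(e)
```

The point of the sketch is only that the read path counts tombstoned cells alongside live ones, so a query can abort on a table that was never explicitly deleted from.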
--
This message was sent by Atlassian JIRA
(v6.1.5#6160)