Posted to commits@cassandra.apache.org by "Jonathan Ellis (JIRA)" <ji...@apache.org> on 2014/08/22 17:12:11 UTC
[jira] [Resolved] (CASSANDRA-7817) when entire row is deleted, the records in the row seem to be counted toward TombstoneOverwhelmingException
[ https://issues.apache.org/jira/browse/CASSANDRA-7817?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]
Jonathan Ellis resolved CASSANDRA-7817.
---------------------------------------
Resolution: Not a Problem
> when entire row is deleted, the records in the row seem to be counted toward TombstoneOverwhelmingException
> --------------------------------------------------------------------------------------------------------
>
> Key: CASSANDRA-7817
> URL: https://issues.apache.org/jira/browse/CASSANDRA-7817
> Project: Cassandra
> Issue Type: Bug
> Environment: Cassandra version 2.0.9
> Reporter: Digant Modha
> Priority: Minor
>
> I saw this behavior in a development cluster, but was able to reproduce it in a single-node setup. In the development cluster I had more than 52,000 records and used the default values for the tombstone thresholds.
> For testing purposes, I used lower thresholds:
> tombstone_warn_threshold: 100
> tombstone_failure_threshold: 1000
> Here are the steps:
> table:
> CREATE TABLE cstestcf_conflate_data (
>     key ascii,
>     datehr int,
>     validfrom timestamp,
>     asof timestamp,
>     copied boolean,
>     datacenter ascii,
>     storename ascii,
>     value blob,
>     version ascii,
>     PRIMARY KEY ((key, datehr), validfrom, asof)
> ) WITH CLUSTERING ORDER BY (validfrom DESC, asof DESC);
> cqlsh:cstestks> select count(*) from cstestcf_conflate_data WHERE KEY='BK_2' and datehr = 2014082119;
> count
> -------
> 470
> (1 rows)
> cqlsh:cstestks> delete from cstestcf_conflate_data WHERE KEY='BK_2' and datehr = 2014082119;
> cqlsh:cstestks> select count(*) from cstestcf_conflate_data WHERE KEY='BK_2' and datehr = 2014082119;
> Request did not complete within rpc_timeout.
> Exception in system.log:
> java.lang.RuntimeException: org.apache.cassandra.db.filter.TombstoneOverwhelmingException
> at org.apache.cassandra.service.StorageProxy$DroppableRunnable.run(StorageProxy.java:1931)
> at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
> at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
> at java.lang.Thread.run(Thread.java:745)
> Caused by: org.apache.cassandra.db.filter.TombstoneOverwhelmingException
> at org.apache.cassandra.db.filter.SliceQueryFilter.collectReducedColumns(SliceQueryFilter.java:202)
> at org.apache.cassandra.db.filter.QueryFilter.collateColumns(QueryFilter.java:122)
> at org.apache.cassandra.db.filter.QueryFilter.collateOnDiskAtom(QueryFilter.java:80)
> at org.apache.cassandra.db.filter.QueryFilter.collateOnDiskAtom(QueryFilter.java:72)
> at org.apache.cassandra.db.CollationController.collectAllData(CollationController.java:297)
> at org.apache.cassandra.db.CollationController.getTopLevelColumns(CollationController.java:53)
> at org.apache.cassandra.db.ColumnFamilyStore.getTopLevelColumns(ColumnFamilyStore.java:1547)
> at org.apache.cassandra.db.ColumnFamilyStore.getColumnFamily(ColumnFamilyStore.java:1376)
> at org.apache.cassandra.db.Keyspace.getRow(Keyspace.java:333)
> at org.apache.cassandra.db.SliceFromReadCommand.getRow(SliceFromReadCommand.java:65)
> at org.apache.cassandra.service.StorageProxy$LocalReadRunnable.runMayThrow(StorageProxy.java:1363)
> at org.apache.cassandra.service.StorageProxy$DroppableRunnable.run(StorageProxy.java:1927)
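For context on the "Not a Problem" resolution: in Cassandra 2.0 a partition-level DELETE writes a single partition tombstone, but a subsequent read still iterates the shadowed cells and counts each one toward tombstone_failure_threshold until compaction purges them. Since each CQL row is stored as several cells (a row marker plus one cell per non-key column), 470 rows can exceed a 1000-cell threshold. A minimal Python sketch of that counting logic, assuming a simplified storage model; the names are illustrative, not Cassandra's actual API:

```python
# Simplified model of Cassandra 2.0's tombstone counting during a read
# (the real logic lives in SliceQueryFilter.collectReducedColumns,
# visible in the stack trace above). All names here are illustrative.

CELLS_PER_ROW = 6  # assumption: row marker + 5 non-key columns per CQL row


class TombstoneOverwhelmingException(Exception):
    pass


def scan_partition(cell_timestamps, partition_deletion_ts, failure_threshold):
    """Return (live, dead) cell counts for one partition read.

    Cells shadowed by the partition-level tombstone are dead, but the
    read still visits each of them and counts it toward the threshold.
    """
    live = dead = 0
    for ts in cell_timestamps:
        if ts <= partition_deletion_ts:
            dead += 1  # shadowed by the partition delete
            if dead > failure_threshold:
                raise TombstoneOverwhelmingException(
                    "scanned %d tombstoned cells" % dead)
        else:
            live += 1
    return live, dead


# 470 CQL rows written at timestamp 1, then the whole partition deleted
# at timestamp 2: every cell is shadowed, so the read aborts well before
# finishing, even though the live row count is zero.
cells = [1] * (470 * CELLS_PER_ROW)
try:
    scan_partition(cells, partition_deletion_ts=2, failure_threshold=1000)
except TombstoneOverwhelmingException as e:
    print("read aborted:", e)
```

Under this model the reporter's observation is expected behavior: the delete succeeds, but the next read of the partition has to skip thousands of dead cells, which is exactly what the threshold is meant to flag.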
--
This message was sent by Atlassian JIRA
(v6.2#6252)