Posted to commits@cassandra.apache.org by "Ramzi Rabah (JIRA)" <ji...@apache.org> on 2009/10/21 21:30:59 UTC

[jira] Created: (CASSANDRA-507) Tombstone records in Cassandra are not being deleted

Tombstone records in Cassandra are not being deleted
----------------------------------------------------

                 Key: CASSANDRA-507
                 URL: https://issues.apache.org/jira/browse/CASSANDRA-507
             Project: Cassandra
          Issue Type: Bug
          Components: Core
    Affects Versions: 0.4
            Reporter: Ramzi Rabah


I am running into problems with get_key_range.
My command is client.get_key_range("Keyspace1", "DatastoreDeletionSchedule", "", "", 25, ConsistencyLevel.ONE);

After a lot of deletes on the datastore, I am getting 

ERROR [pool-1-thread-36] 2009-10-19 17:24:28,223 Cassandra.java (line 770) Internal error processing get_key_range
java.lang.RuntimeException: java.util.concurrent.TimeoutException: Operation timed out.
       at org.apache.cassandra.service.StorageProxy.getKeyRange(StorageProxy.java:560)
       at org.apache.cassandra.service.CassandraServer.get_key_range(CassandraServer.java:595)
       at org.apache.cassandra.service.Cassandra$Processor$get_key_range.process(Cassandra.java:766)
       at org.apache.cassandra.service.Cassandra$Processor.process(Cassandra.java:609)
       at org.apache.thrift.server.TThreadPoolServer$WorkerProcess.run(TThreadPoolServer.java:253)
       at java.util.concurrent.ThreadPoolExecutor$Worker.runTask(ThreadPoolExecutor.java:885)
       at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:907)
       at java.lang.Thread.run(Thread.java:619)
Caused by: java.util.concurrent.TimeoutException: Operation timed out.
       at org.apache.cassandra.net.AsyncResult.get(AsyncResult.java:97)
       at org.apache.cassandra.service.StorageProxy.getKeyRange(StorageProxy.java:556)
       ... 7 more

It turns out that the compaction code is what removes tombstones, and it only runs
once there are enough sstable fragments. As an optimization, when there is only one
version of a row, compaction copies it to the new sstable as-is. That path never
cleans out the row's tombstones, which is what causes this problem.
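
The shortcut can be pictured with a toy model (illustrative only; the real logic
lives in Cassandra's compaction code and is structured differently; a null column
value stands in for a tombstone here):

    import java.util.HashMap;
    import java.util.List;
    import java.util.Map;

    // Toy model of the pre-fix compaction path described above.
    public class BuggyCompactionSketch
    {
        static Map<String, String> compactRow(List<Map<String, String>> versions)
        {
            if (versions.size() == 1)
            {
                // Optimization: only one sstable holds this row, so copy it verbatim.
                // Bug: any tombstones it contains are copied too and never GC'd.
                return versions.get(0);
            }

            // Normal path: merge all versions, then drop tombstoned columns.
            Map<String, String> merged = new HashMap<>();
            for (Map<String, String> version : versions)
                merged.putAll(version);
            merged.values().removeIf(value -> value == null);  // removeDeleted
            return merged;
        }
    }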




-- 
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.


[jira] Resolved: (CASSANDRA-507) Tombstone records in Cassandra are not being deleted

Posted by "Jonathan Ellis (JIRA)" <ji...@apache.org>.
     [ https://issues.apache.org/jira/browse/CASSANDRA-507?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Jonathan Ellis resolved CASSANDRA-507.
--------------------------------------

       Resolution: Fixed
    Fix Version/s: 0.5
         Assignee: Jonathan Ellis

committed with test



[jira] Commented: (CASSANDRA-507) Tombstone records in Cassandra are not being deleted

Posted by "Hudson (JIRA)" <ji...@apache.org>.
    [ https://issues.apache.org/jira/browse/CASSANDRA-507?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=12769205#action_12769205 ] 

Hudson commented on CASSANDRA-507:
----------------------------------

Integrated in Cassandra #236 (See [http://hudson.zones.apache.org/hudson/job/Cassandra/236/])
    all rows go through deserialize/removeDeleted so we can GC tombstones.
patch by jbellis; reviewed by junrao for CASSANDRA-507
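
In the same toy terms as the sketch in the original report, the fix amounts to
sending every row, even a single-version one, through the merge/removeDeleted
path (again illustrative, not the committed patch):

    import java.util.ArrayList;
    import java.util.HashMap;
    import java.util.List;
    import java.util.Map;

    // Toy model of the post-fix behaviour: all rows are deserialized and run
    // through a removeDeleted step before being written to the new sstable.
    public class FixedCompactionSketch
    {
        static Map<String, String> compactRow(List<Map<String, String>> versions)
        {
            Map<String, String> merged = new HashMap<>();
            for (Map<String, String> version : versions)
                merged.putAll(version);
            merged.values().removeIf(value -> value == null);  // GC tombstones
            return merged;
        }

        public static void main(String[] args)
        {
            Map<String, String> onlyVersion = new HashMap<>();
            onlyVersion.put("col1", "value1");
            onlyVersion.put("col2", null);  // tombstone

            List<Map<String, String>> versions = new ArrayList<>();
            versions.add(onlyVersion);

            // Even with a single version the tombstone is dropped: prints {col1=value1}
            System.out.println(compactRow(versions));
        }
    }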




[jira] Commented: (CASSANDRA-507) Tombstone records in Cassandra are not being deleted

Posted by "Jun Rao (JIRA)" <ji...@apache.org>.
    [ https://issues.apache.org/jira/browse/CASSANDRA-507?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=12768552#action_12768552 ] 

Jun Rao commented on CASSANDRA-507:
-----------------------------------

Patch looks good to me. Can you add a test case for this?



[jira] Updated: (CASSANDRA-507) Tombstone records in Cassandra are not being deleted

Posted by "Jonathan Ellis (JIRA)" <ji...@apache.org>.
     [ https://issues.apache.org/jira/browse/CASSANDRA-507?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Jonathan Ellis updated CASSANDRA-507:
-------------------------------------

    Attachment: 507.patch

This patch fixes the bug in trunk.

Unfortunately, the risk/benefit of backporting this to the 0.4 branch is past my comfort threshold.
