Posted to commits@cassandra.apache.org by "Marcus Eriksson (JIRA)" <ji...@apache.org> on 2016/05/18 17:13:13 UTC

[jira] [Commented] (CASSANDRA-11834) Don't compute expensive MaxPurgeableTimestamp until we've verified there's an expired tombstone

    [ https://issues.apache.org/jira/browse/CASSANDRA-11834?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15289353#comment-15289353 ] 

Marcus Eriksson commented on CASSANDRA-11834:
---------------------------------------------

+1

> Don't compute expensive MaxPurgeableTimestamp until we've verified there's an expired tombstone
> -----------------------------------------------------------------------------------------------
>
>                 Key: CASSANDRA-11834
>                 URL: https://issues.apache.org/jira/browse/CASSANDRA-11834
>             Project: Cassandra
>          Issue Type: Bug
>          Components: Compaction
>            Reporter: Jonathan Ellis
>            Assignee: Jonathan Ellis
>            Priority: Minor
>             Fix For: 2.1.15
>
>         Attachments: 11834.txt
>
>
> In LCR's getReduced, we currently do this:
> {code}
>                 if (t.timestamp() < getMaxPurgeableTimestamp() && t.data.isGcAble(controller.gcBefore))
> {code}
> We should call the expensive getMaxPurgeableTimestamp only after we have called the cheap isGcAble.
> Marking this as a bug since it can cause pathological performance problems (CASSANDRA-11831).
> I have verified that this is not a problem in 3.0 (CompactionIterator does the checks in the correct order).
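
The reordering in the issue relies on `&&` short-circuit evaluation: putting the cheap expiry check first means the expensive timestamp lookup is skipped for rows with no expired tombstone. A minimal, hypothetical Java sketch of that idea follows; the stand-in methods and the call counter are illustrative only, not Cassandra's actual compaction APIs.

```java
// Hypothetical sketch of the fix in CASSANDRA-11834: evaluate the cheap
// tombstone-expiry check before the expensive purge-timestamp lookup.
// The method bodies are stand-ins; only the ordering is the point.
public class PurgeOrderSketch {
    static int expensiveCalls = 0;

    // Stand-in for the expensive per-row getMaxPurgeableTimestamp()
    static long getMaxPurgeableTimestamp() {
        expensiveCalls++;
        return 100L;
    }

    // Stand-in for the cheap isGcAble(gcBefore) check: the tombstone is
    // purgeable once its local deletion time falls before the GC grace cutoff.
    static boolean isGcAble(int localDeletionTime, int gcBefore) {
        return localDeletionTime < gcBefore;
    }

    static boolean shouldPurge(long timestamp, int localDeletionTime, int gcBefore) {
        // Cheap check first: && short-circuits, so the expensive call
        // only happens for tombstones that are actually gcable.
        return isGcAble(localDeletionTime, gcBefore)
                && timestamp < getMaxPurgeableTimestamp();
    }

    public static void main(String[] args) {
        // Live tombstone: expensive call is skipped entirely
        System.out.println(shouldPurge(50L, 200, 100) + " " + expensiveCalls); // false 0
        // Expired tombstone: expensive call happens exactly once
        System.out.println(shouldPurge(50L, 10, 100) + " " + expensiveCalls);  // true 1
    }
}
```

With the original ordering (`timestamp < getMaxPurgeableTimestamp() && isGcAble(...)`), the expensive lookup would run for every cell, which is the pathological case the issue describes.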



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)