Posted to commits@cassandra.apache.org by "Jonathan Ellis (JIRA)" <ji...@apache.org> on 2016/05/18 16:39:13 UTC
[jira] [Updated] (CASSANDRA-11834) Don't compute expensive MaxPurgeableTimestamp until we've verified there's an expired tombstone
[ https://issues.apache.org/jira/browse/CASSANDRA-11834?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]
Jonathan Ellis updated CASSANDRA-11834:
---------------------------------------
Attachment: 11834.txt
Trivial patch attached.
> Don't compute expensive MaxPurgeableTimestamp until we've verified there's an expired tombstone
> -----------------------------------------------------------------------------------------------
>
> Key: CASSANDRA-11834
> URL: https://issues.apache.org/jira/browse/CASSANDRA-11834
> Project: Cassandra
> Issue Type: Bug
> Components: Compaction
> Reporter: Jonathan Ellis
> Fix For: 2.1.15
>
> Attachments: 11834.txt
>
>
> In LCR's getReduced, we currently do this:
> {code}
> if (t.timestamp() < getMaxPurgeableTimestamp() && t.data.isGcAble(controller.gcBefore))
> {code}
> We should call the expensive getMaxPurgeableTimestamp only after we have called the cheap isGcAble.
> Marking this as a bug since it can cause pathological performance problems (CASSANDRA-11831).
> I have verified that this is not a problem in 3.0 (CompactionIterator performs the checks in the correct order).
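The fix relies on Java's short-circuit evaluation of {{&&}}: the right-hand operand is never evaluated when the left-hand one is false. The sketch below is not the actual Cassandra patch; it uses hypothetical stand-ins for getMaxPurgeableTimestamp and isGcAble, with a counter to show that putting the cheap check first skips the expensive call for non-expired tombstones.

```java
// Hypothetical sketch (not the real patch): demonstrates why operand
// order matters in a short-circuit && when one operand is expensive.
public class PurgeCheckOrder {
    static int expensiveCalls = 0;

    // stand-in for the expensive getMaxPurgeableTimestamp()
    static long getMaxPurgeableTimestamp() {
        expensiveCalls++;
        return 100L;
    }

    // stand-in for the cheap isGcAble(controller.gcBefore) check
    static boolean isGcAble(boolean expired) {
        return expired;
    }

    static boolean shouldPurge(long timestamp, boolean expired) {
        // cheap check first: the expensive call only runs when the
        // tombstone is actually gc-able
        return isGcAble(expired) && timestamp < getMaxPurgeableTimestamp();
    }

    public static void main(String[] args) {
        shouldPurge(50L, false);  // not gc-able: expensive call skipped
        System.out.println("expensive calls so far: " + expensiveCalls);
        shouldPurge(50L, true);   // gc-able: expensive call is needed
        System.out.println("expensive calls so far: " + expensiveCalls);
    }
}
```

With the original order reversed (expensive call first), every tombstone would pay for getMaxPurgeableTimestamp even when isGcAble would have rejected it immediately, which is the pathological case this patch avoids.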
--
This message was sent by Atlassian JIRA
(v6.3.4#6332)