Posted to commits@cassandra.apache.org by "Jonathan Ellis (JIRA)" <ji...@apache.org> on 2014/01/22 08:03:19 UTC

[jira] [Comment Edited] (CASSANDRA-6609) Reduce Bloom Filter Garbage Allocation

    [ https://issues.apache.org/jira/browse/CASSANDRA-6609?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13878341#comment-13878341 ] 

Jonathan Ellis edited comment on CASSANDRA-6609 at 1/22/14 7:01 AM:
--------------------------------------------------------------------

bq. patch that reduces garbage by a factor of 6

Is that total garbage on the read path?

If so, that sounds like a reasonable trade to me even if it doesn't optimize further. The patch looks fine to me.


was (Author: jbellis):
bq. patch that reduces garbage by a factor of 6

Is that total garbage on the read path?

> Reduce Bloom Filter Garbage Allocation
> --------------------------------------
>
>                 Key: CASSANDRA-6609
>                 URL: https://issues.apache.org/jira/browse/CASSANDRA-6609
>             Project: Cassandra
>          Issue Type: Improvement
>            Reporter: Benedict
>         Attachments: tmp.diff
>
>
> Just spotted that we allocate potentially large amounts of garbage on bloom filter lookups: we allocate a new long[] for each hash() call, and another to store the bucket indexes we visit, in a way that guarantees they land on the heap. With a lot of sstables and many requests, this can easily add up to hundreds of megabytes of young-gen churn per second.
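
Below is a minimal, self-contained sketch of the allocation pattern described above and one common way to avoid it. It is not the attached tmp.diff, and the class and helper names (SketchFilter, hashInto, etc.) are hypothetical rather than taken from the Cassandra code base; the point is only that the first lookup path allocates two long[] per call, while the second reuses a per-thread scratch buffer so the hot path allocates nothing.

{code:java}
import java.util.BitSet;

public class SketchFilter
{
    private final BitSet bits = new BitSet(1 << 20);
    private final int hashCount = 5;

    // Allocation-heavy style: every lookup creates two long[] on the heap,
    // which is the young-gen churn the ticket describes.
    public boolean isPresentAllocating(byte[] key)
    {
        long[] hash = hash(key);               // new long[2] per call
        long[] indexes = new long[hashCount];  // new long[hashCount] per call
        for (int i = 0; i < hashCount; i++)
            indexes[i] = Math.abs((hash[0] + i * hash[1]) % bits.size());
        for (long index : indexes)
            if (!bits.get((int) index))
                return false;
        return true;
    }

    // Reuse style: a per-thread scratch buffer removes the per-lookup allocations.
    private static final ThreadLocal<long[]> SCRATCH =
            ThreadLocal.withInitial(() -> new long[2]);

    public boolean isPresentReusing(byte[] key)
    {
        long[] hash = SCRATCH.get();
        hashInto(key, hash);                   // fills the reused buffer in place
        for (int i = 0; i < hashCount; i++)
        {
            long index = Math.abs((hash[0] + i * hash[1]) % bits.size());
            if (!bits.get((int) index))
                return false;
        }
        return true;
    }

    // Stand-in hash functions for the sketch; a real filter would use MurmurHash3.
    private static long[] hash(byte[] key)
    {
        long[] out = new long[2];
        hashInto(key, out);
        return out;
    }

    private static void hashInto(byte[] key, long[] out)
    {
        long h = 1125899906842597L;
        for (byte b : key)
            h = 31 * h + b;
        out[0] = h;
        out[1] = Long.rotateLeft(h, 32) ^ 0x9E3779B97F4A7C15L;
    }
}
{code}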



--
This message was sent by Atlassian JIRA
(v6.1.5#6160)