Posted to commits@cassandra.apache.org by "Benedict (JIRA)" <ji...@apache.org> on 2015/03/07 13:25:38 UTC

[jira] [Commented] (CASSANDRA-8413) Bloom filter false positive ratio is not honoured

    [ https://issues.apache.org/jira/browse/CASSANDRA-8413?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14351543#comment-14351543 ] 

Benedict commented on CASSANDRA-8413:
-------------------------------------

It occurs to me that this may be more significant still for LCS, since we explicitly narrow the space over which we operate. Obviously it's a bounded problem for LCS, but still.... I think we should simply regularize the bits over the known min/max of the sstable we're writing.
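
For illustration, one way to read "regularize the bits over the known min/max" is to linearly rescale each hash from the sstable's [min, max] token bounds back onto the full 64-bit space before deriving bloom filter bit positions, so the high bits vary again. A minimal sketch of that idea follows; the class and method names are hypothetical and not part of any patch:

{code:java}
import java.math.BigInteger;

// Hypothetical sketch: stretch hashes that are confined to the sstable's
// [minToken, maxToken] range back over the full range of a signed long,
// so the bloom filter sees well-distributed high bits again.
public final class TokenRegularizer
{
    private static final BigInteger FULL_RANGE = BigInteger.ONE.shiftLeft(64); // 2^64 possible longs

    private final BigInteger min;
    private final BigInteger width;

    public TokenRegularizer(long minToken, long maxToken)
    {
        this.min = BigInteger.valueOf(minToken);
        this.width = BigInteger.valueOf(maxToken).subtract(this.min).add(BigInteger.ONE);
    }

    /** Map a hash known to lie in [minToken, maxToken] onto the full long range. */
    public long regularize(long hash)
    {
        BigInteger offset = BigInteger.valueOf(hash).subtract(min);       // 0 .. width-1
        BigInteger spread = offset.multiply(FULL_RANGE).divide(width);    // 0 .. 2^64-1
        return spread.add(BigInteger.valueOf(Long.MIN_VALUE)).longValueExact();
    }
}
{code}

The min/max here would come from the token bounds of the sstable being written; exact integer arithmetic is used only to keep the sketch obviously lossless.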

> Bloom filter false positive ratio is not honoured
> -------------------------------------------------
>
>                 Key: CASSANDRA-8413
>                 URL: https://issues.apache.org/jira/browse/CASSANDRA-8413
>             Project: Cassandra
>          Issue Type: Bug
>          Components: Core
>            Reporter: Benedict
>            Assignee: Aleksey Yeschenko
>             Fix For: 2.1.4
>
>         Attachments: 8413.hack.txt
>
>
> Whilst thinking about CASSANDRA-7438 and hash bits, I realised we are effectively sabotaging our bloom filters when using the murmur3 partitioner. I have performed a very quick test to confirm this risk is real.
> Since a typical cluster uses the same murmur3 hash for partitioning as for bloom filter lookups, and each node owns a contiguous token range, we can guarantee that the top X bits collide for all keys on the node. This translates into poor bloom filter bit distribution. I quickly hacked LongBloomFilterTest to simulate the problem, and the result in these tests is _up to_ a doubling of the actual false positive ratio. The actual change will depend on the key distribution, the number of keys, the false positive ratio, the number of nodes, the token distribution, etc., but it seems to be a real problem for non-vnode clusters of at least ~128 nodes in size.
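
For context on what "the actual false positive ratio" is measured against: a bloom filter with m bits, n keys and k hash functions is designed for roughly p = (1 - e^(-k*n/m))^k false positives. A small sketch with purely illustrative numbers (the ticket does not give the test's parameters) shows the baseline that "up to a doubling" refers to:

{code:java}
// Illustrative only: the textbook expected false positive ratio of a bloom
// filter, i.e. the figure the ticket title says is not honoured in practice.
public class BloomFpEstimate
{
    public static void main(String[] args)
    {
        long n = 1_000_000;        // keys in the sstable (made-up number)
        long m = 10 * n;           // ~10 bits per key
        int k = 7;                 // hash functions commonly paired with 10 bits/key
        double p = Math.pow(1 - Math.exp(-(double) k * n / m), k);
        System.out.printf("designed fp ratio ~ %.4f; a doubling would be ~ %.4f%n", p, 2 * p);
    }
}
{code}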


