Posted to dev@hbase.apache.org by "Jonathan Ellis (JIRA)" <ji...@apache.org> on 2009/01/19 22:49:00 UTC

[jira] Commented: (HBASE-1136) HashFunction inadvertently destroys some randomness

    [ https://issues.apache.org/jira/browse/HBASE-1136?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=12665245#action_12665245 ] 

Jonathan Ellis commented on HBASE-1136:
---------------------------------------

done: https://issues.apache.org/jira/browse/HADOOP-5079

> HashFunction inadvertently destroys some randomness
> ---------------------------------------------------
>
>                 Key: HBASE-1136
>                 URL: https://issues.apache.org/jira/browse/HBASE-1136
>             Project: Hadoop HBase
>          Issue Type: Bug
>            Reporter: Jonathan Ellis
>             Fix For: 0.20.0
>
>         Attachments: hash.patch
>
>
> the code
>       for (int i = 0, initval = 0; i < nbHash; i++) {
>         initval = result[i] = Math.abs(hashFunction.hash(b, initval) % maxValue);
>       }
> restricts initval for the next hash to the [0, maxValue) range of the returned indexes, discarding most of the entropy in each hash value. This is suboptimal, particularly for larger nbHash and smaller maxValue. Instead, use:
>       for (int i = 0, initval = 0; i < nbHash; i++) {
>         initval = hashFunction.hash(b, initval);
>         result[i] = Math.abs(initval) % maxValue;
>       }

-- 
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.