Posted to dev@kafka.apache.org by "Jay Kreps (JIRA)" <ji...@apache.org> on 2013/03/05 00:57:12 UTC

[jira] [Resolved] (KAFKA-741) Improve log cleaning dedupe buffer efficiency

     [ https://issues.apache.org/jira/browse/KAFKA-741?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Jay Kreps resolved KAFKA-741.
-----------------------------

    Resolution: Duplicate

This issue is fixed by the patch for KAFKA-739, which removes duplicates using a probing scheme and counts only unique updates toward the load.
                
> Improve log cleaning dedupe buffer efficiency
> ---------------------------------------------
>
>                 Key: KAFKA-741
>                 URL: https://issues.apache.org/jira/browse/KAFKA-741
>             Project: Kafka
>          Issue Type: Improvement
>            Reporter: Jay Kreps
>            Assignee: Jay Kreps
>             Fix For: 0.8.1
>
>
> Two good suggestions:
> 1. Use a probing scheme to increase density without increasing the collision rate
> 2. Only count unique updates to the offset map (i.e. if the key is all zero, don't count it) when computing the load. Dynamically choose the end offset based on when the map is full.
> Would be good to investigate these things.
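The two suggestions above can be sketched together. The following is a hypothetical illustration only, not Kafka's actual OffsetMap implementation: an open-addressed table that linear-probes past collisions (so slots can be packed more densely without a higher collision rate), and that increments its load count only when a genuinely new key is inserted, so repeated updates to the same key don't prematurely fill the map. The class and method names are invented for this sketch.

```java
// Hypothetical sketch of the two suggestions; not Kafka's real OffsetMap.
class ProbingOffsetMap {
    private final long[] keys;    // 0 marks an empty slot
    private final long[] offsets; // latest offset seen for the key in keys[i]
    private int entries = 0;      // unique keys only; duplicate updates are free

    ProbingOffsetMap(int capacity) {
        keys = new long[capacity];
        offsets = new long[capacity];
    }

    // Records the latest offset for a key hash. Returns true if the key was new.
    // Callers should check isFull() first; probing assumes a free slot exists.
    boolean put(long keyHash, long offset) {
        if (keyHash == 0) keyHash = 1; // reserve 0 as the "empty" marker
        int i = Math.floorMod(Long.hashCode(keyHash), keys.length);
        while (keys[i] != 0 && keys[i] != keyHash)
            i = (i + 1) % keys.length;  // linear probe past collisions
        boolean isNew = keys[i] == 0;
        if (isNew) {                    // only a new key counts toward the load
            keys[i] = keyHash;
            entries++;
        }
        offsets[i] = offset;            // a later update always wins
        return isNew;
    }

    // Returns the latest offset for the key, or -1 if the key is absent.
    long get(long keyHash) {
        if (keyHash == 0) keyHash = 1;
        int i = Math.floorMod(Long.hashCode(keyHash), keys.length);
        while (keys[i] != 0) {
            if (keys[i] == keyHash) return offsets[i];
            i = (i + 1) % keys.length;
        }
        return -1L;
    }

    int uniqueEntries() { return entries; }

    // The cleaner would stop advancing the end offset once the map is full,
    // rather than at a fixed message count.
    boolean isFull() { return entries >= keys.length * 3 / 4; }
}
```

Because only unique keys count toward the load, a segment with many updates to few keys can be deduplicated in a single pass, and the end offset is chosen dynamically by stopping when isFull() returns true.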

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira