Posted to dev@kafka.apache.org by "Guozhang Wang (JIRA)" <ji...@apache.org> on 2016/09/23 22:56:20 UTC

[jira] [Updated] (KAFKA-4168) More precise accounting of memory usage

     [ https://issues.apache.org/jira/browse/KAFKA-4168?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Guozhang Wang updated KAFKA-4168:
---------------------------------
    Fix Version/s:     (was: 0.10.1.0)
                   0.10.2.0

> More precise accounting of memory usage
> ---------------------------------------
>
>                 Key: KAFKA-4168
>                 URL: https://issues.apache.org/jira/browse/KAFKA-4168
>             Project: Kafka
>          Issue Type: Sub-task
>          Components: streams
>    Affects Versions: 0.10.1.0
>            Reporter: Eno Thereska
>             Fix For: 0.10.2.0
>
>
> Right now, the cache.max.bytes.buffering parameter controls the size of the cache used. Specifically, the accounted size includes the sizes of the values stored in the cache plus basic overheads such as the key sizes and the LRU entry sizes. However, we could be more fine-grained in the memory accounting and add up the sizes of the hash sets, hash maps and their entries more precisely. For example, a dirty entry is currently placed into a dirty-keys set, but the size of that set is not included in the memory consumption calculation.
> It is likely this falls under "memory management" rather than "buffer cache management".
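(Editor's note: for illustration only, below is a minimal sketch of what finer-grained accounting could look like, charging the LRU entry and dirty-key-set overhead alongside key and value bytes. This is not Kafka's actual implementation; the class, method names, and overhead constants are hypothetical and JVM-dependent.)

import java.nio.charset.StandardCharsets;
import java.util.HashSet;
import java.util.LinkedHashMap;
import java.util.Map;
import java.util.Set;

/**
 * Hypothetical sketch of finer-grained cache accounting: besides the key and
 * value bytes, an estimated per-entry overhead is charged for the LRU map and
 * for membership in the dirty-key set. Names and constants are illustrative.
 */
public class SizedCacheSketch {

    // Rough, JVM-dependent estimates of per-entry bookkeeping overhead.
    private static final long LRU_ENTRY_OVERHEAD_BYTES = 48L;       // LinkedHashMap.Entry fields
    private static final long DIRTY_SET_ENTRY_OVERHEAD_BYTES = 32L; // HashSet node per key

    private final Map<String, byte[]> lru = new LinkedHashMap<>(16, 0.75f, true);
    private final Set<String> dirtyKeys = new HashSet<>();
    private long currentSizeBytes = 0L;

    public void put(String key, byte[] value, boolean dirty) {
        byte[] previous = lru.put(key, value);
        if (previous != null) {
            currentSizeBytes -= previous.length;  // value replaced in place
        } else {
            currentSizeBytes += keyBytes(key) + LRU_ENTRY_OVERHEAD_BYTES;
        }
        currentSizeBytes += value.length;

        if (dirty && dirtyKeys.add(key)) {
            // This is the point of KAFKA-4168: the dirty-key set also costs memory.
            currentSizeBytes += DIRTY_SET_ENTRY_OVERHEAD_BYTES;
        }
    }

    public void markClean(String key) {
        if (dirtyKeys.remove(key)) {
            currentSizeBytes -= DIRTY_SET_ENTRY_OVERHEAD_BYTES;
        }
    }

    public long sizeBytes() {
        return currentSizeBytes;
    }

    private static long keyBytes(String key) {
        return key.getBytes(StandardCharsets.UTF_8).length;
    }
}

In a scheme like this, sizeBytes() would be the quantity compared against cache.max.bytes.buffering when deciding whether to evict or flush.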



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)