Posted to jira@kafka.apache.org by "Richard Yu (JIRA)" <ji...@apache.org> on 2019/03/12 03:10:00 UTC

[jira] [Comment Edited] (KAFKA-8020) Consider changing design of ThreadCache

    [ https://issues.apache.org/jira/browse/KAFKA-8020?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16790173#comment-16790173 ] 

Richard Yu edited comment on KAFKA-8020 at 3/12/19 3:09 AM:
------------------------------------------------------------

Oh, about implementing this policy for all caches: I'm not too sure about that. I was only planning on implementing this policy for ThreadCache, since I'm somewhat familiar with this part of Kafka Streams.


was (Author: yohan123):
Oh, about implementing this policy for all caches: I'm not too sure about that. I was only planning on implementing this policy for ThreadCache, since I'm somewhat familiar with this part of Kafka Streams. Other caches would have to wait, I guess, since it's out of the scope of this particular issue. (I think)

> Consider changing design of ThreadCache 
> ----------------------------------------
>
>                 Key: KAFKA-8020
>                 URL: https://issues.apache.org/jira/browse/KAFKA-8020
>             Project: Kafka
>          Issue Type: Improvement
>          Components: streams
>            Reporter: Richard Yu
>            Priority: Major
>
> In distributed systems, a time-aware LRU cache offers a better eviction policy than a traditional LRU model, yielding more cache hits. Under this policy, an item that is stored beyond its useful lifespan is removed. For example, in {{CachingWindowStore}}, a window is usually of limited size. After it expires, it is no longer queried, but it could stay in the ThreadCache for an unnecessarily long time if it is not evicted (i.e. when few new entries are being inserted). For better allocation of memory, it would be better to implement a time-aware LRU cache that takes the lifespan of an entry into account and removes it once it has expired.
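As a rough illustration of the policy being proposed (not the actual ThreadCache code; the class name, TTL parameter, and injectable clock below are hypothetical), a time-aware LRU cache can be sketched by stamping each entry with an expiry time and purging expired entries before every operation, while a plain access-ordered map still provides the LRU fallback when the cache is over capacity:

```java
import java.util.Iterator;
import java.util.LinkedHashMap;
import java.util.function.LongSupplier;

// Hypothetical sketch of a time-aware LRU cache; not the Kafka Streams API.
class TimeAwareLruCache<K, V> {
    private static final class Entry<V> {
        final V value;
        final long expiresAt;  // absolute time (ms) after which the entry is dead
        Entry(V value, long expiresAt) {
            this.value = value;
            this.expiresAt = expiresAt;
        }
    }

    private final int maxEntries;
    private final long ttlMs;
    private final LongSupplier clock;  // injectable clock, e.g. System::currentTimeMillis
    // accessOrder = true makes iteration order least-recently-used first.
    private final LinkedHashMap<K, Entry<V>> map =
        new LinkedHashMap<>(16, 0.75f, true);

    TimeAwareLruCache(int maxEntries, long ttlMs, LongSupplier clock) {
        this.maxEntries = maxEntries;
        this.ttlMs = ttlMs;
        this.clock = clock;
    }

    public void put(K key, V value) {
        expire();
        map.put(key, new Entry<>(value, clock.getAsLong() + ttlMs));
        // If still over capacity after expiry, fall back to plain LRU eviction.
        Iterator<K> it = map.keySet().iterator();
        while (map.size() > maxEntries && it.hasNext()) {
            it.next();
            it.remove();
        }
    }

    public V get(K key) {
        expire();
        Entry<V> e = map.get(key);
        return e == null ? null : e.value;
    }

    public int size() {
        expire();
        return map.size();
    }

    // Remove entries whose lifespan has elapsed, regardless of recency.
    private void expire() {
        long now = clock.getAsLong();
        map.entrySet().removeIf(e -> e.getValue().expiresAt <= now);
    }
}
```

With this shape, an expired window entry is reclaimed on the next cache operation even if nothing else pushes it out of the LRU order, which is exactly the gap described above.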



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)