Posted to dev@lucene.apache.org by "Shawn Heisey (JIRA)" <ji...@apache.org> on 2015/12/23 07:36:46 UTC

[jira] [Commented] (SOLR-8241) Evaluate W-TinyLfu cache

    [ https://issues.apache.org/jira/browse/SOLR-8241?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15069229#comment-15069229 ] 

Shawn Heisey commented on SOLR-8241:
------------------------------------

ARC was a cache type that I had read about when I went looking for something better than LRU.  If I had known the idea was patented, I never would have created an issue for it and would have gone straight to LFU.

If I ever find some time, I will work on SOLR-3393.  I haven't looked at how W-TinyLfu works or whether it would be a good alternative.  I think there are a few things to consider:  how its speed compares to the code I cobbled together on SOLR-3393, how difficult it is to incorporate/debug, and whether any significant library dependencies are added.  It looks like you've used the Apache License, so there are no conflicts there.


> Evaluate W-TinyLfu cache
> ------------------------
>
>                 Key: SOLR-8241
>                 URL: https://issues.apache.org/jira/browse/SOLR-8241
>             Project: Solr
>          Issue Type: Wish
>          Components: search
>            Reporter: Ben Manes
>            Priority: Minor
>
> SOLR-2906 introduced an LFU cache and in-progress SOLR-3393 makes it O(1). The discussions seem to indicate that the higher hit rate (vs LRU) is offset by the slower performance of the implementation. An original goal appeared to be to introduce ARC, a patented algorithm that uses ghost entries to retain history information.
> My analysis of Window TinyLfu indicates that it may be a better option. It uses a frequency sketch to compactly estimate an entry's popularity, and LRU to capture recency while operating in O(1) time. On the available academic traces, the policy provides a near-optimal hit rate regardless of the workload.
> I'm getting ready to release the policy in Caffeine, which Solr already has a dependency on. But the code is fairly straightforward, so a port into Solr's caches instead is a pragmatic alternative. More interesting is what the impact would be on Solr's workloads and feedback on the policy's design.
> https://github.com/ben-manes/caffeine/wiki/Efficiency
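
To make the frequency-sketch idea in the quoted description concrete, here is a rough, hypothetical sketch. It is not code from Caffeine or Solr, the class and method names are invented for illustration, and it is simplified relative to the published TinyLfu design. It shows how a small count-min-style sketch can estimate per-key popularity and decide whether a cache candidate should displace the eviction victim:

import java.util.Random;

public class TinyLfuAdmissionSketch {
  private static final int DEPTH = 4;     // number of independent hash rows
  private final int[][] counts;           // the count-min sketch table
  private final int width;                // counters per row (power of two)
  private final long[] seeds;             // per-row hash seeds

  public TinyLfuAdmissionSketch(int width) {
    // Round down to a power of two so indexing can use a bit mask.
    this.width = Integer.highestOneBit(Math.max(2, width));
    this.counts = new int[DEPTH][this.width];
    this.seeds = new long[DEPTH];
    Random random = new Random(42);
    for (int row = 0; row < DEPTH; row++) {
      seeds[row] = random.nextLong() | 1;  // odd multiplier for mixing
    }
  }

  /** Record one access: bump the key's counter in every row. */
  public void recordAccess(Object key) {
    for (int row = 0; row < DEPTH; row++) {
      counts[row][indexOf(key, row)]++;
    }
  }

  /** Estimated frequency = minimum counter across the rows. */
  public int estimate(Object key) {
    int min = Integer.MAX_VALUE;
    for (int row = 0; row < DEPTH; row++) {
      min = Math.min(min, counts[row][indexOf(key, row)]);
    }
    return min;
  }

  /** Admission test: keep the candidate only if it looks at least as popular as the victim. */
  public boolean admit(Object candidate, Object victim) {
    return estimate(candidate) >= estimate(victim);
  }

  private int indexOf(Object key, int row) {
    long hash = key.hashCode() * seeds[row];
    hash ^= hash >>> 32;                   // fold the high bits down
    return (int) (hash & (width - 1));
  }
}

The published TinyLfu design also describes compact 4-bit counters and a periodic aging step so that stale popularity decays over time, but the admit-only-if-more-frequent-than-the-victim comparison above is the core of the policy.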
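
Since the description notes that Solr already depends on Caffeine, here is a minimal, illustrative sketch of consuming a size-bounded cache through Caffeine's builder API. Whatever eviction policy the library ships (W-TinyLfu included, once released) is applied internally, so calling code like this would not change:

import com.github.benmanes.caffeine.cache.Cache;
import com.github.benmanes.caffeine.cache.Caffeine;

public class CaffeineCacheSketch {
  public static void main(String[] args) {
    // Size-bounded cache; the eviction policy is chosen by the library.
    Cache<String, Object> cache = Caffeine.newBuilder()
        .maximumSize(10_000)
        .build();

    cache.put("q:title:foo", "cached query result");  // populate on a miss
    Object hit = cache.getIfPresent("q:title:foo");    // non-null: cache hit
    Object miss = cache.getIfPresent("q:title:bar");   // null: cache miss
    System.out.println(hit + " / " + miss);
  }
}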


