Posted to dev@lucene.apache.org by "Yonik Seeley (JIRA)" <ji...@apache.org> on 2017/03/02 04:17:45 UTC

[jira] [Commented] (SOLR-10205) Evaluate and reduce BlockCache store failures

    [ https://issues.apache.org/jira/browse/SOLR-10205?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15891600#comment-15891600 ] 

Yonik Seeley commented on SOLR-10205:
-------------------------------------

I also plan to try using a higher number of reserved blocks (say 2 or 3) instead of the current 1.  This helps because if two threads try to cache blocks at the same time, one grabs the single reserved block first, and the other then has to wait until the first thread's insertion causes an older entry to be evicted from the map.
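To make the idea concrete, here is a minimal sketch (not Solr's actual BlockCache code; the class and method names ReservedBlocks, tryAcquire, and release are hypothetical) of keeping a small pool of reserved block ids rather than a single one, so that two concurrent store() calls can each grab a block without one of them failing:

```java
import java.util.concurrent.ConcurrentLinkedQueue;

// Hypothetical sketch of a multi-block reserve. With reserved >= 2,
// two concurrent callers can each obtain a block; with the current
// single reserved block, one of them would come up empty.
class ReservedBlocks {
    private final ConcurrentLinkedQueue<Integer> free = new ConcurrentLinkedQueue<>();

    ReservedBlocks(int reserved) {
        // Reserve several block ids up front (e.g. 2 or 3).
        for (int i = 0; i < reserved; i++) {
            free.add(i);
        }
    }

    // Returns a free block id, or null if none is available right now
    // (the situation in which a store() call would fail).
    Integer tryAcquire() {
        return free.poll();
    }

    // Called when the map evicts an entry: the block returns to the pool.
    void release(int blockId) {
        free.add(blockId);
    }
}
```

The pool is only replenished on eviction, matching the existing design in which a block is freed solely when the underlying map evicts its entry.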

> Evaluate and reduce BlockCache store failures
> ---------------------------------------------
>
>                 Key: SOLR-10205
>                 URL: https://issues.apache.org/jira/browse/SOLR-10205
>             Project: Solr
>          Issue Type: Improvement
>      Security Level: Public(Default Security Level. Issues are Public) 
>            Reporter: Yonik Seeley
>            Assignee: Yonik Seeley
>         Attachments: SOLR-10205.patch
>
>
> The BlockCache is written such that requests to cache a block (a BlockCache.store call) can fail, making caching less effective.  We should evaluate the impact of these storage failures and potentially reduce their number.
> The implementation reserves a single block of memory.  In store, a block of memory is allocated, and then a pointer is inserted into the underlying map.  A block is only freed when the underlying map evicts the map entry.
> This means that when two store() operations are called concurrently (even under low load), one can fail.  This is made worse by the fact that concurrent maps typically amortize the cost of eviction over many keys (i.e. the actual size of the map can grow beyond the configured maximum number of entries... both the older ConcurrentLinkedHashMap and the newer Caffeine do this).  When that happens, store() won't be able to find a free block of memory, even when no other stores are running concurrently.
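The amortized-eviction failure mode described above can be illustrated with a toy model (this is not Solr's implementation; the class AmortizedCache and its slack parameter are invented for illustration). Because a block is only freed on eviction, a map that is allowed to drift over its configured maximum leaves no free blocks, so a store fails even with a single caller:

```java
import java.util.ArrayDeque;
import java.util.BitSet;
import java.util.HashMap;
import java.util.Map;

// Toy single-threaded model of a block cache whose map evicts lazily:
// it only trims once it is 'slack' entries over the configured maximum.
// While it is over capacity, every block is in use and store() fails.
class AmortizedCache {
    private final int maxEntries;
    private final int slack;                                 // allowed overshoot
    private final BitSet free;                               // free block ids
    private final Map<String, Integer> map = new HashMap<>(); // key -> block id
    private final ArrayDeque<String> order = new ArrayDeque<>(); // insertion order

    AmortizedCache(int maxEntries, int reservedBlocks, int slack) {
        this.maxEntries = maxEntries;
        this.slack = slack;
        free = new BitSet(maxEntries + reservedBlocks);
        free.set(0, maxEntries + reservedBlocks);
    }

    boolean store(String key) {
        int block = free.nextSetBit(0);
        if (block < 0) {
            return false;            // no free block: the store failure
        }
        free.clear(block);
        map.put(key, block);
        order.add(key);
        // Amortized eviction: nothing is freed until the overshoot is reached.
        if (map.size() > maxEntries + slack) {
            while (map.size() > maxEntries) {
                free.set(map.remove(order.poll())); // block freed only on eviction
            }
        }
        return true;
    }
}
```

With maxEntries = 4, one reserved block, and a slack of 2, the first five stores consume all five blocks without triggering any eviction, so the sixth store fails despite there being no concurrency at all.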



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

---------------------------------------------------------------------
To unsubscribe, e-mail: dev-unsubscribe@lucene.apache.org
For additional commands, e-mail: dev-help@lucene.apache.org