Posted to issues@hbase.apache.org by "Vladimir Rodionov (JIRA)" <ji...@apache.org> on 2013/10/25 08:29:31 UTC

[jira] [Commented] (HBASE-9840) Large scans and BlockCache evictions problems

    [ https://issues.apache.org/jira/browse/HBASE-9840?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13805086#comment-13805086 ] 

Vladimir Rodionov commented on HBASE-9840:
------------------------------------------

What is the purpose of running a large scan (larger than the available young-gen cache space) with cacheBlocks enabled?
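For reference, the client can simply opt out of caching for such scans. A minimal sketch against the 2013-era client API; the open HTable ("table") and the row handling are illustrative:

    // Run a large one-off scan without flooding the BlockCache.
    // "table" is an already-open org.apache.hadoop.hbase.client.HTable.
    Scan scan = new Scan();
    scan.setCacheBlocks(false); // blocks read by this scan are not cached
    scan.setCaching(1000);      // rows per RPC; unrelated to the BlockCache
    ResultScanner scanner = table.getScanner(scan);
    try {
      for (Result r : scanner) {
        // process row r
      }
    } finally {
      scanner.close();
    }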

I do have a couple of comments on the LruBlockCache design and implementation, though I did not spend much time analyzing the code:

1. singleFactor = 0.25, multiFactor = 0.5, and memoryFactor = 0.25 are all hard-coded. They must be configurable.
2. The default values are probably not optimal at all. If this is an attempt to mimic an LRU2Q cache, then the optimal split for the first-insert bucket is closer to 0.75 (I think I read that on the Facebook engineering blog).
3. Eviction does not follow LRU2Q at all. In LRU2Q all data is evicted from the tail of a common queue; that is, the single and multi buckets CANNOT be processed separately, only together. In LruBlockCache all three buckets evict their data independently (see the sketch after this list).
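
To illustrate point 3, a toy sketch of 2Q-style eviction in which the victim is always the globally oldest queue head, so the two queues behave as one logical LRU tail. All names are hypothetical; this is not the actual LruBlockCache code:

    import java.util.ArrayDeque;
    import java.util.Deque;

    class TwoQueueEviction {
      static final class Block {
        final String key;
        final long lastAccess;
        Block(String key, long lastAccess) { this.key = key; this.lastAccess = lastAccess; }
      }

      // Oldest block sits at the head of each deque.
      final Deque<Block> singleAccess = new ArrayDeque<Block>();
      final Deque<Block> multiAccess = new ArrayDeque<Block>();

      // Evict 'toFree' blocks, always taking the older of the two heads
      // instead of honoring per-bucket quotas.
      void evict(int toFree) {
        while (toFree-- > 0 && !(singleAccess.isEmpty() && multiAccess.isEmpty())) {
          final Deque<Block> victimQueue;
          if (multiAccess.isEmpty()) {
            victimQueue = singleAccess;
          } else if (singleAccess.isEmpty()) {
            victimQueue = multiAccess;
          } else {
            victimQueue = singleAccess.peekFirst().lastAccess <= multiAccess.peekFirst().lastAccess
                ? singleAccess : multiAccess;
          }
          Block victim = victimQueue.pollFirst();
          // release victim.key from the backing map here
        }
      }
    }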



> Large scans and BlockCache evictions problems
> ---------------------------------------------
>
>                 Key: HBASE-9840
>                 URL: https://issues.apache.org/jira/browse/HBASE-9840
>             Project: HBase
>          Issue Type: Bug
>            Reporter: Lars Hofhansl
>
> I just ran into a scenario that baffled me at first, but after some reflection makes sense. I ran a very large scan that filled up most of the block cache with my scan's data. I ran that scan a few times.
> Then I ran a much smaller scan, and this scan will never get all its blocks cached if it does not fit entirely into the remaining BlockCache, regardless of how often I run it!
> The reason is that the blocks of the first large scan were all promoted. Since the 2nd scan did not fully fit into the cache, its blocks are round-robin evicted as I rerun the scan. Thus those blocks never get accessed more than once before they are evicted again.
> Since promoted blocks are not demoted, the large scan's blocks will never be evicted unless we have another small enough scan/get that can promote its blocks.
> Not sure what the proper solution is, but it seems only an LRU cache that can expire blocks over time would solve this.
> Granted, this is a pretty special case.
> Edit: My usual spelling digressions.
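
A rough sketch of the time-based expiry suggested above: any block untouched for longer than a TTL becomes evictable even if it was promoted. All names and the TTL handling are hypothetical, not an actual HBase API:

    import java.util.Iterator;
    import java.util.LinkedHashMap;

    class ExpiringLruCache<K, V> {
      static final class Entry<V> {
        final V value;
        long lastAccess;
        Entry(V value, long now) { this.value = value; this.lastAccess = now; }
      }

      // Access-ordered map gives plain LRU iteration order.
      private final LinkedHashMap<K, Entry<V>> map =
          new LinkedHashMap<K, Entry<V>>(16, 0.75f, true);
      private final long maxAgeMs;

      ExpiringLruCache(long maxAgeMs) { this.maxAgeMs = maxAgeMs; }

      synchronized V get(K key) {
        Entry<V> e = map.get(key);
        if (e == null) return null;
        e.lastAccess = System.currentTimeMillis();
        return e.value;
      }

      synchronized void put(K key, V value) {
        map.put(key, new Entry<V>(value, System.currentTimeMillis()));
      }

      // Drop anything not touched within maxAgeMs, so a one-off large
      // scan's promoted blocks eventually leave the cache.
      synchronized void expire() {
        long cutoff = System.currentTimeMillis() - maxAgeMs;
        for (Iterator<Entry<V>> it = map.values().iterator(); it.hasNext(); ) {
          if (it.next().lastAccess < cutoff) it.remove();
        }
      }
    }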



--
This message was sent by Atlassian JIRA
(v6.1#6144)