Posted to issues@hbase.apache.org by "Viraj Jasani (Jira)" <ji...@apache.org> on 2021/06/21 12:50:00 UTC

[jira] [Work started] (HBASE-26018) Perf improvement in L1 cache

     [ https://issues.apache.org/jira/browse/HBASE-26018?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Work on HBASE-26018 started by Viraj Jasani.
--------------------------------------------
> Perf improvement in L1 cache
> ----------------------------
>
>                 Key: HBASE-26018
>                 URL: https://issues.apache.org/jira/browse/HBASE-26018
>             Project: HBase
>          Issue Type: Improvement
>    Affects Versions: 3.0.0-alpha-1, 2.3.5, 2.4.4
>            Reporter: Viraj Jasani
>            Assignee: Viraj Jasani
>            Priority: Major
>             Fix For: 3.0.0-alpha-1, 2.5.0, 2.3.6, 2.4.5
>
>         Attachments: computeIfPresent.png
>
>
> After HBASE-25698, all L1 caching strategies perform buffer.retain() inside CHM#computeIfPresent in order to maintain the refCount atomically while retrieving cached blocks. Retaining the refCount this way is turning out to be fairly expensive. computeIfPresent takes a coarse-grained lock on the key's bin, so even though our computation is trivial (we just call the block's retain API), it blocks other update operations for that key. computeIfPresent also keeps showing up on flame graphs (one is attached). Especially when we see aggressive cache hits for meta blocks (with the majority of blocks in cache), we want to get away from this coarse-grained locking.
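>
> For illustration, a minimal sketch of the current pattern, assuming a ConcurrentHashMap-backed map; the type names (BlockCacheKey, CachedBlock, L1Cache) are placeholders, not the actual LruBlockCache code:
>
>   import java.util.concurrent.ConcurrentHashMap;
>
>   class BlockCacheKey {}
>
>   class CachedBlock {
>     void retain() { /* illustrative no-op; the real block bumps its refCount */ }
>   }
>
>   class L1Cache {
>     private final ConcurrentHashMap<BlockCacheKey, CachedBlock> map =
>         new ConcurrentHashMap<>();
>
>     // computeIfPresent synchronizes on the key's bin for the duration of the
>     // remapping function, so concurrent updates to that bin must wait even
>     // though the function only calls retain().
>     CachedBlock getBlock(BlockCacheKey key) {
>       return map.computeIfPresent(key, (k, block) -> {
>         block.retain(); // refCount bump happens atomically under the bin lock
>         return block;   // keep the mapping unchanged
>       });
>     }
>   }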
> One of the suggestions that came up while reviewing PR#3215 is to treat the cache read API as an optimistic read: perform a lockless get, and deal with block-retain refCount issues by catching the corresponding exception and treating it as a cache miss. This should allow us to go ahead with the lockless get API.
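>
> Roughly, the optimistic variant would look like the sketch below (same placeholder types as above; it assumes Netty-style reference counting, where calling retain() on an already-released buffer throws IllegalReferenceCountException):
>
>   // (add to the illustrative L1Cache class above; needs
>   // import io.netty.util.IllegalReferenceCountException)
>   CachedBlock getBlockOptimistic(BlockCacheKey key) {
>     CachedBlock block = map.get(key); // lockless read, no bin synchronization
>     if (block == null) {
>       return null; // genuine cache miss
>     }
>     try {
>       block.retain(); // may race with a concurrent release from eviction
>       return block;
>     } catch (IllegalReferenceCountException e) {
>       // refCount already dropped to zero: treat it as a cache miss
>       return null;
>     }
>   }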



--
This message was sent by Atlassian Jira
(v8.3.4#803005)