Posted to issues@hbase.apache.org by "Anoop Sam John (JIRA)" <ji...@apache.org> on 2016/10/01 06:53:20 UTC

[jira] [Commented] (HBASE-16738) L1 cache caching shared memory HFile block when blocks promoted from L2 to L1

    [ https://issues.apache.org/jira/browse/HBASE-16738?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15538040#comment-15538040 ] 

Anoop Sam John commented on HBASE-16738:
----------------------------------------

Ya, only a 2.0 issue. In older versions, when we read from the L2 cache, we copy the HFileBlock data from shared memory into a temporary on-heap byte[], so there is no issue there. HBASE-11425 changed this so as to serve cells directly from the off-heap L2 cache area (without any copy).
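
For reference, the pre-HBASE-11425 behaviour amounted to roughly the following (a simplified sketch only; copyFromSharedToOnHeap is an illustrative name, not the actual HFileBlock API):

{code}
import java.nio.ByteBuffer;

public class SharedMemoryCopyExample {
  /**
   * Simplified sketch of the old read path: block bytes living in shared
   * (off-heap) memory are copied into a temporary on-heap byte[] before cells
   * are served, so later eviction of the off-heap area cannot corrupt them.
   */
  static byte[] copyFromSharedToOnHeap(ByteBuffer sharedBlockBuffer) {
    // Duplicate so we do not disturb the position/limit of the shared buffer.
    ByteBuffer dup = sharedBlockBuffer.duplicate();
    byte[] onHeap = new byte[dup.remaining()];
    dup.get(onHeap);
    return onHeap;
  }

  public static void main(String[] args) {
    ByteBuffer offHeap = ByteBuffer.allocateDirect(16);
    for (int i = 0; i < 16; i++) {
      offHeap.put((byte) i);
    }
    offHeap.flip();
    byte[] copy = copyFromSharedToOnHeap(offHeap);
    System.out.println("Copied " + copy.length + " bytes onto the heap");
  }
}
{code}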

> L1 cache caching shared memory HFile block when blocks promoted from L2 to L1
> -----------------------------------------------------------------------------
>
>                 Key: HBASE-16738
>                 URL: https://issues.apache.org/jira/browse/HBASE-16738
>             Project: HBase
>          Issue Type: Sub-task
>          Components: regionserver, Scanners
>    Affects Versions: 2.0.0
>            Reporter: Anoop Sam John
>            Assignee: Anoop Sam John
>             Fix For: 2.0.0
>
>         Attachments: HBASE-16738.patch
>
>
> This is an issue when the L1 and L2 caches are used with combinedMode = false.
> See getBlock:
> {code}
> if (victimHandler != null && !repeat) {
>   Cacheable result = victimHandler.getBlock(cacheKey, caching, repeat, updateCacheMetrics);
>   // Promote this to L1.
>   if (result != null && caching) {
>     cacheBlock(cacheKey, result, /* inMemory = */ false, /* cacheData = */ true);
>   }
>   return result;
> }
> {code}
> When the block is not in L1 but is present in L2, we return the block read from L2 and promote it to L1 by adding it to the LruBlockCache.  But if the block's buffer is backed by shared memory (e.g. an off-heap bucket cache), we cannot cache this block directly: the memory area under the buffer can get cleaned up at any time, so we may get block data corruption.
> In such a case, we need to do a deep copy of the block (including its buffer) and then add that copy to the L1 cache, as sketched below.
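> A minimal sketch of such a promotion-time guard (the usesSharedMemory()/deepCopy() helpers here are illustrative assumptions, not necessarily the exact methods in the attached patch):
> {code}
> if (victimHandler != null && !repeat) {
>   Cacheable result = victimHandler.getBlock(cacheKey, caching, repeat, updateCacheMetrics);
>   // Promote this to L1.
>   if (result != null && caching) {
>     if (result instanceof HFileBlock && ((HFileBlock) result).usesSharedMemory()) {
>       // Deep copy the block (including its buffer) onto the heap so that
>       // eviction of the shared off-heap area cannot corrupt the L1 copy.
>       result = ((HFileBlock) result).deepCopy();
>     }
>     cacheBlock(cacheKey, result, /* inMemory = */ false, /* cacheData = */ true);
>   }
>   return result;
> }
> {code}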


