Posted to issues@hbase.apache.org by "chenxu (Jira)" <ji...@apache.org> on 2019/12/06 03:32:00 UTC
[jira] [Commented] (HBASE-23374) ExclusiveMemHFileBlock’s allocator should not be hardcoded as ByteBuffAllocator.HEAP
[ https://issues.apache.org/jira/browse/HBASE-23374?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16989364#comment-16989364 ]
chenxu commented on HBASE-23374:
--------------------------------
FYI [~anoopjohnson], [~ramkrishna.s.vasudevan@gmail.com], [~openinx], this avoids allocating from the heap when doing HFileBlock#cloneOnDiskBufferWithHeader or HFileBlock#cloneUncompressedBufferWithHeader, which gives a small improvement.
> ExclusiveMemHFileBlock’s allocator should not be hardcoded as ByteBuffAllocator.HEAP
> ------------------------------------------------------------------------------------
>
> Key: HBASE-23374
> URL: https://issues.apache.org/jira/browse/HBASE-23374
> Project: HBase
> Issue Type: Improvement
> Reporter: chenxu
> Assignee: chenxu
> Priority: Minor
>
> ExclusiveMemHFileBlock's constructor looks like this:
> {code:java}
> ExclusiveMemHFileBlock(BlockType blockType, int onDiskSizeWithoutHeader,
>     int uncompressedSizeWithoutHeader, long prevBlockOffset, ByteBuff buf, boolean fillHeader,
>     long offset, int nextBlockOnDiskSize, int onDiskDataSizeWithHeader,
>     HFileContext fileContext) {
>   super(blockType, onDiskSizeWithoutHeader, uncompressedSizeWithoutHeader, prevBlockOffset, buf,
>       fillHeader, offset, nextBlockOnDiskSize, onDiskDataSizeWithHeader, fileContext,
>       ByteBuffAllocator.HEAP);
> }
> {code}
> After HBASE-22802, ExclusiveMemHFileBlock's data may be allocated through the ByteBuffer pool, so its allocator should not be hardcoded as ByteBuffAllocator.HEAP.
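
The shape of the fix can be sketched as follows. This is a hypothetical, self-contained simplification, not the actual HBase code: the class and field names mirror the HBase ones, but the real classes carry many more parameters. The idea is simply to thread the allocator through the subclass constructor instead of hardcoding ByteBuffAllocator.HEAP in the super() call:

```java
// Simplified sketch (assumed names, not real HBase classes): thread the
// allocator through the constructor instead of hardcoding HEAP.
public class AllocatorSketch {

    /** Minimal stand-in for org.apache.hadoop.hbase.io.ByteBuffAllocator. */
    interface ByteBuffAllocator {
        // A shared on-heap allocator, analogous to ByteBuffAllocator.HEAP.
        ByteBuffAllocator HEAP = () -> "heap";
        String name();
    }

    /** Stand-in for HFileBlock: remembers which allocator backs its buffer. */
    static class HFileBlock {
        final ByteBuffAllocator allocator;
        HFileBlock(ByteBuffAllocator allocator) {
            this.allocator = allocator;
        }
    }

    /** Before the fix: the allocator is hardcoded to HEAP, so a block whose
     *  data came from the pool still reports the heap allocator. */
    static class ExclusiveMemHFileBlockOld extends HFileBlock {
        ExclusiveMemHFileBlockOld() {
            super(ByteBuffAllocator.HEAP); // always heap, even for pooled buffers
        }
    }

    /** After the fix: the caller passes in the allocator that actually
     *  owns the block's buffer, so clone operations can reuse it. */
    static class ExclusiveMemHFileBlockNew extends HFileBlock {
        ExclusiveMemHFileBlockNew(ByteBuffAllocator alloc) {
            super(alloc); // pooled allocator is preserved
        }
    }

    public static void main(String[] args) {
        // A block built from a pooled buffer keeps its pooled allocator.
        ByteBuffAllocator pool = () -> "pool";
        HFileBlock block = new ExclusiveMemHFileBlockNew(pool);
        System.out.println(block.allocator.name()); // prints "pool"
    }
}
```

With the allocator preserved, clone paths such as cloneOnDiskBufferWithHeader can allocate their copies from the same pool instead of always falling back to the heap.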
--
This message was sent by Atlassian Jira
(v8.3.4#803005)