Posted to dev@hbase.apache.org by "Jan Hentschel (Jira)" <ji...@apache.org> on 2019/12/25 20:45:00 UTC
[jira] [Resolved] (HBASE-23374) ExclusiveMemHFileBlock’s allocator should not be hardcoded as ByteBuffAllocator.HEAP
[ https://issues.apache.org/jira/browse/HBASE-23374?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]
Jan Hentschel resolved HBASE-23374.
-----------------------------------
Hadoop Flags: Reviewed
Resolution: Fixed
[~javaman_chen] Thanks for the contribution. Committed to master and branch-2.
> ExclusiveMemHFileBlock’s allocator should not be hardcoded as ByteBuffAllocator.HEAP
> ------------------------------------------------------------------------------------
>
> Key: HBASE-23374
> URL: https://issues.apache.org/jira/browse/HBASE-23374
> Project: HBase
> Issue Type: Improvement
> Reporter: chenxu
> Assignee: chenxu
> Priority: Minor
> Fix For: 3.0.0, 2.3.0
>
>
> ExclusiveMemHFileBlock's constructor looks like this:
> {code:java}
> ExclusiveMemHFileBlock(BlockType blockType, int onDiskSizeWithoutHeader,
>     int uncompressedSizeWithoutHeader, long prevBlockOffset, ByteBuff buf, boolean fillHeader,
>     long offset, int nextBlockOnDiskSize, int onDiskDataSizeWithHeader,
>     HFileContext fileContext) {
>   super(blockType, onDiskSizeWithoutHeader, uncompressedSizeWithoutHeader, prevBlockOffset, buf,
>       fillHeader, offset, nextBlockOnDiskSize, onDiskDataSizeWithHeader, fileContext,
>       ByteBuffAllocator.HEAP);
> }
> {code}
> After HBASE-22802, ExclusiveMemHFileBlock's data may be allocated through the BB pool, so its allocator should not be hardcoded as ByteBuffAllocator.HEAP.
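The committed patch is not reproduced in this message. As a rough, self-contained illustration of the pattern the issue asks for (using simplified stand-in classes, not HBase's actual signatures), the allocator that backed the block's buffer would be threaded through to the superclass instead of being hardcoded:

```java
// Sketch only: minimal stand-ins for HBase's ByteBuffAllocator / HFileBlock,
// showing the fix's shape: forward the caller's allocator, don't hardcode HEAP.
interface ByteBuffAllocator {
    String name();

    // Stand-in for HBase's heap allocator singleton.
    ByteBuffAllocator HEAP = () -> "heap";
}

class HFileBlock {
    final ByteBuffAllocator allocator;

    HFileBlock(ByteBuffAllocator allocator) {
        this.allocator = allocator;
    }
}

class ExclusiveMemHFileBlock extends HFileBlock {
    // Before the fix this effectively called super(ByteBuffAllocator.HEAP);
    // after it, the allocator that actually produced the block's buffer
    // is passed along, so pooled buffers are accounted for correctly.
    ExclusiveMemHFileBlock(ByteBuffAllocator allocator) {
        super(allocator);
    }
}

public class AllocatorSketch {
    public static void main(String[] args) {
        ByteBuffAllocator pooled = () -> "pooled";
        HFileBlock block = new ExclusiveMemHFileBlock(pooled);
        // The block now reports the allocator it was built with, not HEAP.
        System.out.println(block.allocator.name()); // pooled
    }
}
```

With the hardcoded HEAP, a block whose buffer came from the pool would misreport its allocator, which matters for releasing pooled buffers back correctly.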
--
This message was sent by Atlassian Jira
(v8.3.4#803005)