Posted to issues@hbase.apache.org by "Zheng Hu (JIRA)" <ji...@apache.org> on 2019/05/09 08:48:00 UTC
[jira] [Updated] (HBASE-22090) The HFileBlock#CacheableDeserializer
should pass ByteBuffAllocator to the newly created HFileBlock
[ https://issues.apache.org/jira/browse/HBASE-22090?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]
Zheng Hu updated HBASE-22090:
-----------------------------
Resolution: Fixed
Hadoop Flags: Reviewed
Status: Resolved (was: Patch Available)
> The HFileBlock#CacheableDeserializer should pass ByteBuffAllocator to the newly created HFileBlock
> --------------------------------------------------------------------------------------------------
>
> Key: HBASE-22090
> URL: https://issues.apache.org/jira/browse/HBASE-22090
> Project: HBase
> Issue Type: Sub-task
> Reporter: Zheng Hu
> Assignee: Zheng Hu
> Priority: Major
> Attachments: HBASE-22090.HBASE-21879.v01.patch, HBASE-22090.HBASE-21879.v02.patch, HBASE-22090.HBASE-21879.v03.patch
>
>
> In HBASE-22005, we have the following TODO in HFileBlock#CacheableDeserializer:
> {code}
>   public static final class BlockDeserializer implements CacheableDeserializer<Cacheable> {
>     private BlockDeserializer() {
>     }
>
>     @Override
>     public HFileBlock deserialize(ByteBuff buf, boolean reuse, MemoryType memType)
>         throws IOException {
>       // ....
>       // TODO make the newly created HFileBlock use the off-heap allocator, Need change the
>       // deserializer or change the deserialize interface.
>       return new HFileBlock(newByteBuff, usesChecksum, memType, offset, nextBlockOnDiskSize, null,
>           ByteBuffAllocator.HEAP);
>     }
>   }
> {code}
> We should use the global ByteBuffAllocator here rather than the HEAP allocator; as the TODO says, this requires adjusting the deserializer interface.
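A minimal sketch of what the adjusted interface could look like: `deserialize()` receives the allocator as a parameter and threads it through to the new block. All type names here are simplified stand-ins for the real HBase classes (`ByteBuff`, `MemoryType`, the extra `HFileBlock` constructor arguments are omitted), so this illustrates the shape of the change, not the actual patch.

```java
import java.io.IOException;
import java.nio.ByteBuffer;

// Hypothetical, simplified stand-in for o.a.h.hbase.io.ByteBuffAllocator.
interface ByteBuffAllocator {
    ByteBuffer allocate(int size);

    // Heap-only allocator, mirroring ByteBuffAllocator.HEAP in HBase.
    ByteBuffAllocator HEAP = ByteBuffer::allocate;
}

interface Cacheable {}

// Sketch of the adjusted deserializer interface: the allocator is now an
// argument, so the caller can pass the global (possibly off-heap) pool
// instead of the deserializer being pinned to the heap allocator.
interface CacheableDeserializer<T extends Cacheable> {
    T deserialize(ByteBuffer buf, ByteBuffAllocator allocator) throws IOException;
}

// Simplified block: remembers which allocator owns its buffer so the
// buffer can be released back to the right pool later.
class HFileBlock implements Cacheable {
    final ByteBuffer data;
    final ByteBuffAllocator allocator;

    HFileBlock(ByteBuffer data, ByteBuffAllocator allocator) {
        this.data = data;
        this.allocator = allocator;
    }
}

public class BlockDeserializerSketch {
    static final CacheableDeserializer<HFileBlock> DESERIALIZER = (buf, allocator) -> {
        // Copy the serialized bytes into a buffer obtained from the supplied
        // allocator, then hand that same allocator to the new block.
        ByteBuffer copy = allocator.allocate(buf.remaining());
        copy.put(buf.duplicate()).flip();
        return new HFileBlock(copy, allocator);
    };

    public static void main(String[] args) throws IOException {
        ByteBuffer serialized = ByteBuffer.wrap(new byte[] {1, 2, 3});
        HFileBlock block = DESERIALIZER.deserialize(serialized, ByteBuffAllocator.HEAP);
        System.out.println(block.allocator == ByteBuffAllocator.HEAP); // true
        System.out.println(block.data.remaining()); // 3
    }
}
```

With this shape, the caching layer (e.g. the bucket cache read path) decides which allocator to supply, and the same `BlockDeserializer` works unchanged for heap and pooled off-heap buffers.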
--
This message was sent by Atlassian JIRA
(v7.6.3#76005)