Posted to issues@hbase.apache.org by "Nick Dimiduk (Jira)" <ji...@apache.org> on 2020/01/23 17:53:00 UTC

[jira] [Updated] (HBASE-21937) Make Compression#decompress accept ByteBuff as input

     [ https://issues.apache.org/jira/browse/HBASE-21937?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Nick Dimiduk updated HBASE-21937:
---------------------------------
    Fix Version/s: 2.3.0
                   3.0.0

> Make Compression#decompress accept ByteBuff as input
> ----------------------------------------------------
>
>                 Key: HBASE-21937
>                 URL: https://issues.apache.org/jira/browse/HBASE-21937
>             Project: HBase
>          Issue Type: Sub-task
>            Reporter: Zheng Hu
>            Assignee: Zheng Hu
>            Priority: Major
>             Fix For: 3.0.0, 2.3.0
>
>         Attachments: HBASE-21937.HBASE-21879.v1.patch, HBASE-21937.HBASE-21879.v2.patch, HBASE-21937.HBASE-21879.v3.patch
>
>
> When decompressing a compressed block, we currently allocate a HeapByteBuffer for the unpacked block. We should instead allocate a ByteBuff from the global ByteBuffAllocator. Having skimmed the code, the key point is that we need a decompress interface that accepts a ByteBuff, rather than the current one:
> {code}
> // Compression.java
>   public static void decompress(byte[] dest, int destOffset,
>       InputStream bufferedBoundedStream, int compressedSize,
>       int uncompressedSize, Compression.Algorithm compressAlgo)
>       throws IOException {
>     // ...
>   }
> {code}
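> As a point of discussion, a ByteBuff-based variant might look like the sketch below. This is only an illustration of the direction, not the committed API: the exact signature and the streaming copy are assumptions here.
> {code}
> // Compression.java (hypothetical sketch, not the committed API).
> // ByteBuff is org.apache.hadoop.hbase.nio.ByteBuff;
> // Decompressor is org.apache.hadoop.io.compress.Decompressor.
> // The caller supplies a ByteBuff, which may be backed by pooled
> // off-heap memory from ByteBuffAllocator, instead of a byte[].
> public static void decompress(ByteBuff dest, InputStream bufferedBoundedStream,
>     int uncompressedSize, Compression.Algorithm compressAlgo) throws IOException {
>   Decompressor decompressor = null;
>   try {
>     decompressor = compressAlgo.getDecompressor();
>     InputStream is =
>         compressAlgo.createDecompressionStream(bufferedBoundedStream, decompressor, 0);
>     // Stream the decompressed bytes into the ByteBuff; since the ByteBuff may
>     // span several pooled buffers, copy through a small temporary array.
>     byte[] tmp = new byte[8 * 1024];
>     int remaining = uncompressedSize;
>     int read;
>     while (remaining > 0
>         && (read = is.read(tmp, 0, Math.min(tmp.length, remaining))) > 0) {
>       dest.put(tmp, 0, read);
>       remaining -= read;
>     }
>     if (remaining > 0) {
>       throw new IOException("Premature EOF, still expecting " + remaining + " bytes");
>     }
>   } finally {
>     if (decompressor != null) {
>       compressAlgo.returnDecompressor(decompressor);
>     }
>   }
> }
> {code}
> The caller would then allocate dest from the global allocator (e.g. something like allocator.allocate(uncompressedSize)) so the unpacked block can live off-heap, and release it back to the pool when the block is evicted.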
> This is not very high priority; let me first make the uncompressed block path off-heap.
> In HBASE-22005, I disabled the following unit tests:
> 1. TestLoadAndSwitchEncodeOnDisk;
> 2. TestHFileBlock#testPreviousOffset;
> We need to resolve this issue and make those UTs pass again.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)