Posted to issues@hbase.apache.org by "Hudson (JIRA)" <ji...@apache.org> on 2019/06/24 08:26:02 UTC

[jira] [Commented] (HBASE-21937) Make the Compression#decompress can accept ByteBuff as input

    [ https://issues.apache.org/jira/browse/HBASE-21937?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16870928#comment-16870928 ] 

Hudson commented on HBASE-21937:
--------------------------------

Results for branch master
	[build #1168 on builds.a.o|https://builds.apache.org/job/HBase%20Nightly/job/master/1168/]: (x) *{color:red}-1 overall{color}*
----
details (if available):

(x) {color:red}-1 general checks{color}
-- For more information [see general report|https://builds.apache.org/job/HBase%20Nightly/job/master/1168//General_Nightly_Build_Report/]




(x) {color:red}-1 jdk8 hadoop2 checks{color}
-- For more information [see jdk8 (hadoop2) report|https://builds.apache.org/job/HBase%20Nightly/job/master/1168//JDK8_Nightly_Build_Report_(Hadoop2)/]


(x) {color:red}-1 jdk8 hadoop3 checks{color}
-- For more information [see jdk8 (hadoop3) report|https://builds.apache.org/job/HBase%20Nightly/job/master/1168//JDK8_Nightly_Build_Report_(Hadoop3)/]


(/) {color:green}+1 source release artifact{color}
-- See build output for details.


(/) {color:green}+1 client integration test{color}


> Make the Compression#decompress can accept ByteBuff as input 
> -------------------------------------------------------------
>
>                 Key: HBASE-21937
>                 URL: https://issues.apache.org/jira/browse/HBASE-21937
>             Project: HBase
>          Issue Type: Sub-task
>            Reporter: Zheng Hu
>            Assignee: Zheng Hu
>            Priority: Major
>         Attachments: HBASE-21937.HBASE-21879.v1.patch, HBASE-21937.HBASE-21879.v2.patch, HBASE-21937.HBASE-21879.v3.patch
>
>
> When decompressing a compressed block, we also allocate a HeapByteBuffer for the unpacked block. We should allocate a ByteBuff from the global ByteBuffAllocator instead. Skimming the code, the key point is that we need a decompress interface that accepts a ByteBuff, not the current one: 
> {code}
> // Compression.java
>   public static void decompress(byte[] dest, int destOffset,
>       InputStream bufferedBoundedStream, int compressedSize,
>       int uncompressedSize, Compression.Algorithm compressAlgo)
>       throws IOException {
>       //...
> }
> {code}
> This is not very high priority; let me first make blocks without compression off-heap. 
> In HBASE-22005, I ignored these unit tests: 
> 1. TestLoadAndSwitchEncodeOnDisk; 
> 2. TestHFileBlock#testPreviousOffset; 
> We need to resolve this issue and make those UTs pass. 
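The interface change the description asks for could look like the sketch below. It is illustrative only: java.nio.ByteBuffer stands in for HBase's ByteBuff, and the class and method names are hypothetical, not HBase's actual API. The point is that the destination buffer is supplied by the caller, so a pooled allocator can hand out a heap or direct buffer and the decompressor no longer forces a heap allocation.

```java
import java.io.ByteArrayInputStream;
import java.io.IOException;
import java.io.InputStream;
import java.nio.ByteBuffer;

// Hypothetical sketch of a decompress method that writes into a
// caller-supplied buffer instead of a byte[]. The stream is assumed to
// already yield uncompressed bytes (as the bounded, decompressing stream
// would in the real code path).
public class BufferDecompress {

  // Copies exactly uncompressedSize bytes from the stream into dest.
  // Because dest is passed in, it may come from any allocator, on-heap
  // or off-heap.
  public static void decompress(ByteBuffer dest, InputStream stream,
      int uncompressedSize) throws IOException {
    byte[] chunk = new byte[4096];
    int remaining = uncompressedSize;
    while (remaining > 0) {
      int n = stream.read(chunk, 0, Math.min(chunk.length, remaining));
      if (n < 0) {
        throw new IOException("Premature end of stream, " + remaining
            + " bytes still expected");
      }
      dest.put(chunk, 0, n);
      remaining -= n;
    }
  }

  public static void main(String[] args) throws IOException {
    byte[] data = "hello".getBytes();
    // A direct buffer here demonstrates that the destination can be off-heap.
    ByteBuffer dest = ByteBuffer.allocateDirect(data.length);
    decompress(dest, new ByteArrayInputStream(data), data.length);
    dest.flip();
    byte[] out = new byte[data.length];
    dest.get(out);
    System.out.println(new String(out)); // prints "hello"
  }
}
```

With a signature like this, the existing byte[]-based decompress can be kept as a thin wrapper that wraps its array in a buffer, so both call sites share one implementation.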



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)