Posted to issues@hbase.apache.org by GitBox <gi...@apache.org> on 2021/10/19 18:32:55 UTC

[GitHub] [hbase] apurtell commented on a change in pull request #3730: HBASE-26316 Per-table or per-CF compression codec setting overrides

apurtell commented on a change in pull request #3730:
URL: https://github.com/apache/hbase/pull/3730#discussion_r732140320



##########
File path: hbase-common/src/main/java/org/apache/hadoop/hbase/io/encoding/HFileBlockDefaultDecodingContext.java
##########
@@ -87,8 +92,24 @@ public void prepareDecoding(int onDiskSizeWithoutHeader, int uncompressedSizeWit
 
       Compression.Algorithm compression = fileContext.getCompression();
       if (compression != Compression.Algorithm.NONE) {
-        Compression.decompress(blockBufferWithoutHeader, dataInputStream,
-          uncompressedSizeWithoutHeader, compression);
+        Decompressor decompressor = null;
+        try {
+          decompressor = compression.getDecompressor();
+          // Some algorithms don't return decompressors and accept null as a valid parameter for
+          // same when creating decompression streams. We can ignore these cases wrt reinit.
+          if (decompressor != null && decompressor instanceof CanReinit) {

Review comment:
       I didn't know that, cool.
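
       For context, the detail being acknowledged here is that the Java language defines `null instanceof T` to evaluate to `false` for any reference type `T`, so the explicit `decompressor != null` guard in the diff above is redundant. A minimal standalone sketch (the `CanReinit` marker interface here is a stand-in for HBase's, not the real class):

       ```java
       // Demonstrates that `null instanceof T` is false per the JLS,
       // making an explicit null check before instanceof unnecessary.
       public class InstanceofNullDemo {
           // Hypothetical stand-in for org.apache.hadoop.hbase.io.compress.CanReinit
           interface CanReinit {}

           static class ReinitableDecompressor implements CanReinit {}

           static boolean needsReinit(Object decompressor) {
               // No null guard needed: null instanceof CanReinit is simply false.
               return decompressor instanceof CanReinit;
           }

           public static void main(String[] args) {
               System.out.println(needsReinit(null));                          // false
               System.out.println(needsReinit(new ReinitableDecompressor()));  // true
               System.out.println(needsReinit(new Object()));                  // false
           }
       }
       ```

       So `decompressor instanceof CanReinit` alone is equivalent to the guarded form in the patch.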




-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscribe@hbase.apache.org

For queries about this service, please contact Infrastructure at:
users@infra.apache.org