Posted to issues@hbase.apache.org by GitBox <gi...@apache.org> on 2021/10/19 18:31:30 UTC

[GitHub] [hbase] virajjasani commented on a change in pull request #3730: HBASE-26316 Per-table or per-CF compression codec setting overrides

virajjasani commented on a change in pull request #3730:
URL: https://github.com/apache/hbase/pull/3730#discussion_r732126323



##########
File path: hbase-common/src/main/java/org/apache/hadoop/hbase/io/encoding/EncodedDataBlock.java
##########
@@ -68,21 +70,24 @@
 
   /**
    * Create a buffer which will be encoded using dataBlockEncoder.
+   * @param conf store configuration
    * @param dataBlockEncoder Algorithm used for compression.
    * @param encoding encoding type used
-   * @param rawKVs
-   * @param meta
+   * @param rawKVs raw KVs
+   * @param meta hfile context
+   * @param conf store configuration

Review comment:
       nit: looks like a duplicate `@param` for `conf`; it's already documented above
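For illustration, the deduplicated Javadoc would list `conf` once, in the position matching the parameter list. This sketch uses a stub class with `Object` stand-in types, not the real `EncodedDataBlock` signature:

```java
/** Stub sketch; the real class is o.a.h.hbase.io.encoding.EncodedDataBlock. */
class EncodedDataBlockSketch {
  /**
   * Create a buffer which will be encoded using dataBlockEncoder.
   * @param conf store configuration
   * @param dataBlockEncoder algorithm used for compression
   * @param encoding encoding type used
   * @param rawKVs raw KVs
   * @param meta hfile context
   */
  EncodedDataBlockSketch(Object conf, Object dataBlockEncoder, Object encoding,
      byte[] rawKVs, Object meta) {
    // stub body; the real constructor initializes the encoding buffer
  }
}
```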

##########
File path: hbase-common/src/main/java/org/apache/hadoop/hbase/io/encoding/AbstractDataBlockEncoder.java
##########
@@ -29,14 +31,14 @@
 public abstract class AbstractDataBlockEncoder implements DataBlockEncoder {
 
   @Override
-  public HFileBlockEncodingContext newDataBlockEncodingContext(
+  public HFileBlockEncodingContext newDataBlockEncodingContext(Configuration conf,
       DataBlockEncoding encoding, byte[] header, HFileContext meta) {
-    return new HFileBlockDefaultEncodingContext(encoding, header, meta);
+    return new HFileBlockDefaultEncodingContext(conf, encoding, header, meta);
   }
 
   @Override
-  public HFileBlockDecodingContext newDataBlockDecodingContext(HFileContext meta) {
-    return new HFileBlockDefaultDecodingContext(meta);
+  public HFileBlockDecodingContext newDataBlockDecodingContext(Configuration conf, HFileContext meta) {

Review comment:
       nit: checkstyle might complain about the line length here
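For illustration, one common way to satisfy a 100-character line limit is to wrap after the first parameter. The stand-in types below are stubs, not the real Hadoop `Configuration` or HBase `HFileContext`:

```java
// Stub stand-ins; the real types are org.apache.hadoop.conf.Configuration,
// org.apache.hadoop.hbase.io.hfile.HFileContext, and HBase's decoding context.
class Configuration {}
class HFileContext {}
interface HFileBlockDecodingContext {}

interface DataBlockDecoderSketch {
  // Wrapping after the first parameter keeps the declaration within the limit.
  HFileBlockDecodingContext newDataBlockDecodingContext(Configuration conf,
      HFileContext meta);
}
```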

##########
File path: hbase-common/src/main/java/org/apache/hadoop/hbase/io/encoding/HFileBlockDefaultDecodingContext.java
##########
@@ -87,8 +92,24 @@ public void prepareDecoding(int onDiskSizeWithoutHeader, int uncompressedSizeWit
 
       Compression.Algorithm compression = fileContext.getCompression();
       if (compression != Compression.Algorithm.NONE) {
-        Compression.decompress(blockBufferWithoutHeader, dataInputStream,
-          uncompressedSizeWithoutHeader, compression);
+        Decompressor decompressor = null;
+        try {
+          decompressor = compression.getDecompressor();
+          // Some algorithms don't return decompressors and accept null as a valid parameter for
+          // same when creating decompression streams. We can ignore these cases wrt reinit.
+          if (decompressor != null && decompressor instanceof CanReinit) {

Review comment:
       Simplify to `decompressor instanceof CanReinit`, since `instanceof` already covers the null check?
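A quick sketch of why the explicit null check is redundant: the JLS (§15.20.2) defines `null instanceof T` as false for any reference type `T`. `CanReinitSketch` here is a hypothetical stand-in for HBase's `CanReinit`:

```java
// Hypothetical stand-in for HBase's CanReinit interface.
interface CanReinitSketch {
  void reinit(Object conf);
}

class ReinitHelper {
  /** Returns true only for non-null decompressors that support reinit. */
  static boolean needsReinit(Object decompressor) {
    // No explicit null check needed: null instanceof T is always false.
    return decompressor instanceof CanReinitSketch;
  }
}
```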

##########
File path: hbase-common/src/main/java/org/apache/hadoop/hbase/io/encoding/HFileBlockDefaultDecodingContext.java
##########
@@ -87,8 +92,24 @@ public void prepareDecoding(int onDiskSizeWithoutHeader, int uncompressedSizeWit
 
       Compression.Algorithm compression = fileContext.getCompression();
       if (compression != Compression.Algorithm.NONE) {
-        Compression.decompress(blockBufferWithoutHeader, dataInputStream,
-          uncompressedSizeWithoutHeader, compression);
+        Decompressor decompressor = null;
+        try {
+          decompressor = compression.getDecompressor();
+          // Some algorithms don't return decompressors and accept null as a valid parameter for
+          // same when creating decompression streams. We can ignore these cases wrt reinit.
+          if (decompressor != null && decompressor instanceof CanReinit) {
+            ((CanReinit)decompressor).reinit(conf);
+          }
+          try (InputStream is =
+              compression.createDecompressionStream(dataInputStream, decompressor, 0)) {
+            BlockIOUtils.readFullyWithHeapBuffer(is, blockBufferWithoutHeader,
+              uncompressedSizeWithoutHeader);
+          }
+        } finally {
+          if (decompressor != null) {
+            compression.returnDecompressor(decompressor);
+          }
+        }

Review comment:
       nit: shall we remove the `Compression#decompress` method? After this change it may no longer be used anywhere else. Nothing urgent; this can be a follow-up if it creates complexities.




-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscribe@hbase.apache.org

For queries about this service, please contact Infrastructure at:
users@infra.apache.org