Posted to dev@parquet.apache.org by GitBox <gi...@apache.org> on 2022/08/21 18:27:41 UTC

[GitHub] [parquet-mr] shangxinli commented on a diff in pull request #982: PARQUET-2160: Close ZstdInputStream to free off-heap memory in time.

shangxinli commented on code in PR #982:
URL: https://github.com/apache/parquet-mr/pull/982#discussion_r950883464


##########
parquet-hadoop/src/main/java/org/apache/parquet/hadoop/CodecFactory.java:
##########
@@ -109,7 +110,17 @@ public BytesInput decompress(BytesInput bytes, int uncompressedSize) throws IOEx
           decompressor.reset();
         }
         InputStream is = codec.createInputStream(bytes.toInputStream(), decompressor);
-        decompressed = BytesInput.from(is, uncompressedSize);
+
+        // We need to explicitly close the ZstdDecompressorStream here to release the off-heap
+        // resources it holds and avoid the native memory fragmentation issue described in
+        // https://issues.apache.org/jira/browse/PARQUET-2160.
+        // This change loads the decompressed bytes onto the heap slightly earlier. Since the
+        // problem it solves only occurs in the ZSTD codec, the change is limited to ZSTD streams.
+        if (codec instanceof ZstandardCodec) {
+          decompressed = BytesInput.copy(BytesInput.from(is, uncompressedSize));

Review Comment:
   I understand we discussed in the Jira that BytesInput.copy() just loads the data into the heap earlier without adding extra overall cost. Can we have a benchmark of the heap/GC behavior (heap size, GC time, etc.)? I just want to make sure we don't fix one problem while introducing another.
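
   (Editor's note) The pattern under discussion, copying the decompressed bytes onto the heap so the native stream can be closed immediately, can be sketched in plain Java. This is a minimal illustration, not the actual parquet-mr code: the class name `EagerCopyExample` and the helper `copyAndClose` are hypothetical stand-ins for what `BytesInput.copy(BytesInput.from(is, uncompressedSize))` plus an explicit close achieves for ZSTD streams.

```java
import java.io.ByteArrayInputStream;
import java.io.ByteArrayOutputStream;
import java.io.IOException;
import java.io.InputStream;
import java.nio.charset.StandardCharsets;

public class EagerCopyExample {

  // Hypothetical helper mirroring the eager-copy approach in the diff:
  // drain the stream fully into a heap buffer, then close it so that any
  // off-heap resources held by a native decompressor (e.g. ZSTD) are
  // released right away instead of lingering until the stream is GC'd.
  static byte[] copyAndClose(InputStream is, int uncompressedSize) throws IOException {
    try (InputStream in = is) { // try-with-resources guarantees the close
      ByteArrayOutputStream out = new ByteArrayOutputStream(uncompressedSize);
      byte[] buf = new byte[8192];
      int n;
      while ((n = in.read(buf)) != -1) {
        out.write(buf, 0, n);
      }
      return out.toByteArray();
    }
  }

  public static void main(String[] args) throws IOException {
    byte[] data = "decompressed payload".getBytes(StandardCharsets.UTF_8);
    // With a lazy BytesInput.from(is, size), the stream would stay open until
    // the bytes are actually consumed; the eager copy closes it immediately.
    byte[] copy = copyAndClose(new ByteArrayInputStream(data), data.length);
    System.out.println(new String(copy, StandardCharsets.UTF_8));
  }
}
```

   The trade-off the reviewer is asking to benchmark is visible here: the copy allocates the full uncompressed buffer on the heap up front, which frees the native memory sooner but shifts pressure to the Java heap and GC.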



-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: dev-unsubscribe@parquet.apache.org

For queries about this service, please contact Infrastructure at:
users@infra.apache.org