Posted to dev@parquet.apache.org by "Peidian Li (Jira)" <ji...@apache.org> on 2022/06/07 06:17:00 UTC

[jira] [Created] (PARQUET-2152) zstd compressor and decompressor use the same configuration

Peidian Li created PARQUET-2152:
-----------------------------------

             Summary: zstd compressor and decompressor use the same configuration
                 Key: PARQUET-2152
                 URL: https://issues.apache.org/jira/browse/PARQUET-2152
             Project: Parquet
          Issue Type: Bug
          Components: parquet-mr
    Affects Versions: 1.12.2
            Reporter: Peidian Li


I use Spark to rewrite Parquet files that are compressed with zstd, and the Parquet version is 1.12.2. I want to read files that were compressed at level 3 and rewrite them at a different level, but the level cannot be changed.
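Roughly what the rewrite looks like (a minimal sketch; the paths and class name are placeholders, and I assume the level is requested through parquet-mr's parquet.compression.codec.zstd.level property in the Hadoop configuration, whose default is 3):

{code:java}
import org.apache.spark.sql.Dataset;
import org.apache.spark.sql.Row;
import org.apache.spark.sql.SparkSession;

public class RewriteWithHigherZstdLevel {
  public static void main(String[] args) {
    SparkSession spark = SparkSession.builder().appName("zstd-rewrite").getOrCreate();

    // Ask parquet-mr for a higher zstd level on the write side (default is 3).
    spark.sparkContext().hadoopConfiguration()
        .setInt("parquet.compression.codec.zstd.level", 9);

    // Read files written at level 3 and rewrite them (placeholder paths).
    Dataset<Row> df = spark.read().parquet("hdfs:///warehouse/table_zstd");
    df.write()
        .option("compression", "zstd")
        .parquet("hdfs:///warehouse/table_zstd_rewritten");

    spark.stop();
  }
}
{code}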
After checking the source, I found the problem: the codec is cached, so the configuration is not updated:
[https://github.com/apache/parquet-mr/blob/master/parquet-hadoop/src/main/java/org/apache/parquet/hadoop/CodecFactory.java#L144]

[https://github.com/apache/parquet-mr/blob/master/parquet-hadoop/src/main/java/org/apache/parquet/hadoop/CodecFactory.java#L226]
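In simplified form, this is the pattern I believe is causing it (a sketch only, not the actual CodecFactory code; the class NameKeyedCodecCache is hypothetical): the codec is cached by name alone, so the Configuration from the first call sticks.

{code:java}
// Simplified sketch of the caching pattern, NOT the real parquet-mr source:
// the cache key ignores the Configuration, so a codec first created for
// decompression at level 3 is handed back to the compressor even when the
// configuration now asks for a different level.
import java.util.HashMap;
import java.util.Map;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.io.compress.CompressionCodec;
import org.apache.hadoop.util.ReflectionUtils;

class NameKeyedCodecCache {
  private static final Map<String, CompressionCodec> CODEC_BY_NAME = new HashMap<>();

  static CompressionCodec getCodec(String codecClassName, Configuration conf) {
    // Keyed by class name only: the first Configuration wins for all later calls.
    return CODEC_BY_NAME.computeIfAbsent(codecClassName, name -> {
      try {
        Class<? extends CompressionCodec> clazz =
            Class.forName(name).asSubclass(CompressionCodec.class);
        return ReflectionUtils.newInstance(clazz, conf);
      } catch (ClassNotFoundException e) {
        throw new IllegalArgumentException("Unknown codec: " + name, e);
      }
    });
  }
}
{code}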

I think this problem is important. I found it when I tried to use a different level to compact the files of an Iceberg table. Asynchronously rewriting the files at a higher level can give a better compression ratio, which is important for saving storage costs.



--
This message was sent by Atlassian Jira
(v8.20.7#820007)