Posted to commits@hudi.apache.org by "Alexey Kudinkin (Jira)" <ji...@apache.org> on 2022/06/25 04:09:00 UTC
[jira] [Created] (HUDI-4321) Fix Hudi to not write in Parquet legacy format
Alexey Kudinkin created HUDI-4321:
-------------------------------------
Summary: Fix Hudi to not write in Parquet legacy format
Key: HUDI-4321
URL: https://issues.apache.org/jira/browse/HUDI-4321
Project: Apache Hudi
Issue Type: Bug
Reporter: Alexey Kudinkin
Currently, Hudi has to write in the Parquet legacy format ("spark.sql.parquet.writeLegacyFormat") whenever the schema contains Decimals, because it relies on AvroParquetReader, which is unable to read Decimals in the non-legacy format (i.e. it can only read Decimals encoded as FIXED_LEN_BYTE_ARRAY, not as INT32/INT64).
This leads to a suboptimal storage footprint: on some datasets it can bloat file sizes by 10% or more.
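For context on the two encodings: the Parquet format allows decimals with precision <= 9 to be stored as INT32 and precision <= 18 as INT64, whereas the legacy format always uses FIXED_LEN_BYTE_ARRAY sized to the minimum byte width that can hold the precision. The sketch below (my own illustration, not Hudi's or Spark's actual code; the function name is made up) computes that minimum width per the Parquet spec's rule:

```python
def min_bytes_for_precision(precision: int) -> int:
    """Smallest byte count whose signed two's-complement range
    can hold any unscaled decimal value of the given precision.
    Mirrors the sizing rule the Parquet spec uses for
    FIXED_LEN_BYTE_ARRAY-backed decimals (legacy Spark layout)."""
    max_unscaled = 10 ** precision - 1
    n = 1
    while max_unscaled > 2 ** (8 * n - 1) - 1:
        n += 1
    return n

# precision 9 needs 4 bytes either way (INT32 vs 4-byte FLBA),
# so the raw widths match; the footprint gap comes from INT32/INT64
# columns compressing better (dictionary, bit-packing) than FLBA.
print(min_bytes_for_precision(9))   # 4
print(min_bytes_for_precision(18))  # 8
```

Since the fixed-width raw sizes are comparable, the ~10% bloat observed above is plausibly driven by the weaker encodings available for FIXED_LEN_BYTE_ARRAY columns rather than by the value widths themselves.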
--
This message was sent by Atlassian Jira
(v8.20.7#820007)