Posted to commits@hudi.apache.org by "Vinay (Jira)" <ji...@apache.org> on 2021/06/14 12:32:00 UTC
[jira] [Commented] (HUDI-2003) Auto Compute Compression
[ https://issues.apache.org/jira/browse/HUDI-2003?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17362925#comment-17362925 ]
Vinay commented on HUDI-2003:
-----------------------------
[~nishith29] Please do update the description if I have missed anything here
> Auto Compute Compression
> ------------------------
>
> Key: HUDI-2003
> URL: https://issues.apache.org/jira/browse/HUDI-2003
> Project: Apache Hudi
> Issue Type: Bug
> Components: Writer Core
> Reporter: Vinay
> Priority: Major
>
> Context :
> Submitted a Spark job to read 3-4B ORC records and write them out in Hudi format. The following table summarizes the runs carried out with different options:
>
> ||CONFIG||Number of Files Created||Size of Each File||
> |PARQUET_FILE_MAX_BYTES=DEFAULT|30K|21MB|
> |PARQUET_FILE_MAX_BYTES=1GB|3700|178MB|
> |PARQUET_FILE_MAX_BYTES=1GB, COPY_ON_WRITE_TABLE_INSERT_SPLIT_SIZE=1100000|Same as before|Same as before|
> |PARQUET_FILE_MAX_BYTES=1GB, BULKINSERT_PARALLELISM=100|Same as before|Same as before|
> |PARQUET_FILE_MAX_BYTES=4GB|1600|675MB|
> |PARQUET_FILE_MAX_BYTES=6GB|669|1012MB|
> Based on these runs, it appears that the assumed compression ratio is off: the files produced are consistently far smaller than the configured maximum.
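For reference, the config constants in the table map to Hudi spark-datasource keys roughly as below. This is a hypothetical sketch (the actual job options are not shown in the issue); the keys are taken from Hudi's storage/write configs, and the values mirror the table rows:

```python
# Hypothetical mapping (assumption) of the constants in the table above
# to their Hudi spark-datasource option keys; values echo the runs shown.
hudi_options = {
    # PARQUET_FILE_MAX_BYTES (HoodieStorageConfig), e.g. the 1GB run
    "hoodie.parquet.max.file.size": str(1 * 1024 * 1024 * 1024),
    # COPY_ON_WRITE_TABLE_INSERT_SPLIT_SIZE
    "hoodie.copyonwrite.insert.split.size": "1100000",
    # BULKINSERT_PARALLELISM
    "hoodie.bulkinsert.shuffle.parallelism": "100",
}
```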
>
>
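The pattern in the table can be sketched numerically. This is a rough model, not Hudi's exact code: assuming the writer projects on-disk size as in-memory bytes times hoodie.parquet.compression.ratio (default 0.1) and rolls to a new file once that projection reaches PARQUET_FILE_MAX_BYTES, the observed file sizes imply what the data's real compression ratio must be:

```python
# Rough model (assumption, not Hudi's exact code) of a size-capped writer
# whose assumed compression ratio differs from the data's real ratio.
ASSUMED_RATIO = 0.1  # hoodie.parquet.compression.ratio default

def implied_actual_ratio(max_file_bytes, observed_file_bytes,
                         assumed_ratio=ASSUMED_RATIO):
    """Back out the real on-disk/in-memory ratio from an observed file.

    The writer rolls after roughly max_file_bytes / assumed_ratio
    in-memory bytes; the file that lands on disk is that many bytes
    times the actual ratio, so: actual = observed / (max / assumed).
    """
    in_memory_bytes_at_roll = max_file_bytes / assumed_ratio
    return observed_file_bytes / in_memory_bytes_at_roll

MB = 1024 * 1024
GB = 1024 * MB

# (PARQUET_FILE_MAX_BYTES, observed size per file) from the table above
runs = [(1 * GB, 178 * MB), (4 * GB, 675 * MB), (6 * GB, 1012 * MB)]
for max_bytes, observed in runs:
    ratio = implied_actual_ratio(max_bytes, observed)
    print(f"max={max_bytes / GB:g}GB observed={observed / MB:g}MB "
          f"-> implied actual ratio ~ {ratio:.4f}")
```

All three runs imply an actual ratio near 0.017, i.e. the data compresses roughly 6x better than the assumed 0.1, which matches files consistently landing at about one sixth of the configured cap and motivates auto-computing the ratio instead of using a fixed default.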
--
This message was sent by Atlassian Jira
(v8.3.4#803005)