Posted to dev@parquet.apache.org by "Micah Kornfield (Jira)" <ji...@apache.org> on 2022/05/10 04:14:00 UTC
[jira] [Commented] (PARQUET-2122) Adding Bloom filter to small Parquet file bloats in size X1700
[ https://issues.apache.org/jira/browse/PARQUET-2122?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17534123#comment-17534123 ]
Micah Kornfield commented on PARQUET-2122:
------------------------------------------
I believe the answer is that the Bloom filter implementation isn't adaptive, so it simply preallocates all the bytes necessary. A more adaptive data structure that can scale down for smaller files would certainly be a nice option to have, but building consensus around it is probably a decent amount of work.
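The numbers in the report are consistent with this explanation: 1049197 B minus the ~600 B of data is almost exactly 1 MiB (1048576 B), which suggests the writer preallocated a maximum-size filter because it had no hint of the column's cardinality. The sketch below is a rough illustration of the standard Bloom filter sizing math, not the parquet-mr implementation; the 1 MiB cap and 32 B floor are assumptions about parquet-mr's defaults.

```java
// Sketch only: classic Bloom filter sizing, m = -n * ln(p) / (ln 2)^2 bits.
// MAX_BYTES / MIN_BYTES are assumed defaults, not read from parquet-mr.
public class BloomSizeSketch {
    static final int MAX_BYTES = 1024 * 1024; // assumed default upper bound (1 MiB)
    static final int MIN_BYTES = 32;          // assumed lower bound

    // Bytes needed for ndv distinct values at false-positive probability fpp,
    // clamped to the assumed bounds.
    static int optimalBytes(long ndv, double fpp) {
        double bits = -ndv * Math.log(fpp) / (Math.log(2) * Math.log(2));
        int bytes = (int) Math.ceil(bits / 8.0);
        return Math.min(Math.max(bytes, MIN_BYTES), MAX_BYTES);
    }

    public static void main(String[] args) {
        // 14 distinct values at 1% FPP need only ~17 bytes (32 after the floor)...
        System.out.println(optimalBytes(14, 0.01));
        // ...but a writer that preallocates the cap when given no cardinality
        // hint emits the full 1 MiB, matching the ~1 MB bloat reported here.
        System.out.println(MAX_BYTES);
    }
}
```

As a practical workaround until something adaptive exists, supplying the expected cardinality via ParquetWriter.Builder#withBloomFilterNDV(column, ndv) should let the writer size the filter down instead of preallocating the maximum.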
> Adding Bloom filter to small Parquet file bloats in size X1700
> --------------------------------------------------------------
>
> Key: PARQUET-2122
> URL: https://issues.apache.org/jira/browse/PARQUET-2122
> Project: Parquet
> Issue Type: Bug
> Components: parquet-cli, parquet-mr
> Affects Versions: 1.13.0
> Reporter: Ze'ev Maor
> Priority: Critical
> Attachments: data.csv, data_index_bloom.parquet
>
>
> Converting a small CSV file (14 rows, 1 string column) to Parquet without a Bloom filter yields a 600 B file; adding '.withBloomFilterEnabled(true)' to the ParquetWriter then yields a 1049197 B file.
> It isn't clear what the extra space is used for.
> The csv and bloated Parquet files are attached.
--
This message was sent by Atlassian Jira
(v8.20.7#820007)