Posted to commits@hudi.apache.org by "Alexey Kudinkin (Jira)" <ji...@apache.org> on 2022/03/28 17:37:00 UTC

[jira] [Resolved] (HUDI-3709) Parquet Writer does not respect Parquet Max File Size setting

     [ https://issues.apache.org/jira/browse/HUDI-3709?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Alexey Kudinkin resolved HUDI-3709.
-----------------------------------

> Parquet Writer does not respect Parquet Max File Size setting
> -------------------------------------------------------------
>
>                 Key: HUDI-3709
>                 URL: https://issues.apache.org/jira/browse/HUDI-3709
>             Project: Apache Hudi
>          Issue Type: Bug
>            Reporter: Alexey Kudinkin
>            Assignee: Alexey Kudinkin
>            Priority: Blocker
>              Labels: pull-request-available
>             Fix For: 0.11.0
>
>
> Currently, writing through the Spark DataSource connector does not respect the
> "hoodie.parquet.max.file.size" setting: in the snippet pasted below I'm trying to
> limit the file size to 16 MB, while on disk I'm getting ~80 MB files.
>  
> The reason for that seems to be that we rely on ParquetWriter to control the file
> size (the `canWrite` method), which in turn relies on the FileSystem to track how
> much has actually been written to the FS.
>  
> The problem with this approach is that Spark writes {*}lazily{*}:
> it creates instances of ParquetWriter which cache the whole row group in memory
> when the `write` methods are invoked, and only flush the data to the FS _when the
> Writer is closed_ (i.e. when `close` is invoked).
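> As a simplified, self-contained illustration of the under-counting (not the actual
> Hudi/Parquet code): a writer that buffers rows in memory and only flushes on close
> reports ~0 bytes to the FileSystem while records accumulate, so a `canWrite`-style
> check keyed off FS-reported size never trips in time:
> {code:scala}
> // Simplified illustration, not the actual Hudi/ParquetWriter implementation:
> // the size check only sees bytes already flushed to the "FileSystem",
> // so row groups buffered in memory are invisible to it until close().
> object LazyWriterExample extends App {
>   val maxFileSize: Long = 16L * 1024 * 1024          // the intended 16 MB cap
>   var bytesOnFs: Long = 0L                           // what the FS has actually seen
>   val buffer = scala.collection.mutable.ArrayBuffer.empty[Array[Byte]]
>
>   def canWrite: Boolean = bytesOnFs < maxFileSize    // FS-size-based check
>   def write(record: Array[Byte]): Unit = buffer += record   // lazily buffered, nothing hits FS
>   def close(): Unit = { bytesOnFs += buffer.map(_.length.toLong).sum; buffer.clear() }
>
>   (1 to 80).foreach(_ => write(new Array[Byte](1024 * 1024)))  // ~80 MB buffered
>   println(s"canWrite=$canWrite, bytesOnFs=$bytesOnFs")         // canWrite=true, bytesOnFs=0
>   close()
>   println(s"bytesOnFs after close: $bytesOnFs")                // ~80 MB, well past 16 MB
> }
> {code}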



--
This message was sent by Atlassian Jira
(v8.20.1#820001)