Posted to commits@hudi.apache.org by GitBox <gi...@apache.org> on 2022/03/21 08:29:54 UTC

[GitHub] [hudi] liockency opened a new issue #5081: [SUPPORT] flink hudi produce parquet file size more than 128M

liockency opened a new issue #5081:
URL: https://github.com/apache/hudi/issues/5081


   **_Tips before filing an issue_**
   
   - Have you gone through our [FAQs](https://hudi.apache.org/learn/faq/)?
   
   - Join the mailing list to engage in conversations and get faster support at dev-subscribe@hudi.apache.org.
   
   - If you have triaged this as a bug, then file an [issue](https://issues.apache.org/jira/projects/HUDI/issues) directly.
   
   **Describe the problem you faced**
   
   (Screenshot of the file listing: https://user-images.githubusercontent.com/34239410/159226468-952920e2-a4fc-4c72-91a4-db023f321a08.png)
   
   
   I use the default value of `write.parquet.max.file.size`, which is 128M, but I found that some files are larger than 128M.
   When we query this Hudi table with Presto, each oversized file is split into two splits, which appears to cause duplicate data. How can I guarantee that the file size stays below 128M? Is this a bug?
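
   For illustration, a hedged Presto SQL sketch (the table name `hudi_table` is a placeholder) that uses Hudi's `_hoodie_record_key` meta column to check whether duplicates are actually returned on the read path:

   ```sql
   -- Hypothetical check with a placeholder table name: any record key that
   -- appears more than once in the result indicates duplicate rows being read.
   SELECT _hoodie_record_key, count(*) AS occurrences
   FROM hudi_table
   GROUP BY _hoodie_record_key
   HAVING count(*) > 1;
   ```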
   
   
   
   **Expected behavior**
   
   A clear and concise description of what you expected to happen.
   
   **Environment Description**
   
   * Hudi version : 0.10
   
   **Additional context**
   
   Add any other context about the problem here.
   
   **Stacktrace**
   
   ```Add the stacktrace of the error.```
   
   


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: commits-unsubscribe@hudi.apache.org

For queries about this service, please contact Infrastructure at:
users@infra.apache.org



[GitHub] [hudi] Guanpx commented on issue #5081: [SUPPORT] flink hudi produce parquet file size more than 128M

Posted by GitBox <gi...@apache.org>.
Guanpx commented on issue #5081:
URL: https://github.com/apache/hudi/issues/5081#issuecomment-1077261639


   Set `write.parquet.max.file.size` to 100M; note that this value is an approximate target, not a hard limit.
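
   For context, a minimal Flink SQL sketch of where this option is set (table name, schema, and path are placeholders; the option value is interpreted in MB by the Flink writer, so please verify against the Hudi docs for your version):

   ```sql
   -- Minimal sketch with placeholder table/path; not a verified configuration.
   CREATE TABLE hudi_sink (
     uuid STRING,
     name STRING,
     ts   TIMESTAMP(3),
     PRIMARY KEY (uuid) NOT ENFORCED
   ) WITH (
     'connector' = 'hudi',
     'path' = 'hdfs:///tmp/hudi_sink',        -- placeholder path
     'table.type' = 'COPY_ON_WRITE',
     'write.parquet.max.file.size' = '100'    -- target parquet file size in MB; an approximate limit, not a hard cap
   );
   ```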

