Posted to commits@hudi.apache.org by "Alexey Kudinkin (Jira)" <ji...@apache.org> on 2022/06/16 01:05:00 UTC

[jira] [Created] (HUDI-4261) OOM in bulk-insert when using "NONE" sort-mode for table w/ large # of partitions

Alexey Kudinkin created HUDI-4261:
-------------------------------------

             Summary: OOM in bulk-insert when using "NONE" sort-mode for table w/ large # of partitions
                 Key: HUDI-4261
                 URL: https://issues.apache.org/jira/browse/HUDI-4261
             Project: Apache Hudi
          Issue Type: Bug
            Reporter: Alexey Kudinkin
            Assignee: Alexey Kudinkin
             Fix For: 0.12.0


While experimenting w/ bulk-insert, I've stumbled upon an OOM failure when doing a bulk-insert w/ sort-mode "NONE" into a table w/ a large number of partitions (> 1000).
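A minimal sketch of the scenario, assuming the standard Spark datasource write path (the table name, field names, "df" and "basePath" are illustrative placeholders, not part of the original report):

    // Hypothetical repro sketch: bulk-insert w/ sort-mode NONE into a table whose
    // partition column ("dt" here) has > 1000 distinct values.
    // "df" (the input DataFrame) and "basePath" are assumed to already exist.
    import org.apache.spark.sql.SaveMode

    df.write
      .format("hudi")
      .option("hoodie.table.name", "many_partitions_tbl")
      .option("hoodie.datasource.write.recordkey.field", "uuid")
      .option("hoodie.datasource.write.partitionpath.field", "dt")  // > 1000 distinct values
      .option("hoodie.datasource.write.operation", "bulk_insert")
      .option("hoodie.bulkinsert.sort.mode", "NONE")                // no sorting / re-partitioning
      .mode(SaveMode.Overwrite)
      .save(basePath)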

 

This happens for the same reason as HUDI-3883: since no re-partitioning is done to align the data with the actual partition column, every logical partition handled by Spark (say we have N of these, equal to the shuffle parallelism configured in Hudi) will likely contain records from every physical partition on disk (say we have M of these). Because of that, every logical partition ends up writing into every physical one.

This will eventually produce:
 1. M * N files in the table, and
 2. for every file being written, a "handle" that Hudi keeps in memory, which in turn holds a full buffer's worth of Parquet data (until flushed); see the rough estimate below.

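To put rough, illustrative numbers on it (assumed values, not measurements): with N = 200 shuffle partitions and M = 1,000 physical partitions, that is up to 200,000 data files, and a single Spark task may end up holding up to 1,000 open write handles at once; even at ~10 MB of buffered Parquet data per handle, that is ~10 GB of heap for one task before anything gets flushed, multiplied further by the number of concurrent tasks per executor.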
 

This ultimately leads to an OOM.
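For comparison (a hedged note on the workaround, not a confirmed fix for this ticket): the other bulk-insert sort modes (GLOBAL_SORT, the default, and PARTITION_SORT) sort incoming records by partition path, so a task sees each physical partition's records contiguously and can close one handle before opening the next, e.g.

    .option("hoodie.bulkinsert.sort.mode", "GLOBAL_SORT")  // default; PARTITION_SORT sorts within each task

which avoids holding M handles open concurrently.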



--
This message was sent by Atlassian Jira
(v8.20.7#820007)