Posted to commits@hudi.apache.org by "Alexey Kudinkin (Jira)" <ji...@apache.org> on 2022/06/16 01:10:00 UTC

[jira] [Updated] (HUDI-4261) OOM in bulk-insert when using "NONE" sort-mode for table w/ large # of partitions

     [ https://issues.apache.org/jira/browse/HUDI-4261?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Alexey Kudinkin updated HUDI-4261:
----------------------------------
    Description: 
While experimenting with bulk-insert, I stumbled upon an OOM failure when running bulk-insert with sort-mode "NONE" against a table with a large number of partitions (> 1000).
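For reference, a minimal sketch of a write that can trigger this. This is an illustrative reproduction, not the exact job from this ticket; the table name, paths, field names, and parallelism are placeholders:

{code:scala}
// Illustrative repro (placeholder table/paths/fields): bulk-insert with
// sort mode "NONE" into a table with > 1000 physical partitions.
import org.apache.spark.sql.SparkSession

val spark = SparkSession.builder().appName("hudi-4261-repro").getOrCreate()
val df = spark.read.parquet("/tmp/source")  // > 1000 distinct values of "partition"

df.write.format("hudi").
  option("hoodie.table.name", "repro_table").
  option("hoodie.datasource.write.operation", "bulk_insert").
  option("hoodie.bulkinsert.sort.mode", "NONE").              // no re-partitioning/sorting
  option("hoodie.datasource.write.recordkey.field", "uuid").
  option("hoodie.datasource.write.partitionpath.field", "partition").
  option("hoodie.bulkinsert.shuffle.parallelism", "200").     // N logical partitions
  mode("append").
  save("/tmp/hudi/repro_table")
{code}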

 

This happens for the same reasons as HUDI-3883: every logical partition handled by Spark (say we have N of these, equal to the shuffling parallelism in Hudi) will likely contain a record from every physical partition on disk (say we have M of these), since no re-partitioning is done to align the data with the actual partition column. Because of that, every logical partition will be writing into every physical one.

This will eventually produce (see the arithmetic sketch below):
 # M * N files in the table
 # For every file being written, Hudi will keep a "handle" in memory, which in turn holds a full buffer's worth of Parquet data (until flushed)
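To make the memory pressure concrete, a back-of-the-envelope sketch. The M and buffer-size figures below are illustrative assumptions, not measurements from this ticket (120 MB is in the ballpark of Hudi's default Parquet file/block sizing):

{code:scala}
// Worst-case open-handle memory per Spark task under sort mode NONE,
// using illustrative numbers (not measured):
val m = 1000L                 // M: physical partitions on disk
val bufferBytes = 120L << 20  // ~120 MB of buffered Parquet data per open handle

// Each task may end up with one open handle per physical partition:
val perTaskBytes = m * bufferBytes
println(s"worst case per task: ${perTaskBytes / (1L << 30)} GiB")  // ~117 GiB
{code}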

This ultimately leads to an OOM.
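For contrast (not a fix proposed here, just to illustrate the mechanism): if the input is grouped by the physical partition column before the write, each task holds at most roughly M/N open handles instead of M. A hedged sketch, reusing the placeholder names from the repro above and assuming NONE mode only coalesces the input rather than shuffling it:

{code:scala}
// Hash-partitioning the input by the partition column up front means each
// of the N tasks owns a disjoint subset of physical partitions, bounding
// its simultaneously open handles to ~M/N (5 for M=1000, N=200).
val aligned = df.repartition(200, df("partition"))  // N = 200, keyed by partition

aligned.write.format("hudi").
  option("hoodie.table.name", "repro_table").
  option("hoodie.datasource.write.operation", "bulk_insert").
  option("hoodie.bulkinsert.sort.mode", "NONE").
  option("hoodie.datasource.write.recordkey.field", "uuid").
  option("hoodie.datasource.write.partitionpath.field", "partition").
  mode("append").
  save("/tmp/hudi/repro_table")
{code}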

 

!Screen Shot 2022-06-15 at 6.06.06 PM.png!


> OOM in bulk-insert when using "NONE" sort-mode for table w/ large # of partitions
> ---------------------------------------------------------------------------------
>
>                 Key: HUDI-4261
>                 URL: https://issues.apache.org/jira/browse/HUDI-4261
>             Project: Apache Hudi
>          Issue Type: Bug
>            Reporter: Alexey Kudinkin
>            Assignee: Alexey Kudinkin
>            Priority: Blocker
>             Fix For: 0.12.0
>
>         Attachments: Screen Shot 2022-06-15 at 6.06.06 PM.png
>



--
This message was sent by Atlassian Jira
(v8.20.7#820007)