Posted to commits@hudi.apache.org by "Raymond Xu (Jira)" <ji...@apache.org> on 2023/03/26 03:29:00 UTC
[jira] [Updated] (HUDI-5685) Fix performance gap in Bulk Insert row-writing path with enabled de-duplication
[ https://issues.apache.org/jira/browse/HUDI-5685?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]
Raymond Xu updated HUDI-5685:
-----------------------------
Sprint: Sprint 2023-01-31, Sprint 2023-02-14, Sprint 2023-02-28 (was: Sprint 2023-01-31, Sprint 2023-02-14)
> Fix performance gap in Bulk Insert row-writing path with enabled de-duplication
> -------------------------------------------------------------------------------
>
> Key: HUDI-5685
> URL: https://issues.apache.org/jira/browse/HUDI-5685
> Project: Apache Hudi
> Issue Type: Bug
> Reporter: Alexey Kudinkin
> Assignee: Alexey Kudinkin
> Priority: Blocker
> Labels: pull-request-available
> Fix For: 0.14.0
>
>
> Currently, when the flag {{hoodie.combine.before.insert}} is set to true and {{hoodie.bulkinsert.sort.mode}} is set to {{NONE}}, Bulk Insert row-writing performance degrades considerably due to the following sequence of events:
> * During de-duplication (within {{dedupRows}}), records in the incoming RDD are reshuffled (by Spark's default {{HashPartitioner}}) based on {{(partition-path, record-key)}} into N partitions
> * Since {{BulkInsertSortMode.NONE}} is used as the partitioner, no re-partitioning is performed afterwards, and therefore each Spark task might be writing into up to M table partitions
> * This in turn causes an explosion in the number of (small) files created, potentially N x M, hurting both write performance and the table's layout
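The mechanism above can be illustrated with a small self-contained simulation (hypothetical code, not taken from Hudi): hash-partitioning deduplicated records by their full {{(partition-path, record-key)}} key scatters every table partition across nearly every Spark task, so with no subsequent re-partitioning each task opens one file per table partition it touches.

```python
# Simulate the dedup shuffle: hash-partition (partition_path, record_key)
# pairs into N Spark partitions, mimicking Spark's default HashPartitioner.
from collections import defaultdict

def hash_partition(records, num_spark_partitions):
    """Bucket records by hash of the full dedup key (partition_path, record_key)."""
    buckets = defaultdict(list)
    for partition_path, record_key in records:
        bucket = hash((partition_path, record_key)) % num_spark_partitions
        buckets[bucket].append((partition_path, record_key))
    return buckets

# 4 table partitions x 100 record keys each, written by 8 Spark tasks.
records = [(f"2023/03/{d:02d}", f"key-{k}") for d in range(4) for k in range(100)]
buckets = hash_partition(records, num_spark_partitions=8)

# Because the hash key includes the record key, each task holds records
# from (almost) every table partition and must open one file per
# table partition it touches -- approaching 8 tasks x 4 partitions = 32
# small files, versus ~4 files if records were re-partitioned by
# partition_path alone before writing.
files_created = sum(len({p for p, _ in recs}) for recs in buckets.values())
```

This is only a sketch of the shuffle behavior; the fix tracked by this ticket is about avoiding this extra file fan-out in the actual row-writing path.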
--
This message was sent by Atlassian Jira
(v8.20.10#820010)