Posted to commits@hudi.apache.org by "Raymond Xu (Jira)" <ji...@apache.org> on 2022/09/01 01:04:00 UTC

[jira] [Commented] (HUDI-4753) More accurate evaluation of log record during log writing or compaction

    [ https://issues.apache.org/jira/browse/HUDI-4753?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17598678#comment-17598678 ] 

Raymond Xu commented on HUDI-4753:
----------------------------------

For [~guoyihua] to triage and set the priority

> More accurate evaluation of log record during log writing or compaction
> -----------------------------------------------------------------------
>
>                 Key: HUDI-4753
>                 URL: https://issues.apache.org/jira/browse/HUDI-4753
>             Project: Apache Hudi
>          Issue Type: Improvement
>          Components: metadata
>            Reporter: Yuwei Xiao
>            Assignee: Ethan Guo
>            Priority: Major
>             Fix For: 0.13.0
>
>
> In the current log-writing path, the avgRecordSize is taken from the first incoming log record, which may not be accurate, especially in the metadata table case.
>  
> In metadata table writing, the first log record is always `__all_partition__`, whose size may be much larger than that of a normal partition record.
>  
> This will cause performance issues in log writing and compaction, as we write too many log blocks and spill unnecessary records to disk.
>  
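One way to make the estimate more accurate, as a hypothetical sketch (the class and method names below are illustrative, not Hudi's actual API): maintain a running average over all records written so far, falling back to a configured default before any records are seen, instead of pinning the estimate to the size of the first record (which can be an outlier such as the `__all_partition__` record in the metadata table).

```java
// Hypothetical sketch of a running-average record size estimator.
// Names are illustrative; this is not the actual Hudi implementation.
class AvgRecordSizeEstimator {
    private long totalBytes = 0;
    private long recordCount = 0;
    private final long fallbackSize;

    AvgRecordSizeEstimator(long fallbackSize) {
        // Used before any record has been observed.
        this.fallbackSize = fallbackSize;
    }

    void onRecordWritten(long sizeInBytes) {
        // Accumulate every record, so a single large first
        // record no longer dominates the estimate.
        totalBytes += sizeInBytes;
        recordCount++;
    }

    long avgRecordSize() {
        return recordCount == 0 ? fallbackSize : totalBytes / recordCount;
    }
}
```

With such an estimator, a 4 KB `__all_partition__` record followed by many ~100-byte partition records would quickly pull the average back toward the typical record size, so block sizing and spill decisions stay reasonable.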



--
This message was sent by Atlassian Jira
(v8.20.10#820010)