Posted to commits@hudi.apache.org by "konwu (Jira)" <ji...@apache.org> on 2022/03/31 07:43:00 UTC

[jira] [Updated] (HUDI-3758) Optimize flink partition table with BucketIndex

     [ https://issues.apache.org/jira/browse/HUDI-3758?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

konwu updated HUDI-3758:
------------------------
    Description: 
When using the flink bucket index, I hit two problems:
 * not all streamWriter tasks are used when a partition table has a small bucket number
 * the job crashes with the following steps

 # start the job
 # kill it before the first commit succeeds (leaving some log files)
 # restart the job; it runs normally after one successful commit
 # kill the job and restart; it throws `Duplicate fileID`
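The first problem can be illustrated with a small sketch. A bucket index routes each record to a fixed bucket, and each bucket maps deterministically to one writer subtask, so when the bucket number is smaller than the write parallelism some subtasks never receive data. The routing below (`assign_subtask`, modulo hashing) is a hypothetical simplification for illustration, not Hudi's actual implementation:

```python
def assign_subtask(bucket_id: int, parallelism: int) -> int:
    # Hypothetical routing: a bucket index must send a given bucket
    # to the same writer subtask every time, so the mapping is a
    # pure function of the bucket id.
    return bucket_id % parallelism

parallelism = 8   # streamWriter parallelism
num_buckets = 4   # small bucket number per partition

# Count how many distinct subtasks actually receive records.
used = {assign_subtask(b, parallelism) for b in range(num_buckets)}
print(len(used))  # only 4 of the 8 subtasks are used
```

With 4 buckets and parallelism 8, half of the writer subtasks stay idle, which is the under-utilization reported above.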

  was:
When using the flink bucket index, I hit two problems:
 * not all streamWriter tasks are used when a partition table has a small bucket number
 * the job crashes with the following steps

 # start the job
 # kill it before the first commit succeeds (leaving some log files)
 # restart the job; it runs normally
 # kill the job and restart; it throws `Duplicate fileID`


> Optimize flink partition table with BucketIndex
> -----------------------------------------------
>
>                 Key: HUDI-3758
>                 URL: https://issues.apache.org/jira/browse/HUDI-3758
>             Project: Apache Hudi
>          Issue Type: Improvement
>          Components: flink
>            Reporter: konwu
>            Priority: Major
>             Fix For: 0.11.0
>
>
> When using the flink bucket index, I hit two problems:
>  * not all streamWriter tasks are used when a partition table has a small bucket number
>  * the job crashes with the following steps
>  # start the job
>  # kill it before the first commit succeeds (leaving some log files)
>  # restart the job; it runs normally after one successful commit
>  # kill the job and restart; it throws `Duplicate fileID`



--
This message was sent by Atlassian Jira
(v8.20.1#820001)