Posted to commits@hudi.apache.org by "Rohan (Jira)" <ji...@apache.org> on 2023/04/06 17:55:00 UTC

[jira] [Updated] (HUDI-6047) Clustering operation on consistent hashing resulting in duplicate data

     [ https://issues.apache.org/jira/browse/HUDI-6047?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Rohan updated HUDI-6047:
------------------------
    Description: 
Hudi chooses the consistent hashing bucket metadata file on the basis of the *replace commit logged on the Hudi active timeline*. But *once Hudi archives the timeline*, it falls back to the *default consistent hashing bucket metadata*, that is *00000000000000.hashing_meta*, which results in writing duplicate records to the table.

The above behaviour results in duplicate data in the Hudi table.

 

Check the loadMetadata function of the consistent hashing index implementation:

https://github.com/apache/hudi/blob/4da64686cfbcb6471b1967091401565f58c835c7/hudi-client/hudi-spark-client/src/main/java/org/apache/hudi/index/bucket/HoodieSparkConsistentBucketIndex.java#L190
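
For illustration, here is a minimal, self-contained sketch of the problematic lookup (the class, method, and parameter names below are hypothetical, not Hudi's actual API): only a metadata file whose timestamp still matches an instant on the active timeline is accepted, so once the clustering's replace commit is archived the loader silently falls back to the bootstrap file *00000000000000.hashing_meta* and loads a stale bucket layout.

{code:java}
import java.util.List;
import java.util.Optional;
import java.util.Set;

public class HashingMetadataSelection {

  // Bootstrap metadata file written when the bucket index is initialized.
  static final String DEFAULT_META = "00000000000000.hashing_meta";

  // Hypothetical sketch of the lookup: only metadata files whose timestamp
  // matches an instant still present on the ACTIVE timeline are considered;
  // everything else falls through to the bootstrap file.
  static String chooseMetaFile(List<String> metaFiles, Set<String> activeReplaceInstants) {
    Optional<String> latestActive = metaFiles.stream()
        .map(f -> f.replace(".hashing_meta", ""))
        .filter(activeReplaceInstants::contains) // archived replace commits fail this check
        .max(String::compareTo);                 // instant timestamps sort lexicographically
    // Fallback that causes the bug: the newer (archived) metadata is ignored
    // and the stale bootstrap bucket layout is loaded instead.
    return latestActive.map(ts -> ts + ".hashing_meta").orElse(DEFAULT_META);
  }

  public static void main(String[] args) {
    List<String> files = List.of(DEFAULT_META, "20230401101010000.hashing_meta");
    // The replace commit 20230401101010000 has been archived, so the active
    // timeline no longer lists it and the default file is (wrongly) chosen.
    System.out.println(chooseMetaFile(files, Set.of())); // -> 00000000000000.hashing_meta
  }
}
{code}

The real loadMetadata resolves instants from the table's timeline and file system rather than from plain collections, but the fallback branch marked above is the behaviour this issue describes.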

 

Let me know if anything else is needed.

  was:
Hudi chooses the consistent hashing bucket metadata file on the basis of the *replace commit logged on the Hudi active timeline*. But *once Hudi archives the timeline*, it falls back to the *default consistent hashing bucket metadata* (*00000000000000.hashing_meta*), which results in writing duplicate records to the table.

The above behaviour results in duplicate data in the Hudi table.

 

Check the loadMetadata function of the consistent hashing index implementation:

https://github.com/apache/hudi/blob/4da64686cfbcb6471b1967091401565f58c835c7/hudi-client/hudi-spark-client/src/main/java/org/apache/hudi/index/bucket/HoodieSparkConsistentBucketIndex.java#L190

 

Let me know if anything else is needed.


> Clustering operation on consistent hashing resulting in duplicate data
> ----------------------------------------------------------------------
>
>                 Key: HUDI-6047
>                 URL: https://issues.apache.org/jira/browse/HUDI-6047
>             Project: Apache Hudi
>          Issue Type: Bug
>            Reporter: Rohan
>            Priority: Major
>
> Hudi chooses the consistent hashing bucket metadata file on the basis of the *replace commit logged on the Hudi active timeline*. But *once Hudi archives the timeline*, it falls back to the *default consistent hashing bucket metadata*, that is *00000000000000.hashing_meta*, which results in writing duplicate records to the table.
> The above behaviour results in duplicate data in the Hudi table.
>  
> Check the loadMetadata function of the consistent hashing index implementation:
> https://github.com/apache/hudi/blob/4da64686cfbcb6471b1967091401565f58c835c7/hudi-client/hudi-spark-client/src/main/java/org/apache/hudi/index/bucket/HoodieSparkConsistentBucketIndex.java#L190
>  
> Let me know if anything else is needed.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)