Posted to commits@hudi.apache.org by "rohan-uptycs (via GitHub)" <gi...@apache.org> on 2023/04/20 05:05:45 UTC

[GitHub] [hudi] rohan-uptycs commented on a diff in pull request #8503: [HUDI-6047] Clustering operation on consistent hashing index resulting in duplicate data

rohan-uptycs commented on code in PR #8503:
URL: https://github.com/apache/hudi/pull/8503#discussion_r1172085324


##########
hudi-client/hudi-client-common/src/main/java/org/apache/hudi/client/HoodieTimelineArchiver.java:
##########
@@ -441,6 +441,8 @@ private Stream<HoodieInstant> getCommitInstantsToArchive() throws IOException {
       Option<HoodieInstant> oldestInstantToRetainForClustering =
           ClusteringUtils.getOldestInstantToRetainForClustering(table.getActiveTimeline(), table.getMetaClient());
 
+      table.getIndex().updateMetadata(table);
+

Review Comment:
   The archival process will archive the replace commit from the active timeline. Once it does that, every Hudi writer will start referring to the default metadata index file, **00000000000000.hashing_meta** (check the function **loadMetadata(HoodieTable table, String partition)**). That is why it is necessary to trigger the update-metadata function here: it brings the latest metadata commit file in sync with **00000000000000.hashing_meta** before the replace commit is archived.
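   
   For illustration, here is a minimal sketch of the fallback behavior described above. The class and helper names are hypothetical, not Hudi's actual API; the point is that once the replace commit is archived, a reader that cannot find the instant-named hashing metadata file falls back to the default file, which is stale unless it was synced first:
   
   ```java
   // Hypothetical sketch (not Hudi's real loadMetadata implementation):
   // readers prefer the hashing metadata file named after the latest
   // replace-commit instant; if that instant was archived and the file is
   // absent, they fall back to 00000000000000.hashing_meta. If the default
   // file was never brought in sync, writers see an outdated bucket layout,
   // which is how the duplicate data in this issue can arise.
   import java.nio.file.Files;
   import java.nio.file.Path;
   import java.util.Optional;
   
   class HashingMetadataLoader {
     static final String DEFAULT_INSTANT = "00000000000000";
   
     // Resolve which metadata file to read for a bucket-index partition.
     static Optional<Path> resolveMetadataFile(Path partitionDir, String latestInstant) {
       Path latest = partitionDir.resolve(latestInstant + ".hashing_meta");
       if (Files.exists(latest)) {
         return Optional.of(latest);
       }
       // Fallback path: stale unless updateMetadata() rewrote it before archival.
       Path fallback = partitionDir.resolve(DEFAULT_INSTANT + ".hashing_meta");
       return Files.exists(fallback) ? Optional.of(fallback) : Optional.empty();
     }
   }
   ```
   
   Calling `table.getIndex().updateMetadata(table)` before archival (as in this diff) is what keeps that fallback file consistent with the latest committed hashing metadata.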



-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: commits-unsubscribe@hudi.apache.org

For queries about this service, please contact Infrastructure at:
users@infra.apache.org