Posted to commits@hudi.apache.org by GitBox <gi...@apache.org> on 2022/02/01 21:57:47 UTC

[GitHub] [hudi] nsivabalan commented on a change in pull request #4212: [HUDI-2925] Fix duplicate cleaning of same files when unfinished clean operations are present.

nsivabalan commented on a change in pull request #4212:
URL: https://github.com/apache/hudi/pull/4212#discussion_r797066918



##########
File path: hudi-client/hudi-client-common/src/main/java/org/apache/hudi/client/AbstractHoodieWriteClient.java
##########
@@ -708,15 +708,25 @@ public HoodieCleanMetadata clean(String cleanInstantTime, boolean skipLocking) t
    * @param skipLocking if this is triggered by another parent transaction, locking can be skipped.
    */
   public HoodieCleanMetadata clean(String cleanInstantTime, boolean scheduleInline, boolean skipLocking) throws HoodieIOException {
-    if (scheduleInline) {
-      scheduleTableServiceInternal(cleanInstantTime, Option.empty(), TableServiceType.CLEAN);
-    }
     LOG.info("Cleaner started");
     final Timer.Context timerContext = metrics.getCleanCtx();
     LOG.info("Cleaned failed attempts if any");
     CleanerUtils.rollbackFailedWrites(config.getFailedWritesCleanPolicy(),
         HoodieTimeline.CLEAN_ACTION, () -> rollbackFailedWrites(skipLocking));
-    HoodieCleanMetadata metadata = createTable(config, hadoopConf).clean(context, cleanInstantTime, skipLocking);
+
+    // If there are pending clean operations, attempt them before scheduling the next clean. This prevents the
+    // next clean from deciding to clean the same files which might be under clean from pending operations.
+    HoodieCleanMetadata metadata = null;
+    HoodieTable table = createTable(config, hadoopConf);
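
   Editor's note: for illustration only, a minimal, self-contained sketch of the
   pattern the patch above introduces -- re-attempt unfinished clean plans before
   planning a new one, so a new plan cannot select files already covered by a
   pending plan. All type and method names below are hypothetical stand-ins, not
   the actual Hudi API.

       import java.util.List;

       class CleanFlowSketch {

         // Hypothetical stand-ins for the table and clean-result types.
         interface Table {
           List<String> pendingCleanInstants();        // requested/inflight clean instants
           CleanMetadata executeClean(String instant); // run an already-planned clean
           String scheduleNewClean();                  // write a brand-new clean plan
         }

         interface CleanMetadata {}

         CleanMetadata clean(Table table) {
           CleanMetadata metadata = null;
           // Re-attempt every unfinished clean first, so the new plan below
           // cannot pick the same files a pending plan already covers.
           for (String pending : table.pendingCleanInstants()) {
             metadata = table.executeClean(pending);
           }
           // Only after the pending plans are done is it safe to plan the next clean.
           String next = table.scheduleNewClean();
           return table.executeClean(next);
         }
       }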

Review comment:
   Had some discussions with Ethan on this; here is what we feel.
   With spurious deletes ignored, this should not be an issue; we only need to worry when that config is set to false.
   So, what we can do is: whenever Hudi is looking to schedule a clean, if another inflight clean is found, no scheduling will happen. A new clean will be scheduled only when there are no pending inflight cleans. We do not want to trigger execution of pending cleans during the scheduling phase. A sketch of this guard follows below.
   The other option is: every time we want to plan a clean, we would have to read every other inflight clean plan, which we feel is too much to ask for the problem we are solving, because this only happens when the metadata table is enabled, the timeline server is disabled, and spurious deletes are not ignored.
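
   Editor's note: to make the suggested guard concrete, a minimal sketch with
   hypothetical names (not the actual Hudi API) -- scheduling becomes a no-op
   while any clean instant is still requested or inflight:

       import java.util.List;
       import java.util.Optional;

       class CleanSchedulerSketch {

         // Hypothetical view of the timeline for this sketch.
         interface Timeline {
           List<String> pendingCleanInstants(); // requested or inflight cleans
         }

         // Returns the newly scheduled instant, or empty if an unfinished clean
         // must complete (or be re-attempted) before a new one may be planned.
         Optional<String> scheduleClean(Timeline timeline, String newInstantTime) {
           if (!timeline.pendingCleanInstants().isEmpty()) {
             // An inflight clean exists: do not schedule, and do not execute it
             // here either -- execution is not triggered during scheduling.
             return Optional.empty();
           }
           // ... generate and persist the new clean plan here ...
           return Optional.of(newInstantTime);
         }
       }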
   
   
   




-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: commits-unsubscribe@hudi.apache.org

For queries about this service, please contact Infrastructure at:
users@infra.apache.org