Posted to commits@hudi.apache.org by GitBox <gi...@apache.org> on 2022/10/31 08:45:37 UTC

[GitHub] [hudi] TengHuo commented on a diff in pull request #6733: [HUDI-4880] Fix corrupted parquet file issue left over by cancelled compaction task

TengHuo commented on code in PR #6733:
URL: https://github.com/apache/hudi/pull/6733#discussion_r1009158080


##########
hudi-flink-datasource/hudi-flink/src/main/java/org/apache/hudi/sink/compact/CompactionPlanOperator.java:
##########
@@ -129,9 +128,6 @@ private void scheduleCompaction(HoodieFlinkTable<?> table, long checkpointId) th
       List<CompactionOperation> operations = compactionPlan.getOperations().stream()
           .map(CompactionOperation::convertFromAvroRecordInstance).collect(toList());
       LOG.info("Execute compaction plan for instant {} as {} file groups", compactionInstantTime, operations.size());
-      WriteMarkersFactory
-          .get(table.getConfig().getMarkersType(), table, compactionInstantTime)
-          .deleteMarkerDir(table.getContext(), table.getConfig().getMarkersDeleteParallelism());

Review Comment:
   The `HoodieMergeHandle.init` method calls `createMarkerFile` to create a marker file for the new data file it is about to write during compaction. So every marker file represents one new base file.
   
   https://github.com/apache/hudi/blob/efe553b327bc025d242afa37221a740dca9b1ea6/hudi-client/hudi-client-common/src/main/java/org/apache/hudi/io/HoodieMergeHandle.java#L201
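   
   As a minimal sketch of what that marker bookkeeping amounts to (file layout and names here are illustrative, not Hudi's actual API):
   
   ```java
   import java.io.IOException;
   import java.nio.file.Files;
   import java.nio.file.Path;
   
   // Illustrative sketch, not Hudi code: before a new base file is written,
   // an empty marker file with a matching name is created under the
   // instant's temp marker dir, so every marker maps to one new base file.
   class MarkerSketch {
     static Path createMarkerFile(Path tempDir, String instantTime, String dataFileName)
         throws IOException {
       Path markerDir = Files.createDirectories(tempDir.resolve(instantTime));
       // e.g. "f1-0-1_20221031.parquet" -> "f1-0-1_20221031.parquet.marker.MERGE"
       return Files.createFile(markerDir.resolve(dataFileName + ".marker.MERGE"));
     }
   }
   ```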
   
   The method `HoodieTable.reconcileAgainstMarkers` deletes every data file that has a marker file but does not appear in `List<HoodieWriteStat> stats`.
   
   https://github.com/apache/hudi/blob/4f6f15c3c761621eaaa1b3b52e0c2841626afe53/hudi-client/hudi-client-common/src/main/java/org/apache/hudi/table/HoodieTable.java#L674
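   
   A simplified sketch of that reconcile step (not the actual Hudi code, which also reads the markers from storage and deletes the files in parallel):
   
   ```java
   import java.util.HashSet;
   import java.util.List;
   import java.util.Set;
   
   // Simplified sketch of reconcileAgainstMarkers: any data path that has a
   // marker but is missing from the committed write stats is a leftover from
   // a failed attempt and gets deleted.
   class ReconcileSketch {
     static Set<String> invalidDataPaths(Set<String> pathsFromMarkers,
                                         List<String> committedPaths) {
       Set<String> invalid = new HashSet<>(pathsFromMarkers);
       committedPaths.forEach(invalid::remove); // keep only un-committed files
       return invalid;                          // these are deleted from storage
     }
   }
   ```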
   
   In summary, `WriteMarkers` contains all data files created in the current batch (and in previous failed batches for the same instant), while `List<HoodieWriteStat> stats` contains only the data files committed in the current batch.
   
   So, `Set(invalidDataPaths)` = `Set(data file in WriteMarkers)` - `Set(data file in List<HoodieWriteStat> stats)`
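   
   Plugging made-up file names into that formula (using the `ReconcileSketch` above; instant `C` fails on its first attempt and commits on its second):
   
   ```java
   // Attempt 1 of instant C failed after writing f1; attempt 2 committed f2.
   Set<String> fromMarkers = Set.of("p1/f1_0-1-2_C.parquet", "p1/f2_0-1-2_C.parquet");
   List<String> committed  = List.of("p1/f2_0-1-2_C.parquet");
   // invalid == {"p1/f1_0-1-2_C.parquet"} -> the leftover f1 is cleaned up,
   // but only because its marker file still exists.
   Set<String> invalid = ReconcileSketch.invalidDataPaths(fromMarkers, committed);
   ```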
   
   In Hudi Flink online compaction, if anything goes wrong during compaction, it retries automatically (it does not restart the whole pipeline, only the compaction). **So if we delete the marker directory here, the data files left behind by a previous failed compaction attempt (a failed compaction for the same instant) can no longer be deleted, because all of their marker files are gone.** These un-committed data files will cause `corrupted data file` exceptions in later reads.
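   
   To make the failure mode concrete, here is a toy, runnable simulation of that sequence (all paths and names invented; it only mimics the marker bookkeeping, not real compaction):
   
   ```java
   import java.nio.file.Files;
   import java.nio.file.Path;
   import java.util.Comparator;
   
   // Toy simulation of the failure mode; no Hudi code involved.
   public class RetrySketch {
     public static void main(String[] args) throws Exception {
       Path table = Files.createTempDirectory("hudi-sketch");
       Path markerDir = Files.createDirectories(table.resolve(".temp/C"));
   
       // Attempt 1: create the marker, start writing f1, then get cancelled.
       Files.createFile(markerDir.resolve("f1.parquet.marker.MERGE"));
       Files.writeString(table.resolve("f1.parquet"), "partial bytes"); // corrupted
   
       // The removed code path: delete the whole marker dir before the retry.
       try (var paths = Files.walk(markerDir)) {
         paths.sorted(Comparator.reverseOrder()).forEach(p -> p.toFile().delete());
       }
   
       // Attempt 2 commits only f2; reconcile sees no marker for f1, so the
       // corrupted f1.parquet is never cleaned up and breaks later reads.
       System.out.println("corrupted f1 survives: "
           + Files.exists(table.resolve("f1.parquet")));
     }
   }
   ```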



-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: commits-unsubscribe@hudi.apache.org

For queries about this service, please contact Infrastructure at:
users@infra.apache.org