Posted to commits@hudi.apache.org by GitBox <gi...@apache.org> on 2021/02/24 04:26:22 UTC

[GitHub] [hudi] simonsssu commented on a change in pull request #2593: [HUDI-1632] Supports merge on read write mode for Flink writer

simonsssu commented on a change in pull request #2593:
URL: https://github.com/apache/hudi/pull/2593#discussion_r581614750



##########
File path: hudi-client/hudi-flink-client/src/main/java/org/apache/hudi/client/HoodieFlinkWriteClient.java
##########
@@ -208,12 +210,32 @@ public void bootstrap(Option<Map<String, String>> extraMetadata) {
 
   @Override
   public void commitCompaction(String compactionInstantTime, List<WriteStatus> writeStatuses, Option<Map<String, String>> extraMetadata) throws IOException {
-    throw new HoodieNotSupportedException("Compaction is not supported yet");
+    HoodieFlinkTable<T> table = HoodieFlinkTable.create(config, (HoodieFlinkEngineContext) context);
+    HoodieCommitMetadata metadata = FlinkCompactHelpers.newInstance().createCompactionMetadata(
+        table, compactionInstantTime, writeStatuses, config.getSchema());
+    extraMetadata.ifPresent(m -> m.forEach(metadata::addMetadata));
+    completeCompaction(metadata, writeStatuses, table, compactionInstantTime);
   }
 

Review comment:
       It seems this only contains commitCompaction and the plan scheduler; where is the actual compaction logic? I see that a CompactFunction and a CompactCommitSink exist, but I can't find where they are called.
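One notable line in the diff is `extraMetadata.ifPresent(m -> m.forEach(metadata::addMetadata))`, which copies caller-supplied extra metadata into the commit metadata before the compaction is completed. A minimal self-contained sketch of that pattern is below; note it uses `java.util.Optional` in place of Hudi's own `Option` type, and `CommitMetadata` is a hypothetical stand-in mimicking only the `addMetadata(key, value)` part of the real `HoodieCommitadata` class:

```java
import java.util.HashMap;
import java.util.Map;
import java.util.Optional;

public class MetadataMergeSketch {
    // Hypothetical stand-in for Hudi's HoodieCommitMetadata; only the
    // addMetadata(key, value) behavior is mimicked here.
    static class CommitMetadata {
        private final Map<String, String> extra = new HashMap<>();

        void addMetadata(String key, String value) {
            extra.put(key, value);
        }

        Map<String, String> getExtraMetadata() {
            return extra;
        }
    }

    public static void main(String[] args) {
        CommitMetadata metadata = new CommitMetadata();

        // Same pattern as in the diff: if the caller passed extra metadata,
        // copy every entry into the commit metadata via a method reference.
        Optional<Map<String, String>> extraMetadata =
            Optional.of(Map.of("checkpoint", "42"));
        extraMetadata.ifPresent(m -> m.forEach(metadata::addMetadata));

        System.out.println(metadata.getExtraMetadata().get("checkpoint"));
    }
}
```

If the caller passes `Optional.empty()`, the `ifPresent` body never runs and the commit metadata is left unchanged, which is why no explicit null/empty check is needed at the call site.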




----------------------------------------------------------------
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
users@infra.apache.org