Posted to commits@hudi.apache.org by GitBox <gi...@apache.org> on 2022/09/27 00:43:08 UTC

[GitHub] [hudi] danny0405 commented on a diff in pull request #6740: [HUDI-4897] Refactor the merge handle in CDC mode

danny0405 commented on code in PR #6740:
URL: https://github.com/apache/hudi/pull/6740#discussion_r980626021


##########
hudi-client/hudi-client-common/src/main/java/org/apache/hudi/io/HoodieMergeHandle.java:
##########
@@ -210,18 +203,6 @@ private void init(String fileId, String partitionPath, HoodieBaseFile baseFileTo
       // Create the writer for writing the new version file
       fileWriter = createNewFileWriter(instantTime, newFilePath, hoodieTable, config,
         writeSchemaWithMetaFields, taskContextSupplier);
-
-      // init the cdc logger
-      this.cdcEnabled = config.getBooleanOrDefault(HoodieTableConfig.CDC_ENABLED);

Review Comment:
   Making everything into one class is hard to extend for projects with huge code bases. And I don't like your refactoring of SparkRDDRelation and HoodieWriteClient, for the same reason. I have fixed so many bugs/regressions on the Flink side after your refactoring that it makes me feel bad and makes it hard to go on with this project.
   
   Do you think doing per-record logic switching for the non-CDC write path is reasonable? Sorry, I don't think so.
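   
   To make the design point concrete, here is a minimal, hypothetical sketch (simplified stand-in classes, not the actual HoodieMergeHandle code) contrasting the two approaches: a per-record flag check inside the shared handle versus a CDC-specific subclass that keeps the non-CDC write path branch-free.
   
       // Design A: per-record flag check inside the common handle.
       // Every record on the non-CDC path still evaluates the branch,
       // and CDC concerns leak into the shared class.
       class MergeHandleFlagged {
         private final boolean cdcEnabled;
       
         MergeHandleFlagged(boolean cdcEnabled) {
           this.cdcEnabled = cdcEnabled;
         }
       
         void write(String record) {
           // ... common merge/write logic ...
           if (cdcEnabled) {        // checked for every record, even when CDC is off
             logChange(record);
           }
         }
       
         void logChange(String record) { /* append to the CDC log */ }
       }
       
       // Design B: keep the common handle CDC-free and override the write
       // hook in a dedicated subclass, so CDC behavior lives in one
       // extension point and the non-CDC path carries no per-record branch.
       class MergeHandle {
         void write(String record) {
           // ... common merge/write logic only ...
         }
       }
       
       class CdcMergeHandle extends MergeHandle {
         @Override
         void write(String record) {
           super.write(record);
           logChange(record);       // CDC-specific work isolated here
         }
       
         void logChange(String record) { /* append to the CDC log */ }
       }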
   
   So, ignored.



-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: commits-unsubscribe@hudi.apache.org

For queries about this service, please contact Infrastructure at:
users@infra.apache.org