Posted to commits@hudi.apache.org by "xushiyan (via GitHub)" <gi...@apache.org> on 2023/04/21 05:33:13 UTC

[GitHub] [hudi] xushiyan commented on a diff in pull request #8520: [HUDI-6115] Hardening expectation of corruptRecordColumn in ChainedTransformer.

xushiyan commented on code in PR #8520:
URL: https://github.com/apache/hudi/pull/8520#discussion_r1173318675


##########
hudi-utilities/src/main/java/org/apache/hudi/utilities/transform/ChainedTransformer.java:
##########
@@ -46,9 +53,33 @@ public List<String> getTransformersNames() {
   @Override
   public Dataset<Row> apply(JavaSparkContext jsc, SparkSession sparkSession, Dataset<Row> rowDataset, TypedProperties properties) {
     Dataset<Row> dataset = rowDataset;
+    boolean isErrorTableEnabled = properties.getBoolean(ERROR_TABLE_ENABLED.key(), ERROR_TABLE_ENABLED.defaultValue());
+    if (isErrorTableEnabled && !isErrorRecordPresent(dataset)) {
+      dataset = dataset.withColumn(ERROR_TABLE_CURRUPT_RECORD_COL_NAME, lit(null));
+    }
     for (Transformer t : transformers) {
       dataset = t.apply(jsc, sparkSession, dataset, properties);
+      // Validate after every stage to catch the case where one transformer drops the column and a later one re-adds it.
+      validate(dataset, isErrorTableEnabled);

Review Comment:
   If using the error table requires some column to be present, we should enforce that rule within the error table itself. The transformer should not need to know whether the error table is enabled. There could be a cleaner solution here.
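
   One possible shape for such a solution, as a minimal sketch: move both the column bootstrap and the per-stage check into an error-table helper, so ChainedTransformer never reads ERROR_TABLE_ENABLED itself. The class and method names below are illustrative assumptions, not Hudi's actual API; only the Spark calls (withColumn, lit, columns) are real.

   import org.apache.spark.sql.Dataset;
   import org.apache.spark.sql.Row;
   import static org.apache.spark.sql.functions.lit;

   // Hypothetical helper encapsulating the corrupt-record column contract.
   public final class ErrorTableColumnGuard {
     private static final String CORRUPT_RECORD_COL = "_corrupt_record";

     private ErrorTableColumnGuard() {}

     // Ensure the corrupt-record column exists before any transformer runs.
     public static Dataset<Row> ensureCorruptRecordColumn(Dataset<Row> dataset) {
       return hasCorruptRecordColumn(dataset)
           ? dataset
           : dataset.withColumn(CORRUPT_RECORD_COL, lit(null));
     }

     // Fail fast if a transformer dropped the column mid-chain.
     public static void validateCorruptRecordColumn(Dataset<Row> dataset, String transformerName) {
       if (!hasCorruptRecordColumn(dataset)) {
         throw new IllegalStateException(
             "Transformer " + transformerName + " dropped required column " + CORRUPT_RECORD_COL);
       }
     }

     private static boolean hasCorruptRecordColumn(Dataset<Row> dataset) {
       return java.util.Arrays.asList(dataset.columns()).contains(CORRUPT_RECORD_COL);
     }
   }

   With a helper like this, the loop in ChainedTransformer.apply would call ensureCorruptRecordColumn once up front and validateCorruptRecordColumn after each stage, keeping the error-table policy in one place.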



-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: commits-unsubscribe@hudi.apache.org

For queries about this service, please contact Infrastructure at:
users@infra.apache.org