Posted to commits@hudi.apache.org by "xushiyan (via GitHub)" <gi...@apache.org> on 2023/03/21 01:43:33 UTC

[GitHub] [hudi] xushiyan commented on a diff in pull request #8248: [HUDI-5962] Adding timeline server support to integ test suite

xushiyan commented on code in PR #8248:
URL: https://github.com/apache/hudi/pull/8248#discussion_r1142820153


##########
hudi-client/hudi-client-common/src/main/java/org/apache/hudi/client/BaseHoodieWriteClient.java:
##########
@@ -1236,12 +1236,14 @@ protected void releaseResources(String instantTime) {
 
   @Override
   public void close() {
+    LOG.info("XXX Closing WriteClient ");
     // Stop timeline-server if running
     super.close();
     // Calling this here releases any resources used by your index, so make sure to finish any related operations
     // before this point
     this.index.close();
     this.tableServiceClient.close();
+    LOG.info("XXX Completed closing write client");

Review Comment:
   Fix the log content? The "XXX" prefixes look like leftover debug markers.
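
   For example, something along these lines (just a sketch of cleaner messages, assuming the "XXX" prefixes are leftover debug markers to drop; the exact wording is up to you):

       @Override
       public void close() {
         LOG.info("Closing the write client, stopping the embedded timeline server if running");
         // Stop timeline-server if running
         super.close();
         // Calling this here releases any resources used by your index, so make sure to finish any related operations
         // before this point
         this.index.close();
         this.tableServiceClient.close();
         LOG.info("Completed closing the write client");
       }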



##########
hudi-integ-test/src/main/java/org/apache/hudi/integ/testsuite/HoodieInlineTestSuiteWriter.java:
##########
@@ -65,7 +65,14 @@ public HoodieInlineTestSuiteWriter(JavaSparkContext jsc, Properties props, Hoodi
   }
 
   public void shutdownResources() {
-    // no-op for non continuous mode test suite writer.
+    if (cfg.useDeltaStreamer) {
+      log.info("Shutting down DS wrapper gracefully ");
+      this.deltaStreamerWrapper.shutdownGracefully();

Review Comment:
   If the deltastreamer wrapper is not null, we can just shut it down? i.e., we should not need to check the config to do this.
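
   Roughly like this (a sketch only; it assumes deltaStreamerWrapper is instantiated only when a deltastreamer is actually in use, so the null check alone suffices):

       public void shutdownResources() {
         // Shut down the deltastreamer wrapper if one was created, independent of the configured mode.
         if (this.deltaStreamerWrapper != null) {
           log.info("Shutting down DS wrapper gracefully");
           this.deltaStreamerWrapper.shutdownGracefully();
         }
       }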



##########
hudi-spark-datasource/hudi-spark-common/src/main/scala/org/apache/hudi/HoodieSparkSqlWriter.scala:
##########
@@ -649,12 +649,13 @@ object HoodieSparkSqlWriter {
     val tableSchemaResolver = new TableSchemaResolver(tableMetaClient)
     val latestTableSchemaFromCommitMetadata =
       toScalaOption(tableSchemaResolver.getTableAvroSchemaFromLatestCommit(false))
-    latestTableSchemaFromCommitMetadata.orElse {
+    latestTableSchemaFromCommitMetadata
+      /*.orElse {
       getCatalogTable(spark, tableId).map { catalogTable =>
         val (structName, namespace) = getAvroRecordNameAndNamespace(tableId.table)
         convertStructTypeToAvroSchema(catalogTable.schema, structName, namespace)
       }
-    }
+    }*/

Review Comment:
   Clean up the commented-out code.



-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: commits-unsubscribe@hudi.apache.org

For queries about this service, please contact Infrastructure at:
users@infra.apache.org