Posted to commits@hudi.apache.org by GitBox <gi...@apache.org> on 2020/05/04 18:54:49 UTC

[GitHub] [incubator-hudi] vinothchandar commented on a change in pull request #1580: [HUDI-852] adding check for table name for Append Save mode

vinothchandar commented on a change in pull request #1580:
URL: https://github.com/apache/incubator-hudi/pull/1580#discussion_r419654885



##########
File path: hudi-spark/src/main/scala/org/apache/hudi/HoodieSparkSqlWriter.scala
##########
@@ -83,6 +83,13 @@ private[hudi] object HoodieSparkSqlWriter {
     val fs = basePath.getFileSystem(sparkContext.hadoopConfiguration)
     var exists = fs.exists(new Path(basePath, HoodieTableMetaClient.METAFOLDER_NAME))
 
+    if (exists && mode == SaveMode.Append) {
+      val existingTableName = new HoodieTableMetaClient(sparkContext.hadoopConfiguration, path.get).getTableConfig.getTableName
+      if (!existingTableName.equals(tblName.get)) {
+        throw new HoodieException(s"hoodie table with name $existingTableName already exists at $basePath")

Review comment:
       actually, we could have pushed this check into the guts of `hoodie-client`; every write initializes a HoodieTableMetaClient anyway, so we could have it error out from inside and save this extra tableMetaClient initialization. @bhasudha let's file a follow-up, if you agree
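
       A rough sketch of how that might look if the check moved into the metaclient
       initialization path. The object and method names below are illustrative only,
       not existing hoodie-client APIs; this is a sketch under that assumption, not
       the actual implementation:

           import org.apache.hudi.common.table.HoodieTableMetaClient
           import org.apache.hudi.exception.HoodieException

           object TableNameValidation {
             // Reuse the metaclient that every write already initializes and fail fast
             // if the table on disk was created under a different name.
             def validateTableName(metaClient: HoodieTableMetaClient, expectedTableName: String): Unit = {
               val existingTableName = metaClient.getTableConfig.getTableName
               if (!existingTableName.equals(expectedTableName)) {
                 throw new HoodieException(
                   s"hoodie table with name $existingTableName already exists at ${metaClient.getBasePath}")
               }
             }
           }

       With something like this, HoodieSparkSqlWriter would not need to construct a
       second HoodieTableMetaClient just to read the table name.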



