Posted to commits@hudi.apache.org by GitBox <gi...@apache.org> on 2021/11/15 05:32:28 UTC

[GitHub] [hudi] YannByron opened a new pull request #3998: [HUDI-2759] extract HoodieCatalogTable as a bridge between spark cata…

YannByron opened a new pull request #3998:
URL: https://github.com/apache/hudi/pull/3998


   …log table and hoodie table
   
   ## *Tips*
   - *Thank you very much for contributing to Apache Hudi.*
   - *Please review https://hudi.apache.org/contribute/how-to-contribute before opening a pull request.*
   
   ## What is the purpose of the pull request
   
*(For example: This pull request adds a quick-start document.)*
   
   ## Brief change log
   
   *(for example:)*
     - *Modify AnnotationLocation checkstyle rule in checkstyle.xml*
   
   ## Verify this pull request
   
   *(Please pick either of the following options)*
   
   This pull request is a trivial rework / code cleanup without any test coverage.
   
   *(or)*
   
   This pull request is already covered by existing tests, such as *(please describe tests)*.
   
   (or)
   
   This change added tests and can be verified as follows:
   
   *(example:)*
   
     - *Added integration tests for end-to-end.*
     - *Added HoodieClientWriteTest to verify the change.*
     - *Manually verified the change by running a job locally.*
   
   ## Committer checklist
   
    - [ ] Has a corresponding JIRA in PR title & commit
    
    - [ ] Commit message is descriptive of the change
    
    - [ ] CI is green
   
    - [ ] Necessary doc changes done or have another open PR
          
 - [ ] For large changes, please consider breaking them into sub-tasks under an umbrella JIRA.
   


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: commits-unsubscribe@hudi.apache.org

For queries about this service, please contact Infrastructure at:
users@infra.apache.org



[GitHub] [hudi] YannByron commented on a change in pull request #3998: [HUDI-2759] extract HoodieCatalogTable as a bridge between spark cata…

Posted by GitBox <gi...@apache.org>.
YannByron commented on a change in pull request #3998:
URL: https://github.com/apache/hudi/pull/3998#discussion_r753751992



##########
File path: hudi-spark-datasource/hudi-spark/src/main/scala/org/apache/spark/sql/catalyst/catalog/HoodieCatalogTable.scala
##########
@@ -0,0 +1,291 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one or more
+ * contributor license agreements.  See the NOTICE file distributed with
+ * this work for additional information regarding copyright ownership.
+ * The ASF licenses this file to You under the Apache License, Version 2.0
+ * (the "License"); you may not use this file except in compliance with
+ * the License.  You may obtain a copy of the License at
+ *
+ *    http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.spark.sql.catalyst.catalog
+
+import org.apache.hudi.HoodieWriterUtils.{convertMapToHoodieConfig, validateTableConfig}
+import org.apache.hudi.common.model.{HoodieCommitMetadata, HoodieTableType}
+import org.apache.hudi.common.table.HoodieTableConfig
+import org.apache.hudi.common.table.HoodieTableMetaClient
+import org.apache.hudi.keygen.ComplexKeyGenerator
+import org.apache.hudi.keygen.factory.HoodieSparkKeyGeneratorFactory
+
+import org.apache.spark.internal.Logging
+import org.apache.spark.sql.{AnalysisException, SparkSession}
+import org.apache.spark.sql.avro.SchemaConverters
+import org.apache.spark.sql.catalyst.TableIdentifier
+import org.apache.spark.sql.hudi.{HoodieOptionConfig, HoodieSqlUtils}
+import org.apache.spark.sql.hudi.HoodieSqlUtils._
+import org.apache.spark.sql.types.{StructField, StructType}
+
+import java.util.{Locale, Properties}
+
+import scala.collection.JavaConverters._
+import scala.collection.mutable
+
+/**
+ * A wrapper around the Spark CatalogTable instance and the hoodie table.
+ */
+class HoodieCatalogTable(val spark: SparkSession, val table: CatalogTable) extends Logging {
+
+  assert(table.provider.map(_.toLowerCase(Locale.ROOT)).orNull == "hudi", "It's not a Hudi table")
+
+  private val hadoopConf = spark.sessionState.newHadoopConf
+
+  /**
+   * Properties defined in the catalog.
+   */
+  val catalogProperties: Map[String, String] = table.storage.properties ++ table.properties
+
+  /**
+   * The hoodie table's location.
+   * When creating a managed hoodie table, `catalog.defaultTablePath` is used.
+   */
+  val tableLocation: String = HoodieSqlUtils.getTableLocation(table, spark)
+
+  /**
+   * A flag indicating whether the hoodie table exists.
+   */
+  val hoodieTableExists: Boolean = tableExistsInPath(tableLocation, hadoopConf)
+
+  /**
+   * Meta Client.
+   */
+  lazy val metaClient: HoodieTableMetaClient = HoodieTableMetaClient.builder()
+    .setBasePath(tableLocation)
+    .setConf(hadoopConf)
+    .build()
+
+  /**
+   * Hoodie Table Config
+   */
+  lazy val tableConfig: HoodieTableConfig = metaClient.getTableConfig
+
+  /**
+   * The name of the table
+   */
+  lazy val tableName: String = tableConfig.getTableName
+
+  /**
+   * The type of the table
+   */
+  lazy val tableType: HoodieTableType = tableConfig.getTableType
+
+  /**
+   * The name of the table type
+   */
+  lazy val tableTypeName: String = tableType.name()
+
+  /**
+   * Record Key Fields (Primary Key List)
+   */
+  lazy val primaryKeys: Array[String] = tableConfig.getRecordKeyFields.orElse(Array.empty)
+
+  /**
+   * PreCombine Field
+   */
+  lazy val preCombineKey: Option[String] = Option(tableConfig.getPreCombineField)
+
+  /**
+   * Partition Fields
+   */
+  lazy val partitionFields: Array[String] = tableConfig.getPartitionFields.orElse(Array.empty)
+
+  /**
+   * The schema of the table.
+   * Every StructField is made nullable.
+   */
+  lazy val tableSchema: StructType = {
+    val originSchema = getTableSqlSchema(metaClient, includeMetadataFields = true).get
+    StructType(originSchema.map(_.copy(nullable = true)))
+  }
+
+  /**
+   * The schema without hoodie meta fields
+   */
+  lazy val tableSchemaWithoutMetaFields: StructType = HoodieSqlUtils.removeMetaFields(tableSchema)
+
+  /**
+   * The schema of data fields
+   */
+  lazy val dataSchema: StructType = {
+    StructType(tableSchema.filterNot(f => partitionFields.contains(f.name)))
+  }

Review comment:
       if `hoodie.datasource.write.drop.partition.columns` is false, tableSchema doesn't contain partition columns, and the dataSchema generated by the code above will be the same as tableSchema. So I think there is no need to change this.
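
    A minimal, self-contained Scala sketch of that point (the field names `id`, `name`, and `dt` are hypothetical): when no partition field is present in `tableSchema`, the `filterNot` in `dataSchema` filters nothing and the two schemas come out identical.

    import org.apache.spark.sql.types.{StringType, StructField, StructType}

    object DataSchemaSketch extends App {
      // Hypothetical stored schema in which the partition column "dt" is
      // absent from tableSchema (i.e. it was dropped from the data files).
      val tableSchema = StructType(Seq(
        StructField("id", StringType, nullable = true),
        StructField("name", StringType, nullable = true)))
      val partitionFields = Array("dt")

      // The same filterNot used by dataSchema in the diff above.
      val dataSchema = StructType(tableSchema.filterNot(f => partitionFields.contains(f.name)))

      // No partition field matches, so nothing is filtered out.
      assert(dataSchema == tableSchema)
    }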

##########
File path: hudi-spark-datasource/hudi-spark/src/main/scala/org/apache/spark/sql/catalyst/catalog/HoodieCatalogTable.scala
##########
@@ -0,0 +1,291 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one or more
+ * contributor license agreements.  See the NOTICE file distributed with
+ * this work for additional information regarding copyright ownership.
+ * The ASF licenses this file to You under the Apache License, Version 2.0
+ * (the "License"); you may not use this file except in compliance with
+ * the License.  You may obtain a copy of the License at
+ *
+ *    http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.spark.sql.catalyst.catalog
+
+import org.apache.hudi.HoodieWriterUtils.{convertMapToHoodieConfig, validateTableConfig}
+import org.apache.hudi.common.model.{HoodieCommitMetadata, HoodieTableType}
+import org.apache.hudi.common.table.HoodieTableConfig
+import org.apache.hudi.common.table.HoodieTableMetaClient
+import org.apache.hudi.keygen.ComplexKeyGenerator
+import org.apache.hudi.keygen.factory.HoodieSparkKeyGeneratorFactory
+
+import org.apache.spark.internal.Logging
+import org.apache.spark.sql.{AnalysisException, SparkSession}
+import org.apache.spark.sql.avro.SchemaConverters
+import org.apache.spark.sql.catalyst.TableIdentifier
+import org.apache.spark.sql.hudi.{HoodieOptionConfig, HoodieSqlUtils}
+import org.apache.spark.sql.hudi.HoodieSqlUtils._
+import org.apache.spark.sql.types.{StructField, StructType}
+
+import java.util.{Locale, Properties}
+
+import scala.collection.JavaConverters._
+import scala.collection.mutable
+
+/**
+ * A wrapper around the Spark CatalogTable instance and the hoodie table.
+ */
+class HoodieCatalogTable(val spark: SparkSession, val table: CatalogTable) extends Logging {
+
+  assert(table.provider.map(_.toLowerCase(Locale.ROOT)).orNull == "hudi", "It's not a Hudi table")
+
+  private val hadoopConf = spark.sessionState.newHadoopConf
+
+  /**
+   * Properties defined in the catalog.
+   */
+  val catalogProperties: Map[String, String] = table.storage.properties ++ table.properties
+
+  /**
+   * The hoodie table's location.
+   * When creating a managed hoodie table, `catalog.defaultTablePath` is used.
+   */
+  val tableLocation: String = HoodieSqlUtils.getTableLocation(table, spark)
+
+  /**
+   * A flag indicating whether the hoodie table exists.
+   */
+  val hoodieTableExists: Boolean = tableExistsInPath(tableLocation, hadoopConf)
+
+  /**
+   * Meta Client.
+   */
+  lazy val metaClient: HoodieTableMetaClient = HoodieTableMetaClient.builder()
+    .setBasePath(tableLocation)
+    .setConf(hadoopConf)
+    .build()
+
+  /**
+   * Hoodie Table Config
+   */
+  lazy val tableConfig: HoodieTableConfig = metaClient.getTableConfig
+
+  /**
+   * The name of the table
+   */
+  lazy val tableName: String = tableConfig.getTableName
+
+  /**
+   * The type of the table
+   */
+  lazy val tableType: HoodieTableType = tableConfig.getTableType
+
+  /**
+   * The name of the table type
+   */
+  lazy val tableTypeName: String = tableType.name()
+
+  /**
+   * Record Key Fields (Primary Key List)
+   */
+  lazy val primaryKeys: Array[String] = tableConfig.getRecordKeyFields.orElse(Array.empty)
+
+  /**
+   * PreCombine Field
+   */
+  lazy val preCombineKey: Option[String] = Option(tableConfig.getPreCombineField)
+
+  /**
+   * Partition Fields
+   */
+  lazy val partitionFields: Array[String] = tableConfig.getPartitionFields.orElse(Array.empty)
+
+  /**
+   * The schema of the table.
+   * Every StructField is made nullable.
+   */
+  lazy val tableSchema: StructType = {
+    val originSchema = getTableSqlSchema(metaClient, includeMetadataFields = true).get
+    StructType(originSchema.map(_.copy(nullable = true)))
+  }
+
+  /**
+   * The schema without hoodie meta fields
+   */
+  lazy val tableSchemaWithoutMetaFields: StructType = HoodieSqlUtils.removeMetaFields(tableSchema)
+
+  /**
+   * The schema of data fields
+   */
+  lazy val dataSchema: StructType = {
+    StructType(tableSchema.filterNot(f => partitionFields.contains(f.name)))
+  }
+
+  /**
+   * The schema of data fields not including hoodie meta fields
+   */
+  lazy val dataSchemaWithoutMetaFields: StructType = HoodieSqlUtils.removeMetaFields(dataSchema)
+
+  /**
+   * The schema of partition fields
+   */
+  lazy val partitionSchema: StructType = StructType(tableSchema.filter(f => partitionFields.contains(f.name)))
+
+  /**
+   * All the partition paths
+   */
+  def getAllPartitionPaths: Seq[String] = HoodieSqlUtils.getAllPartitionPaths(spark, table)
+
+  /**
+   * Init the hoodie table for CREATE TABLE (AS SELECT).
+   */
+  def initHoodieTableIfNeeded(force: Boolean = false): Unit = {
+    logInfo(s"Init hoodie.properties for ${table.identifier.unquotedString}")
+    val (finalSchema, tableConfigs) = parseSchemaAndConfigs()
+
+    // Save all the table config to the hoodie.properties.
+    val properties = new Properties()
+    properties.putAll(tableConfigs.asJava)
+
+    HoodieTableMetaClient.withPropertyBuilder()
+      .fromProperties(properties)
+      .setTableName(table.identifier.table)
+      .setTableCreateSchema(SchemaConverters.toAvroType(finalSchema).toString())
+      .setPartitionFields(table.partitionColumnNames.mkString(","))
+      .initTable(hadoopConf, tableLocation)
+  }

Review comment:
       You're right. It will be modified.
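
    As context for the hunk above, a minimal sketch of the Properties hand-off in `initHoodieTableIfNeeded` (the config keys and values are hypothetical): a Scala `Map` must be converted with `asJava` before `java.util.Properties.putAll` accepts it.

    import java.util.Properties
    import scala.collection.JavaConverters._

    object TableConfigSketch extends App {
      // Hypothetical table configs standing in for what parseSchemaAndConfigs returns.
      val tableConfigs: Map[String, String] = Map(
        "hoodie.table.name" -> "hudi_tbl",
        "hoodie.table.type" -> "COPY_ON_WRITE")

      // Same conversion as in initHoodieTableIfNeeded: Scala Map -> java.util.Map.
      val properties = new Properties()
      properties.putAll(tableConfigs.asJava)

      assert(properties.getProperty("hoodie.table.name") == "hudi_tbl")
    }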

##########
File path: hudi-spark-datasource/hudi-spark/src/main/scala/org/apache/spark/sql/catalyst/catalog/HoodieCatalogTable.scala
##########
@@ -0,0 +1,291 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one or more
+ * contributor license agreements.  See the NOTICE file distributed with
+ * this work for additional information regarding copyright ownership.
+ * The ASF licenses this file to You under the Apache License, Version 2.0
+ * (the "License"); you may not use this file except in compliance with
+ * the License.  You may obtain a copy of the License at
+ *
+ *    http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.spark.sql.catalyst.catalog
+
+import org.apache.hudi.HoodieWriterUtils.{convertMapToHoodieConfig, validateTableConfig}
+import org.apache.hudi.common.model.{HoodieCommitMetadata, HoodieTableType}
+import org.apache.hudi.common.table.HoodieTableConfig
+import org.apache.hudi.common.table.HoodieTableMetaClient
+import org.apache.hudi.keygen.ComplexKeyGenerator
+import org.apache.hudi.keygen.factory.HoodieSparkKeyGeneratorFactory
+
+import org.apache.spark.internal.Logging
+import org.apache.spark.sql.{AnalysisException, SparkSession}
+import org.apache.spark.sql.avro.SchemaConverters
+import org.apache.spark.sql.catalyst.TableIdentifier
+import org.apache.spark.sql.hudi.{HoodieOptionConfig, HoodieSqlUtils}
+import org.apache.spark.sql.hudi.HoodieSqlUtils._
+import org.apache.spark.sql.types.{StructField, StructType}
+
+import java.util.{Locale, Properties}
+
+import scala.collection.JavaConverters._
+import scala.collection.mutable
+
+/**
+ * A wrapper around the Spark CatalogTable instance and the hoodie table.
+ */
+class HoodieCatalogTable(val spark: SparkSession, val table: CatalogTable) extends Logging {
+
+  assert(table.provider.map(_.toLowerCase(Locale.ROOT)).orNull == "hudi", "It's not a Hudi table")
+
+  private val hadoopConf = spark.sessionState.newHadoopConf
+
+  /**
+   * Properties defined in the catalog.
+   */
+  val catalogProperties: Map[String, String] = table.storage.properties ++ table.properties
+
+  /**
+   * The hoodie table's location.
+   * When creating a managed hoodie table, `catalog.defaultTablePath` is used.
+   */
+  val tableLocation: String = HoodieSqlUtils.getTableLocation(table, spark)
+
+  /**
+   * A flag indicating whether the hoodie table exists.
+   */
+  val hoodieTableExists: Boolean = tableExistsInPath(tableLocation, hadoopConf)
+
+  /**
+   * Meta Client.
+   */
+  lazy val metaClient: HoodieTableMetaClient = HoodieTableMetaClient.builder()
+    .setBasePath(tableLocation)
+    .setConf(hadoopConf)
+    .build()
+
+  /**
+   * Hoodie Table Config
+   */
+  lazy val tableConfig: HoodieTableConfig = metaClient.getTableConfig
+
+  /**
+   * The name of the table
+   */
+  lazy val tableName: String = tableConfig.getTableName
+
+  /**
+   * The type of the table
+   */
+  lazy val tableType: HoodieTableType = tableConfig.getTableType
+
+  /**
+   * The name of the table type
+   */
+  lazy val tableTypeName: String = tableType.name()
+
+  /**
+   * Record Key Fields (Primary Key List)
+   */
+  lazy val primaryKeys: Array[String] = tableConfig.getRecordKeyFields.orElse(Array.empty)
+
+  /**
+   * PreCombine Field
+   */
+  lazy val preCombineKey: Option[String] = Option(tableConfig.getPreCombineField)
+
+  /**
+   * Partition Fields
+   */
+  lazy val partitionFields: Array[String] = tableConfig.getPartitionFields.orElse(Array.empty)
+
+  /**
+   * The schema of the table.
+   * Every StructField is made nullable.
+   */
+  lazy val tableSchema: StructType = {
+    val originSchema = getTableSqlSchema(metaClient, includeMetadataFields = true).get
+    StructType(originSchema.map(_.copy(nullable = true)))
+  }
+
+  /**
+   * The schema without hoodie meta fields
+   */
+  lazy val tableSchemaWithoutMetaFields: StructType = HoodieSqlUtils.removeMetaFields(tableSchema)
+
+  /**
+   * The schema of data fields
+   */
+  lazy val dataSchema: StructType = {
+    StructType(tableSchema.filterNot(f => partitionFields.contains(f.name)))
+  }
+
+  /**
+   * The schema of data fields not including hoodie meta fields
+   */
+  lazy val dataSchemaWithoutMetaFields: StructType = HoodieSqlUtils.removeMetaFields(dataSchema)
+
+  /**
+   * The schema of partition fields
+   */
+  lazy val partitionSchema: StructType = StructType(tableSchema.filter(f => partitionFields.contains(f.name)))
+
+  /**
+   * All the partition paths
+   */
+  def getAllPartitionPaths: Seq[String] = HoodieSqlUtils.getAllPartitionPaths(spark, table)
+
+  /**
+   * Init the hoodie table for CREATE TABLE (AS SELECT).
+   */
+  def initHoodieTableIfNeeded(force: Boolean = false): Unit = {
+    logInfo(s"Init hoodie.properties for ${table.identifier.unquotedString}")
+    val (finalSchema, tableConfigs) = parseSchemaAndConfigs()
+
+    // Save all the table config to the hoodie.properties.
+    val properties = new Properties()
+    properties.putAll(tableConfigs.asJava)
+
+    HoodieTableMetaClient.withPropertyBuilder()
+      .fromProperties(properties)
+      .setTableName(table.identifier.table)
+      .setTableCreateSchema(SchemaConverters.toAvroType(finalSchema).toString())
+      .setPartitionFields(table.partitionColumnNames.mkString(","))
+      .initTable(hadoopConf, tableLocation)
+  }
+
+  /**
+   * @return the schema and the table parameters, none of which are sql-styled.
+   */
+  private def parseSchemaAndConfigs(): (StructType, Map[String, String]) = {
+    val sqlOptions = HoodieOptionConfig.defaultSqlOptions ++ catalogProperties
+
+    // get final schema and parameters
+    val (finalSchema, tableConfigs) = (table.tableType, hoodieTableExists) match {
+      case (CatalogTableType.EXTERNAL, true) =>
+        val existingTableConfig = tableConfig.getProps.asScala.toMap
+        val catalogTableProps = HoodieOptionConfig.mappingSqlOptionToTableConfig(catalogProperties)
+        validateTableConfig(spark, catalogTableProps, convertMapToHoodieConfig(existingTableConfig))
+
+        val options = extraTableConfig(spark, hoodieTableExists, existingTableConfig) ++
+          HoodieOptionConfig.mappingSqlOptionToTableConfig(sqlOptions) ++ existingTableConfig
+
+        val userSpecifiedSchema = table.schema
+        val schema = if (tableSchema.nonEmpty) {
+          tableSchema
+        } else if (userSpecifiedSchema.nonEmpty) {
+          addMetaFields(userSpecifiedSchema)
+        } else {
+          throw new IllegalArgumentException(s"Missing schema for Create Table: $tableName")
+        }
+
+        (schema, options)
+
+      case (_, false) =>
+        assert(table.schema.nonEmpty, s"Missing schema for Create Table: $tableName")

Review comment:
       ok
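
    Since this hunk is about schema resolution, here is a minimal sketch of the precedence that `parseSchemaAndConfigs` applies for an existing external table (the helper and names below are hypothetical stand-ins): the schema already stored in the hoodie table wins, a user-specified catalog schema plus meta fields comes next, and having neither is an error.

    import org.apache.spark.sql.types.StructType

    object SchemaResolutionSketch extends App {
      // Hypothetical stand-in for HoodieSqlUtils.addMetaFields.
      def addMetaFields(s: StructType): StructType = s

      def resolveSchema(tableSchema: StructType,
                        userSpecifiedSchema: StructType,
                        tableName: String): StructType =
        if (tableSchema.nonEmpty) tableSchema
        else if (userSpecifiedSchema.nonEmpty) addMetaFields(userSpecifiedSchema)
        else throw new IllegalArgumentException(s"Missing schema for Create Table: $tableName")

      // Both schemas empty -> the same failure the assert above guards against.
      try resolveSchema(new StructType(), new StructType(), "hudi_tbl")
      catch { case e: IllegalArgumentException => println(e.getMessage) }
    }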







[GitHub] [hudi] hudi-bot removed a comment on pull request #3998: [HUDI-2759] extract HoodieCatalogTable as a bridge between spark cata…

Posted by GitBox <gi...@apache.org>.
hudi-bot removed a comment on pull request #3998:
URL: https://github.com/apache/hudi/pull/3998#issuecomment-974763042


   <!--
   Meta data
   {
     "version" : 1,
     "metaDataEntries" : [ {
       "hash" : "fc63a8f7b736b09d3f7593963c11438730e96793",
       "status" : "SUCCESS",
       "url" : "https://dev.azure.com/apache-hudi-ci-org/785b6ef4-2f42-4a89-8f0e-5f0d7039a0cc/_build/results?buildId=3370",
       "triggerID" : "fc63a8f7b736b09d3f7593963c11438730e96793",
       "triggerType" : "PUSH"
     }, {
       "hash" : "be86e20b85317383e41db5895aa5aa71dfdf85ea",
       "status" : "UNKNOWN",
       "url" : "TBD",
       "triggerID" : "be86e20b85317383e41db5895aa5aa71dfdf85ea",
       "triggerType" : "PUSH"
     } ]
   }-->
   ## CI report:
   
   * fc63a8f7b736b09d3f7593963c11438730e96793 Azure: [SUCCESS](https://dev.azure.com/apache-hudi-ci-org/785b6ef4-2f42-4a89-8f0e-5f0d7039a0cc/_build/results?buildId=3370) 
   * be86e20b85317383e41db5895aa5aa71dfdf85ea UNKNOWN
   
   <details>
   <summary>Bot commands</summary>
     @hudi-bot supports the following commands:
   
    - `@hudi-bot run azure` re-run the last Azure build
   </details>





[GitHub] [hudi] hudi-bot removed a comment on pull request #3998: [HUDI-2759] extract HoodieCatalogTable to coordinate spark catalog table and hoodie table

Posted by GitBox <gi...@apache.org>.
hudi-bot removed a comment on pull request #3998:
URL: https://github.com/apache/hudi/pull/3998#issuecomment-974809647


   <!--
   Meta data
   {
     "version" : 1,
     "metaDataEntries" : [ {
       "hash" : "fc63a8f7b736b09d3f7593963c11438730e96793",
       "status" : "DELETED",
       "url" : "https://dev.azure.com/apache-hudi-ci-org/785b6ef4-2f42-4a89-8f0e-5f0d7039a0cc/_build/results?buildId=3370",
       "triggerID" : "fc63a8f7b736b09d3f7593963c11438730e96793",
       "triggerType" : "PUSH"
     }, {
       "hash" : "be86e20b85317383e41db5895aa5aa71dfdf85ea",
       "status" : "DELETED",
       "url" : "https://dev.azure.com/apache-hudi-ci-org/785b6ef4-2f42-4a89-8f0e-5f0d7039a0cc/_build/results?buildId=3549",
       "triggerID" : "be86e20b85317383e41db5895aa5aa71dfdf85ea",
       "triggerType" : "PUSH"
     }, {
       "hash" : "b451b3b4544ef112a8573d1040dfcc23e19a610d",
       "status" : "DELETED",
       "url" : "https://dev.azure.com/apache-hudi-ci-org/785b6ef4-2f42-4a89-8f0e-5f0d7039a0cc/_build/results?buildId=3553",
       "triggerID" : "b451b3b4544ef112a8573d1040dfcc23e19a610d",
       "triggerType" : "PUSH"
     }, {
       "hash" : "256c4c8c909ae78b6c9fbfa9e58e008f5906de8c",
       "status" : "SUCCESS",
       "url" : "https://dev.azure.com/apache-hudi-ci-org/785b6ef4-2f42-4a89-8f0e-5f0d7039a0cc/_build/results?buildId=3554",
       "triggerID" : "256c4c8c909ae78b6c9fbfa9e58e008f5906de8c",
       "triggerType" : "PUSH"
     } ]
   }-->
   ## CI report:
   
   * 256c4c8c909ae78b6c9fbfa9e58e008f5906de8c Azure: [SUCCESS](https://dev.azure.com/apache-hudi-ci-org/785b6ef4-2f42-4a89-8f0e-5f0d7039a0cc/_build/results?buildId=3554) 
   
   <details>
   <summary>Bot commands</summary>
     @hudi-bot supports the following commands:
   
    - `@hudi-bot run azure` re-run the last Azure build
   </details>





[GitHub] [hudi] xushiyan commented on pull request #3998: [HUDI-2759] extract HoodieCatalogTable to coordinate spark catalog table and hoodie table

Posted by GitBox <gi...@apache.org>.
xushiyan commented on pull request #3998:
URL: https://github.com/apache/hudi/pull/3998#issuecomment-978232606


   @vinothchandar reran it and the build is ok.





[GitHub] [hudi] xushiyan commented on a change in pull request #3998: [HUDI-2759] extract HoodieCatalogTable to coordinate spark catalog table and hoodie table

Posted by GitBox <gi...@apache.org>.
xushiyan commented on a change in pull request #3998:
URL: https://github.com/apache/hudi/pull/3998#discussion_r755874086



##########
File path: hudi-spark-datasource/hudi-spark/src/main/scala/org/apache/spark/sql/hudi/command/CreateHoodieTableCommand.scala
##########
@@ -182,17 +124,18 @@ case class CreateHoodieTableCommand(table: CatalogTable, ignoreIfExists: Boolean
       table.storage.compressed,
       storageProperties + ("path" -> path))
 
-    val newDatabaseName = formatName(table.identifier.database
-      .getOrElse(sessionState.catalog.getCurrentDatabase))
+    val tablName = HoodieSqlUtils.formatName(sparkSession, table.identifier.table)
+    val newDatabaseName = HoodieSqlUtils.formatName(sparkSession, table.identifier.database
+      .getOrElse(catalog.getCurrentDatabase))
 
     val newTableIdentifier = table.identifier
-      .copy(table = tableName, database = Some(newDatabaseName))
+      .copy(table = tablName, database = Some(newDatabaseName))

Review comment:
       does not look like a better name :)







[GitHub] [hudi] hudi-bot commented on pull request #3998: [HUDI-2759] extract HoodieCatalogTable as a bridge between spark cata…

Posted by GitBox <gi...@apache.org>.
hudi-bot commented on pull request #3998:
URL: https://github.com/apache/hudi/pull/3998#issuecomment-968554702


   <!--
   Meta data
   {
     "version" : 1,
     "metaDataEntries" : [ {
       "hash" : "fc63a8f7b736b09d3f7593963c11438730e96793",
       "status" : "UNKNOWN",
       "url" : "TBD",
       "triggerID" : "fc63a8f7b736b09d3f7593963c11438730e96793",
       "triggerType" : "PUSH"
     } ]
   }-->
   ## CI report:
   
   * fc63a8f7b736b09d3f7593963c11438730e96793 UNKNOWN
   
   <details>
   <summary>Bot commands</summary>
     @hudi-bot supports the following commands:
   
    - `@hudi-bot run azure` re-run the last Azure build
   </details>





[GitHub] [hudi] hudi-bot removed a comment on pull request #3998: [HUDI-2759] extract HoodieCatalogTable to coordinate spark catalog table and hoodie table

Posted by GitBox <gi...@apache.org>.
hudi-bot removed a comment on pull request #3998:
URL: https://github.com/apache/hudi/pull/3998#issuecomment-976114571


   <!--
   Meta data
   {
     "version" : 1,
     "metaDataEntries" : [ {
       "hash" : "fc63a8f7b736b09d3f7593963c11438730e96793",
       "status" : "DELETED",
       "url" : "https://dev.azure.com/apache-hudi-ci-org/785b6ef4-2f42-4a89-8f0e-5f0d7039a0cc/_build/results?buildId=3370",
       "triggerID" : "fc63a8f7b736b09d3f7593963c11438730e96793",
       "triggerType" : "PUSH"
     }, {
       "hash" : "be86e20b85317383e41db5895aa5aa71dfdf85ea",
       "status" : "DELETED",
       "url" : "https://dev.azure.com/apache-hudi-ci-org/785b6ef4-2f42-4a89-8f0e-5f0d7039a0cc/_build/results?buildId=3549",
       "triggerID" : "be86e20b85317383e41db5895aa5aa71dfdf85ea",
       "triggerType" : "PUSH"
     }, {
       "hash" : "b451b3b4544ef112a8573d1040dfcc23e19a610d",
       "status" : "DELETED",
       "url" : "https://dev.azure.com/apache-hudi-ci-org/785b6ef4-2f42-4a89-8f0e-5f0d7039a0cc/_build/results?buildId=3553",
       "triggerID" : "b451b3b4544ef112a8573d1040dfcc23e19a610d",
       "triggerType" : "PUSH"
     }, {
       "hash" : "256c4c8c909ae78b6c9fbfa9e58e008f5906de8c",
       "status" : "DELETED",
       "url" : "https://dev.azure.com/apache-hudi-ci-org/785b6ef4-2f42-4a89-8f0e-5f0d7039a0cc/_build/results?buildId=3554",
       "triggerID" : "256c4c8c909ae78b6c9fbfa9e58e008f5906de8c",
       "triggerType" : "PUSH"
     }, {
       "hash" : "0993f73f9bbe97df8f0e6eb5c93bea1ed0317eba",
       "status" : "FAILURE",
       "url" : "https://dev.azure.com/apache-hudi-ci-org/785b6ef4-2f42-4a89-8f0e-5f0d7039a0cc/_build/results?buildId=3594",
       "triggerID" : "0993f73f9bbe97df8f0e6eb5c93bea1ed0317eba",
       "triggerType" : "PUSH"
     }, {
       "hash" : "8c1b74a3dc22d8b0cafdc90a9b5e877f7a83580b",
       "status" : "PENDING",
       "url" : "https://dev.azure.com/apache-hudi-ci-org/785b6ef4-2f42-4a89-8f0e-5f0d7039a0cc/_build/results?buildId=3611",
       "triggerID" : "8c1b74a3dc22d8b0cafdc90a9b5e877f7a83580b",
       "triggerType" : "PUSH"
     } ]
   }-->
   ## CI report:
   
   * 0993f73f9bbe97df8f0e6eb5c93bea1ed0317eba Azure: [FAILURE](https://dev.azure.com/apache-hudi-ci-org/785b6ef4-2f42-4a89-8f0e-5f0d7039a0cc/_build/results?buildId=3594) 
   * 8c1b74a3dc22d8b0cafdc90a9b5e877f7a83580b Azure: [PENDING](https://dev.azure.com/apache-hudi-ci-org/785b6ef4-2f42-4a89-8f0e-5f0d7039a0cc/_build/results?buildId=3611) 
   
   <details>
   <summary>Bot commands</summary>
     @hudi-bot supports the following commands:
   
    - `@hudi-bot run azure` re-run the last Azure build
   </details>





[GitHub] [hudi] hudi-bot removed a comment on pull request #3998: [HUDI-2759] extract HoodieCatalogTable as a bridge between spark cata…

Posted by GitBox <gi...@apache.org>.
hudi-bot removed a comment on pull request #3998:
URL: https://github.com/apache/hudi/pull/3998#issuecomment-974792230


   <!--
   Meta data
   {
     "version" : 1,
     "metaDataEntries" : [ {
       "hash" : "fc63a8f7b736b09d3f7593963c11438730e96793",
       "status" : "DELETED",
       "url" : "https://dev.azure.com/apache-hudi-ci-org/785b6ef4-2f42-4a89-8f0e-5f0d7039a0cc/_build/results?buildId=3370",
       "triggerID" : "fc63a8f7b736b09d3f7593963c11438730e96793",
       "triggerType" : "PUSH"
     }, {
       "hash" : "be86e20b85317383e41db5895aa5aa71dfdf85ea",
       "status" : "FAILURE",
       "url" : "https://dev.azure.com/apache-hudi-ci-org/785b6ef4-2f42-4a89-8f0e-5f0d7039a0cc/_build/results?buildId=3549",
       "triggerID" : "be86e20b85317383e41db5895aa5aa71dfdf85ea",
       "triggerType" : "PUSH"
     }, {
       "hash" : "b451b3b4544ef112a8573d1040dfcc23e19a610d",
       "status" : "PENDING",
       "url" : "https://dev.azure.com/apache-hudi-ci-org/785b6ef4-2f42-4a89-8f0e-5f0d7039a0cc/_build/results?buildId=3553",
       "triggerID" : "b451b3b4544ef112a8573d1040dfcc23e19a610d",
       "triggerType" : "PUSH"
     } ]
   }-->
   ## CI report:
   
   * be86e20b85317383e41db5895aa5aa71dfdf85ea Azure: [FAILURE](https://dev.azure.com/apache-hudi-ci-org/785b6ef4-2f42-4a89-8f0e-5f0d7039a0cc/_build/results?buildId=3549) 
   * b451b3b4544ef112a8573d1040dfcc23e19a610d Azure: [PENDING](https://dev.azure.com/apache-hudi-ci-org/785b6ef4-2f42-4a89-8f0e-5f0d7039a0cc/_build/results?buildId=3553) 
   
   <details>
   <summary>Bot commands</summary>
     @hudi-bot supports the following commands:
   
    - `@hudi-bot run azure` re-run the last Azure build
   </details>





[GitHub] [hudi] hudi-bot commented on pull request #3998: [HUDI-2759] extract HoodieCatalogTable as a bridge between spark cata…

Posted by GitBox <gi...@apache.org>.
hudi-bot commented on pull request #3998:
URL: https://github.com/apache/hudi/pull/3998#issuecomment-974763247


   <!--
   Meta data
   {
     "version" : 1,
     "metaDataEntries" : [ {
       "hash" : "fc63a8f7b736b09d3f7593963c11438730e96793",
       "status" : "SUCCESS",
       "url" : "https://dev.azure.com/apache-hudi-ci-org/785b6ef4-2f42-4a89-8f0e-5f0d7039a0cc/_build/results?buildId=3370",
       "triggerID" : "fc63a8f7b736b09d3f7593963c11438730e96793",
       "triggerType" : "PUSH"
     }, {
       "hash" : "be86e20b85317383e41db5895aa5aa71dfdf85ea",
       "status" : "PENDING",
       "url" : "https://dev.azure.com/apache-hudi-ci-org/785b6ef4-2f42-4a89-8f0e-5f0d7039a0cc/_build/results?buildId=3549",
       "triggerID" : "be86e20b85317383e41db5895aa5aa71dfdf85ea",
       "triggerType" : "PUSH"
     } ]
   }-->
   ## CI report:
   
   * fc63a8f7b736b09d3f7593963c11438730e96793 Azure: [SUCCESS](https://dev.azure.com/apache-hudi-ci-org/785b6ef4-2f42-4a89-8f0e-5f0d7039a0cc/_build/results?buildId=3370) 
   * be86e20b85317383e41db5895aa5aa71dfdf85ea Azure: [PENDING](https://dev.azure.com/apache-hudi-ci-org/785b6ef4-2f42-4a89-8f0e-5f0d7039a0cc/_build/results?buildId=3549) 
   
   <details>
   <summary>Bot commands</summary>
     @hudi-bot supports the following commands:
   
    - `@hudi-bot run azure` re-run the last Azure build
   </details>





[GitHub] [hudi] hudi-bot commented on pull request #3998: [HUDI-2759] extract HoodieCatalogTable to coordinate spark catalog table and hoodie table

Posted by GitBox <gi...@apache.org>.
hudi-bot commented on pull request #3998:
URL: https://github.com/apache/hudi/pull/3998#issuecomment-975572445


   <!--
   Meta data
   {
     "version" : 1,
     "metaDataEntries" : [ {
       "hash" : "fc63a8f7b736b09d3f7593963c11438730e96793",
       "status" : "DELETED",
       "url" : "https://dev.azure.com/apache-hudi-ci-org/785b6ef4-2f42-4a89-8f0e-5f0d7039a0cc/_build/results?buildId=3370",
       "triggerID" : "fc63a8f7b736b09d3f7593963c11438730e96793",
       "triggerType" : "PUSH"
     }, {
       "hash" : "be86e20b85317383e41db5895aa5aa71dfdf85ea",
       "status" : "DELETED",
       "url" : "https://dev.azure.com/apache-hudi-ci-org/785b6ef4-2f42-4a89-8f0e-5f0d7039a0cc/_build/results?buildId=3549",
       "triggerID" : "be86e20b85317383e41db5895aa5aa71dfdf85ea",
       "triggerType" : "PUSH"
     }, {
       "hash" : "b451b3b4544ef112a8573d1040dfcc23e19a610d",
       "status" : "DELETED",
       "url" : "https://dev.azure.com/apache-hudi-ci-org/785b6ef4-2f42-4a89-8f0e-5f0d7039a0cc/_build/results?buildId=3553",
       "triggerID" : "b451b3b4544ef112a8573d1040dfcc23e19a610d",
       "triggerType" : "PUSH"
     }, {
       "hash" : "256c4c8c909ae78b6c9fbfa9e58e008f5906de8c",
       "status" : "SUCCESS",
       "url" : "https://dev.azure.com/apache-hudi-ci-org/785b6ef4-2f42-4a89-8f0e-5f0d7039a0cc/_build/results?buildId=3554",
       "triggerID" : "256c4c8c909ae78b6c9fbfa9e58e008f5906de8c",
       "triggerType" : "PUSH"
     }, {
       "hash" : "0993f73f9bbe97df8f0e6eb5c93bea1ed0317eba",
       "status" : "UNKNOWN",
       "url" : "TBD",
       "triggerID" : "0993f73f9bbe97df8f0e6eb5c93bea1ed0317eba",
       "triggerType" : "PUSH"
     } ]
   }-->
   ## CI report:
   
   * 256c4c8c909ae78b6c9fbfa9e58e008f5906de8c Azure: [SUCCESS](https://dev.azure.com/apache-hudi-ci-org/785b6ef4-2f42-4a89-8f0e-5f0d7039a0cc/_build/results?buildId=3554) 
   * 0993f73f9bbe97df8f0e6eb5c93bea1ed0317eba UNKNOWN
   
   <details>
   <summary>Bot commands</summary>
     @hudi-bot supports the following commands:
   
    - `@hudi-bot run azure` re-run the last Azure build
   </details>





[GitHub] [hudi] YannByron commented on a change in pull request #3998: [HUDI-2759] extract HoodieCatalogTable as a bridge between spark cata…

Posted by GitBox <gi...@apache.org>.
YannByron commented on a change in pull request #3998:
URL: https://github.com/apache/hudi/pull/3998#discussion_r753752076



##########
File path: hudi-spark-datasource/hudi-spark/src/main/scala/org/apache/spark/sql/catalyst/catalog/HoodieCatalogTable.scala
##########
@@ -0,0 +1,291 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one or more
+ * contributor license agreements.  See the NOTICE file distributed with
+ * this work for additional information regarding copyright ownership.
+ * The ASF licenses this file to You under the Apache License, Version 2.0
+ * (the "License"); you may not use this file except in compliance with
+ * the License.  You may obtain a copy of the License at
+ *
+ *    http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.spark.sql.catalyst.catalog
+
+import org.apache.hudi.HoodieWriterUtils.{convertMapToHoodieConfig, validateTableConfig}
+import org.apache.hudi.common.model.{HoodieCommitMetadata, HoodieTableType}
+import org.apache.hudi.common.table.HoodieTableConfig
+import org.apache.hudi.common.table.HoodieTableMetaClient
+import org.apache.hudi.keygen.ComplexKeyGenerator
+import org.apache.hudi.keygen.factory.HoodieSparkKeyGeneratorFactory
+
+import org.apache.spark.internal.Logging
+import org.apache.spark.sql.{AnalysisException, SparkSession}
+import org.apache.spark.sql.avro.SchemaConverters
+import org.apache.spark.sql.catalyst.TableIdentifier
+import org.apache.spark.sql.hudi.{HoodieOptionConfig, HoodieSqlUtils}
+import org.apache.spark.sql.hudi.HoodieSqlUtils._
+import org.apache.spark.sql.types.{StructField, StructType}
+
+import java.util.{Locale, Properties}
+
+import scala.collection.JavaConverters._
+import scala.collection.mutable
+
+/**
+ * A wrapper around the Spark CatalogTable instance and the hoodie table.
+ */
+class HoodieCatalogTable(val spark: SparkSession, val table: CatalogTable) extends Logging {
+
+  assert(table.provider.map(_.toLowerCase(Locale.ROOT)).orNull == "hudi", "It's not a Hudi table")
+
+  private val hadoopConf = spark.sessionState.newHadoopConf
+
+  /**
+   * Properties defined in the catalog.
+   */
+  val catalogProperties: Map[String, String] = table.storage.properties ++ table.properties
+
+  /**
+   * The hoodie table's location.
+   * When creating a managed hoodie table, `catalog.defaultTablePath` is used.
+   */
+  val tableLocation: String = HoodieSqlUtils.getTableLocation(table, spark)
+
+  /**
+   * A flag indicating whether the hoodie table exists.
+   */
+  val hoodieTableExists: Boolean = tableExistsInPath(tableLocation, hadoopConf)
+
+  /**
+   * Meta Client.
+   */
+  lazy val metaClient: HoodieTableMetaClient = HoodieTableMetaClient.builder()
+    .setBasePath(tableLocation)
+    .setConf(hadoopConf)
+    .build()
+
+  /**
+   * Hoodie Table Config
+   */
+  lazy val tableConfig: HoodieTableConfig = metaClient.getTableConfig
+
+  /**
+   * The name of the table
+   */
+  lazy val tableName: String = tableConfig.getTableName
+
+  /**
+   * The type of the table
+   */
+  lazy val tableType: HoodieTableType = tableConfig.getTableType
+
+  /**
+   * The name of the table type
+   */
+  lazy val tableTypeName: String = tableType.name()
+
+  /**
+   * Record Key Fields (Primary Key List)
+   */
+  lazy val primaryKeys: Array[String] = tableConfig.getRecordKeyFields.orElse(Array.empty)
+
+  /**
+   * PreCombine Field
+   */
+  lazy val preCombineKey: Option[String] = Option(tableConfig.getPreCombineField)
+
+  /**
+   * Partition Fields
+   */
+  lazy val partitionFields: Array[String] = tableConfig.getPartitionFields.orElse(Array.empty)
+
+  /**
+   * The schema of the table.
+   * Every StructField is made nullable.
+   */
+  lazy val tableSchema: StructType = {
+    val originSchema = getTableSqlSchema(metaClient, includeMetadataFields = true).get
+    StructType(originSchema.map(_.copy(nullable = true)))
+  }
+
+  /**
+   * The schema without hoodie meta fields
+   */
+  lazy val tableSchemaWithoutMetaFields: StructType = HoodieSqlUtils.removeMetaFields(tableSchema)
+
+  /**
+   * The schema of data fields
+   */
+  lazy val dataSchema: StructType = {
+    StructType(tableSchema.filterNot(f => partitionFields.contains(f.name)))
+  }
+
+  /**
+   * The schema of data fields not including hoodie meta fields
+   */
+  lazy val dataSchemaWithoutMetaFields: StructType = HoodieSqlUtils.removeMetaFields(dataSchema)
+
+  /**
+   * The schema of partition fields
+   */
+  lazy val partitionSchema: StructType = StructType(tableSchema.filter(f => partitionFields.contains(f.name)))
+
+  /**
+   * All the partition paths
+   */
+  def getAllPartitionPaths: Seq[String] = HoodieSqlUtils.getAllPartitionPaths(spark, table)
+
+  /**
+   * Init the hoodie table for CREATE TABLE (AS SELECT).
+   */
+  def initHoodieTableIfNeeded(force: Boolean = false): Unit = {
+    logInfo(s"Init hoodie.properties for ${table.identifier.unquotedString}")
+    val (finalSchema, tableConfigs) = parseSchemaAndConfigs()
+
+    // Save all the table config to the hoodie.properties.
+    val properties = new Properties()
+    properties.putAll(tableConfigs.asJava)
+
+    HoodieTableMetaClient.withPropertyBuilder()
+      .fromProperties(properties)
+      .setTableName(table.identifier.table)
+      .setTableCreateSchema(SchemaConverters.toAvroType(finalSchema).toString())
+      .setPartitionFields(table.partitionColumnNames.mkString(","))
+      .initTable(hadoopConf, tableLocation)
+  }

Review comment:
       You're right. It will be modified.

##########
File path: hudi-spark-datasource/hudi-spark/src/main/scala/org/apache/spark/sql/catalyst/catalog/HoodieCatalogTable.scala
##########
@@ -0,0 +1,291 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one or more
+ * contributor license agreements.  See the NOTICE file distributed with
+ * this work for additional information regarding copyright ownership.
+ * The ASF licenses this file to You under the Apache License, Version 2.0
+ * (the "License"); you may not use this file except in compliance with
+ * the License.  You may obtain a copy of the License at
+ *
+ *    http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.spark.sql.catalyst.catalog
+
+import org.apache.hudi.HoodieWriterUtils.{convertMapToHoodieConfig, validateTableConfig}
+import org.apache.hudi.common.model.{HoodieCommitMetadata, HoodieTableType}
+import org.apache.hudi.common.table.HoodieTableConfig
+import org.apache.hudi.common.table.HoodieTableMetaClient
+import org.apache.hudi.keygen.ComplexKeyGenerator
+import org.apache.hudi.keygen.factory.HoodieSparkKeyGeneratorFactory
+
+import org.apache.spark.internal.Logging
+import org.apache.spark.sql.{AnalysisException, SparkSession}
+import org.apache.spark.sql.avro.SchemaConverters
+import org.apache.spark.sql.catalyst.TableIdentifier
+import org.apache.spark.sql.hudi.{HoodieOptionConfig, HoodieSqlUtils}
+import org.apache.spark.sql.hudi.HoodieSqlUtils._
+import org.apache.spark.sql.types.{StructField, StructType}
+
+import java.util.{Locale, Properties}
+
+import scala.collection.JavaConverters._
+import scala.collection.mutable
+
+/**
+ * A wrapper around the Spark CatalogTable instance and the hoodie table.
+ */
+class HoodieCatalogTable(val spark: SparkSession, val table: CatalogTable) extends Logging {
+
+  assert(table.provider.map(_.toLowerCase(Locale.ROOT)).orNull == "hudi", "It's not a Hudi table")
+
+  private val hadoopConf = spark.sessionState.newHadoopConf
+
+  /**
+   * Properties defined in the catalog.
+   */
+  val catalogProperties: Map[String, String] = table.storage.properties ++ table.properties
+
+  /**
+   * The hoodie table's location.
+   * When creating a managed hoodie table, `catalog.defaultTablePath` is used.
+   */
+  val tableLocation: String = HoodieSqlUtils.getTableLocation(table, spark)
+
+  /**
+   * A flag indicating whether the hoodie table exists.
+   */
+  val hoodieTableExists: Boolean = tableExistsInPath(tableLocation, hadoopConf)
+
+  /**
+   * Meta Client.
+   */
+  lazy val metaClient: HoodieTableMetaClient = HoodieTableMetaClient.builder()
+    .setBasePath(tableLocation)
+    .setConf(hadoopConf)
+    .build()
+
+  /**
+   * Hoodie Table Config
+   */
+  lazy val tableConfig: HoodieTableConfig = metaClient.getTableConfig
+
+  /**
+   * The name of the table
+   */
+  lazy val tableName: String = tableConfig.getTableName
+
+  /**
+   * The type of the table
+   */
+  lazy val tableType: HoodieTableType = tableConfig.getTableType
+
+  /**
+   * The name of the table type
+   */
+  lazy val tableTypeName: String = tableType.name()
+
+  /**
+   * Record Key Fields (Primary Key List)
+   */
+  lazy val primaryKeys: Array[String] = tableConfig.getRecordKeyFields.orElse(Array.empty)
+
+  /**
+   * PreCombine Field
+   */
+  lazy val preCombineKey: Option[String] = Option(tableConfig.getPreCombineField)
+
+  /**
+   * Partition Fields
+   */
+  lazy val partitionFields: Array[String] = tableConfig.getPartitionFields.orElse(Array.empty)
+
+  /**
+   * The schema of the table.
+   * Every StructField is made nullable.
+   */
+  lazy val tableSchema: StructType = {
+    val originSchema = getTableSqlSchema(metaClient, includeMetadataFields = true).get
+    StructType(originSchema.map(_.copy(nullable = true)))
+  }
+
+  /**
+   * The schema without hoodie meta fields
+   */
+  lazy val tableSchemaWithoutMetaFields: StructType = HoodieSqlUtils.removeMetaFields(tableSchema)
+
+  /**
+   * The schema of data fields
+   */
+  lazy val dataSchema: StructType = {
+    StructType(tableSchema.filterNot(f => partitionFields.contains(f.name)))
+  }
+
+  /**
+   * The schema of data fields not including hoodie meta fields
+   */
+  lazy val dataSchemaWithoutMetaFields: StructType = HoodieSqlUtils.removeMetaFields(dataSchema)
+
+  /**
+   * The schema of partition fields
+   */
+  lazy val partitionSchema: StructType = StructType(tableSchema.filter(f => partitionFields.contains(f.name)))
+
+  /**
+   * All the partition paths
+   */
+  def getAllPartitionPaths: Seq[String] = HoodieSqlUtils.getAllPartitionPaths(spark, table)
+
+  /**
+   * Init the hoodie table for CREATE TABLE (AS SELECT).
+   */
+  def initHoodieTableIfNeeded(force: Boolean = false): Unit = {
+    logInfo(s"Init hoodie.properties for ${table.identifier.unquotedString}")
+    val (finalSchema, tableConfigs) = parseSchemaAndConfigs()
+
+    // Save all the table config to the hoodie.properties.
+    val properties = new Properties()
+    properties.putAll(tableConfigs.asJava)
+
+    HoodieTableMetaClient.withPropertyBuilder()
+      .fromProperties(properties)
+      .setTableName(table.identifier.table)
+      .setTableCreateSchema(SchemaConverters.toAvroType(finalSchema).toString())
+      .setPartitionFields(table.partitionColumnNames.mkString(","))
+      .initTable(hadoopConf, tableLocation)
+  }
+
+  /**
+   * @return the schema and the table parameters, none of which are sql-styled.
+   */
+  private def parseSchemaAndConfigs(): (StructType, Map[String, String]) = {
+    val sqlOptions = HoodieOptionConfig.defaultSqlOptions ++ catalogProperties
+
+    // get final schema and parameters
+    val (finalSchema, tableConfigs) = (table.tableType, hoodieTableExists) match {
+      case (CatalogTableType.EXTERNAL, true) =>
+        val existingTableConfig = tableConfig.getProps.asScala.toMap
+        val catalogTableProps = HoodieOptionConfig.mappingSqlOptionToTableConfig(catalogProperties)
+        validateTableConfig(spark, catalogTableProps, convertMapToHoodieConfig(existingTableConfig))
+
+        val options = extraTableConfig(spark, hoodieTableExists, existingTableConfig) ++
+          HoodieOptionConfig.mappingSqlOptionToTableConfig(sqlOptions) ++ existingTableConfig
+
+        val userSpecifiedSchema = table.schema
+        val schema = if (tableSchema.nonEmpty) {
+          tableSchema
+        } else if (userSpecifiedSchema.nonEmpty) {
+          addMetaFields(userSpecifiedSchema)
+        } else {
+          throw new IllegalArgumentException(s"Missing schema for Create Table: $tableName")
+        }
+
+        (schema, options)
+
+      case (_, false) =>
+        assert(table.schema.nonEmpty, s"Missing schema for Create Table: $tableName")

Review comment:
       ok







[GitHub] [hudi] hudi-bot commented on pull request #3998: [HUDI-2759] extract HoodieCatalogTable as a bridge between spark cata…

Posted by GitBox <gi...@apache.org>.
hudi-bot commented on pull request #3998:
URL: https://github.com/apache/hudi/pull/3998#issuecomment-974792230


   <!--
   Meta data
   {
     "version" : 1,
     "metaDataEntries" : [ {
       "hash" : "fc63a8f7b736b09d3f7593963c11438730e96793",
       "status" : "DELETED",
       "url" : "https://dev.azure.com/apache-hudi-ci-org/785b6ef4-2f42-4a89-8f0e-5f0d7039a0cc/_build/results?buildId=3370",
       "triggerID" : "fc63a8f7b736b09d3f7593963c11438730e96793",
       "triggerType" : "PUSH"
     }, {
       "hash" : "be86e20b85317383e41db5895aa5aa71dfdf85ea",
       "status" : "FAILURE",
       "url" : "https://dev.azure.com/apache-hudi-ci-org/785b6ef4-2f42-4a89-8f0e-5f0d7039a0cc/_build/results?buildId=3549",
       "triggerID" : "be86e20b85317383e41db5895aa5aa71dfdf85ea",
       "triggerType" : "PUSH"
     }, {
       "hash" : "b451b3b4544ef112a8573d1040dfcc23e19a610d",
       "status" : "PENDING",
       "url" : "https://dev.azure.com/apache-hudi-ci-org/785b6ef4-2f42-4a89-8f0e-5f0d7039a0cc/_build/results?buildId=3553",
       "triggerID" : "b451b3b4544ef112a8573d1040dfcc23e19a610d",
       "triggerType" : "PUSH"
     } ]
   }-->
   ## CI report:
   
   * be86e20b85317383e41db5895aa5aa71dfdf85ea Azure: [FAILURE](https://dev.azure.com/apache-hudi-ci-org/785b6ef4-2f42-4a89-8f0e-5f0d7039a0cc/_build/results?buildId=3549) 
   * b451b3b4544ef112a8573d1040dfcc23e19a610d Azure: [PENDING](https://dev.azure.com/apache-hudi-ci-org/785b6ef4-2f42-4a89-8f0e-5f0d7039a0cc/_build/results?buildId=3553) 
   
   <details>
   <summary>Bot commands</summary>
     @hudi-bot supports the following commands:
   
    - `@hudi-bot run azure` re-run the last Azure build
   </details>





[GitHub] [hudi] hudi-bot commented on pull request #3998: [HUDI-2759] extract HoodieCatalogTable as a bridge between spark cata…

Posted by GitBox <gi...@apache.org>.
hudi-bot commented on pull request #3998:
URL: https://github.com/apache/hudi/pull/3998#issuecomment-974809647


   <!--
   Meta data
   {
     "version" : 1,
     "metaDataEntries" : [ {
       "hash" : "fc63a8f7b736b09d3f7593963c11438730e96793",
       "status" : "DELETED",
       "url" : "https://dev.azure.com/apache-hudi-ci-org/785b6ef4-2f42-4a89-8f0e-5f0d7039a0cc/_build/results?buildId=3370",
       "triggerID" : "fc63a8f7b736b09d3f7593963c11438730e96793",
       "triggerType" : "PUSH"
     }, {
       "hash" : "be86e20b85317383e41db5895aa5aa71dfdf85ea",
       "status" : "DELETED",
       "url" : "https://dev.azure.com/apache-hudi-ci-org/785b6ef4-2f42-4a89-8f0e-5f0d7039a0cc/_build/results?buildId=3549",
       "triggerID" : "be86e20b85317383e41db5895aa5aa71dfdf85ea",
       "triggerType" : "PUSH"
     }, {
       "hash" : "b451b3b4544ef112a8573d1040dfcc23e19a610d",
       "status" : "DELETED",
       "url" : "https://dev.azure.com/apache-hudi-ci-org/785b6ef4-2f42-4a89-8f0e-5f0d7039a0cc/_build/results?buildId=3553",
       "triggerID" : "b451b3b4544ef112a8573d1040dfcc23e19a610d",
       "triggerType" : "PUSH"
     }, {
       "hash" : "256c4c8c909ae78b6c9fbfa9e58e008f5906de8c",
       "status" : "SUCCESS",
       "url" : "https://dev.azure.com/apache-hudi-ci-org/785b6ef4-2f42-4a89-8f0e-5f0d7039a0cc/_build/results?buildId=3554",
       "triggerID" : "256c4c8c909ae78b6c9fbfa9e58e008f5906de8c",
       "triggerType" : "PUSH"
     } ]
   }-->
   ## CI report:
   
   * 256c4c8c909ae78b6c9fbfa9e58e008f5906de8c Azure: [SUCCESS](https://dev.azure.com/apache-hudi-ci-org/785b6ef4-2f42-4a89-8f0e-5f0d7039a0cc/_build/results?buildId=3554) 
   
   <details>
   <summary>Bot commands</summary>
     @hudi-bot supports the following commands:
   
    - `@hudi-bot run azure` re-run the last Azure build
   </details>





[GitHub] [hudi] hudi-bot commented on pull request #3998: [HUDI-2759] extract HoodieCatalogTable to coordinate spark catalog table and hoodie table

Posted by GitBox <gi...@apache.org>.
hudi-bot commented on pull request #3998:
URL: https://github.com/apache/hudi/pull/3998#issuecomment-976149196


   ## CI report:
   
   * 8c1b74a3dc22d8b0cafdc90a9b5e877f7a83580b Azure: [SUCCESS](https://dev.azure.com/apache-hudi-ci-org/785b6ef4-2f42-4a89-8f0e-5f0d7039a0cc/_build/results?buildId=3611) 
   
   <details>
   <summary>Bot commands</summary>
     @hudi-bot supports the following commands:
   
    - `@hudi-bot run azure` re-run the last Azure build
   </details>


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: commits-unsubscribe@hudi.apache.org

For queries about this service, please contact Infrastructure at:
users@infra.apache.org



[GitHub] [hudi] hudi-bot removed a comment on pull request #3998: [HUDI-2759] extract HoodieCatalogTable as a bridge between spark cata…

Posted by GitBox <gi...@apache.org>.
hudi-bot removed a comment on pull request #3998:
URL: https://github.com/apache/hudi/pull/3998#issuecomment-974799691


   ## CI report:
   
   * b451b3b4544ef112a8573d1040dfcc23e19a610d Azure: [FAILURE](https://dev.azure.com/apache-hudi-ci-org/785b6ef4-2f42-4a89-8f0e-5f0d7039a0cc/_build/results?buildId=3553) 
   * 256c4c8c909ae78b6c9fbfa9e58e008f5906de8c UNKNOWN
   
   <details>
   <summary>Bot commands</summary>
     @hudi-bot supports the following commands:
   
    - `@hudi-bot run azure` re-run the last Azure build
   </details>


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: commits-unsubscribe@hudi.apache.org

For queries about this service, please contact Infrastructure at:
users@infra.apache.org



[GitHub] [hudi] hudi-bot commented on pull request #3998: [HUDI-2759] extract HoodieCatalogTable as a bridge between spark cata…

Posted by GitBox <gi...@apache.org>.
hudi-bot commented on pull request #3998:
URL: https://github.com/apache/hudi/pull/3998#issuecomment-974768543


   ## CI report:
   
   * be86e20b85317383e41db5895aa5aa71dfdf85ea Azure: [FAILURE](https://dev.azure.com/apache-hudi-ci-org/785b6ef4-2f42-4a89-8f0e-5f0d7039a0cc/_build/results?buildId=3549) 
   
   <details>
   <summary>Bot commands</summary>
     @hudi-bot supports the following commands:
   
    - `@hudi-bot run azure` re-run the last Azure build
   </details>


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: commits-unsubscribe@hudi.apache.org

For queries about this service, please contact Infrastructure at:
users@infra.apache.org



[GitHub] [hudi] xushiyan merged pull request #3998: [HUDI-2759] extract HoodieCatalogTable to coordinate spark catalog table and hoodie table

Posted by GitBox <gi...@apache.org>.
xushiyan merged pull request #3998:
URL: https://github.com/apache/hudi/pull/3998


   


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: commits-unsubscribe@hudi.apache.org

For queries about this service, please contact Infrastructure at:
users@infra.apache.org



[GitHub] [hudi] hudi-bot removed a comment on pull request #3998: [HUDI-2759] extract HoodieCatalogTable as a bridge between spark cata…

Posted by GitBox <gi...@apache.org>.
hudi-bot removed a comment on pull request #3998:
URL: https://github.com/apache/hudi/pull/3998#issuecomment-974768543


   ## CI report:
   
   * be86e20b85317383e41db5895aa5aa71dfdf85ea Azure: [FAILURE](https://dev.azure.com/apache-hudi-ci-org/785b6ef4-2f42-4a89-8f0e-5f0d7039a0cc/_build/results?buildId=3549) 
   
   <details>
   <summary>Bot commands</summary>
     @hudi-bot supports the following commands:
   
    - `@hudi-bot run azure` re-run the last Azure build
   </details>


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: commits-unsubscribe@hudi.apache.org

For queries about this service, please contact Infrastructure at:
users@infra.apache.org



[GitHub] [hudi] hudi-bot commented on pull request #3998: [HUDI-2759] extract HoodieCatalogTable as a bridge between spark cata…

Posted by GitBox <gi...@apache.org>.
hudi-bot commented on pull request #3998:
URL: https://github.com/apache/hudi/pull/3998#issuecomment-974800507


   ## CI report:
   
   * b451b3b4544ef112a8573d1040dfcc23e19a610d Azure: [FAILURE](https://dev.azure.com/apache-hudi-ci-org/785b6ef4-2f42-4a89-8f0e-5f0d7039a0cc/_build/results?buildId=3553) 
   * 256c4c8c909ae78b6c9fbfa9e58e008f5906de8c Azure: [PENDING](https://dev.azure.com/apache-hudi-ci-org/785b6ef4-2f42-4a89-8f0e-5f0d7039a0cc/_build/results?buildId=3554) 
   
   <details>
   <summary>Bot commands</summary>
     @hudi-bot supports the following commands:
   
    - `@hudi-bot run azure` re-run the last Azure build
   </details>


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: commits-unsubscribe@hudi.apache.org

For queries about this service, please contact Infrastructure at:
users@infra.apache.org



[GitHub] [hudi] hudi-bot removed a comment on pull request #3998: [HUDI-2759] extract HoodieCatalogTable as a bridge between spark cata…

Posted by GitBox <gi...@apache.org>.
hudi-bot removed a comment on pull request #3998:
URL: https://github.com/apache/hudi/pull/3998#issuecomment-968555682


   ## CI report:
   
   * fc63a8f7b736b09d3f7593963c11438730e96793 Azure: [PENDING](https://dev.azure.com/apache-hudi-ci-org/785b6ef4-2f42-4a89-8f0e-5f0d7039a0cc/_build/results?buildId=3370) 
   
   <details>
   <summary>Bot commands</summary>
     @hudi-bot supports the following commands:
   
    - `@hudi-bot run azure` re-run the last Azure build
   </details>


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: commits-unsubscribe@hudi.apache.org

For queries about this service, please contact Infrastructure at:
users@infra.apache.org



[GitHub] [hudi] hudi-bot removed a comment on pull request #3998: [HUDI-2759] extract HoodieCatalogTable as a bridge between spark cata…

Posted by GitBox <gi...@apache.org>.
hudi-bot removed a comment on pull request #3998:
URL: https://github.com/apache/hudi/pull/3998#issuecomment-968554702


   ## CI report:
   
   * fc63a8f7b736b09d3f7593963c11438730e96793 UNKNOWN
   
   <details>
   <summary>Bot commands</summary>
     @hudi-bot supports the following commands:
   
    - `@hudi-bot run azure` re-run the last Azure build
   </details>


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: commits-unsubscribe@hudi.apache.org

For queries about this service, please contact Infrastructure at:
users@infra.apache.org



[GitHub] [hudi] hudi-bot commented on pull request #3998: [HUDI-2759] extract HoodieCatalogTable as a bridge between spark cata…

Posted by GitBox <gi...@apache.org>.
hudi-bot commented on pull request #3998:
URL: https://github.com/apache/hudi/pull/3998#issuecomment-974763042


   ## CI report:
   
   * fc63a8f7b736b09d3f7593963c11438730e96793 Azure: [SUCCESS](https://dev.azure.com/apache-hudi-ci-org/785b6ef4-2f42-4a89-8f0e-5f0d7039a0cc/_build/results?buildId=3370) 
   * be86e20b85317383e41db5895aa5aa71dfdf85ea UNKNOWN
   
   <details>
   <summary>Bot commands</summary>
     @hudi-bot supports the following commands:
   
    - `@hudi-bot run azure` re-run the last Azure build
   </details>


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: commits-unsubscribe@hudi.apache.org

For queries about this service, please contact Infrastructure at:
users@infra.apache.org



[GitHub] [hudi] hudi-bot commented on pull request #3998: [HUDI-2759] extract HoodieCatalogTable as a bridge between spark cata…

Posted by GitBox <gi...@apache.org>.
hudi-bot commented on pull request #3998:
URL: https://github.com/apache/hudi/pull/3998#issuecomment-974791885


   ## CI report:
   
   * be86e20b85317383e41db5895aa5aa71dfdf85ea Azure: [FAILURE](https://dev.azure.com/apache-hudi-ci-org/785b6ef4-2f42-4a89-8f0e-5f0d7039a0cc/_build/results?buildId=3549) 
   * b451b3b4544ef112a8573d1040dfcc23e19a610d UNKNOWN
   
   <details>
   <summary>Bot commands</summary>
     @hudi-bot supports the following commands:
   
    - `@hudi-bot run azure` re-run the last Azure build
   </details>


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: commits-unsubscribe@hudi.apache.org

For queries about this service, please contact Infrastructure at:
users@infra.apache.org



[GitHub] [hudi] hudi-bot removed a comment on pull request #3998: [HUDI-2759] extract HoodieCatalogTable as a bridge between spark cata…

Posted by GitBox <gi...@apache.org>.
hudi-bot removed a comment on pull request #3998:
URL: https://github.com/apache/hudi/pull/3998#issuecomment-968575437






-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: commits-unsubscribe@hudi.apache.org

For queries about this service, please contact Infrastructure at:
users@infra.apache.org



[GitHub] [hudi] hudi-bot commented on pull request #3998: [HUDI-2759] extract HoodieCatalogTable to coordinate spark catalog table and hoodie table

Posted by GitBox <gi...@apache.org>.
hudi-bot commented on pull request #3998:
URL: https://github.com/apache/hudi/pull/3998#issuecomment-976114571


   ## CI report:
   
   * 0993f73f9bbe97df8f0e6eb5c93bea1ed0317eba Azure: [FAILURE](https://dev.azure.com/apache-hudi-ci-org/785b6ef4-2f42-4a89-8f0e-5f0d7039a0cc/_build/results?buildId=3594) 
   * 8c1b74a3dc22d8b0cafdc90a9b5e877f7a83580b Azure: [PENDING](https://dev.azure.com/apache-hudi-ci-org/785b6ef4-2f42-4a89-8f0e-5f0d7039a0cc/_build/results?buildId=3611) 
   
   <details>
   <summary>Bot commands</summary>
     @hudi-bot supports the following commands:
   
    - `@hudi-bot run azure` re-run the last Azure build
   </details>


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: commits-unsubscribe@hudi.apache.org

For queries about this service, please contact Infrastructure at:
users@infra.apache.org



[GitHub] [hudi] YannByron commented on pull request #3998: [HUDI-2759] extract HoodieCatalogTable as a bridge between spark cata…

Posted by GitBox <gi...@apache.org>.
YannByron commented on pull request #3998:
URL: https://github.com/apache/hudi/pull/3998#issuecomment-968583674


   @xushiyan can you help to review this?


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: commits-unsubscribe@hudi.apache.org

For queries about this service, please contact Infrastructure at:
users@infra.apache.org



[GitHub] [hudi] vinothchandar commented on pull request #3998: [HUDI-2759] extract HoodieCatalogTable to coordinate spark catalog table and hoodie table

Posted by GitBox <gi...@apache.org>.
vinothchandar commented on pull request #3998:
URL: https://github.com/apache/hudi/pull/3998#issuecomment-977822734


   @xushiyan this seems to have broken master?


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: commits-unsubscribe@hudi.apache.org

For queries about this service, please contact Infrastructure at:
users@infra.apache.org



[GitHub] [hudi] YannByron commented on a change in pull request #3998: [HUDI-2759] extract HoodieCatalogTable as a bridge between spark cata…

Posted by GitBox <gi...@apache.org>.
YannByron commented on a change in pull request #3998:
URL: https://github.com/apache/hudi/pull/3998#discussion_r753751992



##########
File path: hudi-spark-datasource/hudi-spark/src/main/scala/org/apache/spark/sql/catalyst/catalog/HoodieCatalogTable.scala
##########
@@ -0,0 +1,291 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one or more
+ * contributor license agreements.  See the NOTICE file distributed with
+ * this work for additional information regarding copyright ownership.
+ * The ASF licenses this file to You under the Apache License, Version 2.0
+ * (the "License"); you may not use this file except in compliance with
+ * the License.  You may obtain a copy of the License at
+ *
+ *    http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.spark.sql.catalyst.catalog
+
+import org.apache.hudi.HoodieWriterUtils.{convertMapToHoodieConfig, validateTableConfig}
+import org.apache.hudi.common.model.{HoodieCommitMetadata, HoodieTableType}
+import org.apache.hudi.common.table.HoodieTableConfig
+import org.apache.hudi.common.table.HoodieTableMetaClient
+import org.apache.hudi.keygen.ComplexKeyGenerator
+import org.apache.hudi.keygen.factory.HoodieSparkKeyGeneratorFactory
+
+import org.apache.spark.internal.Logging
+import org.apache.spark.sql.{AnalysisException, SparkSession}
+import org.apache.spark.sql.avro.SchemaConverters
+import org.apache.spark.sql.catalyst.TableIdentifier
+import org.apache.spark.sql.hudi.{HoodieOptionConfig, HoodieSqlUtils}
+import org.apache.spark.sql.hudi.HoodieSqlUtils._
+import org.apache.spark.sql.types.{StructField, StructType}
+
+import java.util.{Locale, Properties}
+
+import scala.collection.JavaConverters._
+import scala.collection.mutable
+
+/**
+ * A wrapper around a Spark CatalogTable instance and the corresponding hoodie table.
+ */
+class HoodieCatalogTable(val spark: SparkSession, val table: CatalogTable) extends Logging {
+
+  assert(table.provider.map(_.toLowerCase(Locale.ROOT)).orNull == "hudi", "It's not a Hudi table")
+
+  private val hadoopConf = spark.sessionState.newHadoopConf
+
+  /**
+   * Properties defined in the catalog.
+   */
+  val catalogProperties: Map[String, String] = table.storage.properties ++ table.properties
+
+  /**
+   * The hoodie table's location.
+   * When creating a managed hoodie table, use `catalog.defaultTablePath`.
+   */
+  val tableLocation: String = HoodieSqlUtils.getTableLocation(table, spark)
+
+  /**
+   * A flag indicating whether the hoodie table exists.
+   */
+  val hoodieTableExists: Boolean = tableExistsInPath(tableLocation, hadoopConf)
+
+  /**
+   * Meta Client.
+   */
+  lazy val metaClient: HoodieTableMetaClient = HoodieTableMetaClient.builder()
+    .setBasePath(tableLocation)
+    .setConf(hadoopConf)
+    .build()
+
+  /**
+   * Hoodie Table Config
+   */
+  lazy val tableConfig: HoodieTableConfig = metaClient.getTableConfig
+
+  /**
+   * The name of the table.
+   */
+  lazy val tableName: String = tableConfig.getTableName
+
+  /**
+   * The type of the table.
+   */
+  lazy val tableType: HoodieTableType = tableConfig.getTableType
+
+  /**
+   * The name of the table type.
+   */
+  lazy val tableTypeName: String = tableType.name()
+
+  /**
+   * Record key fields (the primary key list).
+   */
+  lazy val primaryKeys: Array[String] = tableConfig.getRecordKeyFields.orElse(Array.empty)
+
+  /**
+   * PreCombine Field
+   */
+  lazy val preCombineKey: Option[String] = Option(tableConfig.getPreCombineField)
+
+  /**
+   * Partition fields.
+   */
+  lazy val partitionFields: Array[String] = tableConfig.getPartitionFields.orElse(Array.empty)
+
+  /**
+   * The schema of the table.
+   * Every StructField is made nullable.
+   */
+  lazy val tableSchema: StructType = {
+    val originSchema = getTableSqlSchema(metaClient, includeMetadataFields = true).get
+    StructType(originSchema.map(_.copy(nullable = true)))
+  }
+
+  /**
+   * The schema without hoodie meta fields
+   */
+  lazy val tableSchemaWithoutMetaFields: StructType = HoodieSqlUtils.removeMetaFields(tableSchema)
+
+  /**
+   * The schema of data fields
+   */
+  lazy val dataSchema: StructType = {
+    StructType(tableSchema.filterNot(f => partitionFields.contains(f.name)))
+  }

Review comment:
       If `hoodie.datasource.write.drop.partition.columns` is false, tableSchema doesn't contain the partition columns, so the dataSchema generated by the code above will be the same as tableSchema. I therefore don't think a change is necessary here.
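       To make the equivalence concrete, here is a minimal illustrative sketch (not code from the PR; `dataSchemaOf` is a hypothetical helper that mirrors the `dataSchema` definition quoted above):

       ```scala
       import org.apache.spark.sql.types.{StringType, StructField, StructType}

       // Same filter as the dataSchema member under review.
       def dataSchemaOf(tableSchema: StructType, partitionFields: Array[String]): StructType =
         StructType(tableSchema.filterNot(f => partitionFields.contains(f.name)))

       // When the partition columns are absent from tableSchema, the filter
       // removes nothing, so the result is identical to tableSchema.
       val schema = StructType(Seq(StructField("id", StringType), StructField("price", StringType)))
       assert(dataSchemaOf(schema, Array("dt")) == schema)
       ```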




-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: commits-unsubscribe@hudi.apache.org

For queries about this service, please contact Infrastructure at:
users@infra.apache.org



[GitHub] [hudi] hudi-bot removed a comment on pull request #3998: [HUDI-2759] extract HoodieCatalogTable as a bridge between spark cata…

Posted by GitBox <gi...@apache.org>.
hudi-bot removed a comment on pull request #3998:
URL: https://github.com/apache/hudi/pull/3998#issuecomment-968575437


   ## CI report:
   
   * fc63a8f7b736b09d3f7593963c11438730e96793 Azure: [SUCCESS](https://dev.azure.com/apache-hudi-ci-org/785b6ef4-2f42-4a89-8f0e-5f0d7039a0cc/_build/results?buildId=3370) 
   
   <details>
   <summary>Bot commands</summary>
     @hudi-bot supports the following commands:
   
    - `@hudi-bot run azure` re-run the last Azure build
   </details>


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: commits-unsubscribe@hudi.apache.org

For queries about this service, please contact Infrastructure at:
users@infra.apache.org



[GitHub] [hudi] hudi-bot commented on pull request #3998: [HUDI-2759] extract HoodieCatalogTable to coordinate spark catalog table and hoodie table

Posted by GitBox <gi...@apache.org>.
hudi-bot commented on pull request #3998:
URL: https://github.com/apache/hudi/pull/3998#issuecomment-975678762


   ## CI report:
   
   * 0993f73f9bbe97df8f0e6eb5c93bea1ed0317eba Azure: [FAILURE](https://dev.azure.com/apache-hudi-ci-org/785b6ef4-2f42-4a89-8f0e-5f0d7039a0cc/_build/results?buildId=3594) 
   
   <details>
   <summary>Bot commands</summary>
     @hudi-bot supports the following commands:
   
    - `@hudi-bot run azure` re-run the last Azure build
   </details>


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: commits-unsubscribe@hudi.apache.org

For queries about this service, please contact Infrastructure at:
users@infra.apache.org



[GitHub] [hudi] hudi-bot removed a comment on pull request #3998: [HUDI-2759] extract HoodieCatalogTable to coordinate spark catalog table and hoodie table

Posted by GitBox <gi...@apache.org>.
hudi-bot removed a comment on pull request #3998:
URL: https://github.com/apache/hudi/pull/3998#issuecomment-976113745


   ## CI report:
   
   * 0993f73f9bbe97df8f0e6eb5c93bea1ed0317eba Azure: [FAILURE](https://dev.azure.com/apache-hudi-ci-org/785b6ef4-2f42-4a89-8f0e-5f0d7039a0cc/_build/results?buildId=3594) 
   * 8c1b74a3dc22d8b0cafdc90a9b5e877f7a83580b UNKNOWN
   
   <details>
   <summary>Bot commands</summary>
     @hudi-bot supports the following commands:
   
    - `@hudi-bot run azure` re-run the last Azure build
   </details>


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: commits-unsubscribe@hudi.apache.org

For queries about this service, please contact Infrastructure at:
users@infra.apache.org



[GitHub] [hudi] hudi-bot removed a comment on pull request #3998: [HUDI-2759] extract HoodieCatalogTable as a bridge between spark cata…

Posted by GitBox <gi...@apache.org>.
hudi-bot removed a comment on pull request #3998:
URL: https://github.com/apache/hudi/pull/3998#issuecomment-974800507


   ## CI report:
   
   * b451b3b4544ef112a8573d1040dfcc23e19a610d Azure: [FAILURE](https://dev.azure.com/apache-hudi-ci-org/785b6ef4-2f42-4a89-8f0e-5f0d7039a0cc/_build/results?buildId=3553) 
   * 256c4c8c909ae78b6c9fbfa9e58e008f5906de8c Azure: [PENDING](https://dev.azure.com/apache-hudi-ci-org/785b6ef4-2f42-4a89-8f0e-5f0d7039a0cc/_build/results?buildId=3554) 
   
   <details>
   <summary>Bot commands</summary>
     @hudi-bot supports the following commands:
   
    - `@hudi-bot run azure` re-run the last Azure build
   </details>


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: commits-unsubscribe@hudi.apache.org

For queries about this service, please contact Infrastructure at:
users@infra.apache.org



[GitHub] [hudi] xushiyan commented on a change in pull request #3998: [HUDI-2759] extract HoodieCatalogTable as a bridge between spark cata…

Posted by GitBox <gi...@apache.org>.
xushiyan commented on a change in pull request #3998:
URL: https://github.com/apache/hudi/pull/3998#discussion_r753747739



##########
File path: hudi-spark-datasource/hudi-spark/src/main/scala/org/apache/spark/sql/catalyst/catalog/HoodieCatalogTable.scala
##########
@@ -0,0 +1,291 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one or more
+ * contributor license agreements.  See the NOTICE file distributed with
+ * this work for additional information regarding copyright ownership.
+ * The ASF licenses this file to You under the Apache License, Version 2.0
+ * (the "License"); you may not use this file except in compliance with
+ * the License.  You may obtain a copy of the License at
+ *
+ *    http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.spark.sql.catalyst.catalog
+
+import org.apache.hudi.HoodieWriterUtils.{convertMapToHoodieConfig, validateTableConfig}
+import org.apache.hudi.common.model.{HoodieCommitMetadata, HoodieTableType}
+import org.apache.hudi.common.table.HoodieTableConfig
+import org.apache.hudi.common.table.HoodieTableMetaClient
+import org.apache.hudi.keygen.ComplexKeyGenerator
+import org.apache.hudi.keygen.factory.HoodieSparkKeyGeneratorFactory
+
+import org.apache.spark.internal.Logging
+import org.apache.spark.sql.{AnalysisException, SparkSession}
+import org.apache.spark.sql.avro.SchemaConverters
+import org.apache.spark.sql.catalyst.TableIdentifier
+import org.apache.spark.sql.hudi.{HoodieOptionConfig, HoodieSqlUtils}
+import org.apache.spark.sql.hudi.HoodieSqlUtils._
+import org.apache.spark.sql.types.{StructField, StructType}
+
+import java.util.{Locale, Properties}
+
+import scala.collection.JavaConverters._
+import scala.collection.mutable
+
+/**
+ * A wrapper around a Spark CatalogTable instance and the corresponding hoodie table.
+ */
+class HoodieCatalogTable(val spark: SparkSession, val table: CatalogTable) extends Logging {
+
+  assert(table.provider.map(_.toLowerCase(Locale.ROOT)).orNull == "hudi", "It's not a Hudi table")
+
+  private val hadoopConf = spark.sessionState.newHadoopConf
+
+  /**
+   * Properties defined in the catalog.
+   */
+  val catalogProperties: Map[String, String] = table.storage.properties ++ table.properties
+
+  /**
+   * The hoodie table's location.
+   * When creating a managed hoodie table, use `catalog.defaultTablePath`.
+   */
+  val tableLocation: String = HoodieSqlUtils.getTableLocation(table, spark)
+
+  /**
+   * A flag indicating whether the hoodie table exists.
+   */
+  val hoodieTableExists: Boolean = tableExistsInPath(tableLocation, hadoopConf)
+
+  /**
+   * Meta Client.
+   */
+  lazy val metaClient: HoodieTableMetaClient = HoodieTableMetaClient.builder()
+    .setBasePath(tableLocation)
+    .setConf(hadoopConf)
+    .build()
+
+  /**
+   * Hoodie Table Config
+   */
+  lazy val tableConfig: HoodieTableConfig = metaClient.getTableConfig
+
+  /**
+   * The name of the table.
+   */
+  lazy val tableName: String = tableConfig.getTableName
+
+  /**
+   * The type of the table.
+   */
+  lazy val tableType: HoodieTableType = tableConfig.getTableType
+
+  /**
+   * The name of the table type.
+   */
+  lazy val tableTypeName: String = tableType.name()
+
+  /**
+   * Record key fields (the primary key list).
+   */
+  lazy val primaryKeys: Array[String] = tableConfig.getRecordKeyFields.orElse(Array.empty)
+
+  /**
+   * PreCombine Field
+   */
+  lazy val preCombineKey: Option[String] = Option(tableConfig.getPreCombineField)
+
+  /**
+   * Partition fields.
+   */
+  lazy val partitionFields: Array[String] = tableConfig.getPartitionFields.orElse(Array.empty)
+
+  /**
+   * The schema of the table.
+   * Every StructField is made nullable.
+   */
+  lazy val tableSchema: StructType = {
+    val originSchema = getTableSqlSchema(metaClient, includeMetadataFields = true).get
+    StructType(originSchema.map(_.copy(nullable = true)))
+  }
+
+  /**
+   * The schema without hoodie meta fields
+   */
+  lazy val tableSchemaWithoutMetaFields: StructType = HoodieSqlUtils.removeMetaFields(tableSchema)
+
+  /**
+   * The schema of data fields
+   */
+  lazy val dataSchema: StructType = {
+    StructType(tableSchema.filterNot(f => partitionFields.contains(f.name)))
+  }

Review comment:
       if `hoodie.datasource.write.drop.partition.columns` is false, then shall we keep the dataSchema the same as the tableSchema?

##########
File path: hudi-spark-datasource/hudi-spark/src/main/scala/org/apache/spark/sql/catalyst/catalog/HoodieCatalogTable.scala
##########
@@ -0,0 +1,291 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one or more
+ * contributor license agreements.  See the NOTICE file distributed with
+ * this work for additional information regarding copyright ownership.
+ * The ASF licenses this file to You under the Apache License, Version 2.0
+ * (the "License"); you may not use this file except in compliance with
+ * the License.  You may obtain a copy of the License at
+ *
+ *    http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.spark.sql.catalyst.catalog
+
+import org.apache.hudi.HoodieWriterUtils.{convertMapToHoodieConfig, validateTableConfig}
+import org.apache.hudi.common.model.{HoodieCommitMetadata, HoodieTableType}
+import org.apache.hudi.common.table.HoodieTableConfig
+import org.apache.hudi.common.table.HoodieTableMetaClient
+import org.apache.hudi.keygen.ComplexKeyGenerator
+import org.apache.hudi.keygen.factory.HoodieSparkKeyGeneratorFactory
+
+import org.apache.spark.internal.Logging
+import org.apache.spark.sql.{AnalysisException, SparkSession}
+import org.apache.spark.sql.avro.SchemaConverters
+import org.apache.spark.sql.catalyst.TableIdentifier
+import org.apache.spark.sql.hudi.{HoodieOptionConfig, HoodieSqlUtils}
+import org.apache.spark.sql.hudi.HoodieSqlUtils._
+import org.apache.spark.sql.types.{StructField, StructType}
+
+import java.util.{Locale, Properties}
+
+import scala.collection.JavaConverters._
+import scala.collection.mutable
+
+/**
+ * A wrapper around a Spark CatalogTable instance and the corresponding hoodie table.
+ */
+class HoodieCatalogTable(val spark: SparkSession, val table: CatalogTable) extends Logging {
+
+  assert(table.provider.map(_.toLowerCase(Locale.ROOT)).orNull == "hudi", "It's not a Hudi table")
+
+  private val hadoopConf = spark.sessionState.newHadoopConf
+
+  /**
+   * Properties defined in the catalog.
+   */
+  val catalogProperties: Map[String, String] = table.storage.properties ++ table.properties
+
+  /**
+   * The hoodie table's location.
+   * When creating a managed hoodie table, use `catalog.defaultTablePath`.
+   */
+  val tableLocation: String = HoodieSqlUtils.getTableLocation(table, spark)
+
+  /**
+   * A flag indicating whether the hoodie table exists.
+   */
+  val hoodieTableExists: Boolean = tableExistsInPath(tableLocation, hadoopConf)
+
+  /**
+   * Meta Client.
+   */
+  lazy val metaClient: HoodieTableMetaClient = HoodieTableMetaClient.builder()
+    .setBasePath(tableLocation)
+    .setConf(hadoopConf)
+    .build()
+
+  /**
+   * Hoodie Table Config
+   */
+  lazy val tableConfig: HoodieTableConfig = metaClient.getTableConfig
+
+  /**
+   * The name of the table.
+   */
+  lazy val tableName: String = tableConfig.getTableName
+
+  /**
+   * The type of the table.
+   */
+  lazy val tableType: HoodieTableType = tableConfig.getTableType
+
+  /**
+   * The name of the table type.
+   */
+  lazy val tableTypeName: String = tableType.name()
+
+  /**
+   * Record key fields (the primary key list).
+   */
+  lazy val primaryKeys: Array[String] = tableConfig.getRecordKeyFields.orElse(Array.empty)
+
+  /**
+   * PreCombine Field
+   */
+  lazy val preCombineKey: Option[String] = Option(tableConfig.getPreCombineField)
+
+  /**
+   * Partition fields.
+   */
+  lazy val partitionFields: Array[String] = tableConfig.getPartitionFields.orElse(Array.empty)
+
+  /**
+   * The schema of the table.
+   * Every StructField is made nullable.
+   */
+  lazy val tableSchema: StructType = {
+    val originSchema = getTableSqlSchema(metaClient, includeMetadataFields = true).get
+    StructType(originSchema.map(_.copy(nullable = true)))
+  }
+
+  /**
+   * The schema without hoodie meta fields
+   */
+  lazy val tableSchemaWithoutMetaFields: StructType = HoodieSqlUtils.removeMetaFields(tableSchema)
+
+  /**
+   * The schema of data fields
+   */
+  lazy val dataSchema: StructType = {
+    StructType(tableSchema.filterNot(f => partitionFields.contains(f.name)))
+  }
+
+  /**
+   * The schema of data fields not including hoodie meta fields
+   */
+  lazy val dataSchemaWithoutMetaFields: StructType = HoodieSqlUtils.removeMetaFields(dataSchema)
+
+  /**
+   * The schema of partition fields
+   */
+  lazy val partitionSchema: StructType = StructType(tableSchema.filter(f => partitionFields.contains(f.name)))
+
+  /**
+   * All the partition paths
+   */
+  def getAllPartitionPaths: Seq[String] = HoodieSqlUtils.getAllPartitionPaths(spark, table)
+
+  /**
+   * Initialize the hoodie table for CREATE TABLE (AS SELECT).
+   */
+  def initHoodieTableIfNeeded(force: Boolean = false): Unit = {
+    logInfo(s"Init hoodie.properties for ${table.identifier.unquotedString}")
+    val (finalSchema, tableConfigs) = parseSchemaAndConfigs()
+
+    // Save all the table config to the hoodie.properties.
+    val properties = new Properties()
+    properties.putAll(tableConfigs.asJava)
+
+    HoodieTableMetaClient.withPropertyBuilder()
+      .fromProperties(properties)
+      .setTableName(table.identifier.table)
+      .setTableCreateSchema(SchemaConverters.toAvroType(finalSchema).toString())
+      .setPartitionFields(table.partitionColumnNames.mkString(","))
+      .initTable(hadoopConf, tableLocation)
+  }
+
+  /**
+   * @return the schema and the table parameters, where none of the parameters are sql-styled.
+   */
+  private def parseSchemaAndConfigs(): (StructType, Map[String, String]) = {
+    val sqlOptions = HoodieOptionConfig.defaultSqlOptions ++ catalogProperties
+
+    // get final schema and parameters
+    val (finalSchema, tableConfigs) = (table.tableType, hoodieTableExists) match {
+      case (CatalogTableType.EXTERNAL, true) =>
+        val existingTableConfig = tableConfig.getProps.asScala.toMap
+        val catalogTableProps = HoodieOptionConfig.mappingSqlOptionToTableConfig(catalogProperties)
+        validateTableConfig(spark, catalogTableProps, convertMapToHoodieConfig(existingTableConfig))
+
+        val options = extraTableConfig(spark, hoodieTableExists, existingTableConfig) ++
+          HoodieOptionConfig.mappingSqlOptionToTableConfig(sqlOptions) ++ existingTableConfig
+
+        val userSpecifiedSchema = table.schema
+        val schema = if (tableSchema.nonEmpty) {
+          tableSchema
+        } else if (userSpecifiedSchema.nonEmpty) {
+          addMetaFields(userSpecifiedSchema)
+        } else {
+          throw new IllegalArgumentException(s"Missing schema for Create Table: $tableName")
+        }
+
+        (schema, options)
+
+      case (_, false) =>
+        assert(table.schema.nonEmpty, s"Missing schema for Create Table: $tableName")

Review comment:
       To align with the exception thrown at L185, can we use `ValidationUtils#checkArgument` here?
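       As a minimal sketch of that suggestion (assuming Hudi's `org.apache.hudi.common.util.ValidationUtils` helper):

       ```scala
       import org.apache.hudi.common.util.ValidationUtils.checkArgument

       // Replaces the bare Scala assert so the failure surfaces as an
       // IllegalArgumentException, consistent with the explicit throw in the
       // branch above.
       checkArgument(table.schema.nonEmpty, s"Missing schema for Create Table: $tableName")
       ```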

##########
File path: hudi-spark-datasource/hudi-spark/src/main/scala/org/apache/spark/sql/catalyst/catalog/HoodieCatalogTable.scala
##########
@@ -0,0 +1,291 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one or more
+ * contributor license agreements.  See the NOTICE file distributed with
+ * this work for additional information regarding copyright ownership.
+ * The ASF licenses this file to You under the Apache License, Version 2.0
+ * (the "License"); you may not use this file except in compliance with
+ * the License.  You may obtain a copy of the License at
+ *
+ *    http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.spark.sql.catalyst.catalog
+
+import org.apache.hudi.HoodieWriterUtils.{convertMapToHoodieConfig, validateTableConfig}
+import org.apache.hudi.common.model.{HoodieCommitMetadata, HoodieTableType}
+import org.apache.hudi.common.table.HoodieTableConfig
+import org.apache.hudi.common.table.HoodieTableMetaClient
+import org.apache.hudi.keygen.ComplexKeyGenerator
+import org.apache.hudi.keygen.factory.HoodieSparkKeyGeneratorFactory
+
+import org.apache.spark.internal.Logging
+import org.apache.spark.sql.{AnalysisException, SparkSession}
+import org.apache.spark.sql.avro.SchemaConverters
+import org.apache.spark.sql.catalyst.TableIdentifier
+import org.apache.spark.sql.hudi.{HoodieOptionConfig, HoodieSqlUtils}
+import org.apache.spark.sql.hudi.HoodieSqlUtils._
+import org.apache.spark.sql.types.{StructField, StructType}
+
+import java.util.{Locale, Properties}
+
+import scala.collection.JavaConverters._
+import scala.collection.mutable
+
+/**
+ * A wrapper around a Spark CatalogTable instance and the corresponding hoodie table.
+ */
+class HoodieCatalogTable(val spark: SparkSession, val table: CatalogTable) extends Logging {
+
+  assert(table.provider.map(_.toLowerCase(Locale.ROOT)).orNull == "hudi", "It's not a Hudi table")
+
+  private val hadoopConf = spark.sessionState.newHadoopConf
+
+  /**
+   * Properties defined in the catalog.
+   */
+  val catalogProperties: Map[String, String] = table.storage.properties ++ table.properties
+
+  /**
+   * The hoodie table's location.
+   * When creating a managed hoodie table, use `catalog.defaultTablePath`.
+   */
+  val tableLocation: String = HoodieSqlUtils.getTableLocation(table, spark)
+
+  /**
+   * A flag indicating whether the hoodie table exists.
+   */
+  val hoodieTableExists: Boolean = tableExistsInPath(tableLocation, hadoopConf)
+
+  /**
+   * Meta Client.
+   */
+  lazy val metaClient: HoodieTableMetaClient = HoodieTableMetaClient.builder()
+    .setBasePath(tableLocation)
+    .setConf(hadoopConf)
+    .build()
+
+  /**
+   * Hoodie Table Config
+   */
+  lazy val tableConfig: HoodieTableConfig = metaClient.getTableConfig
+
+  /**
+   * The name of the table.
+   */
+  lazy val tableName: String = tableConfig.getTableName
+
+  /**
+   * The type of the table.
+   */
+  lazy val tableType: HoodieTableType = tableConfig.getTableType
+
+  /**
+   * The name of the table type.
+   */
+  lazy val tableTypeName: String = tableType.name()
+
+  /**
+   * Record key fields (the primary key list).
+   */
+  lazy val primaryKeys: Array[String] = tableConfig.getRecordKeyFields.orElse(Array.empty)
+
+  /**
+   * PreCombine Field
+   */
+  lazy val preCombineKey: Option[String] = Option(tableConfig.getPreCombineField)
+
+  /**
+   * Partition fields.
+   */
+  lazy val partitionFields: Array[String] = tableConfig.getPartitionFields.orElse(Array.empty)
+
+  /**
+   * The schema of the table.
+   * Every StructField is made nullable.
+   */
+  lazy val tableSchema: StructType = {
+    val originSchema = getTableSqlSchema(metaClient, includeMetadataFields = true).get
+    StructType(originSchema.map(_.copy(nullable = true)))
+  }
+
+  /**
+   * The schema without hoodie meta fields
+   */
+  lazy val tableSchemaWithoutMetaFields: StructType = HoodieSqlUtils.removeMetaFields(tableSchema)
+
+  /**
+   * The schema of data fields
+   */
+  lazy val dataSchema: StructType = {
+    StructType(tableSchema.filterNot(f => partitionFields.contains(f.name)))
+  }
+
+  /**
+   * The schema of data fields not including hoodie meta fields
+   */
+  lazy val dataSchemaWithoutMetaFields: StructType = HoodieSqlUtils.removeMetaFields(dataSchema)
+
+  /**
+   * The schema of partition fields
+   */
+  lazy val partitionSchema: StructType = StructType(tableSchema.filter(f => partitionFields.contains(f.name)))
+
+  /**
+   * All the partition paths
+   */
+  def getAllPartitionPaths: Seq[String] = HoodieSqlUtils.getAllPartitionPaths(spark, table)
+
+  /**
+   * Initialize the hoodie table for CREATE TABLE (AS SELECT).
+   */
+  def initHoodieTableIfNeeded(force: Boolean = false): Unit = {
+    logInfo(s"Init hoodie.properties for ${table.identifier.unquotedString}")
+    val (finalSchema, tableConfigs) = parseSchemaAndConfigs()
+
+    // Save all the table config to the hoodie.properties.
+    val properties = new Properties()
+    properties.putAll(tableConfigs.asJava)
+
+    HoodieTableMetaClient.withPropertyBuilder()
+      .fromProperties(properties)
+      .setTableName(table.identifier.table)
+      .setTableCreateSchema(SchemaConverters.toAvroType(finalSchema).toString())
+      .setPartitionFields(table.partitionColumnNames.mkString(","))
+      .initTable(hadoopConf, tableLocation)
+  }

Review comment:
       I don't see any check or early return here, so should we just call it `initHoodieTable()`? Also, I don't see where `force` is used.
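       A sketch of that rename with the unused flag dropped (illustrative, derived from the method body quoted above):

       ```scala
       /**
        * Initialize the hoodie table for CREATE TABLE (AS SELECT).
        */
       def initHoodieTable(): Unit = {
         logInfo(s"Init hoodie.properties for ${table.identifier.unquotedString}")
         val (finalSchema, tableConfigs) = parseSchemaAndConfigs()

         // Save all the table configs to hoodie.properties.
         val properties = new Properties()
         properties.putAll(tableConfigs.asJava)

         HoodieTableMetaClient.withPropertyBuilder()
           .fromProperties(properties)
           .setTableName(table.identifier.table)
           .setTableCreateSchema(SchemaConverters.toAvroType(finalSchema).toString())
           .setPartitionFields(table.partitionColumnNames.mkString(","))
           .initTable(hadoopConf, tableLocation)
       }
       ```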




-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: commits-unsubscribe@hudi.apache.org

For queries about this service, please contact Infrastructure at:
users@infra.apache.org



[GitHub] [hudi] hudi-bot commented on pull request #3998: [HUDI-2759] extract HoodieCatalogTable to coordinate spark catalog table and hoodie table

Posted by GitBox <gi...@apache.org>.
hudi-bot commented on pull request #3998:
URL: https://github.com/apache/hudi/pull/3998#issuecomment-975575201


   ## CI report:
   
   * 256c4c8c909ae78b6c9fbfa9e58e008f5906de8c Azure: [SUCCESS](https://dev.azure.com/apache-hudi-ci-org/785b6ef4-2f42-4a89-8f0e-5f0d7039a0cc/_build/results?buildId=3554) 
   * 0993f73f9bbe97df8f0e6eb5c93bea1ed0317eba Azure: [PENDING](https://dev.azure.com/apache-hudi-ci-org/785b6ef4-2f42-4a89-8f0e-5f0d7039a0cc/_build/results?buildId=3594) 
   
   <details>
   <summary>Bot commands</summary>
     @hudi-bot supports the following commands:
   
    - `@hudi-bot run azure` re-run the last Azure build
   </details>


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: commits-unsubscribe@hudi.apache.org

For queries about this service, please contact Infrastructure at:
users@infra.apache.org



[GitHub] [hudi] hudi-bot removed a comment on pull request #3998: [HUDI-2759] extract HoodieCatalogTable to coordinate spark catalog table and hoodie table

Posted by GitBox <gi...@apache.org>.
hudi-bot removed a comment on pull request #3998:
URL: https://github.com/apache/hudi/pull/3998#issuecomment-975572445


   ## CI report:
   
   * 256c4c8c909ae78b6c9fbfa9e58e008f5906de8c Azure: [SUCCESS](https://dev.azure.com/apache-hudi-ci-org/785b6ef4-2f42-4a89-8f0e-5f0d7039a0cc/_build/results?buildId=3554) 
   * 0993f73f9bbe97df8f0e6eb5c93bea1ed0317eba UNKNOWN
   
   <details>
   <summary>Bot commands</summary>
     @hudi-bot supports the following commands:
   
    - `@hudi-bot run azure` re-run the last Azure build
   </details>





[GitHub] [hudi] hudi-bot removed a comment on pull request #3998: [HUDI-2759] extract HoodieCatalogTable to coordinate spark catalog table and hoodie table

Posted by GitBox <gi...@apache.org>.
hudi-bot removed a comment on pull request #3998:
URL: https://github.com/apache/hudi/pull/3998#issuecomment-975678762


   ## CI report:
   
   * 0993f73f9bbe97df8f0e6eb5c93bea1ed0317eba Azure: [FAILURE](https://dev.azure.com/apache-hudi-ci-org/785b6ef4-2f42-4a89-8f0e-5f0d7039a0cc/_build/results?buildId=3594) 
   
   <details>
   <summary>Bot commands</summary>
     @hudi-bot supports the following commands:
   
    - `@hudi-bot run azure` re-run the last Azure build
   </details>





[GitHub] [hudi] hudi-bot commented on pull request #3998: [HUDI-2759] extract HoodieCatalogTable as a bridge between spark cata…

Posted by GitBox <gi...@apache.org>.
hudi-bot commented on pull request #3998:
URL: https://github.com/apache/hudi/pull/3998#issuecomment-968555682


   ## CI report:
   
   * fc63a8f7b736b09d3f7593963c11438730e96793 Azure: [PENDING](https://dev.azure.com/apache-hudi-ci-org/785b6ef4-2f42-4a89-8f0e-5f0d7039a0cc/_build/results?buildId=3370) 
   
   <details>
   <summary>Bot commands</summary>
     @hudi-bot supports the following commands:
   
    - `@hudi-bot run azure` re-run the last Azure build
   </details>





[GitHub] [hudi] hudi-bot commented on pull request #3998: [HUDI-2759] extract HoodieCatalogTable as a bridge between spark cata…

Posted by GitBox <gi...@apache.org>.
hudi-bot commented on pull request #3998:
URL: https://github.com/apache/hudi/pull/3998#issuecomment-968575437


   ## CI report:
   
   * fc63a8f7b736b09d3f7593963c11438730e96793 Azure: [SUCCESS](https://dev.azure.com/apache-hudi-ci-org/785b6ef4-2f42-4a89-8f0e-5f0d7039a0cc/_build/results?buildId=3370) 
   
   <details>
   <summary>Bot commands</summary>
     @hudi-bot supports the following commands:
   
    - `@hudi-bot run azure` re-run the last Azure build
   </details>





[GitHub] [hudi] xushiyan commented on a change in pull request #3998: [HUDI-2759] extract HoodieCatalogTable as a bridge between spark cata…

Posted by GitBox <gi...@apache.org>.
xushiyan commented on a change in pull request #3998:
URL: https://github.com/apache/hudi/pull/3998#discussion_r753747739



##########
File path: hudi-spark-datasource/hudi-spark/src/main/scala/org/apache/spark/sql/catalyst/catalog/HoodieCatalogTable.scala
##########
@@ -0,0 +1,291 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one or more
+ * contributor license agreements.  See the NOTICE file distributed with
+ * this work for additional information regarding copyright ownership.
+ * The ASF licenses this file to You under the Apache License, Version 2.0
+ * (the "License"); you may not use this file except in compliance with
+ * the License.  You may obtain a copy of the License at
+ *
+ *    http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.spark.sql.catalyst.catalog
+
+import org.apache.hudi.HoodieWriterUtils.{convertMapToHoodieConfig, validateTableConfig}
+import org.apache.hudi.common.model.{HoodieCommitMetadata, HoodieTableType}
+import org.apache.hudi.common.table.HoodieTableConfig
+import org.apache.hudi.common.table.HoodieTableMetaClient
+import org.apache.hudi.keygen.ComplexKeyGenerator
+import org.apache.hudi.keygen.factory.HoodieSparkKeyGeneratorFactory
+
+import org.apache.spark.internal.Logging
+import org.apache.spark.sql.{AnalysisException, SparkSession}
+import org.apache.spark.sql.avro.SchemaConverters
+import org.apache.spark.sql.catalyst.TableIdentifier
+import org.apache.spark.sql.hudi.{HoodieOptionConfig, HoodieSqlUtils}
+import org.apache.spark.sql.hudi.HoodieSqlUtils._
+import org.apache.spark.sql.types.{StructField, StructType}
+
+import java.util.{Locale, Properties}
+
+import scala.collection.JavaConverters._
+import scala.collection.mutable
+
+/**
+ * A wrapper around a Spark CatalogTable instance and the corresponding hoodie table.
+ */
+class HoodieCatalogTable(val spark: SparkSession, val table: CatalogTable) extends Logging {
+
+  assert(table.provider.map(_.toLowerCase(Locale.ROOT)).orNull == "hudi", "It's not a Hudi table")
+
+  private val hadoopConf = spark.sessionState.newHadoopConf
+
+  /**
+   * Properties defined in the catalog.
+   */
+  val catalogProperties: Map[String, String] = table.storage.properties ++ table.properties
+
+  /**
+   * The hoodie table's location.
+   * For a managed hoodie table, `catalog.defaultTablePath` is used.
+   */
+  val tableLocation: String = HoodieSqlUtils.getTableLocation(table, spark)
+
+  /**
+   * A flag indicating whether the hoodie table exists.
+   */
+  val hoodieTableExists: Boolean = tableExistsInPath(tableLocation, hadoopConf)
+
+  /**
+   * Meta Client.
+   */
+  lazy val metaClient: HoodieTableMetaClient = HoodieTableMetaClient.builder()
+    .setBasePath(tableLocation)
+    .setConf(hadoopConf)
+    .build()
+
+  /**
+   * Hoodie Table Config
+   */
+  lazy val tableConfig: HoodieTableConfig = metaClient.getTableConfig
+
+  /**
+   * The name of the table.
+   */
+  lazy val tableName: String = tableConfig.getTableName
+
+  /**
+   * The type of the table.
+   */
+  lazy val tableType: HoodieTableType = tableConfig.getTableType
+
+  /**
+   * The name of the table type.
+   */
+  lazy val tableTypeName: String = tableType.name()
+
+  /**
+   * Record key fields (primary key list).
+   */
+  lazy val primaryKeys: Array[String] = tableConfig.getRecordKeyFields.orElse(Array.empty)
+
+  /**
+   * PreCombine Field
+   */
+  lazy val preCombineKey: Option[String] = Option(tableConfig.getPreCombineField)
+
+  /**
+   * Partition fields.
+   */
+  lazy val partitionFields: Array[String] = tableConfig.getPartitionFields.orElse(Array.empty)
+
+  /**
+   * The schema of the table.
+   * Every StructField is made nullable.
+   */
+  lazy val tableSchema: StructType = {
+    val originSchema = getTableSqlSchema(metaClient, includeMetadataFields = true).get
+    StructType(originSchema.map(_.copy(nullable = true)))
+  }
+
+  /**
+   * The schema without hoodie meta fields
+   */
+  lazy val tableSchemaWithoutMetaFields: StructType = HoodieSqlUtils.removeMetaFields(tableSchema)
+
+  /**
+   * The schema of data fields
+   */
+  lazy val dataSchema: StructType = {
+    StructType(tableSchema.filterNot(f => partitionFields.contains(f.name)))
+  }

Review comment:
       If `hoodie.datasource.write.drop.partition.columns` is false, shall we keep `dataSchema` the same as `tableSchema`?
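
   A rough sketch of what that could look like, assuming the flag is read out of `catalogProperties` (the lookup shown here is a hypothetical placement, not the merged behavior):

   ```scala
   // Sketch: prune partition columns from dataSchema only when
   // hoodie.datasource.write.drop.partition.columns is set to true;
   // otherwise dataSchema stays identical to tableSchema.
   lazy val dataSchema: StructType = {
     val dropPartitionColumns = catalogProperties
       .getOrElse("hoodie.datasource.write.drop.partition.columns", "false")
       .toBoolean
     if (dropPartitionColumns) {
       StructType(tableSchema.filterNot(f => partitionFields.contains(f.name)))
     } else {
       tableSchema
     }
   }
   ```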

##########
File path: hudi-spark-datasource/hudi-spark/src/main/scala/org/apache/spark/sql/catalyst/catalog/HoodieCatalogTable.scala
##########
@@ -0,0 +1,291 @@
   [... duplicated quote of HoodieCatalogTable.scala omitted: identical to the first quote above, from the license header through the `dataSchema` definition ...]
+
+  /**
+   * The schema of data fields not including hoodie meta fields
+   */
+  lazy val dataSchemaWithoutMetaFields: StructType = HoodieSqlUtils.removeMetaFields(dataSchema)
+
+  /**
+   * The schema of partition fields
+   */
+  lazy val partitionSchema: StructType = StructType(tableSchema.filter(f => partitionFields.contains(f.name)))
+
+  /**
+   * All the partition paths
+   */
+  def getAllPartitionPaths: Seq[String] = HoodieSqlUtils.getAllPartitionPaths(spark, table)
+
+  /**
+   * Initialize the hoodie table for CREATE TABLE (AS SELECT).
+   */
+  def initHoodieTableIfNeeded(force: Boolean = false): Unit = {
+    logInfo(s"Init hoodie.properties for ${table.identifier.unquotedString}")
+    val (finalSchema, tableConfigs) = parseSchemaAndConfigs()
+
+    // Save all the table config to the hoodie.properties.
+    val properties = new Properties()
+    properties.putAll(tableConfigs.asJava)
+
+    HoodieTableMetaClient.withPropertyBuilder()
+      .fromProperties(properties)
+      .setTableName(table.identifier.table)
+      .setTableCreateSchema(SchemaConverters.toAvroType(finalSchema).toString())
+      .setPartitionFields(table.partitionColumnNames.mkString(","))
+      .initTable(hadoopConf, tableLocation)
+  }
+
+  /**
+   * @return the schema and the table parameters, where no parameter is sql-styled.
+   */
+  private def parseSchemaAndConfigs(): (StructType, Map[String, String]) = {
+    val sqlOptions = HoodieOptionConfig.defaultSqlOptions ++ catalogProperties
+
+    // get final schema and parameters
+    val (finalSchema, tableConfigs) = (table.tableType, hoodieTableExists) match {
+      case (CatalogTableType.EXTERNAL, true) =>
+        val existingTableConfig = tableConfig.getProps.asScala.toMap
+        val catalogTableProps = HoodieOptionConfig.mappingSqlOptionToTableConfig(catalogProperties)
+        validateTableConfig(spark, catalogTableProps, convertMapToHoodieConfig(existingTableConfig))
+
+        val options = extraTableConfig(spark, hoodieTableExists, existingTableConfig) ++
+          HoodieOptionConfig.mappingSqlOptionToTableConfig(sqlOptions) ++ existingTableConfig
+
+        val userSpecifiedSchema = table.schema
+        val schema = if (tableSchema.nonEmpty) {
+          tableSchema
+        } else if (userSpecifiedSchema.nonEmpty) {
+          addMetaFields(userSpecifiedSchema)
+        } else {
+          throw new IllegalArgumentException(s"Missing schema for Create Table: $tableName")
+        }
+
+        (schema, options)
+
+      case (_, false) =>
+        assert(table.schema.nonEmpty, s"Missing schema for Create Table: $tableName")

Review comment:
       To align with the exception thrown at L185, can we use `ValidationUtils#checkArgument` here?
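
   For reference, the suggested replacement would read roughly as follows (a sketch; `checkArgument` throws `IllegalArgumentException`, matching the exception in the branch above):

   ```scala
   import org.apache.hudi.common.util.ValidationUtils.checkArgument

   // Same condition as the bare assert, but failing with IllegalArgumentException
   // so both branches of the match report a missing schema the same way.
   checkArgument(table.schema.nonEmpty, s"Missing schema for Create Table: $tableName")
   ```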

##########
File path: hudi-spark-datasource/hudi-spark/src/main/scala/org/apache/spark/sql/catalyst/catalog/HoodieCatalogTable.scala
##########
@@ -0,0 +1,291 @@
   [... duplicated quote of HoodieCatalogTable.scala omitted: identical to the quotes above, from the license header through `getAllPartitionPaths` ...]
+  /**
+   * Initialize the hoodie table for CREATE TABLE (AS SELECT).
+   */
+  def initHoodieTableIfNeeded(force: Boolean = false): Unit = {
+    logInfo(s"Init hoodie.properties for ${table.identifier.unquotedString}")
+    val (finalSchema, tableConfigs) = parseSchemaAndConfigs()
+
+    // Save all the table config to the hoodie.properties.
+    val properties = new Properties()
+    properties.putAll(tableConfigs.asJava)
+
+    HoodieTableMetaClient.withPropertyBuilder()
+      .fromProperties(properties)
+      .setTableName(table.identifier.table)
+      .setTableCreateSchema(SchemaConverters.toAvroType(finalSchema).toString())
+      .setPartitionFields(table.partitionColumnNames.mkString(","))
+      .initTable(hadoopConf, tableLocation)
+  }

Review comment:
       I don't see any check or early return here. Should this just be called `initHoodieTable()`? Also, the `force` parameter doesn't appear to be used anywhere.
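
   Alternatively, if the `IfNeeded` name were kept, a guard along these lines would justify it; this is only a sketch built on the `hoodieTableExists` flag defined earlier in the class:

   ```scala
   // Hypothetical early return that would make the "IfNeeded" suffix accurate:
   // skip initialization when hoodie.properties already exists at the location.
   def initHoodieTableIfNeeded(): Unit = {
     if (hoodieTableExists) {
       logInfo(s"Hoodie table already initialized at $tableLocation, skipping init")
     } else {
       initHoodieTable() // the unconditional init, as sketched earlier in this thread
     }
   }
   ```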







[GitHub] [hudi] hudi-bot removed a comment on pull request #3998: [HUDI-2759] extract HoodieCatalogTable as a bridge between spark cata…

Posted by GitBox <gi...@apache.org>.
hudi-bot removed a comment on pull request #3998:
URL: https://github.com/apache/hudi/pull/3998#issuecomment-974763247


   ## CI report:
   
   * fc63a8f7b736b09d3f7593963c11438730e96793 Azure: [SUCCESS](https://dev.azure.com/apache-hudi-ci-org/785b6ef4-2f42-4a89-8f0e-5f0d7039a0cc/_build/results?buildId=3370) 
   * be86e20b85317383e41db5895aa5aa71dfdf85ea Azure: [PENDING](https://dev.azure.com/apache-hudi-ci-org/785b6ef4-2f42-4a89-8f0e-5f0d7039a0cc/_build/results?buildId=3549) 
   
   <details>
   <summary>Bot commands</summary>
     @hudi-bot supports the following commands:
   
    - `@hudi-bot run azure` re-run the last Azure build
   </details>





[GitHub] [hudi] hudi-bot commented on pull request #3998: [HUDI-2759] extract HoodieCatalogTable as a bridge between spark cata…

Posted by GitBox <gi...@apache.org>.
hudi-bot commented on pull request #3998:
URL: https://github.com/apache/hudi/pull/3998#issuecomment-974799691


   ## CI report:
   
   * b451b3b4544ef112a8573d1040dfcc23e19a610d Azure: [FAILURE](https://dev.azure.com/apache-hudi-ci-org/785b6ef4-2f42-4a89-8f0e-5f0d7039a0cc/_build/results?buildId=3553) 
   * 256c4c8c909ae78b6c9fbfa9e58e008f5906de8c UNKNOWN
   
   <details>
   <summary>Bot commands</summary>
     @hudi-bot supports the following commands:
   
    - `@hudi-bot run azure` re-run the last Azure build
   </details>





[GitHub] [hudi] hudi-bot removed a comment on pull request #3998: [HUDI-2759] extract HoodieCatalogTable as a bridge between spark cata…

Posted by GitBox <gi...@apache.org>.
hudi-bot removed a comment on pull request #3998:
URL: https://github.com/apache/hudi/pull/3998#issuecomment-974791885


   ## CI report:
   
   * be86e20b85317383e41db5895aa5aa71dfdf85ea Azure: [FAILURE](https://dev.azure.com/apache-hudi-ci-org/785b6ef4-2f42-4a89-8f0e-5f0d7039a0cc/_build/results?buildId=3549) 
   * b451b3b4544ef112a8573d1040dfcc23e19a610d UNKNOWN
   
   <details>
   <summary>Bot commands</summary>
     @hudi-bot supports the following commands:
   
    - `@hudi-bot run azure` re-run the last Azure build
   </details>








[GitHub] [hudi] YannByron commented on a change in pull request #3998: [HUDI-2759] extract HoodieCatalogTable to coordinate spark catalog table and hoodie table

Posted by GitBox <gi...@apache.org>.
YannByron commented on a change in pull request #3998:
URL: https://github.com/apache/hudi/pull/3998#discussion_r755890704



##########
File path: hudi-spark-datasource/hudi-spark/src/main/scala/org/apache/spark/sql/hudi/command/CreateHoodieTableCommand.scala
##########
@@ -182,17 +124,18 @@ case class CreateHoodieTableCommand(table: CatalogTable, ignoreIfExists: Boolean
       table.storage.compressed,
       storageProperties + ("path" -> path))
 
-    val newDatabaseName = formatName(table.identifier.database
-      .getOrElse(sessionState.catalog.getCurrentDatabase))
+    val tablName = HoodieSqlUtils.formatName(sparkSession, table.identifier.table)
+    val newDatabaseName = HoodieSqlUtils.formatName(sparkSession, table.identifier.database
+      .getOrElse(catalog.getCurrentDatabase))
 
     val newTableIdentifier = table.identifier
-      .copy(table = tableName, database = Some(newDatabaseName))
+      .copy(table = tablName, database = Some(newDatabaseName))

Review comment:
       Spelling mistake ~.~ (`tablName` should be `tableName`)
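
   The corrected lines would presumably read:

   ```scala
   val tableName = HoodieSqlUtils.formatName(sparkSession, table.identifier.table)

   val newTableIdentifier = table.identifier
     .copy(table = tableName, database = Some(newDatabaseName))
   ```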







[GitHub] [hudi] hudi-bot removed a comment on pull request #3998: [HUDI-2759] extract HoodieCatalogTable to coordinate spark catalog table and hoodie table

Posted by GitBox <gi...@apache.org>.
hudi-bot removed a comment on pull request #3998:
URL: https://github.com/apache/hudi/pull/3998#issuecomment-975575201


   ## CI report:
   
   * 256c4c8c909ae78b6c9fbfa9e58e008f5906de8c Azure: [SUCCESS](https://dev.azure.com/apache-hudi-ci-org/785b6ef4-2f42-4a89-8f0e-5f0d7039a0cc/_build/results?buildId=3554) 
   * 0993f73f9bbe97df8f0e6eb5c93bea1ed0317eba Azure: [PENDING](https://dev.azure.com/apache-hudi-ci-org/785b6ef4-2f42-4a89-8f0e-5f0d7039a0cc/_build/results?buildId=3594) 
   
   <details>
   <summary>Bot commands</summary>
     @hudi-bot supports the following commands:
   
    - `@hudi-bot run azure` re-run the last Azure build
   </details>





[GitHub] [hudi] hudi-bot commented on pull request #3998: [HUDI-2759] extract HoodieCatalogTable to coordinate spark catalog table and hoodie table

Posted by GitBox <gi...@apache.org>.
hudi-bot commented on pull request #3998:
URL: https://github.com/apache/hudi/pull/3998#issuecomment-976113745


   ## CI report:
   
   * 0993f73f9bbe97df8f0e6eb5c93bea1ed0317eba Azure: [FAILURE](https://dev.azure.com/apache-hudi-ci-org/785b6ef4-2f42-4a89-8f0e-5f0d7039a0cc/_build/results?buildId=3594) 
   * 8c1b74a3dc22d8b0cafdc90a9b5e877f7a83580b UNKNOWN
   
   <details>
   <summary>Bot commands</summary>
     @hudi-bot supports the following commands:
   
    - `@hudi-bot run azure` re-run the last Azure build
   </details>

