Posted to reviews@spark.apache.org by GitBox <gi...@apache.org> on 2022/02/16 07:35:14 UTC

[GitHub] [spark] Yikf commented on a change in pull request #35527: [SPARK-38216][SQL] Fail early if all the columns are partitioned columns when creating a Hive table

Yikf commented on a change in pull request #35527:
URL: https://github.com/apache/spark/pull/35527#discussion_r807625071



##########
File path: sql/core/src/main/scala/org/apache/spark/sql/execution/datasources/rules.scala
##########
@@ -319,15 +319,7 @@ case class PreprocessTableCreation(sparkSession: SparkSession) extends Rule[Logi
       conf.resolver)
 
     if (schema.nonEmpty && normalizedPartitionCols.length == schema.length) {
-      if (DDLUtils.isHiveTable(table)) {

Review comment:
       There doesn't seem to be any relevant information in the commit message; judging from the comment, it looks like this was done on purpose.
   
   But in [HiveClientImpl.toHiveTable](https://github.com/apache/spark/blob/1ef5638177dcf06ebca4e9b0bc88401e0fce2ae8/sql/hive/src/main/scala/org/apache/spark/sql/hive/client/HiveClientImpl.scala#L1069-L1072), we partition the columns into partition columns and the remaining data columns. If all columns are partition columns, `hiveTable.getFields` returns an empty result, so Hive throws an exception saying the table must have at least one column.
   
   If Hive allowed the data columns to coincide with the partition columns, we should not do the `partition` split in `toHiveTable`; if not, we should fail early. I'm sorry, I'm not sure which is the case.
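
   The failure mode described above can be sketched as follows. This is a minimal illustration, not the actual `HiveClientImpl` code; the `Column`/`split` names are made up for the example. It just shows that partitioning the schema leaves the data-column list empty when every column is a partition column, which is the list Hive would then reject:
   
   ```scala
   // Illustrative sketch only (names are hypothetical, not Spark internals).
   object PartitionSplitSketch {
     case class Column(name: String)
   
     // Split a schema into (partition columns, data columns), analogous to
     // the partition/data-column split done when converting to a Hive table.
     def split(
         schema: Seq[Column],
         partitionColNames: Set[String]): (Seq[Column], Seq[Column]) =
       schema.partition(c => partitionColNames.contains(c.name))
   
     def main(args: Array[String]): Unit = {
       val schema = Seq(Column("a"), Column("b"))
       // Every column is a partition column, so dataCols comes back empty;
       // handing Hive an empty column list is what triggers its
       // "at least one column" error.
       val (partCols, dataCols) = split(schema, Set("a", "b"))
       assert(dataCols.isEmpty)
       println(s"partCols=${partCols.map(_.name)}, dataCols=${dataCols.map(_.name)}")
     }
   }
   ```
   
   Failing early in `PreprocessTableCreation` (as this PR proposes) would surface the problem on the Spark side instead of relying on Hive's error.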
   




-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: reviews-unsubscribe@spark.apache.org

For queries about this service, please contact Infrastructure at:
users@infra.apache.org



---------------------------------------------------------------------
For additional commands, e-mail: reviews-help@spark.apache.org