Posted to reviews@spark.apache.org by GitBox <gi...@apache.org> on 2020/09/09 08:06:53 UTC

[GitHub] [spark] cxzl25 commented on a change in pull request #29316: [SPARK-32508][SQL] Disallow empty part col values in partition spec before static partition writing

cxzl25 commented on a change in pull request #29316:
URL: https://github.com/apache/spark/pull/29316#discussion_r485418420



##########
File path: sql/hive/src/test/scala/org/apache/spark/sql/hive/InsertSuite.scala
##########
@@ -847,4 +847,26 @@ class InsertSuite extends QueryTest with TestHiveSingleton with BeforeAndAfter
       }
     }
   }
+
+  test("SPARK-32508 " +
+    "Disallow empty part col values in partition spec before static partition writing") {
+    withTable("t1") {
+      spark.sql(
+        """
+          |CREATE TABLE t1 (c1 int)

Review comment:
   In `InsertIntoHadoopFsRelationCommand`, when `manageFilesourcePartitions` is turned on, `catalog.listPartitions` is called, and that path checks whether the partition value is empty.
   
   When `manageFilesourcePartitions` is not turned on, the partition value is currently not checked, which means the SQL execution will not fail. If I now move the check logic into the `PreprocessTableInsertion` rule, it will cause such executions to start failing.
   
   Perhaps this check should only be performed when `tracksPartitionsInCatalog` is true and a static partition is being written.
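   
   The idea above could be sketched roughly as follows. This is a hedged illustration only, not the PR's actual code: the helper name, its signature, and the exception type are assumptions; only `tracksPartitionsInCatalog` corresponds to a real `CatalogTable` field.
   
   ```scala
   // Illustrative sketch: reject empty static partition values only when the
   // table tracks partitions in the catalog. The helper itself is hypothetical.
   def checkStaticPartitionValues(
       tracksPartitionsInCatalog: Boolean,
       staticPartitionSpec: Map[String, String]): Unit = {
     if (tracksPartitionsInCatalog) {
       staticPartitionSpec.foreach { case (col, value) =>
         if (value.isEmpty) {
           throw new IllegalArgumentException(
             s"Partition spec is invalid: empty value for partition column '$col'")
         }
       }
     }
   }
   ```
   
   Gating the check on `tracksPartitionsInCatalog` would keep the current behavior for tables that do not manage partitions in the catalog, avoiding the regression described above.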




----------------------------------------------------------------
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
users@infra.apache.org



---------------------------------------------------------------------
To unsubscribe, e-mail: reviews-unsubscribe@spark.apache.org
For additional commands, e-mail: reviews-help@spark.apache.org