Posted to commits@spark.apache.org by gu...@apache.org on 2022/11/08 11:13:56 UTC

[spark] branch master updated: [SPARK-40798][SQL][TESTS][FOLLOW-UP] Disable ANSI at the test case for DSv2

This is an automated email from the ASF dual-hosted git repository.

gurwls223 pushed a commit to branch master
in repository https://gitbox.apache.org/repos/asf/spark.git


The following commit(s) were added to refs/heads/master by this push:
     new 5ae42456f3a [SPARK-40798][SQL][TESTS][FOLLOW-UP] Disable ANSI at the test case for DSv2
5ae42456f3a is described below

commit 5ae42456f3aa96b9a7966f4c20582afc89294c06
Author: Hyukjin Kwon <gu...@apache.org>
AuthorDate: Tue Nov 8 20:13:18 2022 +0900

    [SPARK-40798][SQL][TESTS][FOLLOW-UP] Disable ANSI at the test case for DSv2
    
    ### What changes were proposed in this pull request?
    
    This PR disables ANSI mode in the test case for DSv2, because such partition casting was not supported under the legacy behaviour either when ANSI mode is on. See also https://github.com/apache/spark/pull/38547#discussion_r1016111632
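    
    For context, a minimal standalone sketch (not part of this commit; the session setup and object name are illustrative) of how `spark.sql.ansi.enabled` changes the behaviour of the cast that this test exercises:
    
    ```scala
    import org.apache.spark.sql.SparkSession
    
    object AnsiCastDemo {
      def main(args: Array[String]): Unit = {
        val spark = SparkSession.builder()
          .master("local[1]")
          .appName("ansi-cast-demo")
          .getOrCreate()
    
        // With ANSI mode off, the malformed cast silently produces NULL.
        spark.conf.set("spark.sql.ansi.enabled", "false")
        spark.sql("SELECT CAST('aaa' AS INT) AS v").show()
    
        // With ANSI mode on, the same cast throws SparkNumberFormatException
        // (CAST_INVALID_INPUT), which is what made this DSv2 test fail.
        spark.conf.set("spark.sql.ansi.enabled", "true")
        try {
          spark.sql("SELECT CAST('aaa' AS INT) AS v").show()
        } catch {
          case e: Exception => println(s"ANSI cast failed: ${e.getMessage}")
        }
    
        spark.stop()
      }
    }
    ```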
    
    ### Why are the changes needed?
    
    To recover the ANSI-enabled tests. Currently the test fails as below:
    
    https://github.com/apache/spark/actions/runs/3406894388/jobs/5665999638
    
    ```
    - ALTER TABLE .. ADD PARTITION using V2 catalog V2 command: SPARK-40798: Alter partition should verify partition value - legacy *** FAILED *** (18 milliseconds)
      org.apache.spark.SparkNumberFormatException: [CAST_INVALID_INPUT] The value 'aaa' of the type "STRING" cannot be cast to "INT" because it is malformed. Correct the value as per the syntax, or change its target type. Use `try_cast` to tolerate malformed input and return NULL instead. If necessary set "spark.sql.ansi.enabled" to "false" to bypass this error.
    == SQL(line 1, position 1) ==
    ALTER TABLE test_catalog.ns.tbl ADD PARTITION (p='aaa')
    ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
      at org.apache.spark.sql.errors.QueryExecutionErrors$.invalidInputInCastToNumberError(QueryExecutionErrors.scala:161)
      at org.apache.spark.sql.catalyst.util.UTF8StringUtils$.withException(UTF8StringUtils.scala:51)
      at org.apache.spark.sql.catalyst.util.UTF8StringUtils$.toIntExact(UTF8StringUtils.scala:34)
      at org.apache.spark.sql.catalyst.expressions.Cast.$anonfun$castToInt$2(Cast.scala:927)
      at org.apache.spark.sql.catalyst.expressions.Cast.$anonfun$castToInt$2$adapted(Cast.scala:927)
      at org.apache.spark.sql.catalyst.expressions.Cast.buildCast(Cast.scala:588)
      at org.apache.spark.sql.catalyst.expressions.Cast.$anonfun$castToInt$1(Cast.scala:927)
      at org.apache.spark.sql.catalyst.expressions.Cast.nullSafeEval(Cast.scala:1285)
      at org.apache.spark.sql.catalyst.expressions.UnaryExpression.eval(Expression.scala:526)
      at org.apache.spark.sql.catalyst.analysis.ResolvePartitionSpec$.$anonfun$convertToPartIdent$1(ResolvePartitionSpec.scala:83)
      at scala.collection.immutable.List.map(List.scala:293)
      at org.apache.spark.sql.catalyst.analysis.ResolvePartitionSpec$.convertToPartIdent(ResolvePartitionSpec.scala:79)
      at org.apache.spark.sql.catalyst.analysis.ResolvePartitionSpec$.org$apache$spark$sql$catalyst$analysis$ResolvePartitionSpec$$resolvePartitionSpec(ResolvePartitionSpec.scala:72)
      at org.apache.spark.sql.catalyst.analysis.ResolvePartitionSpec$$anonfun$apply$2$$anonfun$applyOrElse$1.applyOrElse(ResolvePartitionSpec.scala:49)
      at org.apache.spark.sql.catalyst.analysis.ResolvePartitionSpec$$anonfun$apply$2$$anonfun$applyOrElse$1.applyOrElse(ResolvePartitionSpec.scala:42)
    ```
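    
    As the error message suggests, `try_cast` tolerates malformed input even when ANSI mode is on. A quick sketch, assuming a running spark-shell session (so `spark` is already defined):
    
    ```scala
    // ANSI mode on: a plain CAST of 'aaa' to INT would throw, but try_cast
    // returns NULL instead of failing.
    spark.conf.set("spark.sql.ansi.enabled", "true")
    spark.sql("SELECT try_cast('aaa' AS INT) AS v").show()  // v is NULL, no exception
    ```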
    
    ### Does this PR introduce _any_ user-facing change?
    
    No, test-only.
    
    ### How was this patch tested?
    
    Manually tested locally.
    
    Closes #38547 from HyukjinKwon/SPARK-40798-followup2.
    
    Authored-by: Hyukjin Kwon <gu...@apache.org>
    Signed-off-by: Hyukjin Kwon <gu...@apache.org>
---
 .../spark/sql/execution/command/v2/AlterTableAddPartitionSuite.scala  | 4 +++-
 1 file changed, 3 insertions(+), 1 deletion(-)

diff --git a/sql/core/src/test/scala/org/apache/spark/sql/execution/command/v2/AlterTableAddPartitionSuite.scala b/sql/core/src/test/scala/org/apache/spark/sql/execution/command/v2/AlterTableAddPartitionSuite.scala
index a9ab11e483f..c33d9b0101a 100644
--- a/sql/core/src/test/scala/org/apache/spark/sql/execution/command/v2/AlterTableAddPartitionSuite.scala
+++ b/sql/core/src/test/scala/org/apache/spark/sql/execution/command/v2/AlterTableAddPartitionSuite.scala
@@ -129,7 +129,9 @@ class AlterTableAddPartitionSuite
     withNamespaceAndTable("ns", "tbl") { t =>
       sql(s"CREATE TABLE $t (c int) $defaultUsing PARTITIONED BY (p int)")
 
-      withSQLConf(SQLConf.SKIP_TYPE_VALIDATION_ON_ALTER_PARTITION.key -> "true") {
+      withSQLConf(
+          SQLConf.SKIP_TYPE_VALIDATION_ON_ALTER_PARTITION.key -> "true",
+          SQLConf.ANSI_ENABLED.key -> "false") {
         sql(s"ALTER TABLE $t ADD PARTITION (p='aaa')")
         checkPartitions(t, Map("p" -> defaultPartitionName))
         sql(s"ALTER TABLE $t DROP PARTITION (p=null)")

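For readers unfamiliar with the helper used in the diff above, a minimal sketch of the `withSQLConf` pattern (assuming a suite that mixes in `SharedSparkSession`, which supplies the helper; the suite and test names here are made up). Both configs are restored to their previous values once the block exits:

```scala
import org.apache.spark.sql.QueryTest
import org.apache.spark.sql.internal.SQLConf
import org.apache.spark.sql.test.SharedSparkSession

// Illustrative suite, not part of this commit.
class WithSQLConfExampleSuite extends QueryTest with SharedSparkSession {
  test("configs apply only inside the withSQLConf block") {
    withSQLConf(
        SQLConf.SKIP_TYPE_VALIDATION_ON_ALTER_PARTITION.key -> "true",
        SQLConf.ANSI_ENABLED.key -> "false") {
      // Within this block, both configs hold for the shared session.
      assert(spark.conf.get(SQLConf.ANSI_ENABLED.key) == "false")
    }
    // Outside the block, the previous values are back in effect.
  }
}
```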

---------------------------------------------------------------------
To unsubscribe, e-mail: commits-unsubscribe@spark.apache.org
For additional commands, e-mail: commits-help@spark.apache.org