Posted to reviews@spark.apache.org by GitBox <gi...@apache.org> on 2022/02/07 05:47:40 UTC

[GitHub] [spark] LuciferYang opened a new pull request #35412: [SPARK-38123][SQL] Unified use `DataType.catalogString` as `targetType` of `QueryExecutionErrors#castingCauseOverflowError`

LuciferYang opened a new pull request #35412:
URL: https://github.com/apache/spark/pull/35412


   ### What changes were proposed in this pull request?
   SPARK-33541 introduced `QueryExecutionErrors#castingCauseOverflowError`, and the input parameter `targetType` is currently populated in two different ways in the Spark code:
   
   - using `DataType.catalogString` as the `targetType`, as in `Cast.scala` and `IntervalUtils.scala`
   - using custom literals such as `short`, `int`, and `long`, as in `Decimal.scala` and `numerics.scala`
   
   This PR changes the code to uniformly use `DataType.catalogString` as the `targetType`.
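   As a rough, self-contained illustration of the unified style (a hypothetical mini model; the names mirror Spark's API, but this is not Spark's code):
   
   ```scala
   // Hypothetical mini model (illustrative only; not Spark's implementation).
   object OverflowErrorDemo {
     sealed trait DataType { def catalogString: String }
     case object IntegerType extends DataType { val catalogString = "int" }
     case object LongType extends DataType { val catalogString = "bigint" }
   
     // After this change, callers pass DataType.catalogString instead of ad-hoc
     // literals like "long", so every overflow message uses the same catalog
     // type names (e.g. "bigint" rather than "long").
     def castingCauseOverflowError(value: Any, targetType: String): ArithmeticException =
       new ArithmeticException(s"Casting $value to $targetType causes overflow.")
   }
   ```
   
   With this pattern, the error text for a `LongType` overflow reads "... to bigint ..." everywhere, matching the catalog name shown in the SQL test outputs below.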
   
   
   ### Why are the changes needed?
   To use `DataType.catalogString` consistently as the `targetType` when throwing `castingCauseOverflowError`.
   
   
   ### Does this PR introduce _any_ user-facing change?
   No
   
   
   ### How was this patch tested?
   Pass GitHub Actions (GA).


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: reviews-unsubscribe@spark.apache.org

For queries about this service, please contact Infrastructure at:
users@infra.apache.org



---------------------------------------------------------------------
To unsubscribe, e-mail: reviews-unsubscribe@spark.apache.org
For additional commands, e-mail: reviews-help@spark.apache.org


[GitHub] [spark] LuciferYang commented on a change in pull request #35412: [SPARK-38123][SQL] Unified use `DataType.catalogString` as `targetType` of `QueryExecutionErrors#castingCauseOverflowError`

Posted by GitBox <gi...@apache.org>.
LuciferYang commented on a change in pull request #35412:
URL: https://github.com/apache/spark/pull/35412#discussion_r801604303



##########
File path: sql/core/src/test/resources/sql-tests/results/postgreSQL/float4.sql.out
##########
@@ -375,7 +375,7 @@ SELECT bigint(float('-9223380000000000000'))
 struct<>
 -- !query output
 org.apache.spark.SparkArithmeticException
-Casting -9.22338E18 to int causes overflow. To return NULL instead, use 'try_cast'. If necessary set spark.sql.ansi.enabled to false to bypass this error.
+Casting -9.22338E18 to bigint causes overflow. To return NULL instead, use 'try_cast'. If necessary set spark.sql.ansi.enabled to false to bypass this error.

Review comment:
       SELECT bigint(float('-9223380000000000000'))






[GitHub] [spark] MaxGekk commented on pull request #35412: [SPARK-38123][SQL] Unified use `DataType` as `targetType` of `QueryExecutionErrors#castingCauseOverflowError`

Posted by GitBox <gi...@apache.org>.
MaxGekk commented on pull request #35412:
URL: https://github.com/apache/spark/pull/35412#issuecomment-1034716235


   +1, LGTM. Merging to master.
   Thank you, @LuciferYang, and @HyukjinKwon @srowen for review.




[GitHub] [spark] MaxGekk closed pull request #35412: [SPARK-38123][SQL] Unified use `DataType` as `targetType` of `QueryExecutionErrors#castingCauseOverflowError`

Posted by GitBox <gi...@apache.org>.
MaxGekk closed pull request #35412:
URL: https://github.com/apache/spark/pull/35412


   




[GitHub] [spark] MaxGekk commented on pull request #35412: [SPARK-38123][SQL] Unified use `DataType` as `targetType` of `QueryExecutionErrors#castingCauseOverflowError`

Posted by GitBox <gi...@apache.org>.
MaxGekk commented on pull request #35412:
URL: https://github.com/apache/spark/pull/35412#issuecomment-1033531823


   > I couldn't find a way to call the new method from this code, so I kept the original method. @MaxGekk
   
   @LuciferYang Something like this:
   ```scala
   val catalogType2 = IntegerType
   val ct = ctx.addReferenceObj("catalogType", catalogType2, catalogType2.getClass.getName)
   ...
   throw QueryExecutionErrors.castingCauseOverflowError($c, "$ct");
   ```
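   The `addReferenceObj` call in this suggestion follows a pattern that can be sketched in miniature (a hypothetical simplification, not Spark's actual `CodegenContext`): objects that cannot be printed into generated Java source are stored in a references array, and the generated code accesses them by index.
   
   ```scala
   import scala.collection.mutable.ArrayBuffer
   
   // Hypothetical miniature of the reference-object pattern (not Spark's real
   // CodegenContext). Generated Java source can't embed a live Scala object, so
   // the object goes into `references` and the returned snippet indexes into
   // that array at runtime.
   class MiniCodegenContext {
     val references = ArrayBuffer.empty[Any]
   
     def addReferenceObj(name: String, obj: Any, className: String): String = {
       val idx = references.length
       references += obj
       s"(($className) references[$idx])" // Java expression spliced into codegen
     }
   }
   ```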




[GitHub] [spark] LuciferYang commented on pull request #35412: [SPARK-38123][SQL] Unified use `DataType.catalogString` as `targetType` of `QueryExecutionErrors#castingCauseOverflowError`

Posted by GitBox <gi...@apache.org>.
LuciferYang commented on pull request #35412:
URL: https://github.com/apache/spark/pull/35412#issuecomment-1032797089


   cc @srowen @dongjoon-hyun @HyukjinKwon 




[GitHub] [spark] LuciferYang commented on pull request #35412: [SPARK-38123][SQL] Unified use `DataType` as `targetType` of `QueryExecutionErrors#castingCauseOverflowError`

Posted by GitBox <gi...@apache.org>.
LuciferYang commented on pull request #35412:
URL: https://github.com/apache/spark/pull/35412#issuecomment-1034820364


   thanks all ~




[GitHub] [spark] LuciferYang commented on pull request #35412: [SPARK-38123][SQL] Unified use `DataType` as `targetType` of `QueryExecutionErrors#castingCauseOverflowError`

Posted by GitBox <gi...@apache.org>.
LuciferYang commented on pull request #35412:
URL: https://github.com/apache/spark/pull/35412#issuecomment-1033562951


   @MaxGekk Thank you for your guidance; I have updated the PR and am waiting for CI.




[GitHub] [spark] LuciferYang commented on a change in pull request #35412: [SPARK-38123][SQL] Unified use `DataType.catalogString` as `targetType` of `QueryExecutionErrors#castingCauseOverflowError`

Posted by GitBox <gi...@apache.org>.
LuciferYang commented on a change in pull request #35412:
URL: https://github.com/apache/spark/pull/35412#discussion_r801604497



##########
File path: sql/core/src/test/resources/sql-tests/results/postgreSQL/int8.sql.out
##########
@@ -661,7 +661,7 @@ SELECT CAST(double('922337203685477580700.0') AS bigint)
 struct<>
 -- !query output
 org.apache.spark.SparkArithmeticException
-Casting 9.223372036854776E20 to long causes overflow. To return NULL instead, use 'try_cast'. If necessary set spark.sql.ansi.enabled to false to bypass this error.
+Casting 9.223372036854776E20 to bigint causes overflow. To return NULL instead, use 'try_cast'. If necessary set spark.sql.ansi.enabled to false to bypass this error.

Review comment:
       SELECT CAST(double('922337203685477580700.0') AS bigint)






[GitHub] [spark] LuciferYang commented on pull request #35412: [SPARK-38123][SQL] Unified use `DataType.catalogString` as `targetType` of `QueryExecutionErrors#castingCauseOverflowError`

Posted by GitBox <gi...@apache.org>.
LuciferYang commented on pull request #35412:
URL: https://github.com/apache/spark/pull/35412#issuecomment-1033418583


   If the above code is also changed to use the new method, some UTs will fail. For example, `ANSI mode: Throw exception on casting out-of-range value to long type` in `AnsiCastSuiteWithAnsiModeOn` fails with
   
   ```
   Caused by: java.lang.IllegalArgumentException: Can not interpolate org.apache.spark.sql.types.LongType$ into code block.
   ``` 
   
   The full error stack is as follows:
   
   ```
   (codegen mode) Expected exception java.lang.ArithmeticException to be thrown, but java.lang.IllegalArgumentException was thrown
   ScalaTestFailureLocation: org.apache.spark.sql.catalyst.expressions.ExpressionEvalHelper at (ExpressionEvalHelper.scala:164)
   org.scalatest.exceptions.TestFailedException: (codegen mode) Expected exception java.lang.ArithmeticException to be thrown, but java.lang.IllegalArgumentException was thrown
   	at org.scalatest.Assertions.newAssertionFailedException(Assertions.scala:472)
   	at org.scalatest.Assertions.newAssertionFailedException$(Assertions.scala:471)
   	at org.scalatest.funsuite.AnyFunSuite.newAssertionFailedException(AnyFunSuite.scala:1563)
   	at org.scalatest.Assertions.intercept(Assertions.scala:756)
   	at org.scalatest.Assertions.intercept$(Assertions.scala:746)
   	at org.scalatest.funsuite.AnyFunSuite.intercept(AnyFunSuite.scala:1563)
   	at org.apache.spark.sql.catalyst.expressions.ExpressionEvalHelper.$anonfun$checkExceptionInExpression$1(ExpressionEvalHelper.scala:164)
   	at scala.runtime.java8.JFunction0$mcV$sp.apply(JFunction0$mcV$sp.java:23)
   	at org.scalatest.Assertions.withClue(Assertions.scala:1065)
   	at org.scalatest.Assertions.withClue$(Assertions.scala:1052)
   	at org.scalatest.funsuite.AnyFunSuite.withClue(AnyFunSuite.scala:1563)
   	at org.apache.spark.sql.catalyst.expressions.ExpressionEvalHelper.checkException$1(ExpressionEvalHelper.scala:163)
   	at org.apache.spark.sql.catalyst.expressions.ExpressionEvalHelper.checkExceptionInExpression(ExpressionEvalHelper.scala:184)
   	at org.apache.spark.sql.catalyst.expressions.ExpressionEvalHelper.checkExceptionInExpression$(ExpressionEvalHelper.scala:156)
   	at org.apache.spark.sql.catalyst.expressions.CastSuiteBase.checkExceptionInExpression(CastSuiteBase.scala:48)
   	at org.apache.spark.sql.catalyst.expressions.ExpressionEvalHelper.checkExceptionInExpression(ExpressionEvalHelper.scala:153)
   	at org.apache.spark.sql.catalyst.expressions.ExpressionEvalHelper.checkExceptionInExpression$(ExpressionEvalHelper.scala:150)
   	at org.apache.spark.sql.catalyst.expressions.CastSuiteBase.checkExceptionInExpression(CastSuiteBase.scala:48)
   	at org.apache.spark.sql.catalyst.expressions.AnsiCastSuiteBase.$anonfun$testLongMaxAndMin$1(AnsiCastSuiteBase.scala:59)
   	at org.apache.spark.sql.catalyst.expressions.AnsiCastSuiteBase.$anonfun$testLongMaxAndMin$1$adapted(AnsiCastSuiteBase.scala:56)
   	at scala.collection.immutable.List.foreach(List.scala:431)
   	at org.apache.spark.sql.catalyst.expressions.AnsiCastSuiteBase.testLongMaxAndMin(AnsiCastSuiteBase.scala:56)
   	at org.apache.spark.sql.catalyst.expressions.AnsiCastSuiteBase.$anonfun$new$31(AnsiCastSuiteBase.scala:119)
   	at scala.runtime.java8.JFunction0$mcV$sp.apply(JFunction0$mcV$sp.java:23)
   	at org.scalatest.OutcomeOf.outcomeOf(OutcomeOf.scala:85)
   	at org.scalatest.OutcomeOf.outcomeOf$(OutcomeOf.scala:83)
   	at org.scalatest.OutcomeOf$.outcomeOf(OutcomeOf.scala:104)
   	at org.scalatest.Transformer.apply(Transformer.scala:22)
   	at org.scalatest.Transformer.apply(Transformer.scala:20)
   	at org.scalatest.funsuite.AnyFunSuiteLike$$anon$1.apply(AnyFunSuiteLike.scala:190)
   	at org.apache.spark.SparkFunSuite.withFixture(SparkFunSuite.scala:203)
   	at org.scalatest.funsuite.AnyFunSuiteLike.invokeWithFixture$1(AnyFunSuiteLike.scala:188)
   	at org.scalatest.funsuite.AnyFunSuiteLike.$anonfun$runTest$1(AnyFunSuiteLike.scala:200)
   	at org.scalatest.SuperEngine.runTestImpl(Engine.scala:306)
   	at org.scalatest.funsuite.AnyFunSuiteLike.runTest(AnyFunSuiteLike.scala:200)
   	at org.scalatest.funsuite.AnyFunSuiteLike.runTest$(AnyFunSuiteLike.scala:182)
   	at org.apache.spark.SparkFunSuite.org$scalatest$BeforeAndAfterEach$$super$runTest(SparkFunSuite.scala:64)
   	at org.scalatest.BeforeAndAfterEach.runTest(BeforeAndAfterEach.scala:234)
   	at org.scalatest.BeforeAndAfterEach.runTest$(BeforeAndAfterEach.scala:227)
   	at org.apache.spark.SparkFunSuite.runTest(SparkFunSuite.scala:64)
   	at org.scalatest.funsuite.AnyFunSuiteLike.$anonfun$runTests$1(AnyFunSuiteLike.scala:233)
   	at org.scalatest.SuperEngine.$anonfun$runTestsInBranch$1(Engine.scala:413)
   	at scala.collection.immutable.List.foreach(List.scala:431)
   	at org.scalatest.SuperEngine.traverseSubNodes$1(Engine.scala:401)
   	at org.scalatest.SuperEngine.runTestsInBranch(Engine.scala:396)
   	at org.scalatest.SuperEngine.runTestsImpl(Engine.scala:475)
   	at org.scalatest.funsuite.AnyFunSuiteLike.runTests(AnyFunSuiteLike.scala:233)
   	at org.scalatest.funsuite.AnyFunSuiteLike.runTests$(AnyFunSuiteLike.scala:232)
   	at org.scalatest.funsuite.AnyFunSuite.runTests(AnyFunSuite.scala:1563)
   	at org.scalatest.Suite.run(Suite.scala:1112)
   	at org.scalatest.Suite.run$(Suite.scala:1094)
   	at org.scalatest.funsuite.AnyFunSuite.org$scalatest$funsuite$AnyFunSuiteLike$$super$run(AnyFunSuite.scala:1563)
   	at org.scalatest.funsuite.AnyFunSuiteLike.$anonfun$run$1(AnyFunSuiteLike.scala:237)
   	at org.scalatest.SuperEngine.runImpl(Engine.scala:535)
   	at org.scalatest.funsuite.AnyFunSuiteLike.run(AnyFunSuiteLike.scala:237)
   	at org.scalatest.funsuite.AnyFunSuiteLike.run$(AnyFunSuiteLike.scala:236)
   	at org.apache.spark.SparkFunSuite.org$scalatest$BeforeAndAfterAll$$super$run(SparkFunSuite.scala:64)
   	at org.scalatest.BeforeAndAfterAll.liftedTree1$1(BeforeAndAfterAll.scala:213)
   	at org.scalatest.BeforeAndAfterAll.run(BeforeAndAfterAll.scala:210)
   	at org.scalatest.BeforeAndAfterAll.run$(BeforeAndAfterAll.scala:208)
   	at org.apache.spark.SparkFunSuite.run(SparkFunSuite.scala:64)
   	at org.scalatest.tools.SuiteRunner.run(SuiteRunner.scala:45)
   	at org.scalatest.tools.Runner$.$anonfun$doRunRunRunDaDoRunRun$13(Runner.scala:1320)
   	at org.scalatest.tools.Runner$.$anonfun$doRunRunRunDaDoRunRun$13$adapted(Runner.scala:1314)
   	at scala.collection.immutable.List.foreach(List.scala:431)
   	at org.scalatest.tools.Runner$.doRunRunRunDaDoRunRun(Runner.scala:1314)
   	at org.scalatest.tools.Runner$.$anonfun$runOptionallyWithPassFailReporter$24(Runner.scala:993)
   	at org.scalatest.tools.Runner$.$anonfun$runOptionallyWithPassFailReporter$24$adapted(Runner.scala:971)
   	at org.scalatest.tools.Runner$.withClassLoaderAndDispatchReporter(Runner.scala:1480)
   	at org.scalatest.tools.Runner$.runOptionallyWithPassFailReporter(Runner.scala:971)
   	at org.scalatest.tools.Runner$.run(Runner.scala:798)
   	at org.scalatest.tools.Runner.run(Runner.scala)
   	at org.jetbrains.plugins.scala.testingSupport.scalaTest.ScalaTestRunner.runScalaTest2or3(ScalaTestRunner.java:38)
   	at org.jetbrains.plugins.scala.testingSupport.scalaTest.ScalaTestRunner.main(ScalaTestRunner.java:25)
   Caused by: java.lang.IllegalArgumentException: Can not interpolate org.apache.spark.sql.types.LongType$ into code block.
   	at org.apache.spark.sql.errors.QueryExecutionErrors$.cannotInterpolateClassIntoCodeBlockError(QueryExecutionErrors.scala:318)
   	at org.apache.spark.sql.catalyst.expressions.codegen.Block$BlockHelper$.$anonfun$code$1(javaCode.scala:240)
   	at org.apache.spark.sql.catalyst.expressions.codegen.Block$BlockHelper$.$anonfun$code$1$adapted(javaCode.scala:237)
   	at scala.collection.IndexedSeqOptimized.foreach(IndexedSeqOptimized.scala:36)
   	at scala.collection.IndexedSeqOptimized.foreach$(IndexedSeqOptimized.scala:33)
   	at scala.collection.mutable.WrappedArray.foreach(WrappedArray.scala:38)
   	at org.apache.spark.sql.catalyst.expressions.codegen.Block$BlockHelper$.code$extension(javaCode.scala:237)
   	at org.apache.spark.sql.catalyst.expressions.CastBase.$anonfun$castFractionToIntegralTypeCode$1(Cast.scala:1732)
   	at org.apache.spark.sql.catalyst.expressions.CastBase.castCode(Cast.scala:1045)
   	at org.apache.spark.sql.catalyst.expressions.CastBase.doGenCode(Cast.scala:995)
   	at org.apache.spark.sql.catalyst.expressions.Expression.$anonfun$genCode$3(Expression.scala:151)
   	at scala.Option.getOrElse(Option.scala:189)
   	at org.apache.spark.sql.catalyst.expressions.Expression.genCode(Expression.scala:146)
   	at org.apache.spark.sql.catalyst.expressions.CastBase.genCode(Cast.scala:986)
   	at org.apache.spark.sql.catalyst.expressions.codegen.CodegenContext.$anonfun$generateExpressions$1(CodeGenerator.scala:1272)
   	at scala.collection.immutable.List.map(List.scala:293)
   	at org.apache.spark.sql.catalyst.expressions.codegen.CodegenContext.generateExpressions(CodeGenerator.scala:1272)
   	at org.apache.spark.sql.catalyst.expressions.codegen.GenerateMutableProjection$.create(GenerateMutableProjection.scala:64)
   	at org.apache.spark.sql.catalyst.expressions.codegen.GenerateMutableProjection$.generate(GenerateMutableProjection.scala:49)
   	at org.apache.spark.sql.catalyst.expressions.MutableProjection$.createCodeGeneratedObject(Projection.scala:84)
   	at org.apache.spark.sql.catalyst.expressions.MutableProjection$.createCodeGeneratedObject(Projection.scala:80)
   	at org.apache.spark.sql.catalyst.expressions.CodeGeneratorWithInterpretedFallback.createObject(CodeGeneratorWithInterpretedFallback.scala:47)
   	at org.apache.spark.sql.catalyst.expressions.MutableProjection$.create(Projection.scala:95)
   	at org.apache.spark.sql.catalyst.expressions.ExpressionEvalHelper.evaluateWithMutableProjection(ExpressionEvalHelper.scala:235)
   	at org.apache.spark.sql.catalyst.expressions.ExpressionEvalHelper.evaluateWithMutableProjection$(ExpressionEvalHelper.scala:232)
   	at org.apache.spark.sql.catalyst.expressions.CastSuiteBase.evaluateWithMutableProjection(CastSuiteBase.scala:48)
   	at org.apache.spark.sql.catalyst.expressions.ExpressionEvalHelper.$anonfun$checkExceptionInExpression$5(ExpressionEvalHelper.scala:184)
   	at org.apache.spark.sql.catalyst.plans.SQLHelper.withSQLConf(SQLHelper.scala:54)
   	at org.apache.spark.sql.catalyst.plans.SQLHelper.withSQLConf$(SQLHelper.scala:38)
   	at org.apache.spark.sql.catalyst.expressions.CastSuiteBase.withSQLConf(CastSuiteBase.scala:48)
   	at org.apache.spark.sql.catalyst.expressions.ExpressionEvalHelper.$anonfun$checkExceptionInExpression$3(ExpressionEvalHelper.scala:167)
   	at org.apache.spark.sql.catalyst.expressions.ExpressionEvalHelper.$anonfun$checkExceptionInExpression$3$adapted(ExpressionEvalHelper.scala:165)
   	at scala.collection.immutable.List.foreach(List.scala:431)
   	at org.apache.spark.sql.catalyst.expressions.ExpressionEvalHelper.$anonfun$checkExceptionInExpression$2(ExpressionEvalHelper.scala:165)
   	at scala.runtime.java8.JFunction0$mcV$sp.apply(JFunction0$mcV$sp.java:23)
   	at org.scalatest.Assertions.intercept(Assertions.scala:749)
   	... 70 more
   ```
   
   @MaxGekk 
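   The restriction behind this error can be illustrated with a small stand-alone sketch (an assumption-laden simplification, not Spark's `JavaCode` implementation): code interpolation only accepts values that have a textual form valid in generated Java source, so splicing an arbitrary object such as `LongType$` is rejected.
   
   ```scala
   // Simplified stand-in for the codegen interpolation check (illustrative only).
   object CodeBlockDemo {
     def interpolate(v: Any): String = v match {
       case s: String => s                 // code fragments are fine
       case n: Int    => n.toString        // primitives have a Java literal form
       case n: Long   => n.toString + "L"
       case other =>                       // arbitrary objects (e.g. a DataType) are not
         throw new IllegalArgumentException(
           s"Can not interpolate ${other.getClass.getName} into code block.")
     }
   }
   ```
   
   This is why the codegen path has to route the `DataType` through a reference object instead of interpolating it directly.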




[GitHub] [spark] LuciferYang commented on a change in pull request #35412: [SPARK-38123][SQL] Unified use `DataType.catalogString` as `targetType` of `QueryExecutionErrors#castingCauseOverflowError`

Posted by GitBox <gi...@apache.org>.
LuciferYang commented on a change in pull request #35412:
URL: https://github.com/apache/spark/pull/35412#discussion_r800333662



##########
File path: sql/catalyst/src/main/scala/org/apache/spark/sql/types/numerics.scala
##########
@@ -135,15 +135,15 @@ private[sql] object FloatExactNumeric extends FloatIsFractional {
     if (Math.floor(x) <= intUpperBound && Math.ceil(x) >= intLowerBound) {
       x.toInt
     } else {
-      throw QueryExecutionErrors.castingCauseOverflowError(x, "int")
+      throw QueryExecutionErrors.castingCauseOverflowError(x, IntegerType.catalogString)
     }
   }
 
   override def toLong(x: Float): Long = {
     if (Math.floor(x) <= longUpperBound && Math.ceil(x) >= longLowerBound) {
       x.toLong
     } else {
-      throw QueryExecutionErrors.castingCauseOverflowError(x, "int")

Review comment:
       This method is `toLong`, so the `targetType` should be `long`, not `int`.
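       The fixed branch can be sketched stand-alone (a simplified approximation of `FloatExactNumeric.toLong`, not Spark's exact code):
       
       ```scala
       // Simplified sketch of a FloatExactNumeric.toLong-style bounds check.
       // Values outside Long's range must raise an overflow error naming the
       // correct target type ("bigint"), not "int".
       object FloatToLongDemo {
         private val longUpperBound = Long.MaxValue.toFloat
         private val longLowerBound = Long.MinValue.toFloat
       
         def toLong(x: Float): Long =
           if (Math.floor(x) <= longUpperBound && Math.ceil(x) >= longLowerBound) x.toLong
           else throw new ArithmeticException(s"Casting $x to bigint causes overflow")
       }
       ```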






[GitHub] [spark] LuciferYang commented on a change in pull request #35412: [SPARK-38123][SQL] Unified use `DataType.catalogString` as `targetType` of `QueryExecutionErrors#castingCauseOverflowError`

Posted by GitBox <gi...@apache.org>.
LuciferYang commented on a change in pull request #35412:
URL: https://github.com/apache/spark/pull/35412#discussion_r801604696



##########
File path: sql/core/src/test/resources/sql-tests/results/postgreSQL/float8.sql.out
##########
@@ -833,7 +833,7 @@ SELECT bigint(double('-9223372036854780000'))
 struct<>
 -- !query output
 org.apache.spark.SparkArithmeticException
-Casting -9.22337203685478E18 to long causes overflow. To return NULL instead, use 'try_cast'. If necessary set spark.sql.ansi.enabled to false to bypass this error.
+Casting -9.22337203685478E18 to bigint causes overflow. To return NULL instead, use 'try_cast'. If necessary set spark.sql.ansi.enabled to false to bypass this error.

Review comment:
       SELECT bigint(double('-9223372036854780000'))






[GitHub] [spark] HyukjinKwon commented on pull request #35412: [SPARK-38123][SQL] Unified use `DataType.catalogString` as `targetType` of `QueryExecutionErrors#castingCauseOverflowError`

Posted by GitBox <gi...@apache.org>.
HyukjinKwon commented on pull request #35412:
URL: https://github.com/apache/spark/pull/35412#issuecomment-1033235923


   cc @MaxGekk FYI




[GitHub] [spark] LuciferYang commented on pull request #35412: [SPARK-38123][SQL] Unified use `DataType.catalogString` as `targetType` of `QueryExecutionErrors#castingCauseOverflowError`

Posted by GitBox <gi...@apache.org>.
LuciferYang commented on pull request #35412:
URL: https://github.com/apache/spark/pull/35412#issuecomment-1033404678


   > I would just pass types into castingCauseOverflowError() and invoke catalogString inside of the method. Don't see any reasons to call catalogString every time.
   
   [c36aa59](https://github.com/apache/spark/pull/35412/commits/c36aa59063f89a8fb24f3fa3168b423234786472) fixes this.
   
   But the codegen-related code in `Cast.scala` still needs the original method, for example:
   
   https://github.com/apache/spark/blob/8a559b317f60e2630717a01424a7663e451fb3ce/sql/catalyst/src/main/scala/org/apache/spark/sql/catalyst/expressions/Cast.scala#L1639-L1657
   
   I couldn't find a way to call the new method from this code, so I kept the original method. @MaxGekk
   
   

