Posted to issues@spark.apache.org by "Dongjoon Hyun (Jira)" <ji...@apache.org> on 2022/03/12 10:49:00 UTC
[jira] [Resolved] (SPARK-38534) Disable to_timestamp('366', 'DD') test case
[ https://issues.apache.org/jira/browse/SPARK-38534?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]
Dongjoon Hyun resolved SPARK-38534.
-----------------------------------
Fix Version/s: 3.3.0
Resolution: Fixed
Issue resolved by pull request 35825
[https://github.com/apache/spark/pull/35825]
> Disable to_timestamp('366', 'DD') test case
> -------------------------------------------
>
> Key: SPARK-38534
> URL: https://issues.apache.org/jira/browse/SPARK-38534
> Project: Spark
> Issue Type: Sub-task
> Components: SQL, Tests
> Affects Versions: 3.3.0
> Reporter: Dongjoon Hyun
> Assignee: Dongjoon Hyun
> Priority: Major
> Fix For: 3.3.0
>
>
> Currently, the daily Java 11 and Java 17 builds are broken.
> - https://github.com/apache/spark/runs/5511239176?check_suite_focus=true
> - https://github.com/apache/spark/actions/runs/1969736117
> **Java 8**
> {code}
> $ bin/spark-shell --conf spark.sql.ansi.enabled=true
> Setting default log level to "WARN".
> To adjust logging level use sc.setLogLevel(newLevel). For SparkR, use setLogLevel(newLevel).
> 22/03/12 00:59:31 WARN NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
> Spark context Web UI available at http://172.16.0.31:4040
> Spark context available as 'sc' (master = local[*], app id = local-1647075572229).
> Spark session available as 'spark'.
> Welcome to
>       ____              __
>      / __/__  ___ _____/ /__
>     _\ \/ _ \/ _ `/ __/ '_/
>    /___/ .__/\_,_/_/ /_/\_\   version 3.3.0-SNAPSHOT
>       /_/
> Using Scala version 2.12.15 (OpenJDK 64-Bit Server VM, Java 1.8.0_322)
> Type in expressions to have them evaluated.
> Type :help for more information.
> scala> sql("select to_timestamp('366', 'DD')").show
> java.time.format.DateTimeParseException: Text '366' could not be parsed, unparsed text found at index 2. If necessary set spark.sql.ansi.enabled to false to bypass this error.
> {code}
> **Java 11+**
> {code}
> $ bin/spark-shell --conf spark.sql.ansi.enabled=true
> Setting default log level to "WARN".
> To adjust logging level use sc.setLogLevel(newLevel). For SparkR, use setLogLevel(newLevel).
> 22/03/12 01:00:07 WARN NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
> Spark context Web UI available at http://172.16.0.31:4040
> Spark context available as 'sc' (master = local[*], app id = local-1647075607932).
> Spark session available as 'spark'.
> Welcome to
>       ____              __
>      / __/__  ___ _____/ /__
>     _\ \/ _ \/ _ `/ __/ '_/
>    /___/ .__/\_,_/_/ /_/\_\   version 3.3.0-SNAPSHOT
>       /_/
> Using Scala version 2.12.15 (OpenJDK 64-Bit Server VM, Java 11.0.12)
> Type in expressions to have them evaluated.
> Type :help for more information.
> scala> sql("select to_timestamp('366', 'DD')").show
> java.time.DateTimeException: Invalid date 'DayOfYear 366' as '1970' is not a leap year. If necessary set spark.sql.ansi.enabled to false to bypass this error.
> {code}
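> The divergence above comes from java.time itself rather than Spark code: under Java 8 the 'DD' pattern parses exactly two digits, so '366' fails with unparsed text at index 2, while Java 11+ accepts up to three digits and instead fails during resolution because 1970 is not a leap year. The following is a minimal plain-Java sketch of that behavior, assuming a formatter that defaults the missing year to 1970 (as the Java 11 error message suggests Spark does); the exact formatter Spark builds internally may differ.
> {code}
> import java.time.format.DateTimeFormatter;
> import java.time.format.DateTimeFormatterBuilder;
> import java.time.format.DateTimeParseException;
> import java.time.temporal.ChronoField;
>
> public class DayOfYearParseDemo {
>     public static void main(String[] args) {
>         // Hypothetical stand-in for Spark's formatter: pattern 'DD' with
>         // the year defaulted to 1970 when no year field is parsed.
>         DateTimeFormatter fmt = new DateTimeFormatterBuilder()
>                 .appendPattern("DD")
>                 .parseDefaulting(ChronoField.YEAR, 1970)
>                 .toFormatter();
>         try {
>             fmt.parse("366");
>             System.out.println("parsed OK");
>         } catch (DateTimeParseException e) {
>             // Java 8: "Text '366' could not be parsed, unparsed text found
>             //   at index 2" ('DD' consumes exactly two digits).
>             // Java 11+: 'DD' accepts up to three digits, so parsing
>             //   succeeds but resolving DayOfYear 366 in non-leap 1970
>             //   fails: "Invalid date 'DayOfYear 366' ...".
>             System.out.println("failed: " + e.getMessage());
>         }
>     }
> }
> {code}
> Either way the parse throws DateTimeParseException, but with different messages per JDK, which is why the test case was disabled rather than pinned to one message.

```java
import java.time.format.DateTimeFormatter;
import java.time.format.DateTimeFormatterBuilder;
import java.time.format.DateTimeParseException;
import java.time.temporal.ChronoField;

public class DayOfYearParseDemo {
    public static void main(String[] args) {
        // Hypothetical stand-in for Spark's formatter: pattern 'DD' with
        // the year defaulted to 1970 when no year field is parsed.
        DateTimeFormatter fmt = new DateTimeFormatterBuilder()
                .appendPattern("DD")
                .parseDefaulting(ChronoField.YEAR, 1970)
                .toFormatter();
        try {
            fmt.parse("366");
            System.out.println("parsed OK");
        } catch (DateTimeParseException e) {
            // Java 8: "Text '366' could not be parsed, unparsed text found
            //   at index 2" ('DD' consumes exactly two digits).
            // Java 11+: 'DD' accepts up to three digits, so parsing
            //   succeeds but resolving DayOfYear 366 in non-leap 1970
            //   fails: "Invalid date 'DayOfYear 366' ...".
            System.out.println("failed: " + e.getMessage());
        }
    }
}
```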
--
This message was sent by Atlassian Jira
(v8.20.1#820001)
---------------------------------------------------------------------
To unsubscribe, e-mail: issues-unsubscribe@spark.apache.org
For additional commands, e-mail: issues-help@spark.apache.org