Posted to issues@spark.apache.org by "Lu Lu (Jira)" <ji...@apache.org> on 2020/12/06 12:14:00 UTC
[jira] [Created] (SPARK-33677) LikeSimplification should be skipped if escape is a wildcard character
Lu Lu created SPARK-33677:
-----------------------------
Summary: LikeSimplification should be skipped if escape is a wildcard character
Key: SPARK-33677
URL: https://issues.apache.org/jira/browse/SPARK-33677
Project: Spark
Issue Type: Bug
Components: SQL
Affects Versions: 3.0.1
Reporter: Lu Lu
Assignee: Lu Lu
Fix For: 3.1.0
In ANSI mode, schema string parsing should fail if the schema uses an ANSI reserved keyword as an attribute name:
{code:scala}
spark.conf.set("spark.sql.ansi.enabled", "true")
spark.sql("""select from_json('{"time":"26/10/2015"}', 'time Timestamp', map('timestampFormat', 'dd/MM/yyyy'));""").show
output:
Cannot parse the data type:
no viable alternative at input 'time'(line 1, pos 0)
== SQL ==
time Timestamp
^^^
{code}
But this query may accidentally succeed in certain cases, because the DataType parser sticks to the configs of the first session created in the current thread:
{code:scala}
// parsing once in the original (non-ANSI) session fixes the parser's config
DataType.fromDDL("time Timestamp")
val newSpark = spark.newSession()
newSpark.conf.set("spark.sql.ansi.enabled", "true")
newSpark.sql("""select from_json('{"time":"26/10/2015"}', 'time Timestamp', map('timestampFormat', 'dd/MM/yyyy'));""").show
output:
+--------------------------------+
|from_json({"time":"26/10/2015"})|
+--------------------------------+
| {2015-10-26 00:00...|
+--------------------------------+
{code}
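The mechanism described above can be sketched as a minimal, self-contained analogy (this is not Spark's actual implementation; the object and method names are hypothetical): the first parse on a thread captures that caller's ANSI setting, and later sessions on the same thread silently reuse it.

```scala
// Hypothetical sketch of the reported behavior, not Spark's real code:
// the first parse caches the caller's ANSI config, and later callers on
// the same thread get that cached config instead of their own.
object DdlParserSketch {
  private var cachedAnsi: Option[Boolean] = None // captured on first use

  def parse(ddl: String, ansiEnabled: Boolean): Either[String, String] = {
    // Reuse the first caller's setting if one was already cached.
    val effective = cachedAnsi.getOrElse {
      cachedAnsi = Some(ansiEnabled)
      ansiEnabled
    }
    // Under ANSI mode, a reserved keyword like "time" as an attribute
    // name should be rejected.
    if (effective && ddl.trim.toLowerCase.startsWith("time "))
      Left("no viable alternative at input 'time'")
    else
      Right("parsed: " + ddl)
  }
}
```

Here, a first parse with ANSI disabled caches the non-ANSI config, so a later "session" that enables ANSI still parses `time Timestamp` successfully, mirroring the accidental success shown in the output above.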
--
This message was sent by Atlassian Jira
(v8.3.4#803005)