Posted to commits@spark.apache.org by ya...@apache.org on 2020/06/10 07:33:48 UTC
[spark] branch branch-3.0 updated: [SPARK-26905][SQL] Add `TYPE` in the ANSI non-reserved list
This is an automated email from the ASF dual-hosted git repository.
yamamuro pushed a commit to branch branch-3.0
in repository https://gitbox.apache.org/repos/asf/spark.git
The following commit(s) were added to refs/heads/branch-3.0 by this push:
new 89b1d46 [SPARK-26905][SQL] Add `TYPE` in the ANSI non-reserved list
89b1d46 is described below
commit 89b1d4614ef1a3d15ff0f1e745c770ebd8f5cddb
Author: Takeshi Yamamuro <ya...@apache.org>
AuthorDate: Wed Jun 10 16:29:43 2020 +0900
[SPARK-26905][SQL] Add `TYPE` in the ANSI non-reserved list
### What changes were proposed in this pull request?
This PR intends to add `TYPE` in the ANSI non-reserved list because it is not reserved in the standard. See SPARK-26905 for a full set of the reserved/non-reserved keywords of `SQL:2016`.
Note: The current master behaviour is as follows:
```
scala> sql("SET spark.sql.ansi.enabled=false")
scala> sql("create table t1 (type int)")
res4: org.apache.spark.sql.DataFrame = []
scala> sql("SET spark.sql.ansi.enabled=true")
scala> sql("create table t2 (type int)")
org.apache.spark.sql.catalyst.parser.ParseException:
no viable alternative at input 'type'(line 1, pos 17)
== SQL ==
create table t2 (type int)
-----------------^^^
```
### Why are the changes needed?
To follow the ANSI/SQL standard.
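For comparison, other engines that follow the standard also accept `TYPE` as a plain identifier. A minimal illustration using SQLite (a hypothetical analogy for this note, not part of the patch; SQLite does not list `TYPE` among its reserved keywords either):

```python
import sqlite3

# SQLite, like the SQL standard, treats TYPE as non-reserved,
# so it is accepted as an unquoted column name.
con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE t1 (type INTEGER)")
con.execute("INSERT INTO t1 (type) VALUES (1)")
rows = con.execute("SELECT type FROM t1").fetchall()
```

This mirrors the behaviour the patch enables for Spark's ANSI mode.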
### Does this PR introduce _any_ user-facing change?
Yes; users can now use `TYPE` as an identifier when ANSI mode is enabled.
### How was this patch tested?
Updated the keyword lists in `TableIdentifierParserSuite`.
Closes #28773 from maropu/SPARK-26905.
Authored-by: Takeshi Yamamuro <ya...@apache.org>
Signed-off-by: Takeshi Yamamuro <ya...@apache.org>
(cherry picked from commit e14029b18df10db5094f8abf8b9874dbc9186b4e)
Signed-off-by: Takeshi Yamamuro <ya...@apache.org>
---
.../src/main/antlr4/org/apache/spark/sql/catalyst/parser/SqlBase.g4 | 1 +
.../apache/spark/sql/catalyst/parser/TableIdentifierParserSuite.scala | 1 +
2 files changed, 2 insertions(+)
diff --git a/sql/catalyst/src/main/antlr4/org/apache/spark/sql/catalyst/parser/SqlBase.g4 b/sql/catalyst/src/main/antlr4/org/apache/spark/sql/catalyst/parser/SqlBase.g4
index 2adaa9f..208a503 100644
--- a/sql/catalyst/src/main/antlr4/org/apache/spark/sql/catalyst/parser/SqlBase.g4
+++ b/sql/catalyst/src/main/antlr4/org/apache/spark/sql/catalyst/parser/SqlBase.g4
@@ -1153,6 +1153,7 @@ ansiNonReserved
| TRIM
| TRUE
| TRUNCATE
+ | TYPE
| UNARCHIVE
| UNBOUNDED
| UNCACHE
diff --git a/sql/catalyst/src/test/scala/org/apache/spark/sql/catalyst/parser/TableIdentifierParserSuite.scala b/sql/catalyst/src/test/scala/org/apache/spark/sql/catalyst/parser/TableIdentifierParserSuite.scala
index d5b0885..bd617bf 100644
--- a/sql/catalyst/src/test/scala/org/apache/spark/sql/catalyst/parser/TableIdentifierParserSuite.scala
+++ b/sql/catalyst/src/test/scala/org/apache/spark/sql/catalyst/parser/TableIdentifierParserSuite.scala
@@ -513,6 +513,7 @@ class TableIdentifierParserSuite extends SparkFunSuite with SQLHelper {
"transform",
"true",
"truncate",
+ "type",
"unarchive",
"unbounded",
"uncache",
---------------------------------------------------------------------
To unsubscribe, e-mail: commits-unsubscribe@spark.apache.org
For additional commands, e-mail: commits-help@spark.apache.org