Posted to issues@spark.apache.org by "Takeshi Yamamuro (JIRA)" <ji...@apache.org> on 2019/02/22 23:42:00 UTC

[jira] [Resolved] (SPARK-26215) define reserved keywords after SQL standard

     [ https://issues.apache.org/jira/browse/SPARK-26215?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Takeshi Yamamuro resolved SPARK-26215.
--------------------------------------
       Resolution: Fixed
         Assignee: Takeshi Yamamuro
    Fix Version/s: 3.0.0

Resolved by [https://github.com/apache/spark/pull/23259]

> define reserved keywords after SQL standard
> -------------------------------------------
>
>                 Key: SPARK-26215
>                 URL: https://issues.apache.org/jira/browse/SPARK-26215
>             Project: Spark
>          Issue Type: Sub-task
>          Components: SQL
>    Affects Versions: 2.4.0
>            Reporter: Wenchen Fan
>            Assignee: Takeshi Yamamuro
>            Priority: Major
>             Fix For: 3.0.0
>
>
> There are two kinds of SQL keywords: reserved and non-reserved. Reserved keywords can't be used as identifiers.
> In Spark SQL, we are too tolerant of non-reserved keywords. A lot of keywords are non-reserved, and this sometimes causes ambiguity (IIRC we hit a problem when improving the INTERVAL syntax); a sketch of this kind of ambiguity is shown below the quoted description.
> I think it would be better to simply follow other databases or the SQL standard in defining reserved keywords, so that we don't need to think very hard about how to avoid ambiguity.
> For reference: https://www.postgresql.org/docs/8.1/sql-keywords-appendix.html
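
For illustration only (not part of the original ticket): a minimal SQL sketch of the kind of parser ambiguity the description refers to, assuming hypothetical tables t and s.

    -- If CROSS is non-reserved and bare table aliases are allowed, this
    -- statement has two readings: "t AS cross" joined with s, or
    -- "t CROSS JOIN s".
    SELECT * FROM t cross JOIN s;

    -- With reserved keywords defined as in the SQL standard, a keyword can
    -- only be used as an identifier when quoted (backticks in Spark SQL),
    -- so the two readings can no longer collide.
    SELECT * FROM t AS `cross` JOIN s ON t.id = s.id;

Reserving the keyword pushes the disambiguation onto the user (via quoting) instead of onto the grammar, which is the trade-off the ticket proposes to adopt.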



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

---------------------------------------------------------------------
To unsubscribe, e-mail: issues-unsubscribe@spark.apache.org
For additional commands, e-mail: issues-help@spark.apache.org