Posted to issues@spark.apache.org by "Zhenhua Wang (JIRA)" <ji...@apache.org> on 2015/10/29 07:44:27 UTC

[jira] [Updated] (SPARK-11398) misleading dialect conf at the start of spark-sql

     [ https://issues.apache.org/jira/browse/SPARK-11398?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Zhenhua Wang updated SPARK-11398:
---------------------------------
    Description: 
When we start bin/spark-sql, the default context is HiveContext, and the corresponding dialect is hiveql.
However, if we type "set spark.sql.dialect;", the result is "sql", which is inconsistent with the actual dialect and is misleading. For example, we can create tables, which is allowed only in hiveql, yet this dialect conf still reports "sql".
Although this problem does not cause any execution error, it misleads Spark SQL users. Therefore I think we should fix it.
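
A minimal reproduction sketch (illustrative; the exact CLI output formatting may vary across Spark versions):
{code}
$ bin/spark-sql
spark-sql> set spark.sql.dialect;
spark.sql.dialect	sql
spark-sql> CREATE TABLE t (id INT);  -- hiveql-only DDL, yet it succeeds under the default HiveContext
{code}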

  was:
When we start bin/spark-sql, the default context is HiveContext, and the corresponding dialect is hiveql.
However, if we type "set spark.sql.dialect;", the result is "sql", which is inconsistent with the actual dialect and is misleading. For example, we can create tables, which is allowed only in hiveql, yet this dialect conf still reports "sql".
Although This problem does not cause any execution error, it misleads Spark SQL users. Therefore I think we should fix it.


> misleading dialect conf at the start of spark-sql
> -------------------------------------------------
>
>                 Key: SPARK-11398
>                 URL: https://issues.apache.org/jira/browse/SPARK-11398
>             Project: Spark
>          Issue Type: Bug
>          Components: SQL
>            Reporter: Zhenhua Wang
>            Priority: Minor
>
> When we start bin/spark-sql, the default context is HiveContext, and the corresponding dialect is hiveql.
> However, if we type "set spark.sql.dialect;", the result is "sql", which is inconsistent with the actual dialect and is misleading. For example, we can create tables, which is allowed only in hiveql, yet this dialect conf still reports "sql".
> Although this problem does not cause any execution error, it misleads Spark SQL users. Therefore I think we should fix it.
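>
> A minimal Scala sketch of the suspected mechanism (the classes below are hypothetical stand-ins, not the actual Spark internals): the SET command falls back to the key's registered default, "sql", while HiveContext overrides only the fallback used when computing the effective dialect, so parsing actually uses hiveql.
> {code}
> // Hypothetical model of the mismatch; not real Spark source.
> object DialectMismatchDemo {
>   class Conf {
>     // models a conf in which spark.sql.dialect was never explicitly set
>     private val settings = Map.empty[String, String]
>     def get(key: String, default: String): String = settings.getOrElse(key, default)
>   }
>   class SQLContext(val conf: Conf) {
>     // a plain SQLContext falls back to the "sql" dialect
>     def dialect: String = conf.get("spark.sql.dialect", "sql")
>   }
>   class HiveContext(conf: Conf) extends SQLContext(conf) {
>     // HiveContext changes only the fallback; the conf entry itself stays unset
>     override def dialect: String = conf.get("spark.sql.dialect", "hiveql")
>   }
>   def main(args: Array[String]): Unit = {
>     val ctx = new HiveContext(new Conf)
>     println(ctx.conf.get("spark.sql.dialect", "sql")) // what "set spark.sql.dialect;" reports: sql
>     println(ctx.dialect)                              // what the parser actually uses: hiveql
>   }
> }
> {code}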


