Posted to issues@spark.apache.org by "Dongjoon Hyun (JIRA)" <ji...@apache.org> on 2016/09/20 19:57:20 UTC

[jira] [Created] (SPARK-17612) Support `DESCRIBE table PARTITION` SQL syntax

Dongjoon Hyun created SPARK-17612:
-------------------------------------

             Summary: Support `DESCRIBE table PARTITION` SQL syntax
                 Key: SPARK-17612
                 URL: https://issues.apache.org/jira/browse/SPARK-17612
             Project: Spark
          Issue Type: Bug
          Components: SQL
            Reporter: Dongjoon Hyun


This issue re-implements the `DESC table PARTITION` SQL syntax, which was dropped in Spark 2.0.0.

h4. Spark 2.0.0
{code}
scala> sql("CREATE TABLE partitioned_table (a STRING, b INT) PARTITIONED BY (c STRING, d STRING)")
res0: org.apache.spark.sql.DataFrame = []

scala> sql("ALTER TABLE partitioned_table ADD PARTITION (c='Us', d=1)")
res1: org.apache.spark.sql.DataFrame = []

scala> sql("DESC partitioned_table PARTITION (c='Us', d=1)").show(false)
org.apache.spark.sql.catalyst.parser.ParseException:
Unsupported SQL statement
== SQL ==
DESC partitioned_table PARTITION (c='Us', d=1)
  at org.apache.spark.sql.catalyst.parser.AbstractSqlParser$$anonfun$parsePlan$1.apply(ParseDriver.scala:58)
  at org.apache.spark.sql.catalyst.parser.AbstractSqlParser$$anonfun$parsePlan$1.apply(ParseDriver.scala:53)
  at org.apache.spark.sql.catalyst.parser.AbstractSqlParser.parse(ParseDriver.scala:82)
  at org.apache.spark.sql.execution.SparkSqlParser.parse(SparkSqlParser.scala:45)
  at org.apache.spark.sql.catalyst.parser.AbstractSqlParser.parsePlan(ParseDriver.scala:53)
  at org.apache.spark.sql.SparkSession.sql(SparkSession.scala:573)
  ... 48 elided
{code}

h4. Spark 1.6.2
{code}
scala> sql("CREATE TABLE partitioned_table (a STRING, b INT) PARTITIONED BY (c STRING, d STRING)")
res1: org.apache.spark.sql.DataFrame = [result: string]

scala> sql("ALTER TABLE partitioned_table ADD PARTITION (c='Us', d=1)")
res2: org.apache.spark.sql.DataFrame = [result: string]

scala> sql("DESC partitioned_table PARTITION (c='Us', d=1)").show(false)
16/09/20 12:48:36 WARN LazyStruct: Extra bytes detected at the end of the row! Ignoring similar problems.
+----------------------------------------------------------------+
|result                                                          |
+----------------------------------------------------------------+
|a                      string                                   |
|b                      int                                      |
|c                      string                                   |
|d                      string                                   |
|                                                                |
|# Partition Information                                         |
|# col_name             data_type               comment          |
|                                                                |
|c                      string                                   |
|d                      string                                   |
+----------------------------------------------------------------+
{code}
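To restore this, the parser must turn a `PARTITION (c='Us', d=1)` clause into a partition-spec map that can be matched against the catalog. The sketch below is purely illustrative of that parsing step, under the assumption of a simple comma-separated `key=value` spec; it is not Spark's actual implementation, which handles this inside the ANTLR-based SparkSqlParser. The object and method names here (`PartitionSpecSketch`, `parseSpec`) are hypothetical.

{code}
// Hypothetical sketch only: converts a partition spec string such as
// "c='Us', d=1" into a Map of column name -> value, the shape a
// DESC ... PARTITION command would need to look up the partition.
object PartitionSpecSketch {
  def parseSpec(spec: String): Map[String, String] =
    spec.split(",").map { kv =>
      // Split each "key=value" pair on the first '=' only.
      val Array(k, v) = kv.split("=", 2).map(_.trim)
      // Strip optional single quotes around the value.
      k -> v.stripPrefix("'").stripSuffix("'")
    }.toMap
}
{code}

For the spec in the examples above, `PartitionSpecSketch.parseSpec("c='Us', d=1")` would yield `Map("c" -> "Us", "d" -> "1")`, which could then be validated against the table's partition columns (`c`, `d`).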




--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

---------------------------------------------------------------------
To unsubscribe, e-mail: issues-unsubscribe@spark.apache.org
For additional commands, e-mail: issues-help@spark.apache.org