Posted to issues@spark.apache.org by "Takeshi Yamamuro (Jira)" <ji...@apache.org> on 2019/11/01 05:13:00 UTC

[jira] [Created] (SPARK-29706) Support an empty grouping expression

Takeshi Yamamuro created SPARK-29706:
----------------------------------------

             Summary: Support an empty grouping expression
                 Key: SPARK-29706
                 URL: https://issues.apache.org/jira/browse/SPARK-29706
             Project: Spark
          Issue Type: Sub-task
          Components: SQL
    Affects Versions: 3.0.0
            Reporter: Takeshi Yamamuro


PgSQL can accept the query below, which has an empty grouping expression, but Spark cannot parse it:
{code:java}
postgres=# create table gstest2 (a integer, b integer, c integer, d integer, e integer, f integer, g integer, h integer);
postgres=# insert into gstest2 values
postgres-#   (1, 1, 1, 1, 1, 1, 1, 1),
postgres-#   (1, 1, 1, 1, 1, 1, 1, 2),
postgres-#   (1, 1, 1, 1, 1, 1, 2, 2),
postgres-#   (1, 1, 1, 1, 1, 2, 2, 2),
postgres-#   (1, 1, 1, 1, 2, 2, 2, 2),
postgres-#   (1, 1, 1, 2, 2, 2, 2, 2),
postgres-#   (1, 1, 2, 2, 2, 2, 2, 2),
postgres-#   (1, 2, 2, 2, 2, 2, 2, 2),
postgres-#   (2, 2, 2, 2, 2, 2, 2, 2);
INSERT 0 9

postgres=# select v.c, (select count(*) from gstest2 group by () having v.c) from (values (false),(true)) v(c) order by v.c;
 c | count 
---+-------
 f |      
 t |    18
(2 rows)
{code}
{code:java}
scala> sql("""select v.c, (select count(*) from gstest2 group by () having v.c) from (values (false),(true)) v(c) order by v.c""").show
org.apache.spark.sql.catalyst.parser.ParseException:
no viable alternative at input '()'(line 1, pos 52)

== SQL ==
select v.c, (select count(*) from gstest2 group by () having v.c) from (values (false),(true)) v(c) order by v.c
----------------------------------------------------^^^

  at org.apache.spark.sql.catalyst.parser.ParseException.withCommand(ParseDriver.scala:268)
  at org.apache.spark.sql.catalyst.parser.AbstractSqlParser.parse(ParseDriver.scala:135)
  at org.apache.spark.sql.execution.SparkSqlParser.parse(SparkSqlParser.scala:48)
  at org.apache.spark.sql.catalyst.parser.AbstractSqlParser.parsePlan(ParseDriver.scala:85)
  at org.apache.spark.sql.SparkSession.$anonfun$sql$1(SparkSession.scala:605)
  at org.apache.spark.sql.catalyst.QueryPlanningTracker.measurePhase(QueryPlanningTracker.scala:111)
  at org.apache.spark.sql.SparkSession.sql(SparkSession.scala:605)
  ... 47 elided
{code}
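For reference, the SQL-standard semantics of {{group by ()}} is a single group over all input rows, i.e. the same result as a global aggregate with no GROUP BY clause at all. A minimal spark-shell sketch of that assumed equivalence (the temporary view below is only there to make the snippet self-contained; it is not the original gstest2 data):
{code:java}
scala> // Register a small table just for illustration (hypothetical stand-in for gstest2).
scala> spark.range(9).toDF("id").createOrReplaceTempView("gstest2")

scala> // Global aggregate, which GROUP BY () is assumed to be equivalent to: this parses and runs today.
scala> sql("select count(*) from gstest2").show()
+--------+
|count(1)|
+--------+
|       9|
+--------+

scala> // The empty grouping expression itself currently fails with the ParseException shown above:
scala> // sql("select count(*) from gstest2 group by ()").show()
{code}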


