Posted to issues@spark.apache.org by "Takeshi Yamamuro (Jira)" <ji...@apache.org> on 2019/12/26 00:35:00 UTC
[jira] [Updated] (SPARK-29702) Resolve group-by columns with integrity constraints
[ https://issues.apache.org/jira/browse/SPARK-29702?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]
Takeshi Yamamuro updated SPARK-29702:
-------------------------------------
Summary: Resolve group-by columns with integrity constraints (was: Resolve group-by columns with functional dependencies)
> Resolve group-by columns with integrity constraints
> ---------------------------------------------------
>
> Key: SPARK-29702
> URL: https://issues.apache.org/jira/browse/SPARK-29702
> Project: Spark
> Issue Type: Sub-task
> Components: SQL
> Affects Versions: 3.0.0
> Reporter: Takeshi Yamamuro
> Priority: Major
>
> In PgSQL, functional dependencies (e.g., those implied by a primary key) affect grouping-column resolution in the analyzer:
> {code:java}
> postgres=# \d gstest3
>                Table "public.gstest3"
>  Column |  Type   | Collation | Nullable | Default
> --------+---------+-----------+----------+---------
>  a      | integer |           |          |
>  b      | integer |           |          |
>  c      | integer |           |          |
>  d      | integer |           |          |
> postgres=# select a, d, grouping(a,b,c) from gstest3 group by grouping sets ((a,b), (a,c));
> ERROR: column "gstest3.d" must appear in the GROUP BY clause or be used in an aggregate function
> LINE 1: select a, d, grouping(a,b,c) from gstest3 group by grouping ...
> ^
> postgres=# alter table gstest3 add primary key (a);
> ALTER TABLE
> postgres=# select a, d, grouping(a,b,c) from gstest3 group by grouping sets ((a,b), (a,c));
>  a | d | grouping
> ---+---+----------
>  1 | 1 |        1
>  2 | 2 |        1
>  1 | 1 |        2
>  2 | 2 |        2
> (4 rows)
> {code}
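Why this is safe can be sketched outside SQL: once `a` is a primary key, every group keyed on `a` contains exactly one value of `d`, so projecting `d` without an aggregate is unambiguous. A minimal Python illustration of that functional-dependency argument (hypothetical sample rows, not Spark or PostgreSQL internals):

```python
from collections import defaultdict

# Rows of (a, b, c, d); `a` is a primary key, so each value of `a`
# identifies exactly one row and hence exactly one value of `d`.
rows = [(1, 10, 100, 1), (2, 20, 200, 2)]

# Collect the distinct d-values seen per group key `a`.
groups = defaultdict(set)
for a, b, c, d in rows:
    groups[a].add(d)

# The dependency a -> d holds: one distinct `d` per group, so a query
# like "SELECT a, d ... GROUP BY a" has a well-defined answer.
assert all(len(ds) == 1 for ds in groups.values())
```

The same reasoning extends to grouping sets that all contain the key column, which is why PostgreSQL accepts the query above after `ALTER TABLE ... ADD PRIMARY KEY (a)`.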
--
This message was sent by Atlassian Jira
(v8.3.4#803005)