Posted to issues@spark.apache.org by "Takeshi Yamamuro (JIRA)" <ji...@apache.org> on 2016/11/24 14:02:58 UTC
[jira] [Commented] (SPARK-18577) Ambiguous reference with duplicate column names in aggregate
[ https://issues.apache.org/jira/browse/SPARK-18577?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15693356#comment-15693356 ]
Takeshi Yamamuro commented on SPARK-18577:
------------------------------------------
I reproduced this in master. That said, do we need to resolve this reference?
For example, PostgreSQL cannot resolve the same query, either:
{code}
postgres=# CREATE TABLE t(id INT, name VARCHAR, rank FLOAT8);
CREATE TABLE
postgres=# \d t
        Table "public.t"
 Column |       Type        | Modifiers
--------+-------------------+-----------
 id     | integer           |
 name   | character varying |
 rank   | double precision  |
postgres=# INSERT INTO t values(1, 'xxx', 1.0);
INSERT 0 1
postgres=# SELECT * FROM t;
 id | name | rank
----+------+------
  1 | xxx  |    1
(1 row)
postgres=# SELECT id, COUNT(*) FROM t t1 JOIN t t2 ON t1.name = t2.name GROUP BY t1.id;
ERROR: column reference "id" is ambiguous at character 8
{code}
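The same ambiguity check can be reproduced without a PostgreSQL server using SQLite via Python's built-in sqlite3 module (an illustration only, not part of the original report; SQLite, like PostgreSQL, rejects the unqualified reference, and qualifying it as t1.id is the workaround):

```python
import sqlite3

# In-memory database with the same schema as the report's table 't'.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE t(id INTEGER, name TEXT, rank REAL)")
conn.execute("INSERT INTO t VALUES (1, 'xxx', 1.0)")

# The unqualified 'id' is ambiguous after the self-join: it could come
# from either t1 or t2, so SQLite raises an error.
try:
    conn.execute(
        "SELECT id, COUNT(*) FROM t t1 JOIN t t2 "
        "ON t1.name = t2.name GROUP BY t1.id"
    )
except sqlite3.OperationalError as e:
    print(e)  # e.g. "ambiguous column name: id"

# Qualifying the reference as t1.id resolves it.
rows = conn.execute(
    "SELECT t1.id, COUNT(*) FROM t t1 JOIN t t2 "
    "ON t1.name = t2.name GROUP BY t1.id"
).fetchall()
print(rows)
```

Hive resolves the bare `id` to `t1.id` because it appears in the GROUP BY clause; Spark and PostgreSQL instead treat it as ambiguous at analysis time.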
> Ambiguous reference with duplicate column names in aggregate
> ------------------------------------------------------------
>
> Key: SPARK-18577
> URL: https://issues.apache.org/jira/browse/SPARK-18577
> Project: Spark
> Issue Type: Bug
> Components: SQL
> Affects Versions: 1.6.2
> Reporter: Yerui Sun
>
> Assuming we have a table 't' with 3 columns 'id', 'name' and 'rank', here's the SQL to reproduce the issue:
> {code}
> select id, count(*) from t t1 join t t2 on t1.name = t2.name group by t1.id
> {code}
> The error message is:
> {code}
> Reference 'id' is ambiguous, could be: id#3, id#9.; line 1 pos 7
> {code}
> The SQL can be parsed in Hive, since the selected 'id' reference can be resolved to 't1.id', which is present in the GROUP BY expressions.
--
This message was sent by Atlassian JIRA
(v6.3.4#6332)