Posted to issues@spark.apache.org by "luoyuxia (Jira)" <ji...@apache.org> on 2022/03/10 08:14:00 UTC

[jira] [Comment Edited] (SPARK-30212) COUNT(DISTINCT) window function should be supported

    [ https://issues.apache.org/jira/browse/SPARK-30212?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17504073#comment-17504073 ] 

luoyuxia edited comment on SPARK-30212 at 3/10/22, 8:13 AM:
------------------------------------------------------------

 Hi, is there a plan to support this function in Spark?


was (Author: luoyuxia):
 Hi,   Is it in the plan to support such fucntion in Spark?

> COUNT(DISTINCT) window function should be supported
> ---------------------------------------------------
>
>                 Key: SPARK-30212
>                 URL: https://issues.apache.org/jira/browse/SPARK-30212
>             Project: Spark
>          Issue Type: Improvement
>          Components: SQL
>    Affects Versions: 3.1.0
>         Environment: Spark 2.4.4
> Scala 2.11.12
> Hive 2.3.6
>            Reporter: Kernel Force
>            Priority: Major
>
> Suppose we have a typical table in Hive like below:
> {code:sql}
> CREATE TABLE DEMO_COUNT_DISTINCT (
> demo_date string,
> demo_id string
> );
> {code}
> {noformat}
> +-------------------------------+-----------------------------+
> | demo_count_distinct.demo_date | demo_count_distinct.demo_id |
> +-------------------------------+-----------------------------+
> | 20180301                      | 101                         |
> | 20180301                      | 102                         |
> | 20180301                      | 103                         |
> | 20180401                      | 201                         |
> | 20180401                      | 202                         |
> +-------------------------------+-----------------------------+
> {noformat}
> Now I want to count the distinct number of DEMO_DATE values while also preserving every column's data in each row.
> So I used the COUNT(DISTINCT) window function (which is also common in other mainstream databases such as Oracle) in Hive beeline, and it works:
> {code:sql}
> SELECT T.*, COUNT(DISTINCT T.DEMO_DATE) OVER(PARTITION BY NULL) UNIQ_DATES
>  FROM DEMO_COUNT_DISTINCT T;
> {code}
> {noformat}
> +--------------+------------+-------------+
> | t.demo_date  | t.demo_id  | uniq_dates  |
> +--------------+------------+-------------+
> | 20180401     | 202        | 2           |
> | 20180401     | 201        | 2           |
> | 20180301     | 103        | 2           |
> | 20180301     | 102        | 2           |
> | 20180301     | 101        | 2           |
> +--------------+------------+-------------+
> {noformat}
> But when I came to Spark SQL, it threw an exception even though I ran the same SQL.
> {code:sql}
> spark.sql("""
> SELECT T.*, COUNT(DISTINCT T.DEMO_DATE) OVER(PARTITION BY NULL) UNIQ_DATES
>  FROM DEMO_COUNT_DISTINCT T
> """).show
> {code}
> {noformat}
> org.apache.spark.sql.AnalysisException: Distinct window functions are not supported: count(distinct DEMO_DATE#1) windowspecdefinition(null, specifiedwindowframe(RowFrame, unboundedpreceding$(), unboundedfollowing$()));;
> Project [demo_date#1, demo_id#2, UNIQ_DATES#0L]
> +- Project [demo_date#1, demo_id#2, UNIQ_DATES#0L, UNIQ_DATES#0L]
>    +- Window [count(distinct DEMO_DATE#1) windowspecdefinition(null, specifiedwindowframe(RowFrame, unboundedpreceding$(), unboundedfollowing$())) AS UNIQ_DATES#0L], [null]
>       +- Project [demo_date#1, demo_id#2]
>          +- SubqueryAlias `T`
>             +- SubqueryAlias `default`.`demo_count_distinct`
>                +- HiveTableRelation `default`.`demo_count_distinct`, org.apache.hadoop.hive.serde2.lazy.LazySimpleSerDe, [demo_date#1, demo_id#2]
> {noformat}
> Then I tried the countDistinct function but also got an exception.
> {code:sql}
> spark.sql("""
> SELECT T.*, countDistinct(T.DEMO_DATE) OVER(PARTITION BY NULL) UNIQ_DATES
>  FROM DEMO_COUNT_DISTINCT T
> """).show
> {code}
> {noformat}
> org.apache.spark.sql.AnalysisException: Undefined function: 'countDistinct'. This function is neither a registered temporary function nor a permanent function registered in the database 'default'.; line 2 pos 12
>  at org.apache.spark.sql.catalyst.analysis.Analyzer$LookupFunctions$$anonfun$apply$15$$anonfun$applyOrElse$49.apply(Analyzer.scala:1279)
>  at org.apache.spark.sql.catalyst.analysis.Analyzer$LookupFunctions$$anonfun$apply$15$$anonfun$applyOrElse$49.apply(Analyzer.scala:1279)
>  at org.apache.spark.sql.catalyst.analysis.package$.withPosition(package.scala:53)
>  ......
> {noformat}
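Editor's note, not part of the original report: until COUNT(DISTINCT ...) OVER is supported, a commonly cited workaround is to add two DENSE_RANK calls over the same partition, one ascending and one descending, and subtract 1; the result equals the number of distinct values in the partition. The sketch below demonstrates the trick with Python's built-in sqlite3 (standing in for Spark SQL, since both use standard window-function syntax) on the report's sample data. In Spark SQL itself, `SIZE(COLLECT_SET(col)) OVER (...)` is another frequently used alternative.

```python
# Workaround sketch: emulate COUNT(DISTINCT demo_date) OVER (...) with
# DENSE_RANK ASC + DENSE_RANK DESC - 1. sqlite3 is used only so the example
# is self-contained; the SQL is standard window-function syntax.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE DEMO_COUNT_DISTINCT (demo_date TEXT, demo_id TEXT);
INSERT INTO DEMO_COUNT_DISTINCT VALUES
  ('20180301','101'), ('20180301','102'), ('20180301','103'),
  ('20180401','201'), ('20180401','202');
""")

rows = conn.execute("""
SELECT T.*,
       DENSE_RANK() OVER (ORDER BY T.DEMO_DATE ASC)
     + DENSE_RANK() OVER (ORDER BY T.DEMO_DATE DESC)
     - 1 AS UNIQ_DATES
  FROM DEMO_COUNT_DISTINCT T
""").fetchall()

for demo_date, demo_id, uniq_dates in rows:
    print(demo_date, demo_id, uniq_dates)  # UNIQ_DATES is 2 on every row
```

One caveat: COUNT(DISTINCT) ignores NULLs while DENSE_RANK ranks them, so this trick assumes the counted column is NOT NULL (true for the sample data above).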



--
This message was sent by Atlassian Jira
(v8.20.1#820001)

---------------------------------------------------------------------
To unsubscribe, e-mail: issues-unsubscribe@spark.apache.org
For additional commands, e-mail: issues-help@spark.apache.org