Posted to issues@spark.apache.org by "Josh Rosen (JIRA)" <ji...@apache.org> on 2019/04/27 02:08:00 UTC

[jira] [Updated] (SPARK-27581) DataFrame countDistinct("*") fails with AnalysisException: "Invalid usage of '*' in expression 'count'"

     [ https://issues.apache.org/jira/browse/SPARK-27581?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Josh Rosen updated SPARK-27581:
-------------------------------
    Description: 
If I have a DataFrame then I can use {{count("*")}} as an expression, e.g.:

{code}
import org.apache.spark.sql.functions._
val df = sql("select id % 100 from range(100000)")
df.select(count("*")).first()
{code}

However, if I try to do the same thing with {{countDistinct}} I get an error:

{code}
import org.apache.spark.sql.functions._
val df = sql("select id % 100 from range(100000)")
df.select(countDistinct("*")).first()

org.apache.spark.sql.AnalysisException: Invalid usage of '*' in expression 'count';
{code}

As a workaround, I need to use {{expr}}, e.g.

{code}
import org.apache.spark.sql.functions._
val df = sql("select id % 100 from range(100000)")
df.select(expr("count(distinct(*))")).first()
{code}

You might be wondering "why not just use {{df.count()}} or {{df.distinct().count()}}?", but in my case I ultimately want to compute both counts as part of the same aggregation, e.g.

{code}
val (cnt, distinctCnt) = df.select(count("*"), countDistinct("*")).as[(Long, Long)].first()
{code}
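Until {{countDistinct("*")}} is supported, the {{expr}} workaround above composes with {{count}} in a single aggregation, so (assuming the same spark-shell session as the snippets above, where {{sql}} and the encoder implicits are in scope) the two-count query can presumably be written as:

{code}
import org.apache.spark.sql.functions._

val df = sql("select id % 100 from range(100000)")

// count("*") works directly; the distinct count goes through expr()
// because countDistinct("*") currently throws the AnalysisException.
val (cnt, distinctCnt) =
  df.select(count("*"), expr("count(distinct(*))")).as[(Long, Long)].first()

// cnt = 100000 (all rows), distinctCnt = 100 (distinct values of id % 100)
{code}

Both aggregates run in one pass over the DataFrame, which is the behavior the {{countDistinct("*")}} call was aiming for.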

I'm reporting this because it's a minor usability annoyance / surprise for inexperienced Spark users.


> DataFrame countDistinct("*") fails with AnalysisException: "Invalid usage of '*' in expression 'count'"
> -------------------------------------------------------------------------------------------------------
>
>                 Key: SPARK-27581
>                 URL: https://issues.apache.org/jira/browse/SPARK-27581
>             Project: Spark
>          Issue Type: New Feature
>          Components: SQL
>    Affects Versions: 2.4.0
>            Reporter: Josh Rosen
>            Priority: Major
>



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

---------------------------------------------------------------------
To unsubscribe, e-mail: issues-unsubscribe@spark.apache.org
For additional commands, e-mail: issues-help@spark.apache.org