Posted to issues@spark.apache.org by "Nicholas Chammas (JIRA)" <ji...@apache.org> on 2017/03/14 14:39:41 UTC

[jira] [Comment Edited] (SPARK-19553) Add GroupedData.countApprox()

    [ https://issues.apache.org/jira/browse/SPARK-19553?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15870780#comment-15870780 ] 

Nicholas Chammas edited comment on SPARK-19553 at 3/14/17 2:38 PM:
-------------------------------------------------------------------

The utility of 1) would be the ability to count items rather than distinct items, unless I've misunderstood what you're saying. I would imagine that just counting items (as opposed to distinct items) would be cheaper, in addition to being semantically different.
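
To illustrate the distinction, here's a rough sketch ({{df}}, {{col1}}, and {{col2}} are just placeholder names):

{code}
from pyspark.sql import functions as F

# Count all rows in each group (what an approximate count would estimate).
df.groupBy('col1').count().show()

# Count distinct values of another column per group (what
# approx_count_distinct() already estimates).
df.groupBy('col1').agg(F.countDistinct('col2')).show()
{code}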

-I'll open a PR for 3), unless someone else wants to step in and do that.-


was (Author: nchammas):
The utility of 1) would be the ability to count items rather than distinct items, unless I've misunderstood what you're saying. I would imagine that just counting items (as opposed to distinct items) would be cheaper, in addition to being semantically different.

I'll open a PR for 3), unless someone else wants to step in and do that.

> Add GroupedData.countApprox()
> -----------------------------
>
>                 Key: SPARK-19553
>                 URL: https://issues.apache.org/jira/browse/SPARK-19553
>             Project: Spark
>          Issue Type: Improvement
>          Components: SQL
>    Affects Versions: 2.1.0
>            Reporter: Nicholas Chammas
>            Priority: Minor
>
> We already have a [{{pyspark.sql.functions.approx_count_distinct()}}|http://spark.apache.org/docs/latest/api/python/pyspark.sql.html#pyspark.sql.functions.approx_count_distinct] that can be applied to grouped data, but it seems odd that you can't just get a regular approximate count for grouped data.
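> For reference, applying the existing function to grouped data looks roughly like this ({{df}} and the column names are placeholders):
> {code}
> from pyspark.sql import functions as F
>
> (df
>     .groupBy('col1')
>     .agg(F.approx_count_distinct('col2'))
>     .show())
> {code}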
> I imagine the API would mirror the one for [{{RDD.countApprox()}}|http://spark.apache.org/docs/latest/api/python/pyspark.html#pyspark.RDD.countApprox], but I'm not sure:
> {code}
> (df
>     .groupBy('col1')
>     .countApprox(timeout=300, confidence=0.95)
>     .show())
> {code}
> Or, if we want to mirror the {{approx_count_distinct()}} function, we can do that too. I'd want to understand why that function doesn't take a timeout or confidence parameter, though. Also, what does {{rsd}} mean? It's not documented.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

---------------------------------------------------------------------
To unsubscribe, e-mail: issues-unsubscribe@spark.apache.org
For additional commands, e-mail: issues-help@spark.apache.org