Posted to issues@spark.apache.org by "Lovasoa (JIRA)" <ji...@apache.org> on 2017/06/11 22:44:20 UTC

[jira] [Comment Edited] (SPARK-21057) Do not use a PascalDistribution in countApprox

    [ https://issues.apache.org/jira/browse/SPARK-21057?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16046145#comment-16046145 ] 

Lovasoa edited comment on SPARK-21057 at 6/11/17 10:44 PM:
-----------------------------------------------------------

{quote}
The hypothetical is that you pick elements from the total data set at random, and a fraction p of the time you 'succeed' in picking from among the elements already counted.
The rest of the time you 'fail'. But if this goes on long enough to reach the observed count as the number of successes, then the number of failures models the size of the rest of the data.
{quote}

The thing is, if you pick elements until you've picked all the ones that were counted, and you never pick the same element twice, then the probability of picking a 'counted' element is not constant: each time you pick a counted element, the probability of picking another one on the next draw decreases.

If, on the other hand, you allow picking the same element twice, then you could pick the very same counted element every time and stop early.

So the negative binomial law doesn't describe the random process we are trying to model. The fact that it has roughly the correct expected value (and thus doesn't give nonsensical results) doesn't prove anything: most probability laws could be parameterized to match that expected value, and they would still disagree on the spread, which is what a confidence interval depends on.
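To make this concrete, here is a minimal, self-contained Scala simulation (the object name and all the numbers are made up for illustration). It draws without replacement until every counted element has been seen, and compares the result with the constant-probability Pascal model: the two means come out within about one percent of each other, while the standard deviations differ by roughly a factor of ten.

{code:scala}
import scala.util.Random

object WithoutReplacementDemo {
  def main(args: Array[String]): Unit = {
    val rng     = new Random(42)
    val total   = 1000   // hypothetical true size of the data set
    val counted = 100    // elements that were actually counted
    val trials  = 10000

    // For each trial, shuffle the data set and record how many draws it
    // takes to see all `counted` elements when drawing without replacement:
    // that is the position of the last counted element in the permutation.
    val draws = Array.fill(trials) {
      val perm = rng.shuffle((0 until total).toVector)
      (perm.lastIndexWhere(_ < counted) + 1).toDouble
    }

    val mean = draws.sum / trials
    val std  = math.sqrt(draws.map(d => (d - mean) * (d - mean)).sum / trials)
    println(f"without replacement: mean = $mean%.1f, std = $std%.1f")

    // The Pascal (negative binomial) model instead assumes a constant
    // success probability p = counted / total on every draw; the number of
    // draws needed to reach `counted` successes then has mean counted / p
    // and variance counted * (1 - p) / p^2.
    val p          = counted.toDouble / total
    val pascalMean = counted / p
    val pascalStd  = math.sqrt(counted * (1 - p) / (p * p))
    println(f"Pascal model:        mean = $pascalMean%.1f, std = $pascalStd%.1f")
  }
}
{code}

Matching the mean while getting the spread wrong by an order of magnitude is precisely the failure mode described above.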

I currently don't have a lot of free time as I'm finishing my master's thesis, but I will open a pull request as soon as I find the time. Thank you for your quick answer!


> Do not use a PascalDistribution in countApprox
> ----------------------------------------------
>
>                 Key: SPARK-21057
>                 URL: https://issues.apache.org/jira/browse/SPARK-21057
>             Project: Spark
>          Issue Type: Bug
>          Components: Spark Core
>    Affects Versions: 2.1.1
>            Reporter: Lovasoa
>
> I was reading the source of Spark, and found this:
> https://github.com/apache/spark/blob/v2.1.1/core/src/main/scala/org/apache/spark/partial/CountEvaluator.scala#L50-L72
> This is the function that estimates the probability distribution of the total count of elements in an RDD given the count of only some partitions.
> This function does something strange: when the number of elements counted so far is less than 10,000, it models the total count with a negative binomial (Pascal) law; otherwise, it models it with a Poisson law.
> Modeling the number of uncounted elements with a negative binomial law amounts to saying that we ran over the elements, counting only some of them, and stopped after having counted a predetermined number of elements.
> But this is not what really happened: our counting was limited in time, not by a target number of counted elements, and we can't count only some of the elements in a partition.
> I propose using the Poisson distribution in every case, as it can be justified under the hypothesis that the numbers of elements in the partitions are independent and each follows a Poisson law.
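As an illustration of the proposed direction, here is a minimal, self-contained Scala sketch of a Poisson-based interval, using the PoissonDistribution class from Apache Commons Math (a library Spark already depends on). The object name, the partition counts, and the confidence level are made-up illustration values, and the rate is a naive plug-in estimate; this is one way the reporter's hypothesis could translate into code, not Spark's actual implementation.

{code:scala}
import org.apache.commons.math3.distribution.PoissonDistribution

object PoissonCountApprox {
  def main(args: Array[String]): Unit = {
    // All numbers below are made up for illustration.
    val sum        = 12345L  // elements counted in the finished partitions
    val counted    = 10      // partitions counted so far
    val partitions = 40      // total number of partitions
    val confidence = 0.95

    // Hypothesis: each partition holds an independent Poisson(lambda)
    // number of elements. Naive plug-in estimate of the rate:
    val lambdaHat = sum.toDouble / counted

    // The uncounted partitions' total is then Poisson with the summed rate,
    // and the quantiles of that law give a confidence interval.
    val rest = new PoissonDistribution((partitions - counted) * lambdaHat)
    val low  = sum + rest.inverseCumulativeProbability((1 - confidence) / 2)
    val high = sum + rest.inverseCumulativeProbability((1 + confidence) / 2)
    println(s"estimated total count in [$low, $high] at $confidence confidence")
  }
}
{code}

Note that this treats the estimated rate as exact; an actual patch would also have to account for the estimation error in the rate itself.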



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

---------------------------------------------------------------------
To unsubscribe, e-mail: issues-unsubscribe@spark.apache.org
For additional commands, e-mail: issues-help@spark.apache.org