Posted to user@spark.apache.org by rzykov <rz...@gmail.com> on 2014/09/12 08:46:51 UTC

Re: Computing mean and standard deviation by key

Is it possible to use DoubleRDDFunctions
(https://spark.apache.org/docs/1.0.0/api/java/org/apache/spark/rdd/DoubleRDDFunctions.html)
to calculate the mean and standard deviation for paired RDDs of (key, value)?

Currently I'm using a reduceByKey-based approach, but I want to make my code
more concise and readable.
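
That reduceByKey approach looks roughly like this (a sketch of this style of
solution, not the exact code; pairRdd stands in for a keyed RDD[(K, Double)]):

// Keep only (count, sum, sum of squares) per key, then derive the moments.
val meanAndStdevByKey = pairRdd
  .mapValues(v => (1L, v, v * v))
  .reduceByKey { case ((c1, s1, q1), (c2, s2, q2)) =>
    (c1 + c2, s1 + s2, q1 + q2)
  }
  .mapValues { case (c, s, q) =>
    val mean  = s / c
    val stdev = math.sqrt(q / c - mean * mean) // population stdev
    (mean, stdev)
  }

It works, but it is verbose next to what DoubleRDDFunctions.stats gives for a
plain RDD[Double].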

Re: Computing mean and standard deviation by key

Posted by Sean Owen <so...@cloudera.com>.
These functions operate on an RDD of Double, which is not what you have, so
no, this is not a way to use DoubleRDDFunctions. See earlier in the thread
for canonical solutions.
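
For example (a sketch of that shape, assuming a pairRdd of type
RDD[(K, Double)]; aggregateByKey needs Spark 1.1+, and combineByKey does the
same job on older releases), fold each key's values into a StatCounter so
nothing has to be grouped in memory:

import org.apache.spark.util.StatCounter

val statsByKey = pairRdd.aggregateByKey(new StatCounter())(
  (acc, v) => acc.merge(v),         // fold one value into the accumulator
  (acc1, acc2) => acc1.merge(acc2)  // combine per-partition accumulators
)

Each value of statsByKey then exposes count, mean, stdev, max and min per key.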
On Sep 12, 2014 8:06 AM, "rzykov" <rz...@gmail.com> wrote:

> Tried this:
>
> ordersRDD.join(ordersRDD)
>   .map { case ((partnerid, itemid), ((matchedida, pricea), (matchedidb, priceb))) =>
>     ((matchedida, matchedidb), if (priceb > 0) (pricea / priceb).toDouble else 0.toDouble)
>   }
>   .groupByKey
>   .values.stats
>   .first
>
> Error:
> <console>:37: error: could not find implicit value for parameter num: Numeric[Iterable[Double]]
>                       .values.stats

Re: Computing mean and standard deviation by key

Posted by rzykov <rz...@gmail.com>.
Thank you, David!
It works.

import org.apache.spark.util.StatCounter

val a = ordersRDD.join(ordersRDD)
  .map { case ((partnerid, itemid), ((matchedida, pricea), (matchedidb, priceb))) =>
    // price ratio per matched item pair, guarding against division by zero
    ((matchedida, matchedidb), if (priceb > 0) (pricea / priceb).toDouble else 0.0)
  }
  .groupByKey
  .mapValues(value => StatCounter(value)) // per-key count/mean/stdev/max/min
  .take(5)
  .foreach(println)

output:

((2383,2465),(count: 4, mean: 0.883642, stdev: 0.086068, max: 0.933333, min: 0.734568))
((2600,6786),(count: 4, mean: 2.388889, stdev: 0.559094, max: 3.148148, min: 1.574074))
((2375,2606),(count: 6, mean: 0.693981, stdev: 0.305744, max: 1.125000, min: 0.453704))
((6780,2475),(count: 2, mean: 0.827549, stdev: 0.150991, max: 0.978541, min: 0.676558))
((2475,2606),(count: 7, mean: 3.975737, stdev: 3.356274, max: 9.628572, min: 0.472222))
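
If the numbers are needed downstream rather than printed, the StatCounter
fields can be projected out (a sketch; statsByKey stands for the keyed RDD of
StatCounters built above, before the take(5)):

val meanAndStdev = statsByKey.mapValues(s => (s.mean, s.stdev))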

Re: Computing mean and standard deviation by key

Posted by David Rowe <da...@gmail.com>.
Oh I see, I think you're trying to do something like (in SQL):

SELECT order, AVG(price) FROM orders GROUP BY order

In this case, I'm not aware of a way to use the DoubleRDDFunctions, since
you have a single RDD of pairs where each pair is of type (KeyType,
Iterable[Double]).

It seems to me that you want to write a function:

def stats(numList: Iterable[Double]): org.apache.spark.util.StatCounter

and then use

pairRdd.mapValues( value => stats(value) )
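
Filled in, that is essentially a one-liner, since StatCounter's companion
object already accepts a collection of doubles (a sketch; pairRdd is assumed
to be of type RDD[(K, Iterable[Double])], e.g. the result of a groupByKey):

import org.apache.spark.util.StatCounter

def stats(numList: Iterable[Double]): StatCounter = StatCounter(numList)

val statsByKey = pairRdd.mapValues(value => stats(value))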




On Fri, Sep 12, 2014 at 5:05 PM, rzykov <rz...@gmail.com> wrote:

> Tried this:
>
> ordersRDD.join(ordersRDD)
>   .map { case ((partnerid, itemid), ((matchedida, pricea), (matchedidb, priceb))) =>
>     ((matchedida, matchedidb), if (priceb > 0) (pricea / priceb).toDouble else 0.toDouble)
>   }
>   .groupByKey
>   .values.stats
>   .first
>
> Error:
> <console>:37: error: could not find implicit value for parameter num: Numeric[Iterable[Double]]
>                       .values.stats

Re: Computing mean and standard deviation by key

Posted by rzykov <rz...@gmail.com>.
Tried this:

ordersRDD.join(ordersRDD)
  .map { case ((partnerid, itemid), ((matchedida, pricea), (matchedidb, priceb))) =>
    ((matchedida, matchedidb), if (priceb > 0) (pricea / priceb).toDouble else 0.toDouble)
  }
  .groupByKey
  .values.stats
  .first

Error:
<console>:37: error: could not find implicit value for parameter num: Numeric[Iterable[Double]]
                      .values.stats

Re: Computing mean and standard deviation by key

Posted by David Rowe <da...@gmail.com>.
I generally call values.stats, e.g.:

val stats = myPairRdd.values.stats
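
On a toy pair RDD (made-up data; sc is the shell's SparkContext) that looks
like:

val myPairRdd = sc.parallelize(Seq(("a", 1.0), ("a", 2.0), ("b", 3.0)))

// .values yields an RDD[Double]; .stats comes from DoubleRDDFunctions via an
// implicit conversion and aggregates over all values at once, not per key.
val stats = myPairRdd.values.stats
println(stats) // (count: 3, mean: 2.000000, stdev: 0.816497, max: 3.000000, min: 1.000000)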

On Fri, Sep 12, 2014 at 4:46 PM, rzykov <rz...@gmail.com> wrote:

> Is it possible to use DoubleRDDFunctions
> (https://spark.apache.org/docs/1.0.0/api/java/org/apache/spark/rdd/DoubleRDDFunctions.html)
> to calculate the mean and standard deviation for paired RDDs of (key, value)?
>
> Currently I'm using a reduceByKey-based approach, but I want to make my
> code more concise and readable.