Posted to issues@spark.apache.org by "Joseph K. Bradley (JIRA)" <ji...@apache.org> on 2017/01/27 23:49:24 UTC

[jira] [Commented] (SPARK-19208) MultivariateOnlineSummarizer performance optimization

    [ https://issues.apache.org/jira/browse/SPARK-19208?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15843670#comment-15843670 ] 

Joseph K. Bradley commented on SPARK-19208:
-------------------------------------------

Thanks for writing out your ideas.  Here are my thoughts about the API:

*Reference API: Double column stats*
When working with Double columns (not Vectors), one would expect to write things like {{myDataFrame.select(min("x"), max("x"))}} to select 2 stats, min and max.  Here, min and max are functions provided by Spark SQL which return Columns.
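For concreteness, a minimal runnable sketch of that reference API ({{myDataFrame}} stands in for any DataFrame with a Double column {{x}}):
{code}
import org.apache.spark.sql.functions.{min, max}

// min and max are existing Spark SQL aggregate functions; each returns a Column.
val stats = myDataFrame.select(min("x"), max("x"))
stats.show()  // a single row with columns min(x) and max(x)
{code}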

*Analogy*
We should probably provide an analogous API.  Here's what I imagine:
{code}
import org.apache.spark.ml.stat.VectorSummary
val df: DataFrame = ...

val results: DataFrame = df.select(VectorSummary.min("features"), VectorSummary.mean("features"))
val weightedResults: DataFrame = df.select(VectorSummary.min("features"), VectorSummary.mean("features", "weight"))
// Both of these result DataFrames contain 2 Vector columns.
{code}

I.e., we provide vectorized versions of stats functions.
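Under the hood, each such function is an element-wise aggregation over the Vector column. As a rough sketch of the computation only (not the proposed API; {{vectorMin}} is a hypothetical helper, and it assumes all vectors have the same size and the DataFrame is non-empty):
{code}
import org.apache.spark.ml.linalg.Vector
import org.apache.spark.sql.DataFrame

// Element-wise min over a Vector column, computed by a pairwise reduce.
def vectorMin(df: DataFrame, col: String): Array[Double] = {
  df.select(col).rdd
    .map(_.getAs[Vector](0).toArray)
    .reduce((a, b) => a.zip(b).map { case (x, y) => math.min(x, y) })
}
{code}
The real version would presumably be a typed Aggregator returning a Column, so that several stats can be combined in a single {{select}} as above.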

If you want to put everything into a single function, then we could also give VectorSummary a function {{summary}} which returns a struct column with every available statistic:
{code}
val results = df.select(VectorSummary.summary("features", "weights"))
// results DataFrame contains 1 struct column, which has a Vector field for every statistic we provide.
{code}
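Individual statistics could then be pulled out of the struct by field name, e.g. (a sketch; the {{summary}} alias and the field names are assumptions):
{code}
val summarized = df.select(VectorSummary.summary("features", "weights").as("summary"))
val minAndMean = summarized.select("summary.min", "summary.mean")
{code}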

Note: I removed "online" from the name since the user does not need to know that it does online aggregation.

> MultivariateOnlineSummarizer performance optimization
> -----------------------------------------------------
>
>                 Key: SPARK-19208
>                 URL: https://issues.apache.org/jira/browse/SPARK-19208
>             Project: Spark
>          Issue Type: Improvement
>          Components: ML
>            Reporter: zhengruifeng
>         Attachments: Tests.pdf, WechatIMG2621.jpeg
>
>
> Now, {{MaxAbsScaler}} and {{MinMaxScaler}} are using {{MultivariateOnlineSummarizer}} to compute the min/max.
> However, {{MultivariateOnlineSummarizer}} also computes extra, unused statistics. This slows down the task and makes it more prone to OOM.
> For example:
> env : --driver-memory 4G --executor-memory 1G --num-executors 4
> data: [http://www.csie.ntu.edu.tw/~cjlin/libsvmtools/datasets/binary.html#kdd2010%20(bridge%20to%20algebra)] 748,401 instances and 29,890,095 features
> {{MaxAbsScaler.fit}} fails because of OOM
> {{MultivariateOnlineSummarizer}} maintains 8 feature-length arrays (plus a few scalar counters):
> {code}
> private var currMean: Array[Double] = _
> private var currM2n: Array[Double] = _
> private var currM2: Array[Double] = _
> private var currL1: Array[Double] = _
> private var totalCnt: Long = 0
> private var totalWeightSum: Double = 0.0
> private var weightSquareSum: Double = 0.0
> private var weightSum: Array[Double] = _
> private var nnz: Array[Long] = _
> private var currMax: Array[Double] = _
> private var currMin: Array[Double] = _
> {code}
> For {{MaxAbsScaler}}, only 1 array is needed (the max of absolute values).
> For {{MinMaxScaler}}, only 3 arrays are needed (max, min, nnz).
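> At this dimensionality the difference is substantial (a rough sketch, assuming 8 bytes per array element):
> {code}
> val numFeatures = 29890095L
> val perArray   = numFeatures * 8  // ~239 MB for one feature-length array
> val allArrays  = perArray * 8     // ~1.9 GB for all 8 arrays per summarizer
> val maxAbsOnly = perArray         // ~239 MB for MaxAbsScaler's single array
> {code}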
> After the modification in the PR, the above example runs successfully.


