Posted to issues@spark.apache.org by "Nick Pentreath (JIRA)" <ji...@apache.org> on 2017/02/01 08:09:52 UTC

[jira] [Comment Edited] (SPARK-19208) MultivariateOnlineSummarizer performance optimization

    [ https://issues.apache.org/jira/browse/SPARK-19208?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15848108#comment-15848108 ] 

Nick Pentreath edited comment on SPARK-19208 at 2/1/17 8:09 AM:
----------------------------------------------------------------

Another option would be an "Estimator"-like API, where the UDAF is purely internal and not exposed to users, e.g.

{code}
val summarizer = new VectorSummarizer()
  .setMetrics("min", "max")
  .setInputCol("features")
  .setWeightCol("weight")
val summary = summarizer.fit(df)
// (or summarizer.evaluate, summarizer.summarize, etc.?)

// this would need to throw an exception (or perhaps return an empty vector)
// if the metric was not set
val min: Vector = summary.getMin

// OR, with a DataFrame-based result:
val minFromDF: Vector = summary.select("min").as[Vector].first()
{code}

Agreed, it is important (and the point of this issue) to (a) compute only the required metrics, and (b) avoid duplicating computation, for efficiency. A rough sketch of what the internals could look like follows.
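
To make that concrete, here is a rough sketch, purely illustrative: the {{VectorSummarizer}}/{{VectorSummary}} names come from the proposal above (not an existing Spark API), a plain {{treeAggregate}} stands in for the internal UDAF, and the weight column is ignored for brevity. The point is only that buffers are allocated for the requested metrics and nothing else:

{code}
// Rough sketch only: VectorSummarizer/VectorSummary are the hypothetical names
// from the proposal above, not an existing Spark API. A plain treeAggregate
// stands in for the internal UDAF; the weight column is omitted for brevity.
import org.apache.spark.ml.linalg.{Vector, Vectors}
import org.apache.spark.sql.{DataFrame, Row}

class VectorSummary(results: Map[String, Vector]) {
  // throws if the metric was not requested on the summarizer
  def getMin: Vector = results.getOrElse("min",
    throw new NoSuchElementException("metric 'min' was not computed"))
  def getMax: Vector = results.getOrElse("max",
    throw new NoSuchElementException("metric 'max' was not computed"))
}

class VectorSummarizer(metrics: Set[String], inputCol: String) {
  def fit(df: DataFrame): VectorSummary = {
    val needMin = metrics.contains("min")
    val needMax = metrics.contains("max")

    // Single pass over the data; a buffer is allocated only if its metric
    // was requested, otherwise it stays null and costs nothing.
    val (mins, maxs) = df.select(inputCol).rdd
      .map { case Row(v: Vector) => v }
      .treeAggregate((null: Array[Double], null: Array[Double]))(
        seqOp = { case ((mn0, mx0), v) =>
          val n = v.size
          val mn = if (needMin && mn0 == null) Array.fill(n)(Double.PositiveInfinity) else mn0
          val mx = if (needMax && mx0 == null) Array.fill(n)(Double.NegativeInfinity) else mx0
          var i = 0
          while (i < n) {
            val x = v(i) // dense access keeps the sketch simple (covers implicit zeros)
            if (mn != null && x < mn(i)) mn(i) = x
            if (mx != null && x > mx(i)) mx(i) = x
            i += 1
          }
          (mn, mx)
        },
        combOp = { case ((mn1, mx1), (mn2, mx2)) =>
          def merge(a: Array[Double], b: Array[Double], f: (Double, Double) => Double) =
            if (a == null) b else if (b == null) a
            else Array.tabulate(a.length)(i => f(a(i), b(i)))
          (merge(mn1, mn2, math.min), merge(mx1, mx2, math.max))
        })

    val computed = Seq("min" -> mins, "max" -> maxs)
      .collect { case (name, arr) if arr != null => name -> Vectors.dense(arr) }
      .toMap
    new VectorSummary(computed)
  }
}
{code}

With only "max" requested, the min buffer is never allocated, which is exactly the memory saving this issue is after.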



> MultivariateOnlineSummarizer performance optimization
> -----------------------------------------------------
>
>                 Key: SPARK-19208
>                 URL: https://issues.apache.org/jira/browse/SPARK-19208
>             Project: Spark
>          Issue Type: Improvement
>          Components: ML
>            Reporter: zhengruifeng
>         Attachments: Tests.pdf, WechatIMG2621.jpeg
>
>
> Currently, {{MaxAbsScaler}} and {{MinMaxScaler}} use {{MultivariateOnlineSummarizer}} to compute the min/max.
> However, {{MultivariateOnlineSummarizer}} also computes extra, unused statistics. This slows down the task and, moreover, makes it more prone to OOM.
> For example:
> env: --driver-memory 4G --executor-memory 1G --num-executors 4
> data: [http://www.csie.ntu.edu.tw/~cjlin/libsvmtools/datasets/binary.html#kdd2010%20(bridge%20to%20algebra)] 748,401 instances and 29,890,095 features
> {{MaxAbsScaler.fit}} fails because of OOM.
> {{MultivariateOnlineSummarizer}} maintains 8 arrays (plus a few scalar counters):
> {code}
> private var currMean: Array[Double] = _
> private var currM2n: Array[Double] = _
> private var currM2: Array[Double] = _
> private var currL1: Array[Double] = _
> private var totalCnt: Long = 0
> private var totalWeightSum: Double = 0.0
> private var weightSquareSum: Double = 0.0
> private var weightSum: Array[Double] = _
> private var nnz: Array[Long] = _
> private var currMax: Array[Double] = _
> private var currMin: Array[Double] = _
> {code}
> For {{MaxAbsScaler}}, only 1 array is needed (the max of the absolute values).
> For {{MinMaxScaler}}, only 3 arrays are needed (max, min, nnz).
> After the modification in the PR, the above example runs successfully.
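
As a rough back-of-the-envelope check on the report above: each of the 8 arrays holds one 8-byte entry per feature ({{Array[Double]}} or {{Array[Long]}}), so with 29,890,095 features a single array is already ~240 MB and the full set is ~1.9 GB, well beyond a 1 GB executor, while the 3 arrays {{MinMaxScaler}} actually needs come to ~0.7 GB. Illustrative arithmetic only:

{code}
// Illustrative arithmetic only (not from the original report): buffer sizes
// for the kdd2010 example, assuming 8 bytes per array element.
val numFeatures = 29890095L
val bytesPerArray  = numFeatures * 8L     // ~239 MB per Array[Double]/Array[Long]
val fullSummarizer = 8 * bytesPerArray    // ~1.9 GB for all 8 arrays -> OOM at 1 GB
val minMaxScaler   = 3 * bytesPerArray    // ~0.7 GB for max, min, nnz
val maxAbsScaler   = 1 * bytesPerArray    // ~0.24 GB for a single max-abs array
{code}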


