Posted to issues@spark.apache.org by "Joseph K. Bradley (JIRA)" <ji...@apache.org> on 2016/04/08 04:40:25 UTC
[jira] [Commented] (SPARK-14478) Should StandardScaler use biased variance to scale?
[ https://issues.apache.org/jira/browse/SPARK-14478?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15231532#comment-15231532 ]
Joseph K. Bradley commented on SPARK-14478:
-------------------------------------------
I'm listing this as "Major" priority since it is a behavioral change, and it would be good to decide before 2.0.
> Should StandardScaler use biased variance to scale?
> ---------------------------------------------------
>
> Key: SPARK-14478
> URL: https://issues.apache.org/jira/browse/SPARK-14478
> Project: Spark
> Issue Type: Question
> Components: ML, MLlib
> Reporter: Joseph K. Bradley
>
> Currently, MLlib's StandardScaler scales columns using the unbiased standard deviation. This matches what R's scale() function does.
> However, this is a bit odd for two reasons:
> * Optimization/ML algorithms that require scaled columns generally assume unit variance (for mathematical convenience). That means scaling by the biased standard deviation.
> * scikit-learn, MLlib's GLMs, and R's glmnet package all use biased variance.
> *Question*: Should we switch to biased variance?
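To make the distinction concrete, below is a minimal sketch in plain Scala (no Spark dependency; the column values are made up for illustration) contrasting the two estimators. It also checks which choice of scaling actually leaves a column with variance exactly 1, the property the optimization argument above relies on.

object BiasedVsUnbiased {
  def main(args: Array[String]): Unit = {
    // Hypothetical column values, chosen only for illustration.
    val xs = Seq(1.0, 2.0, 3.0, 4.0)
    val n = xs.length
    val mean = xs.sum / n
    val ss = xs.map(x => (x - mean) * (x - mean)).sum

    val biased = ss / n          // divides by n
    val unbiased = ss / (n - 1)  // divides by n - 1: current StandardScaler behavior

    // Recompute the (biased) variance of a column after scaling.
    def biasedVar(ys: Seq[Double]): Double = {
      val m = ys.sum / ys.length
      ys.map(y => (y - m) * (y - m)).sum / ys.length
    }
    val byBiased = xs.map(x => (x - mean) / math.sqrt(biased))
    val byUnbiased = xs.map(x => (x - mean) / math.sqrt(unbiased))

    println(f"biased variance:   $biased%.4f")    // 1.2500
    println(f"unbiased variance: $unbiased%.4f")  // 1.6667
    println(f"variance after scaling by biased std:   ${biasedVar(byBiased)}%.4f")   // 1.0000
    println(f"variance after scaling by unbiased std: ${biasedVar(byUnbiased)}%.4f") // 0.7500
  }
}

With n = 4 the two estimates differ by the factor n / (n - 1) = 4/3; only scaling by the biased standard deviation yields variance exactly 1, which matches the convention of scikit-learn and glmnet noted above.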
--
This message was sent by Atlassian JIRA
(v6.3.4#6332)
---------------------------------------------------------------------
To unsubscribe, e-mail: issues-unsubscribe@spark.apache.org
For additional commands, e-mail: issues-help@spark.apache.org