Posted to issues@spark.apache.org by "Yanbo Liang (JIRA)" <ji...@apache.org> on 2016/08/19 09:02:22 UTC

[jira] [Comment Edited] (SPARK-17141) MinMaxScaler behaves weird when min and max have the same value and some values are NaN

    [ https://issues.apache.org/jira/browse/SPARK-17141?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15427860#comment-15427860 ] 

Yanbo Liang edited comment on SPARK-17141 at 8/19/16 9:01 AM:
--------------------------------------------------------------

In the existing code, {{MinMaxScaler}} handles NaN values inconsistently:
* If a column is constant, i.e. max == min, the {{MinMaxScalerModel}} transformation outputs 0.5 for every row, even when the original value is NaN.
* Otherwise, NaN values remain NaN after the transformation.

I think we should unify the behavior by keeping NaN values under all conditions, since we don't know how to transform a NaN value. For comparison, Python's scikit-learn throws an exception when the dataset contains NaN.
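
This is only an illustrative sketch of the unified behavior I have in mind, not the actual {{MinMaxScalerModel}} code (the parameter names here are mine): a per-element rescale with an explicit NaN check, so NaN survives even when the original range is zero.

{code:scala}
// Illustrative sketch only, not the real Spark implementation: rescale one
// element into [targetMin, targetMax], preserving NaN even when the column
// is constant (colMax == colMin).
def rescale(value: Double,
            colMin: Double, colMax: Double,
            targetMin: Double, targetMax: Double): Double = {
  if (value.isNaN) {
    Double.NaN                                   // proposed: never rescale NaN
  } else {
    val range = colMax - colMin
    val raw = if (range != 0.0) (value - colMin) / range else 0.5
    raw * (targetMax - targetMin) + targetMin
  }
}

// With this logic, rescale(Double.NaN, 1.0, 1.0, 0.0, 1.0) returns NaN
// instead of the 0.5 produced today.
{code}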



> MinMaxScaler behaves weird when min and max have the same value and some values are NaN
> ---------------------------------------------------------------------------------------
>
>                 Key: SPARK-17141
>                 URL: https://issues.apache.org/jira/browse/SPARK-17141
>             Project: Spark
>          Issue Type: Bug
>          Components: ML
>    Affects Versions: 1.6.2, 2.0.0
>         Environment: Databrick's Community, Spark 2.0 + Scala 2.10
>            Reporter: Alberto Bonsanto
>            Priority: Minor
>
> When you have a {{DataFrame}} with a column named {{features}} that is a {{DenseVector}}, and the column's *maximum* and *minimum* are equal while some of its values are {{Double.NaN}}, those NaN values get replaced by 0.5; I believe they should keep their original value.
> I know how to fix it, but I have never made a pull request. You can check the bug in this [notebook|https://databricks-prod-cloudfront.cloud.databricks.com/public/4027ec902e239c93eaaa8714f173bcfc/2485090270202665/3126465289264547/8589256059752547/latest.html]
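
For reference, the scenario described above can be reproduced with something along these lines (the toy data and the local {{SparkSession}} setup are illustrative; it assumes Spark 2.0 and the {{org.apache.spark.ml}} API). Column 0 is constant and contains a NaN, column 1 has a real range, and per this report only the first NaN comes back as 0.5.

{code:scala}
import org.apache.spark.ml.feature.MinMaxScaler
import org.apache.spark.ml.linalg.Vectors
import org.apache.spark.sql.SparkSession

// Local session for a quick, standalone reproduction.
val spark = SparkSession.builder()
  .master("local[*]")
  .appName("SPARK-17141 repro")
  .getOrCreate()

// Column 0 is constant (min == max) and contains a NaN; column 1 is not constant.
val df = spark.createDataFrame(Seq(
  (0, Vectors.dense(1.0, 2.0)),
  (1, Vectors.dense(1.0, 4.0)),
  (2, Vectors.dense(Double.NaN, Double.NaN))
)).toDF("id", "features")

val scaler = new MinMaxScaler().setInputCol("features").setOutputCol("scaled")
val model = scaler.fit(df)

// Per this report: the NaN in the constant column is rescaled to 0.5,
// while the NaN in the non-constant column stays NaN.
model.transform(df).select("scaled").show(truncate = false)
{code}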


