Posted to issues@spark.apache.org by "Sean Owen (JIRA)" <ji...@apache.org> on 2015/08/14 10:47:46 UTC

[jira] [Commented] (SPARK-9971) MaxFunction not working correctly with columns containing Double.NaN

    [ https://issues.apache.org/jira/browse/SPARK-9971?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14696716#comment-14696716 ] 

Sean Owen commented on SPARK-9971:
----------------------------------

My instinct is that this should in fact result in NaN; NaN is not generally ignored. For example, {{math.max(1.0, Double.NaN)}} and {{math.min(1.0, Double.NaN)}} are both NaN.
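A quick REPL check (illustrative, from a fresh session):

{code}
scala> math.max(1.0, Double.NaN)
res0: Double = NaN

scala> math.min(1.0, Double.NaN)
res1: Double = NaN
{code}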

But are you ready for some weird?

{code}
scala> Seq(1.0, Double.NaN).max
res23: Double = NaN

scala> Seq(Double.NaN, 1.0).max
res24: Double = 1.0

scala> Seq(5.0, Double.NaN, 1.0).max
res25: Double = 1.0

scala> Seq(5.0, Double.NaN, 1.0, 6.0).max
res26: Double = 6.0

scala> Seq(5.0, Double.NaN, 1.0, 6.0, Double.NaN).max
res27: Double = NaN
{code}
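The explanation appears to be that {{TraversableOnce.max}} reduces the collection with {{Ordering.gteq}}, and every primitive comparison against NaN is false, so whether NaN survives a given step depends on which side of the comparison it lands on. A sketch of the same reduction done by hand (illustrative; assumes the reduce-with-{{>=}} implementation):

{code}
scala> // keep x when x >= y; any comparison involving NaN is false
scala> Seq(5.0, Double.NaN, 1.0).reduceLeft((x, y) => if (x >= y) x else y)
res0: Double = 1.0
{code}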

Either way, the Scala collection library isn't a good reference for behavior here. What about Java?

{code}
scala> java.util.Collections.max(java.util.Arrays.asList(new java.lang.Double(1.0), new java.lang.Double(Double.NaN)))
res33: Double = NaN

scala> java.util.Collections.max(java.util.Arrays.asList(new java.lang.Double(Double.NaN), new java.lang.Double(1.0)))
res34: Double = NaN
{code}
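This is order-independent because {{Collections.max}} uses {{compareTo}}, and {{java.lang.Double.compareTo}} defines a total ordering in which NaN is greater than every other value, including positive infinity:

{code}
scala> java.lang.Double.compare(Double.NaN, Double.PositiveInfinity)
res0: Int = 1
{code}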

That makes more sense, at least. I think this is the correct behavior, and you should filter out NaN values if you want them ignored, since NaN is generally not something the language ignores.
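For illustration, a minimal sketch of that filtering approach in plain Scala (the same idea applies to a DataFrame: drop the NaN rows before aggregating):

{code}
scala> Seq(5.0, Double.NaN, 1.0).filterNot(_.isNaN).max
res0: Double = 5.0
{code}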

> MaxFunction not working correctly with columns containing Double.NaN
> --------------------------------------------------------------------
>
>                 Key: SPARK-9971
>                 URL: https://issues.apache.org/jira/browse/SPARK-9971
>             Project: Spark
>          Issue Type: Bug
>          Components: SQL
>    Affects Versions: 1.4.1
>            Reporter: Frank Rosner
>            Priority: Minor
>
> h4. Problem Description
> When using the {{max}} function on a {{DoubleType}} column that contains {{Double.NaN}} values, the returned maximum value will be {{Double.NaN}}. 
> This is because it compares all values with the running maximum. However, {{x < Double.NaN}} evaluates to false for every {{x: Double}}, and so does {{x > Double.NaN}}.
> h4. How to Reproduce
> {code}
> import org.apache.spark.sql.{SQLContext, Row}
> import org.apache.spark.sql.types._
> import org.apache.spark.sql.functions.max
> val sql = new SQLContext(sc)
> val rdd = sc.makeRDD(List(Row(Double.NaN), Row(-10d), Row(0d)))
> val dataFrame = sql.createDataFrame(rdd, StructType(List(
>   StructField("col", DoubleType, false)
> )))
> dataFrame.select(max("col")).first
> // returns org.apache.spark.sql.Row = [NaN]
> {code}
> h4. Solution
> The {{max}} and {{min}} functions should ignore NaN values, as they are not numbers. If a column contains only NaN values, then its maximum and minimum are not defined.
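> A sketch of the intended semantics in plain Scala (illustrative only, not the proposed implementation):
> {code}
> // NaN values are skipped; an all-NaN input yields None rather than a number
> Seq(Double.NaN, -10d, 0d).filterNot(_.isNaN).reduceOption(_ max _)
> // Some(0.0)
> {code}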


