Posted to issues@spark.apache.org by "Dongjoon Hyun (Jira)" <ji...@apache.org> on 2020/01/25 09:53:00 UTC

[jira] [Updated] (SPARK-26021) -0.0 and 0.0 not treated consistently, doesn't match Hive

     [ https://issues.apache.org/jira/browse/SPARK-26021?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Dongjoon Hyun updated SPARK-26021:
----------------------------------
    Target Version/s: 3.0.0

> -0.0 and 0.0 not treated consistently, doesn't match Hive
> ---------------------------------------------------------
>
>                 Key: SPARK-26021
>                 URL: https://issues.apache.org/jira/browse/SPARK-26021
>             Project: Spark
>          Issue Type: Bug
>          Components: SQL
>    Affects Versions: 2.4.0
>            Reporter: Sean R. Owen
>            Assignee: Alon Doron
>            Priority: Critical
>              Labels: correctness
>             Fix For: 3.0.0
>
>
> Per [~adoron], [~mccheah], and SPARK-24834, I'm splitting this out as a new issue:
> The underlying issue is how Spark and Hive treat 0.0 and -0.0, which compare as numerically equal but are not the same double value (their bit representations differ):
> In Hive, 0.0 and -0.0 have been equal since https://issues.apache.org/jira/browse/HIVE-11174.
> That's not the case in Spark SQL, where "group by" (non-codegen) treats them as different values. Since their hashes differ, they end up in different buckets of UnsafeFixedWidthAggregationMap.
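> For illustration (a minimal sketch using plain JDK calls, not code from Spark): the two values compare equal with ==, yet their bit patterns, and therefore their hash codes, differ:
> {code:java}
> // 0.0 and -0.0 compare equal as doubles...
> println(0.0d == -0.0d)                                  // true
> // ...but their IEEE 754 bit patterns differ (only the sign bit is set in -0.0)...
> println(java.lang.Double.doubleToRawLongBits(0.0d))     // 0
> println(java.lang.Double.doubleToRawLongBits(-0.0d))    // -9223372036854775808
> // ...so the boxed hash codes differ as well:
> println(java.lang.Double.valueOf(0.0d).hashCode())      // 0
> println(java.lang.Double.valueOf(-0.0d).hashCode())     // -2147483648
> {code}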
> In addition, there's an inconsistency when codegen is used, as the following snippets show:
> {code:java}
> println(Seq(0.0d, 0.0d, -0.0d).toDF("i").groupBy("i").count().collect().mkString(", "))
> {code}
> [0.0,3]
> {code:java}
> println(Seq(0.0d, -0.0d, 0.0d).toDF("i").groupBy("i").count().collect().mkString(", "))
> {code}
> [0.0,1], [-0.0,2]
> {code:java}
> spark.conf.set("spark.sql.codegen.wholeStage", "false")
> println(Seq(0.0d, -0.0d, 0.0d).toDF("i").groupBy("i").count().collect().mkString(", "))
> {code}
> [0.0,2], [-0.0,1]
> Note that the only difference between the first two snippets is the order of the elements in the Seq.
> This inconsistency results from the different partitioning of the Seq and from the use of the generated fast hash map in the first, partial aggregation.
> It looks like we need to add a specific check for -0.0 before hashing (in both codegen and non-codegen modes) if we want to fix this.
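> A minimal sketch of what such a check could look like (the helper below is hypothetical, not Spark's actual patch; it only illustrates the normalization idea):
> {code:java}
> // Hypothetical helper: map -0.0 to +0.0 before hashing so that values
> // which compare equal with == always land in the same hash bucket.
> def normalizeZero(d: Double): Double =
>   if (d == 0.0d) 0.0d  // true for both 0.0 and -0.0, so both map to +0.0
>   else d
>
> println(java.lang.Double.valueOf(normalizeZero(0.0d)).hashCode())   // 0
> println(java.lang.Double.valueOf(normalizeZero(-0.0d)).hashCode())  // 0
> {code}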



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

---------------------------------------------------------------------
To unsubscribe, e-mail: issues-unsubscribe@spark.apache.org
For additional commands, e-mail: issues-help@spark.apache.org