Posted to issues@spark.apache.org by "Tim Armstrong (Jira)" <ji...@apache.org> on 2021/04/23 21:50:00 UTC
[jira] [Updated] (SPARK-35207) hash() and other hash builtins do not normalize negative zero
[ https://issues.apache.org/jira/browse/SPARK-35207?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]
Tim Armstrong updated SPARK-35207:
----------------------------------
Description:
I would generally expect that {{x = y => hash(x) = hash(y)}}. However, +0.0 and -0.0 hash to different values for floating-point types.
{noformat}
scala> spark.sql("select hash(cast('0.0' as double)), hash(cast('-0.0' as double))").show
+-------------------------+--------------------------+
|hash(CAST(0.0 AS DOUBLE))|hash(CAST(-0.0 AS DOUBLE))|
+-------------------------+--------------------------+
|              -1670924195|                -853646085|
+-------------------------+--------------------------+
scala> spark.sql("select cast('0.0' as double) == cast('-0.0' as double)").show
+--------------------------------------------+
|(CAST(0.0 AS DOUBLE) = CAST(-0.0 AS DOUBLE))|
+--------------------------------------------+
|                                        true|
+--------------------------------------------+
{noformat}
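The hashes differ because the hash is computed from the value's bit representation, and +0.0 and -0.0 have different bit patterns even though they compare equal. A minimal plain-Scala illustration of both facts, and of how -0.0 can fall out of ordinary arithmetic (my own sketch, not Spark code):
{noformat}
// +0.0 and -0.0 compare equal but carry different raw bits, so any
// hash derived from the bit pattern distinguishes them.
val pos = 0.0
val neg = -0.0
println(pos == neg)                                // true
println(java.lang.Double.doubleToRawLongBits(pos)) // 0
println(java.lang.Double.doubleToRawLongBits(neg)) // -9223372036854775808 (only the sign bit set)

// -0.0 arises from ordinary arithmetic, e.g. multiplying zero by a negative:
println(0.0 * -1.0)                                // -0.0
{noformat}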
I'm not sure how likely this is to cause issues in practice: only a limited number of calculations can produce -0.0, and joining or aggregating on floating-point keys is bad practice as a general rule. Still, I think it would be safer if we normalised -0.0 to +0.0 before hashing, as sketched below.
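A minimal sketch of the normalisation I have in mind, applied to the value before it reaches the hash function. The helper name is hypothetical; comparing against 0.0 also matches -0.0, since the two compare equal under IEEE 754, and NaN passes through because NaN == 0.0 is false:
{noformat}
// Hypothetical helper: map -0.0 to +0.0 before hashing; every other
// value, including NaN, is returned unchanged.
def normalizeZero(d: Double): Double =
  if (d == 0.0) 0.0 else d

normalizeZero(-0.0)  // 0.0
normalizeZero(0.0)   // 0.0
normalizeZero(-1.5)  // -1.5
{noformat}
With this in place, {{hash(x)}} and {{hash(y)}} would agree whenever {{x = y}}. If I understand correctly, Spark already normalises -0.0 in aggregation and join keys (the NormalizeFloatingNumbers optimizer rule), so extending the same treatment to the hash expressions seems consistent.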
> hash() and other hash builtins do not normalize negative zero
> -------------------------------------------------------------
>
> Key: SPARK-35207
> URL: https://issues.apache.org/jira/browse/SPARK-35207
> Project: Spark
> Issue Type: Bug
> Components: SQL
> Affects Versions: 3.1.1
> Reporter: Tim Armstrong
> Priority: Major
> Labels: correctness
>