Posted to issues@spark.apache.org by "John Ayad (Jira)" <ji...@apache.org> on 2019/11/29 17:00:00 UTC
[jira] [Commented] (SPARK-30082) Zeros are being treated as NaNs
[ https://issues.apache.org/jira/browse/SPARK-30082?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16985118#comment-16985118 ]
John Ayad commented on SPARK-30082:
-----------------------------------
Just thought I'd post an update: the {{replace}} function does seem to replace {{NaN}}s correctly. Here's a better example, which also demonstrates that the problem is limited to columns of type {{Integer}}:
{code:java}
>>> df = spark.createDataFrame([(1.0, 0), (0.0, 3), (float('nan'), 0)], ("index", "value"))
>>> df.show()
+-----+-----+
|index|value|
+-----+-----+
| 1.0| 0|
| 0.0| 3|
| NaN| 0|
+-----+-----+

>>> df.replace(float('nan'), 2).show()
+-----+-----+
|index|value|
+-----+-----+
| 1.0| 2|
| 0.0| 3|
| 2.0| 2|
+-----+-----+
{code}
> Zeros are being treated as NaNs
> -------------------------------
>
> Key: SPARK-30082
> URL: https://issues.apache.org/jira/browse/SPARK-30082
> Project: Spark
> Issue Type: Bug
> Components: PySpark
> Affects Versions: 2.4.4
> Reporter: John Ayad
> Priority: Critical
>
> If you attempt to run
> {code:java}
> df = df.replace(float('nan'), somethingToReplaceWith)
> {code}
> it will replace all {{0}}s in columns of type {{Integer}}.
> Example code snippet to reproduce this:
> {code:java}
> from pyspark.sql import SQLContext
> spark = SQLContext(sc).sparkSession
> df = spark.createDataFrame([(1, 0), (2, 3), (3, 0)], ("index", "value"))
> df.show()
> df = df.replace(float('nan'), 5)
> df.show()
> {code}
> Here's the output I get when I run this code:
> {code:java}
> Welcome to
>       ____              __
>      / __/__  ___ _____/ /__
>     _\ \/ _ \/ _ `/ __/ '_/
>    /__ / .__/\_,_/_/ /_/\_\   version 2.4.4
>       /_/
> Using Python version 3.7.5 (default, Nov 1 2019 02:16:32)
> SparkSession available as 'spark'.
> >>> from pyspark.sql import SQLContext
> >>> spark = SQLContext(sc).sparkSession
> >>> df = spark.createDataFrame([(1, 0), (2, 3), (3, 0)], ("index", "value"))
> >>> df.show()
> +-----+-----+
> |index|value|
> +-----+-----+
> | 1| 0|
> | 2| 3|
> | 3| 0|
> +-----+-----+
> >>> df = df.replace(float('nan'), 5)
> >>> df.show()
> +-----+-----+
> |index|value|
> +-----+-----+
> | 1| 5|
> | 2| 3|
> | 3| 5|
> +-----+-----+
> >>>
> {code}
--
This message was sent by Atlassian Jira
(v8.3.4#803005)