Posted to issues@spark.apache.org by "Wenchen Fan (JIRA)" <ji...@apache.org> on 2017/03/08 20:36:38 UTC
[jira] [Resolved] (SPARK-19727) Spark SQL round function modifies original column
[ https://issues.apache.org/jira/browse/SPARK-19727?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]
Wenchen Fan resolved SPARK-19727.
---------------------------------
Resolution: Fixed
Fix Version/s: 2.2.0
Issue resolved by pull request 17075
[https://github.com/apache/spark/pull/17075]
> Spark SQL round function modifies original column
> -------------------------------------------------
>
> Key: SPARK-19727
> URL: https://issues.apache.org/jira/browse/SPARK-19727
> Project: Spark
> Issue Type: Bug
> Components: SQL
> Affects Versions: 2.1.0
> Reporter: Sławomir Bogutyn
> Priority: Minor
> Fix For: 2.2.0
>
>
> {code:java}
> import org.apache.spark.sql.functions
> case class MyRow(value : BigDecimal)
> // A single-row DataFrame with one BigDecimal column
> val values = List(MyRow(BigDecimal.valueOf(1.23456789)))
> val dataFrame = spark.createDataFrame(values)
> dataFrame.show()
> // Rounding into a new column also corrupts the original "value" column
> dataFrame.withColumn("value_rounded", functions.round(dataFrame.col("value"))).show()
> {code}
> This produces output:
> {noformat}
> +--------------------+
> | value|
> +--------------------+
> |1.234567890000000000|
> +--------------------+
> +--------------------+-------------+
> | value|value_rounded|
> +--------------------+-------------+
> |1.000000000000000000| 1|
> +--------------------+-------------+
> {noformat}
> The same problem occurs when I use the round function to filter the DataFrame.
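The symptom above suggests the rounding code path mutated Spark's internal decimal value in place, so the rounded result overwrote the source column. The correct contract is the one plain java.math.BigDecimal follows: rounding returns a new value and never modifies the receiver. The sketch below illustrates that immutable contract with standard-library BigDecimal only; it is a standalone illustration, not Spark's internal Decimal class or the actual fix.

```java
import java.math.BigDecimal;
import java.math.RoundingMode;

public class RoundContract {
    public static void main(String[] args) {
        BigDecimal original = new BigDecimal("1.23456789");

        // setScale returns a NEW BigDecimal; the receiver is untouched.
        BigDecimal rounded = original.setScale(0, RoundingMode.HALF_UP);

        System.out.println(original); // still 1.23456789
        System.out.println(rounded);  // 1
    }
}
```

With this contract, a column derived via round can never alter the column it was computed from, which is the behavior the fix in pull request 17075 restores for Spark 2.2.0.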
--
This message was sent by Atlassian JIRA
(v6.3.15#6346)
---------------------------------------------------------------------
To unsubscribe, e-mail: issues-unsubscribe@spark.apache.org
For additional commands, e-mail: issues-help@spark.apache.org