Posted to issues@spark.apache.org by "David Vogelbacher (JIRA)" <ji...@apache.org> on 2018/07/29 21:56:00 UTC

[jira] [Comment Edited] (SPARK-24957) Decimal arithmetic can lead to wrong values using codegen

    [ https://issues.apache.org/jira/browse/SPARK-24957?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16561277#comment-16561277 ] 

David Vogelbacher edited comment on SPARK-24957 at 7/29/18 9:55 PM:
--------------------------------------------------------------------

[~mgaido] thanks for putting up the PR!

With whole-stage codegen disabled, I wasn't able to reproduce the wrong result for the specific example I gave:
{noformat}
scala> spark.conf.set("spark.sql.codegen.wholeStage", false)

scala> import org.apache.spark.sql.functions
import org.apache.spark.sql.functions

scala> val df = Seq(
     | ("a", BigDecimal("12.0")),
     | ("a", BigDecimal("12.0")),
     | ("a", BigDecimal("11.9999999988")),
     | ("a", BigDecimal("12.0")),
     | ("a", BigDecimal("12.0")),
     | ("a", BigDecimal("11.9999999988")),
     | ("a", BigDecimal("11.9999999988"))
     | ).toDF("text", "number")
df: org.apache.spark.sql.DataFrame = [text: string, number: decimal(38,18)]

scala> val df_grouped_1 = df.groupBy(df.col("text")).agg(functions.avg(df.col("number")).as("number"))
df_grouped_1: org.apache.spark.sql.DataFrame = [text: string, number: decimal(38,22)]

scala> df_grouped_1.collect()
res1: Array[org.apache.spark.sql.Row] = Array([a,11.9999999994857142857143])

scala> val df_grouped_2 = df_grouped_1.groupBy(df_grouped_1.col("text")).agg(functions.sum(df_grouped_1.col("number")).as("number"))
df_grouped_2: org.apache.spark.sql.DataFrame = [text: string, number: decimal(38,22)]

scala> df_grouped_2.collect()
res2: Array[org.apache.spark.sql.Row] = Array([a,11.9999999994857142857143])

scala> val df_total_sum = df_grouped_1.agg(functions.sum(df_grouped_1.col("number")).as("number"))
df_total_sum: org.apache.spark.sql.DataFrame = [number: decimal(38,22)]

scala> df_total_sum.collect()
res3: Array[org.apache.spark.sql.Row] = Array([11.9999999994857142857143])
{noformat}
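As a sanity check on the numbers above (an illustration in plain JVM {{BigDecimal}}, not the actual Spark code path; the names {{inputs}}, {{avg}}, {{shifted}} and the HALF_UP rounding are my assumptions, chosen because they match the printed output):

```scala
import java.math.{BigDecimal, RoundingMode}

// Recompute the expected average by hand: (4 x 12.0 + 3 x 11.9999999988) / 7,
// rounded to the result scale of 22 seen in decimal(38,22).
val inputs = Seq.fill(4)(new BigDecimal("12.0")) ++
             Seq.fill(3)(new BigDecimal("11.9999999988"))
val avg = inputs.reduce(_ add _).divide(new BigDecimal(7), 22, RoundingMode.HALF_UP)
println(avg)      // 11.9999999994857142857143 -- matches the collect() output

// The wrong value reported for df_grouped_2 under codegen is this average
// shifted by 10^14: the unscaled digits survive but the scale is off by 14.
val shifted = new BigDecimal(avg.unscaledValue, avg.scale - 14)
println(shifted)  // 1199999999948571.42857143
```

This does not reproduce the exact trailing digits from the buggy run (Spark recomputes the quotient at the wider scale), but it shows that the corruption is precisely a 10^14 magnitude shift, i.e. a scale mismatch of 14, not garbage digits.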


was (Author: dvogelbacher):
[~mgaido] I wasn't able to reproduce the incorrectness for the specific example I gave with wholestage codegen disabled, that's what I meant.

> Decimal arithmetic can lead to wrong values using codegen
> ---------------------------------------------------------
>
>                 Key: SPARK-24957
>                 URL: https://issues.apache.org/jira/browse/SPARK-24957
>             Project: Spark
>          Issue Type: Bug
>          Components: SQL
>    Affects Versions: 2.3.1
>            Reporter: David Vogelbacher
>            Priority: Major
>
> I noticed a bug when doing arithmetic on a DataFrame containing decimal values with codegen enabled.
> I tried to narrow it down to a small repro and got this (executed in spark-shell):
> {noformat}
> scala> val df = Seq(
>      | ("a", BigDecimal("12.0")),
>      | ("a", BigDecimal("12.0")),
>      | ("a", BigDecimal("11.9999999988")),
>      | ("a", BigDecimal("12.0")),
>      | ("a", BigDecimal("12.0")),
>      | ("a", BigDecimal("11.9999999988")),
>      | ("a", BigDecimal("11.9999999988"))
>      | ).toDF("text", "number")
> df: org.apache.spark.sql.DataFrame = [text: string, number: decimal(38,18)]
> scala> val df_grouped_1 = df.groupBy(df.col("text")).agg(functions.avg(df.col("number")).as("number"))
> df_grouped_1: org.apache.spark.sql.DataFrame = [text: string, number: decimal(38,22)]
> scala> df_grouped_1.collect()
> res0: Array[org.apache.spark.sql.Row] = Array([a,11.9999999994857142857143])
> scala> val df_grouped_2 = df_grouped_1.groupBy(df_grouped_1.col("text")).agg(functions.sum(df_grouped_1.col("number")).as("number"))
> df_grouped_2: org.apache.spark.sql.DataFrame = [text: string, number: decimal(38,22)]
> scala> df_grouped_2.collect()
> res1: Array[org.apache.spark.sql.Row] = Array([a,1199999999948571.4285714285714285714286])
> scala> val df_total_sum = df_grouped_1.agg(functions.sum(df_grouped_1.col("number")).as("number"))
> df_total_sum: org.apache.spark.sql.DataFrame = [number: decimal(38,22)]
> scala> df_total_sum.collect()
> res2: Array[org.apache.spark.sql.Row] = Array([11.9999999994857142857143])
> {noformat}
> The results of {{df_grouped_1}} and {{df_total_sum}} are correct, whereas the result of {{df_grouped_2}} is clearly incorrect: it is the correct result multiplied by {{10^14}}.
> When codegen is disabled, all results are correct.
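For anyone digging into this further, a sketch of how to inspect the generated code directly (assumes a Spark 2.3.x spark-shell with {{df_grouped_2}} defined as in the repro above; {{debugCodegen()}} comes from Spark's {{execution.debug}} package):

```scala
// Print the Java source that whole-stage codegen produced for each stage of
// the failing aggregation; the mishandled decimal scale should be visible there.
import org.apache.spark.sql.execution.debug._

spark.conf.set("spark.sql.codegen.wholeStage", true)
df_grouped_2.debugCodegen()
```

Comparing that output with the plan after setting {{spark.sql.codegen.wholeStage}} to {{false}} should narrow down where the scale is dropped.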



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)
