Posted to issues@spark.apache.org by "Apache Spark (Jira)" <ji...@apache.org> on 2023/03/15 01:22:00 UTC

[jira] [Assigned] (SPARK-42775) approx_percentile produces wrong results for large decimals.

     [ https://issues.apache.org/jira/browse/SPARK-42775?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Apache Spark reassigned SPARK-42775:
------------------------------------

    Assignee:     (was: Apache Spark)

> approx_percentile produces wrong results for large decimals.
> ------------------------------------------------------------
>
>                 Key: SPARK-42775
>                 URL: https://issues.apache.org/jira/browse/SPARK-42775
>             Project: Spark
>          Issue Type: Bug
>          Components: SQL
>    Affects Versions: 2.1.0, 2.2.0, 2.3.0, 2.4.0, 3.0.0, 3.1.0, 3.2.0, 3.3.0, 3.4.0
>            Reporter: Chenhao Li
>            Priority: Major
>
> In the {{approx_percentile}} expression, Spark casts decimal input to double to update the aggregation state ([ApproximatePercentile.scala#L181|https://github.com/apache/spark/blob/933dc0c42f0caf74aaa077fd4f2c2e7208452b9b/sql/catalyst/src/main/scala/org/apache/spark/sql/catalyst/expressions/aggregate/ApproximatePercentile.scala#L181]) and casts the resulting double back to decimal ([ApproximatePercentile.scala#L206|https://github.com/apache/spark/blob/933dc0c42f0caf74aaa077fd4f2c2e7208452b9b/sql/catalyst/src/main/scala/org/apache/spark/sql/catalyst/expressions/aggregate/ApproximatePercentile.scala#L206]). The precision loss in these casts can push the result decimal outside its declared precision range, which leads to the following counter-intuitive results:
> {code:sql}
> spark-sql> select approx_percentile(col, 0.5) from values (9999999999999999999) as tab(col);
> NULL
> spark-sql> select approx_percentile(col, 0.5) is null from values (9999999999999999999) as tab(col);
> false
> spark-sql> select cast(approx_percentile(col, 0.5) as string) from values (9999999999999999999) as tab(col);
> 10000000000000000000
> spark-sql> desc select approx_percentile(col, 0.5) from values (9999999999999999999) as tab(col);
> approx_percentile(col, 0.5, 10000)	decimal(19,0) 
> {code}
> The result is actually not null, which is why the second query returns false. The first query displays NULL because the value 10000000000000000000 does not fit into {{decimal(19, 0)}}, the declared result type.
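> The round trip through double can be reproduced outside Spark. A minimal sketch of the precision loss in plain Scala (not Spark's internal code):
> {code:scala}
> // 9999999999999999999 has 19 significant digits, the maximum for decimal(19, 0).
> val original = BigDecimal("9999999999999999999")
>
> // Casting to double rounds to the nearest representable value, 1.0E19.
> val asDouble = original.toDouble
>
> // Converting back yields a 20-digit value that no longer fits decimal(19, 0).
> val roundTripped = BigDecimal(asDouble)
> println(roundTripped) // 10000000000000000000
> {code}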
> A suggested fix is to use {{Decimal.changePrecision}} here to check that the result fits its declared precision, and to genuinely return null or throw an exception when it does not; see the sketch below.
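> A minimal sketch of that check, assuming Spark's internal {{org.apache.spark.sql.types.Decimal}} API ({{doubleToDecimal}} is a hypothetical helper for illustration, not the actual integration point in {{ApproximatePercentile}}):
> {code:scala}
> import org.apache.spark.sql.types.{Decimal, DecimalType}
>
> // Hypothetical helper: convert the double taken from the percentile digest
> // back to a decimal of the declared result type.
> def doubleToDecimal(d: Double, dt: DecimalType): Decimal = {
>   val dec = Decimal(d)
>   // changePrecision returns false when the value cannot be represented with
>   // the requested precision and scale; surface that as null (or an error)
>   // instead of an out-of-range decimal.
>   if (dec.changePrecision(dt.precision, dt.scale)) dec else null
> }
> {code}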



--
This message was sent by Atlassian Jira
(v8.20.10#820010)

---------------------------------------------------------------------
To unsubscribe, e-mail: issues-unsubscribe@spark.apache.org
For additional commands, e-mail: issues-help@spark.apache.org