Posted to issues@spark.apache.org by "Kevin Zhang (JIRA)" <ji...@apache.org> on 2018/02/23 17:42:00 UTC
[jira] [Updated] (SPARK-23498) Accuracy problem in comparison with string and integer
[ https://issues.apache.org/jira/browse/SPARK-23498?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]
Kevin Zhang updated SPARK-23498:
--------------------------------
Description:
When comparing a string column with an integer value, Spark SQL automatically casts the string operand to int, so the following SQL returns true in Hive but false in Spark:
{code:java}
select '1000.1'>1000
{code}
From the physical plan we can see the string operand was cast to int, which caused the accuracy loss:
{code:java}
*Project [false AS (CAST(1000.1 AS INT) > 1000)#4]
+- Scan OneRowRelation[]
{code}
To solve it, casting both operands of the binary operator to a wider common type such as double may be safe.
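The reported behavior and the proposed fix can be illustrated with a small stand-in sketch (plain Python, not Spark code; `int(float(s))` here mimics Spark's truncating CAST-to-INT semantics, since Python's own `int('1000.1')` would raise):

{code:python}
s = '1000.1'

# Current behavior: the string operand is cast to int, truncating the fraction.
as_int = int(float(s))        # stand-in for CAST('1000.1' AS INT) -> 1000
print(as_int > 1000)          # False, the result this issue reports

# Proposed behavior: cast both sides to a wider common type (double).
as_double = float(s)          # stand-in for CAST('1000.1' AS DOUBLE) -> 1000.1
print(as_double > 1000.0)     # True, matching Hive
{code}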
was:
When comparing a string column with an integer value, Spark SQL automatically casts the string operand to int, so the following SQL returns true in Hive but false in Spark:
{code:java}
select '1000.1'>1000
{code}
From the physical plan we can see the string operand was cast to int, which caused the accuracy loss:
{code:java}
*Project [false AS (CAST(1000.1 AS INT) > 1000)#4]
+- Scan OneRowRelation[]
{code}
Similar to SPARK-22469, I think it's safe to cast both operands to double as a common type.
> Accuracy problem in comparison with string and integer
> ------------------------------------------------------
>
> Key: SPARK-23498
> URL: https://issues.apache.org/jira/browse/SPARK-23498
> Project: Spark
> Issue Type: Bug
> Components: SQL
> Affects Versions: 2.2.0, 2.2.1, 2.3.0
> Reporter: Kevin Zhang
> Priority: Major
>
> When comparing a string column with an integer value, Spark SQL automatically casts the string operand to int, so the following SQL returns true in Hive but false in Spark:
>
> {code:java}
> select '1000.1'>1000
> {code}
>
> From the physical plan we can see the string operand was cast to int, which caused the accuracy loss:
> {code:java}
> *Project [false AS (CAST(1000.1 AS INT) > 1000)#4]
> +- Scan OneRowRelation[]
> {code}
> To solve it, casting both operands of the binary operator to a wider common type such as double may be safe.
--
This message was sent by Atlassian JIRA
(v7.6.3#76005)
---------------------------------------------------------------------
To unsubscribe, e-mail: issues-unsubscribe@spark.apache.org
For additional commands, e-mail: issues-help@spark.apache.org