Posted to issues@spark.apache.org by "Kevin Zhang (JIRA)" <ji...@apache.org> on 2018/02/23 17:38:00 UTC

[jira] [Created] (SPARK-23498) Accuracy problem in comparison with string and integer

Kevin Zhang created SPARK-23498:
-----------------------------------

             Summary: Accuracy problem in comparison with string and integer
                 Key: SPARK-23498
                 URL: https://issues.apache.org/jira/browse/SPARK-23498
             Project: Spark
          Issue Type: Bug
          Components: SQL
    Affects Versions: 2.2.1, 2.2.0, 2.3.0
            Reporter: Kevin Zhang


When comparing a string column with an integer value, Spark SQL automatically casts the string operand to int. As a result, the following SQL returns true in Hive but false in Spark:

 
{code:java}
select '1000.1'>1000
{code}
 

From the physical plan we can see that the string operand was cast to int, which caused the loss of accuracy:
{code:java}
*Project [false AS (CAST(1000.1 AS INT) > 1000)#4]
+- Scan OneRowRelation[]
{code}
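For reference, a minimal sketch of how to reproduce and inspect this in spark-shell (assuming a default SparkSession bound to the name spark):
{code:scala}
// Inspect the plan: it contains CAST('1000.1' AS INT), so the fractional part is dropped
spark.sql("select '1000.1' > 1000").explain()

// Evaluate the comparison: returns false in the affected Spark versions, while Hive returns true
spark.sql("select '1000.1' > 1000").show()
{code}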
Similar to SPARK-22469, I think it's safe to use double as the common type and cast both operands to it.
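Until the coercion rule is changed, a workaround sketch (not the proposed fix itself) is to cast the string side to double explicitly:
{code:scala}
// Explicitly casting the string operand to double preserves the fractional part,
// so the comparison returns true, matching Hive's behaviour
spark.sql("select cast('1000.1' as double) > 1000").show()
{code}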


