Posted to issues@spark.apache.org by "Yuming Wang (JIRA)" <ji...@apache.org> on 2017/08/05 09:18:00 UTC

[jira] [Updated] (SPARK-21646) BinaryComparison shouldn't auto cast string to int/long

     [ https://issues.apache.org/jira/browse/SPARK-21646?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Yuming Wang updated SPARK-21646:
--------------------------------
    Description: 
How to reproduce:
Hive:
{code:sql}
$ hive -S
hive> create table spark_21646(c1 string, c2 string);
hive> insert into spark_21646 values('92233720368547758071', 'a');
hive> insert into spark_21646 values('21474836471', 'b');
hive> insert into spark_21646 values('10', 'c');
hive> select * from spark_21646 where c1 > 0;
92233720368547758071	a
10	c
21474836471	b
hive>
{code}

{code:sql}
$ spark-sql -S
spark-sql> select * from spark_21646 where c1 > 0;
10      c                                                                       
spark-sql> select * from spark_21646 where c1 > 0L;
21474836471	b
10	c
spark-sql> explain select * from spark_21646 where c1 > 0;
== Physical Plan ==
*Project [c1#14, c2#15]
+- *Filter (isnotnull(c1#14) && (cast(c1#14 as int) > 0))
   +- *FileScan parquet spark_21646[c1#14,c2#15] Batched: true, Format: Parquet, Location: InMemoryFileIndex[viewfs://cluster4/user/hive/warehouse/spark_21646], PartitionFilters: [], PushedFilters: [IsNotNull(c1)], ReadSchema: struct<c1:string,c2:string>
spark-sql> 
{code}

As you can see, Spark automatically casts c1 to int type; if the value is out of the integer range, the cast yields null, so the row is filtered out and the result differs from Hive's.
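The filtering behavior above can be sketched in plain Python (a minimal simulation of the implicit-cast semantics described in this report, not Spark's actual implementation; the helper name `cast_string` is made up for illustration):

```python
# Sketch: casting a string to int/long yields None (null) when the value
# is out of range, so a comparison against that null is never true and
# the filter silently drops the row.

INT_MIN, INT_MAX = -2**31, 2**31 - 1      # 32-bit int range
LONG_MIN, LONG_MAX = -2**63, 2**63 - 1    # 64-bit long range

def cast_string(s, lo, hi):
    """Return the numeric value if it fits in [lo, hi], else None (null)."""
    try:
        v = int(s)
    except ValueError:
        return None
    return v if lo <= v <= hi else None

rows = [('92233720368547758071', 'a'), ('21474836471', 'b'), ('10', 'c')]

# c1 > 0  -> the literal 0 is an int, so c1 is cast to int
as_int = [r for r in rows
          if (v := cast_string(r[0], INT_MIN, INT_MAX)) is not None and v > 0]
# c1 > 0L -> the literal 0L is a long, so c1 is cast to long
as_long = [r for r in rows
           if (v := cast_string(r[0], LONG_MIN, LONG_MAX)) is not None and v > 0]

print(as_int)   # only ('10', 'c') survives the int cast
print(as_long)  # ('21474836471', 'b') and ('10', 'c') survive the long cast
```

Both results match the spark-sql output shown above: 21474836471 overflows int but fits in long, while 92233720368547758071 overflows even long, so it is dropped in both cases. Hive compares the values numerically instead, which is why it returns all three rows.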

  was:
Hive:
{code:sql}
$ hive -S
hive> create table tmp.wym_spark_123(c1 string, c2 string);
hive> insert into tmp.wym_spark_123 values('92233720368547758071', 'a');
hive> insert into tmp.wym_spark_123 values('21474836471', 'b');
hive> insert into tmp.wym_spark_123 values('10', 'c');
hive> select * from tmp.wym_spark_123 where c1 > 0;
92233720368547758071	a
10	c
21474836471	b
hive>
{code}

{code:sql}
$ spark-sql -S
spark-sql> select * from tmp.wym_spark_123 where c1 > 0;
10      c                                                                       
spark-sql> select * from tmp.wym_spark_123 where c1 > 0L;
21474836471	b
10	c
spark-sql> explain select * from tmp.wym_spark_123 where c1 > 0;
== Physical Plan ==
*Project [c1#14, c2#15]
+- *Filter (isnotnull(c1#14) && (cast(c1#14 as int) > 0))
   +- *FileScan parquet tmp.wym_spark_123[c1#14,c2#15] Batched: true, Format: Parquet, Location: InMemoryFileIndex[viewfs://cluster4/user/hive/warehouse/tmp.db/wym_spark_123], PartitionFilters: [], PushedFilters: [IsNotNull(c1)], ReadSchema: struct<c1:string,c2:string>
spark-sql> 
{code}


> BinaryComparison shouldn't auto cast string to int/long
> -------------------------------------------------------
>
>                 Key: SPARK-21646
>                 URL: https://issues.apache.org/jira/browse/SPARK-21646
>             Project: Spark
>          Issue Type: Bug
>          Components: SQL
>    Affects Versions: 2.2.0
>            Reporter: Yuming Wang
>
> How to reproduce:
> Hive:
> {code:sql}
> $ hive -S
> hive> create table spark_21646(c1 string, c2 string);
> hive> insert into spark_21646 values('92233720368547758071', 'a');
> hive> insert into spark_21646 values('21474836471', 'b');
> hive> insert into spark_21646 values('10', 'c');
> hive> select * from spark_21646 where c1 > 0;
> 92233720368547758071	a
> 10	c
> 21474836471	b
> hive>
> {code}
> {code:sql}
> $ spark-sql -S
> spark-sql> select * from spark_21646 where c1 > 0;
> 10      c                                                                       
> spark-sql> select * from spark_21646 where c1 > 0L;
> 21474836471	b
> 10	c
> spark-sql> explain select * from spark_21646 where c1 > 0;
> == Physical Plan ==
> *Project [c1#14, c2#15]
> +- *Filter (isnotnull(c1#14) && (cast(c1#14 as int) > 0))
>    +- *FileScan parquet spark_21646[c1#14,c2#15] Batched: true, Format: Parquet, Location: InMemoryFileIndex[viewfs://cluster4/user/hive/warehouse/spark_21646], PartitionFilters: [], PushedFilters: [IsNotNull(c1)], ReadSchema: struct<c1:string,c2:string>
> spark-sql> 
> {code}
> As you can see, Spark automatically casts c1 to int type; if the value is out of the integer range, the cast yields null, so the row is filtered out and the result differs from Hive's.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

---------------------------------------------------------------------
To unsubscribe, e-mail: issues-unsubscribe@spark.apache.org
For additional commands, e-mail: issues-help@spark.apache.org