Posted to issues@spark.apache.org by "Marco Gaido (JIRA)" <ji...@apache.org> on 2017/10/19 11:44:00 UTC

[jira] [Comment Edited] (SPARK-20617) pyspark.sql filtering fails when using ~isin when there are nulls in column

    [ https://issues.apache.org/jira/browse/SPARK-20617?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16210898#comment-16210898 ] 

Marco Gaido edited comment on SPARK-20617 at 10/19/17 11:43 AM:
----------------------------------------------------------------

This is not a bug; it is the expected behavior according to the SQL standard: every operation involving null evaluates to null. You can easily check this by running:

{code:java}
spark.sql("select null in ('a')")
{code}
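
Collecting the result makes the three-valued logic visible. A minimal sketch, assuming an active SparkSession named spark (the "result" alias is only for readability):

{code:python}
# null IN ('a') evaluates to null (None on the Python side), not to False
row = spark.sql("select null in ('a') as result").collect()[0]
print(row.result)  # None
{code}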

Then, in a filter expression, null is treated as false, so the behavior you are seeing is the correct one (a small sketch follows below). Your first "workaround" is the right way to go.
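
A minimal sketch of that rule, assuming an active SparkSession named spark (the one-row DataFrame is purely illustrative):

{code:python}
from pyspark.sql import functions as F

df = spark.sql("select 1 as x")

# A null predicate lets no rows through: filter() treats null like false
print(df.filter(F.lit(None).cast("boolean")).count())  # 0
print(df.filter(F.lit(False)).count())                 # 0 as well
{code}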
Thanks.


was (Author: mgaido):
This is not a bug; it is the expected behavior according to the SQL standard: every operation involving null evaluates to null. You can easily check this by running:

{code:java}
// Some comments here
spark.sql("select null in ('a')")
{code}

Then, in a filter expression, null is treated as false, so the behavior you are seeing is the correct one. Your first "workaround" is the right way to go.
Thanks.

> pyspark.sql filtering fails when using ~isin when there are nulls in column
> ---------------------------------------------------------------------------
>
>                 Key: SPARK-20617
>                 URL: https://issues.apache.org/jira/browse/SPARK-20617
>             Project: Spark
>          Issue Type: Bug
>          Components: PySpark, SQL
>    Affects Versions: 2.2.0
>         Environment: Ubuntu Xenial 16.04, Python 3.5
>            Reporter: Ed Lee
>
> Hello, I encountered a filtering bug using 'isin' in pyspark.sql on version 2.2.0, Ubuntu 16.04.
> Below is an example to reproduce it:
> from pyspark.sql import SparkSession
> from pyspark.sql import functions as sf
> import pandas as pd
> spark = SparkSession.builder.master("local").appName("Word Count").getOrCreate()
> test_df = pd.DataFrame({"col1": [None, None, "a", "b", "c"],
>                         "col2": range(5)
>                         })
> test_sdf = spark.createDataFrame(test_df)
> test_sdf.show()
>  |col1|col2|
>  |null|   0|
>  |null|   1|
>  |   a|   2|
>  |   b|   3|
>  |   c|   4|
> # Below, filtering for col1 NOT in the list ['a'] drops the rows where col1 is null:
> test_sdf.filter(sf.col("col1").isin(["a"]) == False).show()
> Or:
> test_sdf.filter(~sf.col("col1").isin(["a"])).show()
> *Expecting*:
>  |col1|col2|
>  |null|   0|
>  |null|   1|
>  |   b|   3|
>  |   c|   4|
> *Got*:
>  |col1|col2|
>  |   b|   3|
>  |   c|   4|
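> A sketch of why (using the same test_sdf): selecting the predicate itself shows that it evaluates to null, not False, for the null rows, and filter() drops rows whose predicate is null:
> test_sdf.select("col1", (~sf.col("col1").isin(["a"])).alias("pred")).show()
>  |col1| pred|
>  |null| null|
>  |null| null|
>  |   a|false|
>  |   b| true|
>  |   c| true|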
> My workarounds:
> 1.  null behaves as if it were 'in' the list, so add an OR isNull condition (further alternatives are sketched after workaround 2):
> test_sdf.filter((sf.col("col1").isin(["a"]) == False) | (sf.col("col1").isNull())).show()
> To get:
>  |col1|col2|
>  |null|   0|
>  |null|   1|
>  |   c|   4|
>  |   b|   3|
> 2.  Use a left join and then filter:
> join_df = pd.DataFrame({"col1": ["a"],
>                         "isin": 1
>                         })
> join_sdf = spark.createDataFrame(join_df)
> test_sdf.join(join_sdf, on="col1", how="left") \
>     .filter(sf.col("isin").isNull()) \
>     .show()
> To get:
>  |col1|col2|isin|
>  |null|   0|null|
>  |null|   1|null|
>  |   c|   4|null|
>  |   b|   3|null|
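> Two further alternatives (hedged sketches on the same test_sdf and join_sdf; note eqNullSafe only exists on Spark 2.3+, newer than the 2.2.0 reported here):
> # a) Null-safe comparison: null <=> 'a' is False rather than null,
> #    so negating it keeps the null rows (Spark 2.3+ only):
> test_sdf.filter(~sf.col("col1").eqNullSafe("a")).show()
> # b) Left anti join: null join keys never match, so the null rows
> #    survive while the 'a' row is dropped:
> test_sdf.join(join_sdf, on="col1", how="left_anti").show()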
> Thank you



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

---------------------------------------------------------------------
To unsubscribe, e-mail: issues-unsubscribe@spark.apache.org
For additional commands, e-mail: issues-help@spark.apache.org