Posted to issues@spark.apache.org by "Hyukjin Kwon (JIRA)" <ji...@apache.org> on 2017/03/03 11:07:45 UTC

[jira] [Commented] (SPARK-19701) the `in` operator in pyspark is broken

    [ https://issues.apache.org/jira/browse/SPARK-19701?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15894155#comment-15894155 ] 

Hyukjin Kwon commented on SPARK-19701:
--------------------------------------

[~cloud_fan], I took a look at this out of curiosity. It seems this is what happens now:

{code}
class Column(object):
    def __contains__(self, item):
        print "I am contains"
        return Column()
    def __nonzero__(self):
        raise Exception("I am nonzero.")

>>> 1 in Column()        # `in` calls __contains__, then coerces its result to bool
I am contains
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
  File "<stdin>", line 6, in __nonzero__
Exception: I am nonzero.
{code}

It seems Python calls {{__contains__}} first, and then {{__nonzero__}} (or {{__bool__}}) is called on the returned {{Column()}} to coerce the result into a bool.

In other words, {{__nonzero__}} (Python 2) / {{__bool__}} (Python 3) gets invoked because {{__contains__}} forces its return value into a bool, unlike other operators, which are free to return a {{Column}}.
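To double-check the coercion, here is a small sketch in plain Python (no Spark involved, hypothetical {{Fake}} class) showing that whatever {{__contains__}} returns is passed through {{bool()}}, so {{in}} can never yield a column-like object:

{code}
# A sketch: CPython evaluates `x in y` roughly as bool(type(y).__contains__(y, x)).
class Fake(object):
    def __contains__(self, item):
        return "not a bool"   # any truthy object, deliberately not a bool

print(1 in Fake())            # prints True -- the returned object is coerced via bool()
{code}

With {{Column}}, that coercion step is exactly what hits {{__nonzero__}}/{{__bool__}} and raises.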

I also referred to the references below to check my assumption:

http://stackoverflow.com/questions/12244074/python-source-code-for-built-in-in-operator/12244378#12244378
http://stackoverflow.com/questions/38542543/functionality-of-python-in-vs-contains/38542777

I tested the code above against 1.6.3, 2.1.0 and the master branch. It seems this has never worked.
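For reference, a sketch of the expression-based form that does work today (the path is copied from the issue description; {{Column.contains}} builds a column expression instead of going through {{__contains__}}):

{code}
# Assumes a running pyspark shell where `spark` is the SparkSession.
textFile = spark.read.text("/Users/cloud/dev/spark/README.md")
linesWithSpark = textFile.filter(textFile.value.contains("Spark"))
{code}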

Should we maybe remove {{__contains__}} then?

> the `in` operator in pyspark is broken
> --------------------------------------
>
>                 Key: SPARK-19701
>                 URL: https://issues.apache.org/jira/browse/SPARK-19701
>             Project: Spark
>          Issue Type: Bug
>          Components: PySpark
>    Affects Versions: 2.2.0
>            Reporter: Wenchen Fan
>
> {code}
> >>> textFile = spark.read.text("/Users/cloud/dev/spark/README.md")
> >>> linesWithSpark = textFile.filter("Spark" in textFile.value)
> Traceback (most recent call last):
>   File "<stdin>", line 1, in <module>
>   File "/Users/cloud/product/spark/python/pyspark/sql/column.py", line 426, in __nonzero__
>     raise ValueError("Cannot convert column into bool: please use '&' for 'and', '|' for 'or', "
> ValueError: Cannot convert column into bool: please use '&' for 'and', '|' for 'or', '~' for 'not' when building DataFrame boolean expressions.
> {code}


