Posted to issues@spark.apache.org by "Apache Spark (JIRA)" <ji...@apache.org> on 2016/12/07 08:21:58 UTC

[jira] [Commented] (SPARK-18766) Push Down Filter Through BatchEvalPython

    [ https://issues.apache.org/jira/browse/SPARK-18766?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15728101#comment-15728101 ] 

Apache Spark commented on SPARK-18766:
--------------------------------------

User 'gatorsmile' has created a pull request for this issue:
https://github.com/apache/spark/pull/16193

> Push Down Filter Through BatchEvalPython
> ----------------------------------------
>
>                 Key: SPARK-18766
>                 URL: https://issues.apache.org/jira/browse/SPARK-18766
>             Project: Spark
>          Issue Type: Improvement
>          Components: PySpark, SQL
>    Affects Versions: 2.0.2
>            Reporter: Xiao Li
>
> Currently, when users use a Python UDF in a Filter, {{BatchEvalPython}} is always generated below {{FilterExec}}. However, not all of the predicates need to be evaluated after the Python UDF runs. Thus, we can push such predicates down through {{BatchEvalPython}}.
> {noformat}
> >>> df = spark.createDataFrame([(1, "1"), (2, "2"), (1, "2"), (1, "2")], ["key", "value"])
> >>> from pyspark.sql.functions import udf, col
> >>> from pyspark.sql.types import BooleanType
> >>> my_filter = udf(lambda a: a < 2, BooleanType())
> >>> sel = df.select(col("key"), col("value")).filter((my_filter(col("key"))) & (df.value < "2"))
> >>> sel.explain(True)
> {noformat}
> {noformat}
> == Physical Plan ==
> *Project [key#0L, value#1]
> +- *Filter ((isnotnull(value#1) && pythonUDF0#9) && (value#1 < 2))
>    +- BatchEvalPython [<lambda>(key#0L)], [key#0L, value#1, pythonUDF0#9]
>       +- Scan ExistingRDD[key#0L,value#1]
> {noformat}
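> A rough sketch of the plan shape this pushdown is aiming for (hand-written for illustration based on the plan above, not actual Spark output): the deterministic predicate {{value#1 < 2}} would be evaluated below {{BatchEvalPython}}, so only the UDF result check remains above it and the Python UDF is invoked on fewer rows.
> {noformat}
> == Physical Plan (illustrative sketch, not real output) ==
> *Project [key#0L, value#1]
> +- *Filter pythonUDF0#9
>    +- BatchEvalPython [<lambda>(key#0L)], [key#0L, value#1, pythonUDF0#9]
>       +- *Filter (isnotnull(value#1) && (value#1 < 2))
>          +- Scan ExistingRDD[key#0L,value#1]
> {noformat}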


