Posted to issues@spark.apache.org by "Takeshi Yamamuro (JIRA)" <ji...@apache.org> on 2017/06/02 07:07:04 UTC

[jira] [Commented] (SPARK-20939) Do not duplicate user-defined functions while optimizing logical query plans

    [ https://issues.apache.org/jira/browse/SPARK-20939?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16034247#comment-16034247 ] 

Takeshi Yamamuro commented on SPARK-20939:
------------------------------------------

This is not a bug, so I changed the issue type to "Improvement".
I feel this optimization still makes sense in most cases, so I do not think we have a strong reason to remove this rule.
In future releases, if Spark implements functionality to estimate join cardinality, we could make this rule smarter...

> Do not duplicate user-defined functions while optimizing logical query plans
> ----------------------------------------------------------------------------
>
>                 Key: SPARK-20939
>                 URL: https://issues.apache.org/jira/browse/SPARK-20939
>             Project: Spark
>          Issue Type: Improvement
>          Components: Optimizer, SQL
>    Affects Versions: 2.1.0
>            Reporter: Lovasoa
>            Priority: Minor
>              Labels: logical_plan, optimizer
>
> Currently, while optimizing a query plan, Spark pushes filters down the query plan tree, so that
> {code:title=LogicalPlan}
> Join Inner, (a = b)
>     +- Filter UDF(a)
>         +- Relation A
>     +- Relation B
> {code}
> becomes 
> {code:title=Optimized LogicalPlan}
>  Join Inner, (a = b)
>      +- Filter UDF(a)
>          +- Relation A
>      +- Filter UDF(b)
>          +- Relation B
> {code}
> In general, pushing filters down is a good thing, as it reduces the number of records that go through the join.
> However, when the filter is a user-defined function (UDF), we cannot know whether the cost of executing the function a second time will be higher than the cost of joining more records.
> So I think the optimizer should not duplicate user-defined functions in the query plan tree. Users who do want the duplicated filter can still add it themselves.
> See this question on stackoverflow: https://stackoverflow.com/questions/44291078/how-to-tune-the-query-planner-and-turn-off-an-optimization-in-spark
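> As a possible workaround until this is decided, one could mark the UDF as non-deterministic, since the optimizer does not push non-deterministic filters through joins. This is only a sketch: it assumes {{UserDefinedFunction.asNondeterministic()}}, which was added in a later Spark version than the affected 2.1.0, and {{relA}}/{{relB}} are hypothetical DataFrames standing in for Relation A and Relation B above:
> {code:title=Workaround sketch (assumes asNondeterministic, not available in 2.1.0)}
> import org.apache.spark.sql.functions.udf
>
> // Hypothetical expensive predicate; marking it non-deterministic
> // prevents Catalyst from pushing (and thus duplicating) the filter.
> val expensive = udf((x: Long) => x % 2 == 0).asNondeterministic()
>
> // The filter now stays above the join instead of being copied to both sides.
> val joined = relA.join(relB, relA("a") === relB("b"))
>   .filter(expensive(relA("a")))
> {code}
> The trade-off is that the non-deterministic marker also blocks pushdown in cases where duplication would have been cheap, so this is a blunt instrument rather than a fix.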



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

---------------------------------------------------------------------
To unsubscribe, e-mail: issues-unsubscribe@spark.apache.org
For additional commands, e-mail: issues-help@spark.apache.org