Posted to issues@spark.apache.org by "Ohad Raviv (JIRA)" <ji...@apache.org> on 2016/12/06 19:58:58 UTC

[jira] [Closed] (SPARK-18747) UDF multiple evaluations causes very poor performance

     [ https://issues.apache.org/jira/browse/SPARK-18747?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Ohad Raviv closed SPARK-18747.
------------------------------
    Resolution: Duplicate

> UDF multiple evaluations causes very poor performance
> -----------------------------------------------------
>
>                 Key: SPARK-18747
>                 URL: https://issues.apache.org/jira/browse/SPARK-18747
>             Project: Spark
>          Issue Type: Bug
>          Components: SQL
>    Affects Versions: 1.6.1
>            Reporter: Ohad Raviv
>
> We have a use case with a relatively expensive UDF. The problem is that instead of being calculated once, it gets calculated over and over again.
> For example:
> {quote}
> def veryExpensiveCalc(str: String) = { println("blahblah1"); "nothing" }
> hiveContext.udf.register("veryExpensiveCalc", veryExpensiveCalc _)
> hiveContext.sql("select * from (select veryExpensiveCalc('a') c)z where c is not null and c<>''").show
> {quote}
> with the output:
> {quote}
> blahblah1
> blahblah1
> blahblah1
> +-------+
> |      c|
> +-------+
> |nothing|
> +-------+
> {quote}
> You can see that for each reference to column "c" the println fires, which causes very poor performance in our real use case.
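> To see where the duplication comes from (a minimal sketch; the exact plan text varies by Spark version), print the query plan. The optimizer pushes the filter through the subquery and inlines the alias "c", so the UDF expression should show up in both the Filter and the Project:
> {quote}
> // assumes veryExpensiveCalc is registered as above
> hiveContext.sql("select * from (select veryExpensiveCalc('a') c)z where c is not null and c<>''").explain(true)
> // expect the UDF once in the Project and twice more in the pushed-down Filter,
> // matching the three "blahblah1" lines above
> {quote}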
> This also came up on StackOverflow:
> http://stackoverflow.com/questions/40320563/spark-udf-called-more-than-once-per-record-when-df-has-too-many-columns
> http://stackoverflow.com/questions/34587596/trying-to-turn-a-blob-into-multiple-columns-in-spark/
> with two problematic work-arounds:
> 1. cache() the result after the first computation, e.g.:
> {quote}
> hiveContext.sql("select veryExpensiveCalc('a') as c").cache().where("c is not null and c<>''").show
> {quote}
> While this works, in our case we can't do it because the table is too big to cache.
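> A possible variant (a sketch, not tested for this issue) is to persist with a disk-only StorageLevel, which materializes the UDF result once without holding the table in memory, provided disk capacity allows:
> {quote}
> import org.apache.spark.storage.StorageLevel
> // materialize the UDF result once, spilled to disk rather than kept in memory
> val df = hiveContext.sql("select veryExpensiveCalc('a') as c").persist(StorageLevel.DISK_ONLY)
> df.where("c is not null and c<>''").show
> {quote}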
> 2. round-trip through an RDD:
> {quote}
> val df = hiveContext.sql("select veryExpensiveCalc('a') as c")
> hiveContext.createDataFrame(df.rdd, df.schema).where("c is not null and c<>''").show
> {quote}
> This works because rebuilding the DataFrame from the RDD cuts the logical plan, so the UDF result is already materialized by the time the filter runs. But we then lose optimizations like predicate pushdown, and it's very ugly.
> Any ideas on how we can make the UDF get calculated just once in a reasonable way?
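> For later readers (not applicable to 1.6; the API was added in Spark 2.3), marking the UDF non-deterministic is one way to keep the optimizer from inlining it into pushed-down predicates, at the cost of losing some optimizations. A sketch, assuming a Spark 2.3+ SparkSession named spark:
> {quote}
> import org.apache.spark.sql.functions.{lit, udf}
> // Spark 2.3+: a non-deterministic expression is not duplicated into the filter
> val expensive = udf(veryExpensiveCalc _).asNondeterministic()
> val df = spark.range(1).select(expensive(lit("a")).as("c"))
> df.where("c is not null and c<>''").show
> {quote}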



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

---------------------------------------------------------------------
To unsubscribe, e-mail: issues-unsubscribe@spark.apache.org
For additional commands, e-mail: issues-help@spark.apache.org