Posted to user@spark.apache.org by Swapnil Chougule <th...@gmail.com> on 2018/09/04 10:50:33 UTC

Spark Hive UDF: no handler for UDAF analysis exception

Created a project 'spark-udf' and wrote a Hive UDF as below:

    package com.spark.udf

    import org.apache.hadoop.hive.ql.exec.UDF

    // Hive UDF that upper-cases its string input.
    class UpperCase extends UDF with Serializable {
      def evaluate(input: String): String = {
        input.toUpperCase
      }
    }

Built it & created jar for it. Tried to use this udf in another spark
program:

    spark.sql("CREATE OR REPLACE FUNCTION uppercase AS
'com.spark.udf.UpperCase' USING JAR
'/home/swapnil/spark-udf/target/spark-udf-1.0.jar'")
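As an aside, a minimal sketch of confirming that the catalog picked up the
registration, assuming the same spark session (this check is not part of the
original post):

    // Optional sanity check: list user-defined functions known to the
    // catalog; "uppercase" should appear if CREATE FUNCTION succeeded.
    spark.sql("SHOW USER FUNCTIONS").show(false)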

But the following line is giving me an exception:

    spark.sql("select uppercase(Car) as NAME from cars").show

Exception:

    Exception in thread "main" org.apache.spark.sql.AnalysisException: No handler for UDAF 'com.dcengines.fluir.udf.Strlen'. Use sparkSession.udf.register(...) instead.; line 1 pos 7
        at org.apache.spark.sql.catalyst.catalog.SessionCatalog.makeFunctionExpression(SessionCatalog.scala:1105)
        at org.apache.spark.sql.catalyst.catalog.SessionCatalog$$anonfun$org$apache$spark$sql$catalyst$catalog$SessionCatalog$$makeFunctionBuilder$1.apply(SessionCatalog.scala:1085)
        at org.apache.spark.sql.catalyst.catalog.SessionCatalog$$anonfun$org$apache$spark$sql$catalyst$catalog$SessionCatalog$$makeFunctionBuilder$1.apply(SessionCatalog.scala:1085)
        at org.apache.spark.sql.catalyst.analysis.SimpleFunctionRegistry.lookupFunction(FunctionRegistry.scala:115)
        at org.apache.spark.sql.catalyst.catalog.SessionCatalog.lookupFunction(SessionCatalog.scala:1247)
        at org.apache.spark.sql.catalyst.analysis.Analyzer$ResolveFunctions$$anonfun$apply$16$$anonfun$applyOrElse$6$$anonfun$applyOrElse$52.apply(Analyzer.scala:1226)
        at org.apache.spark.sql.catalyst.analysis.Analyzer$ResolveFunctions$$anonfun$apply$16$$anonfun$applyOrElse$6$$anonfun$applyOrElse$52.apply(Analyzer.scala:1226)
        at org.apache.spark.sql.catalyst.analysis.package$.withPosition(package.scala:48)

Any help around this is really appreciated.
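For what it's worth, the error message itself suggests
sparkSession.udf.register(...). A minimal sketch of that session-scoped
workaround, assuming the same table and column names as above:

    // Workaround suggested by the error message: register the logic as a
    // plain Scala UDF on the session instead of a Hive permanent function.
    spark.udf.register("uppercase", (input: String) => input.toUpperCase)

    // The original query then resolves against the session's registry.
    spark.sql("select uppercase(Car) as NAME from cars").show()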

Re: Spark Hive UDF: no handler for UDAF analysis exception

Posted by Swapnil Chougule <th...@gmail.com>.
Looks like SparkSession only has an implementation for UDAFs on this path,
not for plain UDFs. Is it a bug, or is there a workaround?
T. Gaweda has opened a JIRA for this: SPARK-25334.
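A minimal sketch of one possible workaround, assuming the session was built
without Hive support (in that case the built-in SessionCatalog only knows how
to wrap UDAF classes, which would explain the misleading message, while a
Hive-enabled session can resolve Hive UDF classes loaded via USING JAR):

    import org.apache.spark.sql.SparkSession

    // Sketch: build the session with Hive support so Hive UDF classes
    // registered via CREATE FUNCTION ... USING JAR can be resolved.
    val spark = SparkSession.builder()
      .appName("spark-udf")    // hypothetical app name
      .enableHiveSupport()
      .getOrCreate()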

Thanks,
Swapnil
