Posted to issues@spark.apache.org by "Takeshi Yamamuro (JIRA)" <ji...@apache.org> on 2017/01/18 06:54:26 UTC

[jira] [Comment Edited] (SPARK-19081) spark sql use HIVE UDF throw exception when return a Map value

    [ https://issues.apache.org/jira/browse/SPARK-19081?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15827510#comment-15827510 ] 

Takeshi Yamamuro edited comment on SPARK-19081 at 1/18/17 6:54 AM:
-------------------------------------------------------------------

Since the issue the reporter describes has been handled in v1.5, I'll close this as resolved.


was (Author: maropu):
Since the issue the reporter describes has been handled in v1.6, I'll close this as resolved.

> spark sql use HIVE UDF throw exception when return a Map value
> --------------------------------------------------------------
>
>                 Key: SPARK-19081
>                 URL: https://issues.apache.org/jira/browse/SPARK-19081
>             Project: Spark
>          Issue Type: Bug
>          Components: SQL
>    Affects Versions: 1.3.0
>            Reporter: Davy Song
>
> I have hit a problem similar to https://issues.apache.org/jira/browse/SPARK-3582,
> but with a Map return value rather than a Map parameter: my evaluate function returns a Map:
> public Map<String, String> evaluate(Text url) {...}
> When running spark-sql with this UDF, I get the following exception:
> scala.MatchError: interface java.util.Map (of class java.lang.Class)
>         at org.apache.spark.sql.hive.HiveInspectors$class.javaClassToDataType(HiveInspectors.scala:175)
>         at org.apache.spark.sql.hive.HiveSimpleUdf.javaClassToDataType(hiveUdfs.scala:112)
>         at org.apache.spark.sql.hive.HiveSimpleUdf.dataType$lzycompute(hiveUdfs.scala:144)
>         at org.apache.spark.sql.hive.HiveSimpleUdf.dataType(hiveUdfs.scala:144)
>         at org.apache.spark.sql.catalyst.expressions.Alias.toAttribute(namedExpressions.scala:133)
>         at org.apache.spark.sql.catalyst.plans.logical.Project$$anonfun$output$1.apply(basicOperators.scala:25)
>         at org.apache.spark.sql.catalyst.plans.logical.Project$$anonfun$output$1.apply(basicOperators.scala:25)
>         at scala.collection.TraversableLike$$anonfun$map$1.apply(TraversableLike.scala:244)
>         at scala.collection.TraversableLike$$anonfun$map$1.apply(TraversableLike.scala:244)
>         at scala.collection.mutable.ResizableArray$class.foreach(ResizableArray.scala:59)
>         at scala.collection.mutable.ArrayBuffer.foreach(ArrayBuffer.scala:47)
>         at scala.collection.TraversableLike$class.map(TraversableLike.scala:244)
>         at scala.collection.AbstractTraversable.map(Traversable.scala:105)
>         at org.apache.spark.sql.catalyst.plans.logical.Project.output(basicOperators.scala:25)
>         at org.apache.spark.sql.catalyst.plans.logical.InsertIntoTable.resolved$lzycompute(basicOperators.scala:149)
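
For context, the MatchError comes from HiveInspectors.javaClassToDataType, which in Spark 1.3 had no case mapping java.util.Map (or java.util.List) to a Catalyst type. The sketch below is an illustrative simplification, not Spark's actual code: the class name, method name, and string return values are hypothetical (the real method returns a Catalyst DataType), but it shows why a Map-returning UDF falls through the class-to-type dispatch and how later releases addressed it.

```java
import java.util.List;
import java.util.Map;

// Illustrative sketch of the class-to-type dispatch that failed in
// Spark 1.3. Names here are hypothetical; the real logic lives in
// org.apache.spark.sql.hive.HiveInspectors.javaClassToDataType.
public class JavaClassToDataType {
    // Maps a Hive UDF's Java return class to a Catalyst type name,
    // or throws when no case matches (the analog of scala.MatchError).
    static String toDataType(Class<?> c) {
        if (c == String.class) return "StringType";
        if (c == Integer.TYPE || c == Integer.class) return "IntegerType";
        // Without the two branches below, a UDF whose evaluate()
        // returns Map<String, String> or List<...> is unmatched --
        // which is exactly the failure in the stack trace above.
        if (Map.class.isAssignableFrom(c)) return "MapType";
        if (List.class.isAssignableFrom(c)) return "ArrayType";
        throw new IllegalStateException("unmatched class: " + c);
    }

    public static void main(String[] args) {
        // A UDF declared as `public Map<String, String> evaluate(Text url)`
        // reports an implementation class assignable to java.util.Map.
        System.out.println(toDataType(java.util.HashMap.class)); // prints MapType
    }
}
```

Upgrading past 1.3 (the resolution noted in the comment above) picks up the added Map/List cases, so no change to the UDF itself is needed.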



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

---------------------------------------------------------------------
To unsubscribe, e-mail: issues-unsubscribe@spark.apache.org
For additional commands, e-mail: issues-help@spark.apache.org