Posted to issues@spark.apache.org by "oaksharks (JIRA)" <ji...@apache.org> on 2016/09/28 10:23:20 UTC

[jira] [Closed] (SPARK-17705) spark sql : scala.MatchError: () (of class scala.runtime.BoxedUnit) at org.apache.spark.sql.catalyst.CatalystTypeConverters$StringConverter$.toCatalystImpl(CatalystTypeConverters.scala:295)

     [ https://issues.apache.org/jira/browse/SPARK-17705?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

oaksharks closed SPARK-17705.
-----------------------------
    Resolution: Not A Problem

> spark sql : scala.MatchError: () (of class scala.runtime.BoxedUnit) 	at org.apache.spark.sql.catalyst.CatalystTypeConverters$StringConverter$.toCatalystImpl(CatalystTypeConverters.scala:295)
> ----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
>
>                 Key: SPARK-17705
>                 URL: https://issues.apache.org/jira/browse/SPARK-17705
>             Project: Spark
>          Issue Type: Bug
>          Components: Spark Core
>    Affects Versions: 1.6.0
>         Environment: ubuntu12.4 + idea16 + jdk1.8 + spark1.6.0 + hive1.2.1 + hadoop2.5.1
>            Reporter: oaksharks
>   Original Estimate: 24h
>  Remaining Estimate: 24h
>
> I created a Spark SQL UDAF, and when I use the function the following exception occurs:
> scala.MatchError: () (of class scala.runtime.BoxedUnit)
> ----
> Here is the detailed information; the stack trace contains no frames from my own code.
> 16/09/28 17:47:05 ERROR Executor: Exception in task 22.0 in stage 1.0 (TID 23)
> scala.MatchError: () (of class scala.runtime.BoxedUnit)
> 	at org.apache.spark.sql.catalyst.CatalystTypeConverters$StringConverter$.toCatalystImpl(CatalystTypeConverters.scala:295)
> 	at org.apache.spark.sql.catalyst.CatalystTypeConverters$StringConverter$.toCatalystImpl(CatalystTypeConverters.scala:294)
> 	at org.apache.spark.sql.catalyst.CatalystTypeConverters$CatalystTypeConverter.toCatalyst(CatalystTypeConverters.scala:102)
> 	at org.apache.spark.sql.catalyst.CatalystTypeConverters$$anonfun$createToCatalystConverter$2.apply(CatalystTypeConverters.scala:401)
> 	at org.apache.spark.sql.execution.aggregate.ScalaUDAF.eval(udaf.scala:446)
> 	at org.apache.spark.sql.execution.aggregate.AggregationIterator$$anonfun$35.apply(AggregationIterator.scala:376)
> 	at org.apache.spark.sql.execution.aggregate.AggregationIterator$$anonfun$35.apply(AggregationIterator.scala:368)
> 	at org.apache.spark.sql.execution.aggregate.SortBasedAggregationIterator.next(SortBasedAggregationIterator.scala:154)
> 	at org.apache.spark.sql.execution.aggregate.SortBasedAggregationIterator.next(SortBasedAggregationIterator.scala:29)
> 	at scala.collection.Iterator$$anon$11.next(Iterator.scala:328)
> 	at scala.collection.Iterator$class.foreach(Iterator.scala:727)
> 	at scala.collection.AbstractIterator.foreach(Iterator.scala:1157)
> 	at org.apache.spark.rdd.RDD$$anonfun$foreach$1$$anonfun$apply$32.apply(RDD.scala:912)
> 	at org.apache.spark.rdd.RDD$$anonfun$foreach$1$$anonfun$apply$32.apply(RDD.scala:912)
> 	at org.apache.spark.SparkContext$$anonfun$runJob$5.apply(SparkContext.scala:1858)
> 	at org.apache.spark.SparkContext$$anonfun$runJob$5.apply(SparkContext.scala:1858)
> 	at org.apache.spark.scheduler.ResultTask.runTask(ResultTask.scala:66)
> 	at org.apache.spark.scheduler.Task.run(Task.scala:89)
> 	at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:213)
> 	at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
> 	at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
> 	at java.lang.Thread.run(Thread.java:745)
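
The "Not A Problem" resolution is consistent with the UDAF itself returning () where a value of the declared result type is expected: the trace shows ScalaUDAF.eval passing its result to StringConverter, so a BoxedUnit can only reach that converter if evaluate returns Unit instead of a String. Below is a minimal sketch of a string-concatenating UDAF against the Spark 1.6 UserDefinedAggregateFunction API; the class and field names are illustrative, not taken from the original report.

import org.apache.spark.sql.Row
import org.apache.spark.sql.expressions.{MutableAggregationBuffer, UserDefinedAggregateFunction}
import org.apache.spark.sql.types._

// Hypothetical UDAF that concatenates non-null string values.
class ConcatUDAF extends UserDefinedAggregateFunction {
  override def inputSchema: StructType = StructType(StructField("value", StringType) :: Nil)
  override def bufferSchema: StructType = StructType(StructField("acc", StringType) :: Nil)
  override def dataType: DataType = StringType
  override def deterministic: Boolean = true

  override def initialize(buffer: MutableAggregationBuffer): Unit = {
    buffer(0) = ""
  }

  override def update(buffer: MutableAggregationBuffer, input: Row): Unit = {
    if (!input.isNullAt(0)) buffer(0) = buffer.getString(0) + input.getString(0)
  }

  override def merge(buffer1: MutableAggregationBuffer, buffer2: Row): Unit = {
    buffer1(0) = buffer1.getString(0) + buffer2.getString(0)
  }

  // evaluate must return a value matching dataType (a String here).
  // If its last expression is a Unit-returning statement (an assignment,
  // println, etc.), Spark receives scala.runtime.BoxedUnit and
  // StringConverter fails with scala.MatchError: ().
  override def evaluate(buffer: Row): Any = buffer.getString(0)
}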



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

---------------------------------------------------------------------
To unsubscribe, e-mail: issues-unsubscribe@spark.apache.org
For additional commands, e-mail: issues-help@spark.apache.org