Posted to issues@spark.apache.org by "Sean Owen (JIRA)" <ji...@apache.org> on 2014/10/21 00:51:33 UTC

[jira] [Commented] (SPARK-4018) RDD.reduce failing with java.lang.ClassCastException: org.apache.spark.SparkContext$$anonfun$26 cannot be cast to scala.Function2

    [ https://issues.apache.org/jira/browse/SPARK-4018?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14177592#comment-14177592 ] 

Sean Owen commented on SPARK-4018:
----------------------------------

Your sample code is Java, but the error seems to concern the Scala API. Are you sure the exception occurs on this invocation? Does it compile and then fail at runtime, or are you operating just in the shell?

> RDD.reduce failing with java.lang.ClassCastException: org.apache.spark.SparkContext$$anonfun$26 cannot be cast to scala.Function2
> ---------------------------------------------------------------------------------------------------------------------------------
>
>                 Key: SPARK-4018
>                 URL: https://issues.apache.org/jira/browse/SPARK-4018
>             Project: Spark
>          Issue Type: Bug
>          Components: Spark Core
>    Affects Versions: 1.1.1
>            Reporter: Haithem Turki
>            Priority: Critical
>
> Hey all,
> A simple reduce operation against Spark 1.1.1 is giving me the following exception:
> {code}
> 14/10/20 16:27:22 ERROR executor.Executor: Exception in task 9.7 in stage 0.0 (TID 1001)
> java.lang.ClassCastException: org.apache.spark.SparkContext$$anonfun$26 cannot be cast to scala.Function2
> 	at org.apache.spark.scheduler.ResultTask.runTask(ResultTask.scala:57)
> 	at org.apache.spark.scheduler.Task.run(Task.scala:54)
> 	at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:178)
> 	at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
> 	at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
> 	at java.lang.Thread.run(Thread.java:745)
> {code}
> My code is a relatively simple map-reduce:
> {code}
> Map<String, Foo> aggregateTracker = rdd.map(new MapFunction(list))
>         .reduce(new ReduceFunction());
> {code}
> Where:
> - MapFunction is of type Function<Record, Map<String, Object>>
> - ReduceFunction is of type Function2<Map<String, Foo>, Map<String, Foo>, Map<String, Foo>>
> - list is just a list of Foo2
> Both Foo and Foo2 are serializable
> I've tried this with both the Java and Scala APIs; the relevant stack trace lines for each are:
> org.apache.spark.api.java.JavaRDD.reduce(JavaRDD.scala:32)
> org.apache.spark.rdd.RDD.reduce(RDD.scala:861)
> The class being flagged is always org.apache.spark.SparkContext$$anonfun$26 (the number doesn't change).
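
For reference, here is the pattern described above written out as a compilable Java-API sketch. This is a reconstruction for clarity only: Record, Foo, Foo2, ReduceRepro, and the merge logic are placeholders inferred from the report, not the reporter's actual code.

{code}
import java.io.Serializable;
import java.util.HashMap;
import java.util.List;
import java.util.Map;

import org.apache.spark.api.java.JavaRDD;
import org.apache.spark.api.java.function.Function;
import org.apache.spark.api.java.function.Function2;

public class ReduceRepro {

    // Placeholder domain types; the report says these are serializable.
    static class Record implements Serializable {}
    static class Foo implements Serializable {}
    static class Foo2 implements Serializable {}

    // Maps each input record to a per-record aggregate.
    static class MapFunction implements Function<Record, Map<String, Foo>> {
        private final List<Foo2> list;

        MapFunction(List<Foo2> list) {
            this.list = list;
        }

        @Override
        public Map<String, Foo> call(Record record) {
            Map<String, Foo> result = new HashMap<String, Foo>();
            // ... populate result from record and list ...
            return result;
        }
    }

    // Merges two partial aggregates (placeholder merge logic).
    static class ReduceFunction
            implements Function2<Map<String, Foo>, Map<String, Foo>, Map<String, Foo>> {
        @Override
        public Map<String, Foo> call(Map<String, Foo> a, Map<String, Foo> b) {
            a.putAll(b);
            return a;
        }
    }

    static Map<String, Foo> aggregate(JavaRDD<Record> rdd, List<Foo2> list) {
        return rdd.map(new MapFunction(list)).reduce(new ReduceFunction());
    }
}
{code}

A sketch like this compiles cleanly against the 1.1 Java API, so if the reporter's code is equivalent, the ClassCastException would be occurring at runtime rather than at compile time, which is what the question above is probing.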


