Posted to user@spark.apache.org by lokeshkumar <lo...@dataken.net> on 2014/11/26 03:42:23 UTC

Issue with Spark latest 1.2.0 build - ClassCastException from [B to SerializableWritable

Hello forum,

We are using a Spark distribution built from source at the latest 1.2.0 tag.
While trying to act on a JavaRDD instance we hit the exception below; the
full stack trace follows. Can anyone let me know what might be wrong here?

java.lang.ClassCastException: [B cannot be cast to org.apache.spark.SerializableWritable
	at org.apache.spark.rdd.HadoopRDD.getJobConf(HadoopRDD.scala:138)
	at org.apache.spark.rdd.HadoopRDD.getPartitions(HadoopRDD.scala:194)
	at org.apache.spark.rdd.RDD$$anonfun$partitions$2.apply(RDD.scala:205)
	at org.apache.spark.rdd.RDD$$anonfun$partitions$2.apply(RDD.scala:203)
	at scala.Option.getOrElse(Option.scala:120)
	at org.apache.spark.rdd.RDD.partitions(RDD.scala:203)
	at org.apache.spark.rdd.MappedRDD.getPartitions(MappedRDD.scala:28)
	at org.apache.spark.rdd.RDD$$anonfun$partitions$2.apply(RDD.scala:205)
	at org.apache.spark.rdd.RDD$$anonfun$partitions$2.apply(RDD.scala:203)
	at scala.Option.getOrElse(Option.scala:120)
	at org.apache.spark.rdd.RDD.partitions(RDD.scala:203)
	at org.apache.spark.rdd.MappedRDD.getPartitions(MappedRDD.scala:28)
	at org.apache.spark.rdd.RDD$$anonfun$partitions$2.apply(RDD.scala:205)
	at org.apache.spark.rdd.RDD$$anonfun$partitions$2.apply(RDD.scala:203)
	at scala.Option.getOrElse(Option.scala:120)
	at org.apache.spark.rdd.RDD.partitions(RDD.scala:203)
	at org.apache.spark.rdd.RDD.take(RDD.scala:1060)
	at org.apache.spark.api.java.JavaRDDLike$class.take(JavaRDDLike.scala:419)
	at org.apache.spark.api.java.JavaRDD.take(JavaRDD.scala:32)
	at com.dataken.common.chores.InformationDataLoadChore.run(InformationDataLoadChore.java:69)
	at com.dataken.common.pipeline.DatakenTask.start(DatakenTask.java:110)
	at com.dataken.tasks.objectcentricprocessor.ObjectCentricProcessTask.execute(ObjectCentricProcessTask.java:99)
	at org.quartz.core.JobRunShell.run(JobRunShell.java:202)
	at org.quartz.simpl.SimpleThreadPool$WorkerThread.run(SimpleThreadPool.java:573)
2014-11-26 08:07:38,454 ERROR [DefaultQuartzScheduler_Worker-2] org.quartz.core.ErrorLogger
Job (report_report.report_report threw an exception.

org.quartz.SchedulerException: Job threw an unhandled exception. [See nested exception: java.lang.ClassCastException: [B cannot be cast to org.apache.spark.SerializableWritable]
	at org.quartz.core.JobRunShell.run(JobRunShell.java:213)
	at org.quartz.simpl.SimpleThreadPool$WorkerThread.run(SimpleThreadPool.java:573)
Caused by: java.lang.ClassCastException: [B cannot be cast to org.apache.spark.SerializableWritable
	at org.apache.spark.rdd.HadoopRDD.getJobConf(HadoopRDD.scala:138)
	at org.apache.spark.rdd.HadoopRDD.getPartitions(HadoopRDD.scala:194)
	at org.apache.spark.rdd.RDD$$anonfun$partitions$2.apply(RDD.scala:205)
	at org.apache.spark.rdd.RDD$$anonfun$partitions$2.apply(RDD.scala:203)
	at scala.Option.getOrElse(Option.scala:120)
	at org.apache.spark.rdd.RDD.partitions(RDD.scala:203)
	at org.apache.spark.rdd.MappedRDD.getPartitions(MappedRDD.scala:28)
	at org.apache.spark.rdd.RDD$$anonfun$partitions$2.apply(RDD.scala:205)
	at org.apache.spark.rdd.RDD$$anonfun$partitions$2.apply(RDD.scala:203)
	at scala.Option.getOrElse(Option.scala:120)
	at org.apache.spark.rdd.RDD.partitions(RDD.scala:203)
	at org.apache.spark.rdd.MappedRDD.getPartitions(MappedRDD.scala:28)
	at org.apache.spark.rdd.RDD$$anonfun$partitions$2.apply(RDD.scala:205)
	at org.apache.spark.rdd.RDD$$anonfun$partitions$2.apply(RDD.scala:203)
	at scala.Option.getOrElse(Option.scala:120)
	at org.apache.spark.rdd.RDD.partitions(RDD.scala:203)
	at org.apache.spark.rdd.RDD.take(RDD.scala:1060)
	at org.apache.spark.api.java.JavaRDDLike$class.take(JavaRDDLike.scala:419)
	at org.apache.spark.api.java.JavaRDD.take(JavaRDD.scala:32)
	at com.dataken.common.chores.InformationDataLoadChore.run(InformationDataLoadChore.java:69)
	at com.dataken.common.pipeline.DatakenTask.start(DatakenTask.java:110)
	at com.dataken.tasks.objectcentricprocessor.ObjectCentricProcessTask.execute(ObjectCentricProcessTask.java:99)
	at org.quartz.core.JobRunShell.run(JobRunShell.java:202)
	... 1 more
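
For context: the frame at InformationDataLoadChore.java:69 is a
JavaRDD.take(), and any take() on an RDD backed by a HadoopRDD (for
example one created via textFile or hadoopFile) computes partitions and
therefore calls HadoopRDD.getJobConf(), where the cast fails. Below is a
minimal sketch of that call pattern; the class name, master URL, and input
path are placeholders, not taken from the original post.

import org.apache.spark.SparkConf;
import org.apache.spark.api.java.JavaRDD;
import org.apache.spark.api.java.JavaSparkContext;

public class TakeRepro {
    public static void main(String[] args) {
        // Placeholder master/app name; the original report runs on a real cluster.
        SparkConf conf = new SparkConf()
                .setAppName("take-repro")
                .setMaster("spark://master:7077");
        JavaSparkContext sc = new JavaSparkContext(conf);
        // textFile is backed by a HadoopRDD; the path is a placeholder.
        JavaRDD<String> lines = sc.textFile("hdfs:///tmp/input.txt");
        // take() forces partition computation, which calls
        // HadoopRDD.getJobConf() -- the frame where the cast above fails.
        System.out.println(lines.take(10));
        sc.stop();
    }
}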





Re: Issue with Spark latest 1.2.0 build - ClassCastException from [B to SerializableWritable

Posted by lokeshkumar <lo...@dataken.net>.
Hi Sean,

Thanks for the reply.
We upgraded our Spark cluster from 1.1.0 to 1.2.0, and we also suspected
this issue might be due to mismatched Spark jar versions. But we
double-checked and completely reinstalled our app on a new system with the
spark-1.2.0 distro, and we still face the same problem.

This does not happen when the master is set to 'local[*]'.
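
One way to confirm or rule out mixed builds is to print the Spark version
and which jar each JVM loads Spark from, on the driver and on the
executors. This is a hypothetical diagnostic sketch (the VersionCheck
class and its names are illustrative, not from this thread); differing
versions or jar paths in its output would point to the stale-jar scenario.

import java.util.Arrays;
import org.apache.spark.api.java.JavaSparkContext;
import org.apache.spark.api.java.function.Function;

public class VersionCheck {
    // Where this JVM's classpath loads the Spark classes from.
    static String sparkJarLocation() {
        return org.apache.spark.SparkContext.class
                .getProtectionDomain().getCodeSource().getLocation().toString();
    }

    public static void check(JavaSparkContext sc) {
        System.out.println("driver  : Spark " + sc.sc().version()
                + " from " + sparkJarLocation());
        // Run the same probe inside executor JVMs and collect distinct answers.
        for (String loc : sc.parallelize(Arrays.asList(1, 2, 3, 4), 4)
                .map(new Function<Integer, String>() {
                    public String call(Integer i) { return sparkJarLocation(); }
                })
                .distinct()
                .collect()) {
            System.out.println("executor: " + loc);
        }
    }
}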





Re: Issue with Spark latest 1.2.0 build - ClassCastException from [B to SerializableWritable

Posted by Sean Owen <so...@cloudera.com>.
I'll take a wild guess that you have mismatched versions of Spark at play:
your cluster has one build, and you're accidentally including another.

I think this code path changed recently (
https://github.com/apache/spark/commit/7e63bb49c526c3f872619ae14e4b5273f4c535e9#diff-83eb37f7b0ebed3c14ccb7bff0d577c2
?), but that shouldn't cause this kind of problem per se. Maybe
double-check the version-mismatch possibility first.
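
Under that hypothesis, the message itself makes sense: [B is the JVM's
internal name for byte[], so some code is finding raw bytes where a class
compiled against a different build expects a SerializableWritable. The
following stand-alone sketch (illustrative only, not Spark source; all
names are made up) reproduces the same kind of cast failure:

public class MismatchIllustration {
    // Stand-in for org.apache.spark.SerializableWritable in the older build.
    static class SerializableWritableLike { }

    public static void main(String[] args) {
        // A newer code path stores the value as raw serialized bytes...
        Object broadcastValue = new byte[] {1, 2, 3};
        // ...but code compiled against the old layout still casts, throwing
        // java.lang.ClassCastException: [B cannot be cast to
        // MismatchIllustration$SerializableWritableLike
        SerializableWritableLike wrapped = (SerializableWritableLike) broadcastValue;
        System.out.println(wrapped);
    }
}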

On Wed, Nov 26, 2014 at 2:42 AM, lokeshkumar <lo...@dataken.net> wrote:
> Hello forum,
>
> We are using a Spark distribution built from source at the latest 1.2.0
> tag. While trying to act on a JavaRDD instance we hit the exception
> below; the full stack trace follows. Can anyone let me know what might
> be wrong here?
>
> java.lang.ClassCastException: [B cannot be cast to org.apache.spark.SerializableWritable
