Posted to user@spark.apache.org by "leosandylh@gmail.com" <le...@gmail.com> on 2014/02/24 12:14:58 UTC

Re: java.io.NotSerializableException

Which class is not Serializable?

I'm running Shark 0.9 and hit a similar exception:
java.io.NotSerializableException (java.io.NotSerializableException: shark.execution.ReduceKeyReduceSide)

java.io.ObjectOutputStream.writeObject0(ObjectOutputStream.java:1183)
java.io.ObjectOutputStream.defaultWriteFields(ObjectOutputStream.java:1547)
java.io.ObjectOutputStream.writeSerialData(ObjectOutputStream.java:1508)
java.io.ObjectOutputStream.writeOrdinaryObject(ObjectOutputStream.java:1431)
java.io.ObjectOutputStream.writeObject0(ObjectOutputStream.java:1177)
java.io.ObjectOutputStream.writeObject(ObjectOutputStream.java:347)
org.apache.spark.serializer.JavaSerializationStream.writeObject(JavaSerializer.scala:28)
org.apache.spark.storage.DiskBlockObjectWriter.write(BlockObjectWriter.scala:176)
org.apache.spark.util.collection.ExternalAppendOnlyMap.spill(ExternalAppendOnlyMap.scala:191)
org.apache.spark.util.collection.ExternalAppendOnlyMap.insert(ExternalAppendOnlyMap.scala:141)
org.apache.spark.Aggregator.combineValuesByKey(Aggregator.scala:59)
org.apache.hadoop.hive.ql.exec.GroupByPostShuffleOperator$$anonfun$7.apply(GroupByPostShuffleOperator.scala:225)
org.apache.hadoop.hive.ql.exec.GroupByPostShuffleOperator$$anonfun$7.apply(GroupByPostShuffleOperator.scala:225)
org.apache.spark.rdd.RDD$$anonfun$3.apply(RDD.scala:471)
org.apache.spark.rdd.RDD$$anonfun$3.apply(RDD.scala:471)
org.apache.spark.rdd.MapPartitionsRDD.compute(MapPartitionsRDD.scala:34)
org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:241)
org.apache.spark.rdd.RDD.iterator(RDD.scala:232)
org.apache.spark.rdd.MapPartitionsRDD.compute(MapPartitionsRDD.scala:34)
org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:241)
org.apache.spark.rdd.RDD.iterator(RDD.scala:232)
org.apache.spark.rdd.MapPartitionsRDD.compute(MapPartitionsRDD.scala:34)
org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:241)
org.apache.spark.rdd.RDD.iterator(RDD.scala:232)
org.apache.spark.scheduler.ResultTask.runTask(ResultTask.scala:109)
org.apache.spark.scheduler.Task.run(Task.scala:53)
org.apache.spark.executor.Executor$TaskRunner$$anonfun$run$1.apply$mcV$sp(Executor.scala:213)
org.apache.spark.deploy.SparkHadoopUtil.runAsUser(SparkHadoopUtil.scala:49)
org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:178)
java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
java.lang.Thread.run(Thread.java:744)
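
Here the Java serializer is failing while ExternalAppendOnlyMap spills
map-side combiners to disk. Below is a minimal sketch of two Spark 0.9
properties that avoid that code path; whether either actually helps a given
Shark build is an assumption, not something confirmed in this thread:

import org.apache.spark.{SparkConf, SparkContext}

object SpillSerializerWorkaround {
  def main(args: Array[String]): Unit = {
    val conf = new SparkConf()
      .setMaster("local[2]")
      .setAppName("spill-serializer-workaround")
      // Serialize shuffle/spill data with Kryo instead of Java serialization.
      .set("spark.serializer", "org.apache.spark.serializer.KryoSerializer")
      // Alternatively, keep combiners in memory so the spill path in the
      // trace above is never hit (trades the exception for memory pressure).
      .set("spark.shuffle.spill", "false")
    val sc = new SparkContext(conf)
    // ... job ...
    sc.stop()
  }
}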




leosandylh@gmail.com

From: yaoxin
Date: 2014-02-24 19:18
To: user
Subject: java.io.NotSerializableException
I got an error:
     org.apache.spark.SparkException: Job aborted: Task not serializable:
java.io.NotSerializableException:
But the class it complains about is a Java library class that I depend on, so
I can't change it to be Serializable.
Is there any way to work around this?

I am using Spark 0.9, with the master set to local[2] mode.
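
One standard way around this (a minimal sketch; LegacyParser below is a
hypothetical stand-in for the library class, not anything named in this
thread) is to construct the non-serializable object inside the task rather
than capturing it in a driver-side closure, so Spark never has to serialize
it at all:

import org.apache.spark.{SparkConf, SparkContext}

// Hypothetical stand-in for a third-party class that does not
// implement java.io.Serializable and cannot be changed.
class LegacyParser {
  def parse(s: String): String = s.toUpperCase
}

object NonSerializableWorkaround {
  def main(args: Array[String]): Unit = {
    val sc = new SparkContext(
      new SparkConf().setMaster("local[2]").setAppName("workaround"))
    val lines = sc.parallelize(Seq("a", "b", "c"))

    // Capturing a driver-side instance would fail:
    //   val parser = new LegacyParser()
    //   lines.map(parser.parse)          // NotSerializableException
    // Instead, build one instance per partition, inside the task itself,
    // so the object is never shipped to the executors.
    val parsed = lines.mapPartitions { iter =>
      val parser = new LegacyParser()
      iter.map(parser.parse)
    }

    parsed.collect().foreach(println)
    sc.stop()
  }
}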





Re: java.io.NotSerializableException

Posted by yaoxin <ya...@gmail.com>.
Below is my exception stack. The class Spark complains about is a
company-internal one.


org.apache.spark.SparkException: Job aborted: Task not serializable:
java.io.NotSerializableException: com.mycompany.util.xxx
    at org.apache.spark.scheduler.DAGScheduler$$anonfun$org$apache$spark$scheduler$DAGScheduler$$abortStage$1.apply(DAGScheduler.scala:1028)
    at org.apache.spark.scheduler.DAGScheduler$$anonfun$org$apache$spark$scheduler$DAGScheduler$$abortStage$1.apply(DAGScheduler.scala:1026)
    at scala.collection.mutable.ResizableArray$class.foreach(ResizableArray.scala:59)
    at scala.collection.mutable.ArrayBuffer.foreach(ArrayBuffer.scala:47)
    at org.apache.spark.scheduler.DAGScheduler.org$apache$spark$scheduler$DAGScheduler$$abortStage(DAGScheduler.scala:1026)
    at org.apache.spark.scheduler.DAGScheduler.org$apache$spark$scheduler$DAGScheduler$$submitMissingTasks(DAGScheduler.scala:794)
    at org.apache.spark.scheduler.DAGScheduler.org$apache$spark$scheduler$DAGScheduler$$submitStage(DAGScheduler.scala:737)
    at org.apache.spark.scheduler.DAGScheduler$$anonfun$org$apache$spark$scheduler$DAGScheduler$$submitStage$4.apply(DAGScheduler.scala:741)
    at org.apache.spark.scheduler.DAGScheduler$$anonfun$org$apache$spark$scheduler$DAGScheduler$$submitStage$4.apply(DAGScheduler.scala:740)
    at scala.collection.immutable.List.foreach(List.scala:318)
    at org.apache.spark.scheduler.DAGScheduler.org$apache$spark$scheduler$DAGScheduler$$submitStage(DAGScheduler.scala:740)
    at org.apache.spark.scheduler.DAGScheduler.processEvent(DAGScheduler.scala:569)
    at org.apache.spark.scheduler.DAGScheduler$$anonfun$start$1$$anon$2$$anonfun$receive$1.applyOrElse(DAGScheduler.scala:207)
    at akka.actor.ActorCell.receiveMessage(ActorCell.scala:498)
    at akka.actor.ActorCell.invoke(ActorCell.scala:456)
    at akka.dispatch.Mailbox.processMailbox(Mailbox.scala:237)
    at akka.dispatch.Mailbox.run(Mailbox.scala:219)
    at akka.dispatch.ForkJoinExecutorConfigurator$AkkaForkJoinTask.exec(AbstractDispatcher.scala:386)
    at scala.concurrent.forkjoin.ForkJoinTask.doExec(ForkJoinTask.java:260)
    at scala.concurrent.forkjoin.ForkJoinPool$WorkQueue.runTask(ForkJoinPool.java:1339)
    at scala.concurrent.forkjoin.ForkJoinPool.runWorker(ForkJoinPool.java:1979)
    at scala.concurrent.forkjoin.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:107)
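
This trace aborts on the driver at task-serialization time, which means the
instance is being captured in a closure (or in a field of an enclosing
object). Another workaround (a sketch; Helper and MyJob below are
hypothetical stand-ins, not the company-internal class from the trace) is to
hold the reference in a @transient lazy val, so the field is skipped when the
closure is serialized and rebuilt on each executor at first use:

import org.apache.spark.rdd.RDD

// Hypothetical stand-in for the non-serializable internal class.
class Helper {
  def enrich(s: String): String = s + "!"
}

class MyJob extends Serializable {
  // @transient: excluded when this object is serialized into a task.
  // lazy val: re-initialized on each executor the first time it is used.
  @transient lazy val helper = new Helper()

  def run(input: RDD[String]): RDD[String] =
    input.map(line => helper.enrich(line))
}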


