Posted to user@spark.apache.org by nizang <ni...@windward.eu> on 2015/07/08 09:47:20 UTC

SnappyCompressionCodec on the master

hi,

I'm running a Spark standalone cluster (1.4.0). I have some applications that run on a scheduler every hour. I found that on one of the executions, the job went to FINISHED after only a few seconds (instead of the usual ~5 minutes), and in the logs on the master I can see the following exception:

org.apache.spark.SparkException: Job aborted due to stage failure: Task 1 in stage 1.0 failed 4 times, most recent failure: Lost task 1.3 in stage 1.0 (TID 20, 172.31.6.203): java.io.IOException: java.lang.reflect.InvocationTargetException
	at org.apache.spark.util.Utils$.tryOrIOException(Utils.scala:1257)
	at org.apache.spark.broadcast.TorrentBroadcast.readBroadcastBlock(TorrentBroadcast.scala:165)
	at org.apache.spark.broadcast.TorrentBroadcast._value$lzycompute(TorrentBroadcast.scala:64)
	at org.apache.spark.broadcast.TorrentBroadcast._value(TorrentBroadcast.scala:64)
	at org.apache.spark.broadcast.TorrentBroadcast.getValue(TorrentBroadcast.scala:88)
	at org.apache.spark.broadcast.Broadcast.value(Broadcast.scala:70)
	at org.apache.spark.scheduler.ResultTask.runTask(ResultTask.scala:59)
	at org.apache.spark.scheduler.Task.run(Task.scala:70)
	at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:213)
	at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
	at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
	at java.lang.Thread.run(Thread.java:745)
Caused by: java.lang.reflect.InvocationTargetException
	at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
	at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:57)
	at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
	at java.lang.reflect.Constructor.newInstance(Constructor.java:526)
	at org.apache.spark.io.CompressionCodec$.createCodec(CompressionCodec.scala:68)
	at org.apache.spark.io.CompressionCodec$.createCodec(CompressionCodec.scala:60)
	at org.apache.spark.broadcast.TorrentBroadcast.org$apache$spark$broadcast$TorrentBroadcast$$setConf(TorrentBroadcast.scala:73)
	at org.apache.spark.broadcast.TorrentBroadcast$$anonfun$readBroadcastBlock$1.apply(TorrentBroadcast.scala:167)
	at org.apache.spark.util.Utils$.tryOrIOException(Utils.scala:1254)
	... 11 more
Caused by: java.lang.IllegalArgumentException
	at org.apache.spark.io.SnappyCompressionCodec.<init>(CompressionCodec.scala:152)
	... 20 more

Driver stacktrace:
	at org.apache.spark.scheduler.DAGScheduler.org$apache$spark$scheduler$DAGScheduler$$failJobAndIndependentStages(DAGScheduler.scala:1266)
	at org.apache.spark.scheduler.DAGScheduler$$anonfun$abortStage$1.apply(DAGScheduler.scala:1257)
	at org.apache.spark.scheduler.DAGScheduler$$anonfun$abortStage$1.apply(DAGScheduler.scala:1256)
	at scala.collection.mutable.ResizableArray$class.foreach(ResizableArray.scala:59)
	at scala.collection.mutable.ArrayBuffer.foreach(ArrayBuffer.scala:47)
	at org.apache.spark.scheduler.DAGScheduler.abortStage(DAGScheduler.scala:1256)
	at org.apache.spark.scheduler.DAGScheduler$$anonfun$handleTaskSetFailed$1.apply(DAGScheduler.scala:730)
	at org.apache.spark.scheduler.DAGScheduler$$anonfun$handleTaskSetFailed$1.apply(DAGScheduler.scala:730)
	at scala.Option.foreach(Option.scala:236)
	at org.apache.spark.scheduler.DAGScheduler.handleTaskSetFailed(DAGScheduler.scala:730)
	at org.apache.spark.scheduler.DAGSchedulerEventProcessLoop.onReceive(DAGScheduler.scala:1450)
	at org.apache.spark.scheduler.DAGSchedulerEventProcessLoop.onReceive(DAGScheduler.scala:1411)
	at org.apache.spark.util.EventLoop$$anon$1.run(EventLoop.scala:48)

This job was successful many times before and after this run, and other jobs ran successfully during that window.

Any idea what could cause this?
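In case it helps with the diagnosis, this is roughly what I plan to run on the workers: a check of whether the snappy-java native library loads there, plus the workaround I'm considering of pinning a non-snappy codec explicitly. This is only a sketch and not verified; the app name and the choice of lz4 are arbitrary, and I'm only assuming the codec constructor fails when the native library can't be loaded.

    import org.apache.spark.SparkConf
    import org.xerial.snappy.Snappy

    object SnappyCheck {
      def main(args: Array[String]): Unit = {
        // Check whether the snappy-java native library loads on this machine;
        // the SnappyCompressionCodec constructor appears to fail when it cannot.
        try {
          println("snappy native library version: " + Snappy.getNativeLibraryVersion)
        } catch {
          case e: Throwable =>
            println("snappy native library failed to load: " + e)
        }

        // Possible workaround while investigating: pin a non-snappy codec.
        // "lz4" is just one alternative; "lzf" is another.
        val conf = new SparkConf()
          .setAppName("codec-check")
          .set("spark.io.compression.codec", "lz4")
        // ... create the SparkContext from this conf as usual
      }
    }

If the native library fails to load on only one worker, that would at least be consistent with a single scheduled run failing while all the others succeed.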

thanks, nizan


Re: SnappyCompressionCodec on the master

Posted by Josh Rosen <ro...@gmail.com>.
Can you file a JIRA?  https://issues.apache.org/jira/browse/SPARK
