Posted to issues@spark.apache.org by "Sean Owen (JIRA)" <ji...@apache.org> on 2015/12/18 16:00:52 UTC

[jira] [Resolved] (SPARK-12418) spark shuffle FAILED_TO_UNCOMPRESS

     [ https://issues.apache.org/jira/browse/SPARK-12418?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Sean Owen resolved SPARK-12418.
-------------------------------
          Resolution: Duplicate
    Target Version/s:   (was: 1.5.1)

Please search JIRA first

> spark shuffle FAILED_TO_UNCOMPRESS
> ----------------------------------
>
>                 Key: SPARK-12418
>                 URL: https://issues.apache.org/jira/browse/SPARK-12418
>             Project: Spark
>          Issue Type: Bug
>    Affects Versions: 1.5.1
>         Environment: hadoop 2.3.0
> spark 1.5.1
>            Reporter: dirk.zhang
>
> When using the default compression codec (Snappy), I get the following error while Spark performs a shuffle:
> 	Job aborted due to stage failure: Task 19 in stage 2.3 failed 4 times, most recent failure: Lost task 19.3 in stage 2.3 (TID 10311, 192.168.6.36): java.io.IOException: FAILED_TO_UNCOMPRESS(5)
> 	at org.xerial.snappy.SnappyNative.throw_error(SnappyNative.java:84)
> 	at org.xerial.snappy.SnappyNative.rawUncompress(Native Method)
> 	at org.xerial.snappy.Snappy.rawUncompress(Snappy.java:444)
> 	at org.xerial.snappy.Snappy.uncompress(Snappy.java:480)
> 	at org.xerial.snappy.SnappyInputStream.readFully(SnappyInputStream.java:135)
> 	at org.xerial.snappy.SnappyInputStream.readHeader(SnappyInputStream.java:92)
> 	at org.xerial.snappy.SnappyInputStream.<init>(SnappyInputStream.java:58)
> 	at org.apache.spark.io.SnappyCompressionCodec.compressedInputStream(CompressionCodec.scala:159)
> 	at org.apache.spark.storage.BlockManager.wrapForCompression(BlockManager.scala:1179)
> 	at org.apache.spark.shuffle.hash.HashShuffleReader$$anonfun$3.apply(HashShuffleReader.scala:53)
> 	at org.apache.spark.shuffle.hash.HashShuffleReader$$anonfun$3.apply(HashShuffleReader.scala:52)
> 	at scala.collection.Iterator$$anon$11.next(Iterator.scala:328)
> 	at scala.collection.Iterator$$anon$13.hasNext(Iterator.scala:371)
> 	at scala.collection.Iterator$$anon$11.hasNext(Iterator.scala:327)
> 	at org.apache.spark.util.CompletionIterator.hasNext(CompletionIterator.scala:32)
> 	at org.apache.spark.InterruptibleIterator.hasNext(InterruptibleIterator.scala:39)
> 	at scala.collection.Iterator$$anon$13.hasNext(Iterator.scala:371)
> 	at scala.collection.Iterator$$anon$11.hasNext(Iterator.scala:327)
> 	at org.apache.spark.util.collection.ExternalSorter.insertAll(ExternalSorter.scala:217)
> 	at org.apache.spark.shuffle.sort.SortShuffleWriter.write(SortShuffleWriter.scala:73)
> 	at org.apache.spark.scheduler.ShuffleMapTask.runTask(ShuffleMapTask.scala:73)
> 	at org.apache.spark.scheduler.ShuffleMapTask.runTask(ShuffleMapTask.scala:41)
> 	at org.apache.spark.scheduler.Task.run(Task.scala:88)
> 	at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:214)
> 	at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
> 	at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
> 	at java.lang.Thread.run(Thread.java:745)
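
For readers hitting the same symptom: FAILED_TO_UNCOMPRESS(5) means a shuffle block's Snappy stream could not be decoded, which is typically a corrupt or truncated fetched block rather than a bug in Snappy itself. Since this ticket was closed as a duplicate without a recorded fix, the following is only a commonly reported workaround, not the confirmed resolution: switch the block/shuffle compression codec away from the default snappy via the real Spark setting spark.io.compression.codec, e.g. in spark-defaults.conf:

```
# spark-defaults.conf -- sketch of a workaround, not a confirmed fix for SPARK-12418
# Replace the default snappy codec with lz4 for shuffle/block compression
spark.io.compression.codec    lz4
```

The same setting can be passed per application with `--conf spark.io.compression.codec=lz4` on spark-submit. If the error persists with a different codec, the corruption is upstream of decompression (e.g. a failing fetch or disk), and the codec change will only alter the error message.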



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

---------------------------------------------------------------------
To unsubscribe, e-mail: issues-unsubscribe@spark.apache.org
For additional commands, e-mail: issues-help@spark.apache.org