Posted to issues@spark.apache.org by "Mike Chan (JIRA)" <ji...@apache.org> on 2019/04/18 03:57:00 UTC

[jira] [Commented] (SPARK-25422) flaky test: org.apache.spark.DistributedSuite.caching on disk, replicated (encryption = on) (with replication as stream)

    [ https://issues.apache.org/jira/browse/SPARK-25422?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16820707#comment-16820707 ] 

Mike Chan commented on SPARK-25422:
-----------------------------------

Could this problem be hitting Spark 2.3.1 as well? I have a new cluster on that version and I consistently hit the "corrupt remote block" error whenever one specific table is involved.
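For anyone triaging a similar failure, here is a minimal sketch of the session setup I would use to narrow it down. The two properties are standard Spark configs (spark.broadcast.checksum defaults to true, and the flaky test in this ticket runs with I/O encryption on); treating them as diagnostic knobs here is my assumption, not a confirmed fix.

{code:scala}
import org.apache.spark.sql.SparkSession

// Diagnostic sketch: check whether the failure tracks the broadcast
// checksum verification and/or the encrypted replication path.
val spark = SparkSession.builder()
  .appName("broadcast-corruption-repro")
  // Per-piece checksumming of TorrentBroadcast blocks (default: true).
  // Disabling it only hides the symptom, but toggling it helps confirm
  // that the corruption is detected at broadcast-fetch time.
  .config("spark.broadcast.checksum", "true")
  // The flaky test title says "encryption = on"; toggling this shows
  // whether the encrypted path is part of the repro.
  .config("spark.io.encryption.enabled", "true")
  .getOrCreate()
{code}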

> flaky test: org.apache.spark.DistributedSuite.caching on disk, replicated (encryption = on) (with replication as stream)
> ------------------------------------------------------------------------------------------------------------------------
>
>                 Key: SPARK-25422
>                 URL: https://issues.apache.org/jira/browse/SPARK-25422
>             Project: Spark
>          Issue Type: Test
>          Components: Spark Core
>    Affects Versions: 2.4.0
>            Reporter: Wenchen Fan
>            Assignee: Imran Rashid
>            Priority: Major
>             Fix For: 2.4.0
>
>
> stacktrace
> {code}
>  org.apache.spark.SparkException: Job aborted due to stage failure: Task 0 in stage 0.0 failed 4 times, most recent failure: Lost task 0.3 in stage 0.0 (TID 7, localhost, executor 1): java.io.IOException: org.apache.spark.SparkException: corrupt remote block broadcast_0_piece0 of broadcast_0: 1651574976 != 1165629262
> 	at org.apache.spark.util.Utils$.tryOrIOException(Utils.scala:1320)
> 	at org.apache.spark.broadcast.TorrentBroadcast.readBroadcastBlock(TorrentBroadcast.scala:207)
> 	at org.apache.spark.broadcast.TorrentBroadcast._value$lzycompute(TorrentBroadcast.scala:66)
> 	at org.apache.spark.broadcast.TorrentBroadcast._value(TorrentBroadcast.scala:66)
> 	at org.apache.spark.broadcast.TorrentBroadcast.getValue(TorrentBroadcast.scala:96)
> 	at org.apache.spark.broadcast.Broadcast.value(Broadcast.scala:70)
> 	at org.apache.spark.scheduler.ResultTask.runTask(ResultTask.scala:84)
> 	at org.apache.spark.scheduler.Task.run(Task.scala:121)
> 	at org.apache.spark.executor.Executor$TaskRunner$$anonfun$7.apply(Executor.scala:367)
> 	at org.apache.spark.util.Utils$.tryWithSafeFinally(Utils.scala:1347)
> 	at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:373)
> 	at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
> 	at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
> 	at java.lang.Thread.run(Thread.java:745)
> Caused by: org.apache.spark.SparkException: corrupt remote block broadcast_0_piece0 of broadcast_0: 1651574976 != 1165629262
> 	at org.apache.spark.broadcast.TorrentBroadcast$$anonfun$org$apache$spark$broadcast$TorrentBroadcast$$readBlocks$1.apply$mcVI$sp(TorrentBroadcast.scala:167)
> 	at org.apache.spark.broadcast.TorrentBroadcast$$anonfun$org$apache$spark$broadcast$TorrentBroadcast$$readBlocks$1.apply(TorrentBroadcast.scala:151)
> 	at org.apache.spark.broadcast.TorrentBroadcast$$anonfun$org$apache$spark$broadcast$TorrentBroadcast$$readBlocks$1.apply(TorrentBroadcast.scala:151)
> 	at scala.collection.immutable.List.foreach(List.scala:392)
> 	at org.apache.spark.broadcast.TorrentBroadcast.org$apache$spark$broadcast$TorrentBroadcast$$readBlocks(TorrentBroadcast.scala:151)
> 	at org.apache.spark.broadcast.TorrentBroadcast$$anonfun$readBroadcastBlock$1$$anonfun$apply$2.apply(TorrentBroadcast.scala:231)
> 	at scala.Option.getOrElse(Option.scala:121)
> 	at org.apache.spark.broadcast.TorrentBroadcast$$anonfun$readBroadcastBlock$1.apply(TorrentBroadcast.scala:211)
> 	at org.apache.spark.util.Utils$.tryOrIOException(Utils.scala:1313)
> 	... 13 more
> {code}
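For context, the two numbers in the message ("1651574976 != 1165629262") are a stored checksum versus one recomputed for the fetched piece; TorrentBroadcast keeps an Adler32 checksum per broadcast piece and fails the fetch when they disagree. A rough illustration of that comparison follows; the helper names (checksum, verifyPiece) are mine, not Spark's actual API.

{code:scala}
import java.nio.ByteBuffer
import java.util.zip.Adler32
import org.apache.spark.SparkException

// Illustrative sketch of the per-piece check behind the error above.
def checksum(block: ByteBuffer): Int = {
  val adler = new Adler32()
  adler.update(block.duplicate()) // duplicate() leaves the buffer position untouched
  adler.getValue.toInt
}

// Hypothetical helper: compare the recomputed checksum of a fetched piece
// against the one recorded when the broadcast was written.
def verifyPiece(pieceId: String, piece: ByteBuffer, expected: Int): Unit = {
  val actual = checksum(piece)
  if (actual != expected) {
    // Same shape as the stacktrace message:
    // "corrupt remote block <piece> of <broadcast>: <actual> != <expected>"
    throw new SparkException(s"corrupt remote block $pieceId: $actual != $expected")
  }
}
{code}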



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

---------------------------------------------------------------------
To unsubscribe, e-mail: issues-unsubscribe@spark.apache.org
For additional commands, e-mail: issues-help@spark.apache.org