Posted to user@spark.apache.org by Ameen Akel <am...@gmail.com> on 2016/07/24 17:00:47 UTC

Spark 2.0.0 RC 5 -- java.lang.AssertionError: assertion failed: Block rdd_[*] is not locked for reading

Hello,

I'm working with Spark 2.0.0-rc5 on Mesos (v0.28.2) on a job that uses ~600
cores. Every so often, depending on the task I've run, I'll lose an
executor to an assertion failure. Here's an example error:
java.lang.AssertionError: assertion failed: Block rdd_2659_0 is not locked for reading

I've pasted the rest of the relevant failure from that node at the bottom
of this email. As far as I can tell, this occurs when I apply a series of
transformations from RDD to DataFrame but don't follow them with an action
in rapid succession.
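
In case it helps, the rough shape of the code is something like the
following. This is a simplified sketch with made-up names and data, not my
actual job:

from pyspark.sql import SparkSession

spark = SparkSession.builder.appName("lock-assertion-sketch").getOrCreate()
sc = spark.sparkContext

# Cache an RDD, then build a DataFrame from it through a few transformations.
rdd = sc.parallelize(range(1000000)).map(lambda x: (x, x * 2)).cache()
df = spark.createDataFrame(rdd, ["key", "value"]).filter("value % 3 = 0")

# No action runs here for a while; other driver-side work happens instead.

# Only later does an action touch the cached blocks, and every so often an
# executor dies with the "not locked for reading" assertion around this point.
df.count()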
I'm not sure whether this is a bug or something is wrong with my Spark
configuration. Has anyone encountered this error before?

Thanks!
Ameen

---

16/07/24 09:08:25 INFO MemoryStore: Block rdd_2659_0 stored as values in memory (estimated size 1269.8 KB, free 319.8 GB)
16/07/24 09:08:25 INFO CodeGenerator: Code generated in 10.279499 ms
16/07/24 09:08:25 INFO PythonRunner: Times: total = 31, boot = -14876, init = 14906, finish = 1
16/07/24 09:08:25 WARN Executor: 1 block locks were not released by TID = 94279:
[rdd_2659_0]
16/07/24 09:08:25 ERROR Utils: Uncaught exception in thread stdout writer for /usr/bin/python
java.lang.AssertionError: assertion failed: Block rdd_2659_0 is not locked for reading
    at scala.Predef$.assert(Predef.scala:170)
    at org.apache.spark.storage.BlockInfoManager.unlock(BlockInfoManager.scala:294)
    at org.apache.spark.storage.BlockManager.releaseLock(BlockManager.scala:628)
    at org.apache.spark.storage.BlockManager$$anonfun$1.apply$mcV$sp(BlockManager.scala:435)
    at org.apache.spark.util.CompletionIterator$$anon$1.completion(CompletionIterator.scala:46)
    at org.apache.spark.util.CompletionIterator.hasNext(CompletionIterator.scala:35)
    at org.apache.spark.InterruptibleIterator.hasNext(InterruptibleIterator.scala:39)
    at scala.collection.Iterator$$anon$13.hasNext(Iterator.scala:461)
    at scala.collection.Iterator$$anon$11.hasNext(Iterator.scala:408)
    at org.apache.spark.sql.catalyst.expressions.GeneratedClass$SpecificColumnarIterator.hasNext(Unknown Source)
    at scala.collection.Iterator$$anon$11.hasNext(Iterator.scala:408)
    at org.apache.spark.api.python.SerDeUtil$AutoBatchedPickler.next(SerDeUtil.scala:120)
    at org.apache.spark.api.python.SerDeUtil$AutoBatchedPickler.next(SerDeUtil.scala:112)
    at scala.collection.Iterator$class.foreach(Iterator.scala:893)
    at org.apache.spark.api.python.SerDeUtil$AutoBatchedPickler.foreach(SerDeUtil.scala:112)
    at org.apache.spark.api.python.PythonRDD$.writeIteratorToStream(PythonRDD.scala:504)
    at org.apache.spark.api.python.PythonRunner$WriterThread$$anonfun$run$3.apply(PythonRDD.scala:328)
    at org.apache.spark.util.Utils$.logUncaughtExceptions(Utils.scala:1857)
    at org.apache.spark.api.python.PythonRunner$WriterThread.run(PythonRDD.scala:269)
16/07/24 09:08:25 INFO Executor: Finished task 0.0 in stage 410.0 (TID 94279). 6414 bytes result sent to driver
16/07/24 09:08:25 ERROR SparkUncaughtExceptionHandler: Uncaught exception in thread Thread[stdout writer for /usr/bin/python,5,main]
java.lang.AssertionError: assertion failed: Block rdd_2659_0 is not locked for reading
    at scala.Predef$.assert(Predef.scala:170)
    at org.apache.spark.storage.BlockInfoManager.unlock(BlockInfoManager.scala:294)
    at org.apache.spark.storage.BlockManager.releaseLock(BlockManager.scala:628)
    at org.apache.spark.storage.BlockManager$$anonfun$1.apply$mcV$sp(BlockManager.scala:435)
    at org.apache.spark.util.CompletionIterator$$anon$1.completion(CompletionIterator.scala:46)
    at org.apache.spark.util.CompletionIterator.hasNext(CompletionIterator.scala:35)
    at org.apache.spark.InterruptibleIterator.hasNext(InterruptibleIterator.scala:39)
    at scala.collection.Iterator$$anon$13.hasNext(Iterator.scala:461)
    at scala.collection.Iterator$$anon$11.hasNext(Iterator.scala:408)
    at org.apache.spark.sql.catalyst.expressions.GeneratedClass$SpecificColumnarIterator.hasNext(Unknown Source)
    at scala.collection.Iterator$$anon$11.hasNext(Iterator.scala:408)
    at org.apache.spark.api.python.SerDeUtil$AutoBatchedPickler.next(SerDeUtil.scala:120)
    at org.apache.spark.api.python.SerDeUtil$AutoBatchedPickler.next(SerDeUtil.scala:112)
    at scala.collection.Iterator$class.foreach(Iterator.scala:893)
    at org.apache.spark.api.python.SerDeUtil$AutoBatchedPickler.foreach(SerDeUtil.scala:112)
    at org.apache.spark.api.python.PythonRDD$.writeIteratorToStream(PythonRDD.scala:504)
    at org.apache.spark.api.python.PythonRunner$WriterThread$$anonfun$run$3.apply(PythonRDD.scala:328)
    at org.apache.spark.util.Utils$.logUncaughtExceptions(Utils.scala:1857)
    at org.apache.spark.api.python.PythonRunner$WriterThread.run(PythonRDD.scala:269)
16/07/24 09:08:25 INFO DiskBlockManager: Shutdown hook called
16/07/24 09:08:25 INFO ShutdownHookManager: Shutdown hook called
16/07/24 09:08:25 INFO ShutdownHookManager: Deleting directory /var/lib/mesos/slaves/10b8e3a5-c19a-46c8-915d-68fbaa0e6ef6-S16/frameworks/f3a044b5-4e77-478c-b71f-db1361a5520b-0098/executors/9/runs/10017ffd-8976-4f0a-aafe-1f2c74191089/spark-e43514ff-ea25-4da8-8c8a-67361091e26c