Posted to issues@spark.apache.org by "Hyukjin Kwon (JIRA)" <ji...@apache.org> on 2019/05/21 04:16:32 UTC

[jira] [Resolved] (SPARK-23764) Utils.tryWithSafeFinally swallows fatal exceptions in the finally block

     [ https://issues.apache.org/jira/browse/SPARK-23764?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Hyukjin Kwon resolved SPARK-23764.
----------------------------------
    Resolution: Incomplete

> Utils.tryWithSafeFinally swallows fatal exceptions in the finally block
> -----------------------------------------------------------------------
>
>                 Key: SPARK-23764
>                 URL: https://issues.apache.org/jira/browse/SPARK-23764
>             Project: Spark
>          Issue Type: Bug
>          Components: Scheduler
>    Affects Versions: 2.2.0
>            Reporter: Yavgeni Hotimsky
>            Priority: Major
>              Labels: bulk-closed
>
> This is from my driver stdout:
> {noformat}
> [dag-scheduler-event-loop] WARN  org.apache.spark.util.Utils - Suppressing exception in finally: Java heap space
> java.lang.OutOfMemoryError: Java heap space
>        at java.nio.HeapByteBuffer.<init>(HeapByteBuffer.java:57)
>        at java.nio.ByteBuffer.allocate(ByteBuffer.java:335)
>        at org.apache.spark.broadcast.TorrentBroadcast$$anonfun$3.apply(TorrentBroadcast.scala:271)
>        at org.apache.spark.broadcast.TorrentBroadcast$$anonfun$3.apply(TorrentBroadcast.scala:271)
>        at org.apache.spark.util.io.ChunkedByteBufferOutputStream.allocateNewChunkIfNeeded(ChunkedByteBufferOutputStream.scala:87)
>        at org.apache.spark.util.io.ChunkedByteBufferOutputStream.write(ChunkedByteBufferOutputStream.scala:75)
>        at net.jpountz.lz4.LZ4BlockOutputStream.flushBufferedData(LZ4BlockOutputStream.java:205)
>        at net.jpountz.lz4.LZ4BlockOutputStream.write(LZ4BlockOutputStream.java:158)
>        at java.io.ObjectOutputStream$BlockDataOutputStream.drain(ObjectOutputStream.java:1877)
>        at java.io.ObjectOutputStream$BlockDataOutputStream.flush(ObjectOutputStream.java:1822)
>        at java.io.ObjectOutputStream.flush(ObjectOutputStream.java:719)
>        at java.io.ObjectOutputStream.close(ObjectOutputStream.java:740)
>        at org.apache.spark.serializer.JavaSerializationStream.close(JavaSerializer.scala:57)
>        at org.apache.spark.broadcast.TorrentBroadcast$$anonfun$blockifyObject$1.apply$mcV$sp(TorrentBroadcast.scala:278)
>        at org.apache.spark.util.Utils$.tryWithSafeFinally(Utils.scala:1346)
>        at org.apache.spark.broadcast.TorrentBroadcast$.blockifyObject(TorrentBroadcast.scala:277)
>        at org.apache.spark.broadcast.TorrentBroadcast.writeBlocks(TorrentBroadcast.scala:126)
>        at org.apache.spark.broadcast.TorrentBroadcast.<init>(TorrentBroadcast.scala:88)
>        at org.apache.spark.broadcast.TorrentBroadcastFactory.newBroadcast(TorrentBroadcastFactory.scala:34)
>        at org.apache.spark.broadcast.BroadcastManager.newBroadcast(BroadcastManager.scala:56)
>        at org.apache.spark.SparkContext.broadcast(SparkContext.scala:1488)
>        at org.apache.spark.scheduler.DAGScheduler.submitMissingTasks(DAGScheduler.scala:1006)
>        at org.apache.spark.scheduler.DAGScheduler.org$apache$spark$scheduler$DAGScheduler$$submitStage(DAGScheduler.scala:930)
>        at org.apache.spark.scheduler.DAGScheduler$$anonfun$org$apache$spark$scheduler$DAGScheduler$$submitStage$4.apply(DAGScheduler.scala:933)
>        at org.apache.spark.scheduler.DAGScheduler$$anonfun$org$apache$spark$scheduler$DAGScheduler$$submitStage$4.apply(DAGScheduler.scala:932)
>        at scala.collection.immutable.List.foreach(List.scala:381)
>        at org.apache.spark.scheduler.DAGScheduler.org$apache$spark$scheduler$DAGScheduler$$submitStage(DAGScheduler.scala:932)
>        at org.apache.spark.scheduler.DAGScheduler.handleJobSubmitted(DAGScheduler.scala:874)
>        at org.apache.spark.scheduler.DAGSchedulerEventProcessLoop.doOnReceive(DAGScheduler.scala:1695)
>        at org.apache.spark.scheduler.DAGSchedulerEventProcessLoop.onReceive(DAGScheduler.scala:1687)
>        at org.apache.spark.scheduler.DAGSchedulerEventProcessLoop.onReceive(DAGScheduler.scala:1676)
>        at org.apache.spark.util.EventLoop$$anon$1.run(EventLoop.scala:48)
> {noformat}
> After this, the driver stayed up, but all of my streaming queries stopped triggering. I would of course expect the driver to terminate in this case.
>  
> This utility should not suppress fatal exceptions thrown in the finally block. The fix is as simple as replacing the catch-all Throwable pattern with NonFatal(t) in the finally block's exception handler, so that fatal errors such as OutOfMemoryError propagate. The companion utility tryWithSafeFinallyAndFailureCallbacks should behave the same way.
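The proposed fix can be sketched as follows. This is a hypothetical illustration of the pattern, not the actual Spark source: the method name mirrors Utils.tryWithSafeFinally, but the body here is simplified. The key point is that the finally block's handler matches only NonFatal exceptions, so fatal errors like OutOfMemoryError are no longer swallowed.

```scala
import scala.util.control.NonFatal

// Hypothetical sketch of the suggested fix (not the real Spark code).
// Runs `block`, then `finallyBlock`; if both throw, the finally-block
// exception is attached as suppressed rather than masking the original.
def tryWithSafeFinally[T](block: => T)(finallyBlock: => Unit): T = {
  var originalThrowable: Throwable = null
  try {
    block
  } catch {
    case t: Throwable =>
      originalThrowable = t
      throw t
  } finally {
    try {
      finallyBlock
    } catch {
      // Before the fix this pattern was `case t: Throwable`, which also
      // swallowed fatal errors such as OutOfMemoryError. With NonFatal,
      // fatal errors thrown by the finally block now propagate.
      case NonFatal(t) if originalThrowable != null =>
        originalThrowable.addSuppressed(t)
    }
  }
}
```

Note the guard: a non-fatal exception from the finally block is suppressed only when the main block already failed; if the main block succeeded, the finally-block exception propagates as usual.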



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

---------------------------------------------------------------------
To unsubscribe, e-mail: issues-unsubscribe@spark.apache.org
For additional commands, e-mail: issues-help@spark.apache.org