Posted to issues@spark.apache.org by "Lior Chaga (JIRA)" <ji...@apache.org> on 2015/08/10 16:06:46 UTC

[jira] [Commented] (SPARK-4300) Race condition during SparkWorker shutdown

    [ https://issues.apache.org/jira/browse/SPARK-4300?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14680154#comment-14680154 ] 

Lior Chaga commented on SPARK-4300:
-----------------------------------

Also exists in Spark 1.4:

{panel}
12:31:10.821 [File appending thread for /var/lib/spark/data/disk1/app-20150809122638-0000/13/stdout] ERROR org.apache.spark.util.logging.FileAppender - Error writing stream to file /var/lib/spark/data/disk1/app-20150809122638-0000/13/stdout
java.io.IOException: Stream closed
        at java.io.BufferedInputStream.getBufIfOpen(BufferedInputStream.java:145) ~[?:1.6.0_41]
        at java.io.BufferedInputStream.read1(BufferedInputStream.java:255) ~[?:1.6.0_41]
        at java.io.BufferedInputStream.read(BufferedInputStream.java:317) ~[?:1.6.0_41]
        at java.io.FilterInputStream.read(FilterInputStream.java:90) ~[?:1.6.0_41]
        at org.apache.spark.util.logging.FileAppender.appendStreamToFile(FileAppender.scala:70) [spark-assembly-1.4.0-hadoop2.2.0.jar:1.4.0]
        at org.apache.spark.util.logging.FileAppender$$anon$1$$anonfun$run$1.apply$mcV$sp(FileAppender.scala:39) [spark-assembly-1.4.0-hadoop2.2.0.jar:1.4.0]
        at org.apache.spark.util.logging.FileAppender$$anon$1$$anonfun$run$1.apply(FileAppender.scala:39) [spark-assembly-1.4.0-hadoop2.2.0.jar:1.4.0]
        at org.apache.spark.util.logging.FileAppender$$anon$1$$anonfun$run$1.apply(FileAppender.scala:39) [spark-assembly-1.4.0-hadoop2.2.0.jar:1.4.0]
        at org.apache.spark.util.Utils$.logUncaughtExceptions(Utils.scala:1772) [spark-assembly-1.4.0-hadoop2.2.0.jar:1.4.0]
        at org.apache.spark.util.logging.FileAppender$$anon$1.run(FileAppender.scala:38) [spark-assembly-1.4.0-hadoop2.2.0.jar:1.4.0]
{panel}
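
For reference, the file appending thread in that trace is essentially doing the following (a minimal, self-contained sketch with illustrative names, not the actual FileAppender source): it blocks on read() of the executor's stdout and copies the bytes to the log file, so when the worker destroys the process during shutdown the stream is closed underneath the reader and, depending on the JVM, the blocked read() fails with exactly this "Stream closed" IOException.

{code}
// Minimal stand-alone model of the failure mode (illustrative only, not Spark code):
// a thread copies a child process's stdout to a file; the process is destroyed
// underneath it, and the blocked read() surfaces as an IOException.
import java.io.{BufferedInputStream, File, FileOutputStream, IOException}

object StreamClosedRepro {
  def main(args: Array[String]): Unit = {
    // Any long-running child will do; "sleep 60" is just an example command.
    val process = new ProcessBuilder("sleep", "60").start()
    val in  = new BufferedInputStream(process.getInputStream)
    val out = new FileOutputStream(new File("stdout-copy.log"))

    val appender = new Thread("file-appending-thread") {
      override def run(): Unit = {
        val buf = new Array[Byte](8192)
        try {
          var n = 0
          while (n != -1) {
            n = in.read(buf)        // blocks until the child writes or the stream dies
            if (n > 0) out.write(buf, 0, n)
          }
        } catch {
          case e: IOException =>
            // Mirrors the ERROR line in the worker log above; on some JVMs the
            // read simply returns -1 instead of throwing.
            System.err.println(s"Error writing stream to file: $e")
        } finally {
          out.close()
        }
      }
    }
    appender.start()

    Thread.sleep(500)
    process.destroy()               // closes the child's stdout under the reader
    appender.join()
  }
}
{code}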

And later on I see:
{panel}
12:22:30.861 [sparkWorker-akka.actor.default-dispatcher-2] ERROR akka.actor.ActorSystemImpl - Uncaught fatal error from thread [sparkWorker-akka.remote.default-remote-dispatcher-5] shutting down ActorSystem [sparkWorker]
java.lang.OutOfMemoryError: GC overhead limit exceeded
        at org.spark_project.protobuf.ByteString.copyFrom(ByteString.java:192) ~[spark-assembly-1.4.0-hadoop2.2.0.jar:1.4.0]
        at org.spark_project.protobuf.ByteString.copyFrom(ByteString.java:204) ~[spark-assembly-1.4.0-hadoop2.2.0.jar:1.4.0]
        at akka.remote.serialization.MessageContainerSerializer.serializeSelection(MessageContainerSerializer.scala:36) ~[spark-assembly-1.4.0-hadoop2.2.0.jar:1.4.0]
        at akka.remote.serialization.MessageContainerSerializer.toBinary(MessageContainerSerializer.scala:25) ~[spark-assembly-1.4.0-hadoop2.2.0.jar:1.4.0]
        at akka.remote.MessageSerializer$.serialize(MessageSerializer.scala:36) ~[spark-assembly-1.4.0-hadoop2.2.0.jar:1.4.0]
        at akka.remote.EndpointWriter$$anonfun$serializeMessage$1.apply(Endpoint.scala:845) ~[spark-assembly-1.4.0-hadoop2.2.0.jar:1.4.0]
        at akka.remote.EndpointWriter$$anonfun$serializeMessage$1.apply(Endpoint.scala:845) ~[spark-assembly-1.4.0-hadoop2.2.0.jar:1.4.0]
        at scala.util.DynamicVariable.withValue(DynamicVariable.scala:57) ~[spark-assembly-1.4.0-hadoop2.2.0.jar:1.4.0]
        at akka.remote.EndpointWriter.serializeMessage(Endpoint.scala:844) ~[spark-assembly-1.4.0-hadoop2.2.0.jar:1.4.0]
        at akka.remote.EndpointWriter.writeSend(Endpoint.scala:747) ~[spark-assembly-1.4.0-hadoop2.2.0.jar:1.4.0]
        at akka.remote.EndpointWriter$$anonfun$2.applyOrElse(Endpoint.scala:722) ~[spark-assembly-1.4.0-hadoop2.2.0.jar:1.4.0]
        at akka.actor.Actor$class.aroundReceive(Actor.scala:465) ~[spark-assembly-1.4.0-hadoop2.2.0.jar:1.4.0]
        at akka.remote.EndpointActor.aroundReceive(Endpoint.scala:415) ~[spark-assembly-1.4.0-hadoop2.2.0.jar:1.4.0]
        at akka.actor.ActorCell.receiveMessage(ActorCell.scala:516) [spark-assembly-1.4.0-hadoop2.2.0.jar:1.4.0]
        at akka.actor.ActorCell.invoke(ActorCell.scala:487) [spark-assembly-1.4.0-hadoop2.2.0.jar:1.4.0]
        at akka.dispatch.Mailbox.processMailbox(Mailbox.scala:238) [spark-assembly-1.4.0-hadoop2.2.0.jar:1.4.0]
        at akka.dispatch.Mailbox.run(Mailbox.scala:220) [spark-assembly-1.4.0-hadoop2.2.0.jar:1.4.0]
        at akka.dispatch.ForkJoinExecutorConfigurator$AkkaForkJoinTask.exec(AbstractDispatcher.scala:393) [spark-assembly-1.4.0-hadoop2.2.0.jar:1.4.0]
        at scala.concurrent.forkjoin.ForkJoinTask.doExec(ForkJoinTask.java:260) [spark-assembly-1.4.0-hadoop2.2.0.jar:1.4.0]
        at scala.concurrent.forkjoin.ForkJoinPool$WorkQueue.runTask(ForkJoinPool.java:1339) [spark-assembly-1.4.0-hadoop2.2.0.jar:1.4.0]
        at scala.concurrent.forkjoin.ForkJoinPool.runWorker(ForkJoinPool.java:1979) [spark-assembly-1.4.0-hadoop2.2.0.jar:1.4.0]
        at scala.concurrent.forkjoin.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:107) [spark-assembly-1.4.0-hadoop2.2.0.jar:1.4.0]
Exception in thread "qtp1853216600-31" java.lang.OutOfMemoryError: GC overhead limit exceeded
        at org.spark-project.jetty.io.nio.SelectorManager$SelectSet.doSelect(SelectorManager.java:708)
        at org.spark-project.jetty.io.nio.SelectorManager$1.run(SelectorManager.java:290)
        at org.spark-project.jetty.util.thread.QueuedThreadPool.runJob(QueuedThreadPool.java:608)
        at org.spark-project.jetty.util.thread.QueuedThreadPool$3.run(QueuedThreadPool.java:543)
        at java.lang.Thread.run(Thread.java:662)
Exception in thread "qtp1853216600-37" java.lang.OutOfMemoryError: GC overhead limit exceeded
{panel}

In the application log I see:
{panel}
2015-08-10 12:41:33,761 WARN  [task-result-getter-0] TaskSetManager - Lost task 165.2 in stage 207.7 (TID 141815, 10.10.0.83): java.io.IOException: Failed to connect to /10.10.0.67:42846
        at org.apache.spark.network.client.TransportClientFactory.createClient(TransportClientFactory.java:193)
        at org.apache.spark.network.client.TransportClientFactory.createClient(TransportClientFactory.java:156)
        at org.apache.spark.network.netty.NettyBlockTransferService$$anon$1.createAndStart(NettyBlockTransferService.scala:88)
        at org.apache.spark.network.shuffle.RetryingBlockFetcher.fetchAllOutstanding(RetryingBlockFetcher.java:140)
        at org.apache.spark.network.shuffle.RetryingBlockFetcher.access$200(RetryingBlockFetcher.java:43)
        at org.apache.spark.network.shuffle.RetryingBlockFetcher$1.run(RetryingBlockFetcher.java:170)
        at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:439)
        at java.util.concurrent.FutureTask$Sync.innerRun(FutureTask.java:303)
        at java.util.concurrent.FutureTask.run(FutureTask.java:138)
        at java.util.concurrent.ThreadPoolExecutor$Worker.runTask(ThreadPoolExecutor.java:895)
        at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:918)
        at java.lang.Thread.run(Thread.java:662)
{panel}

> Race condition during SparkWorker shutdown
> ------------------------------------------
>
>                 Key: SPARK-4300
>                 URL: https://issues.apache.org/jira/browse/SPARK-4300
>             Project: Spark
>          Issue Type: Bug
>          Components: Spark Shell
>    Affects Versions: 1.1.0
>            Reporter: Alex Liu
>            Assignee: Sean Owen
>            Priority: Minor
>             Fix For: 1.2.2, 1.3.1, 1.4.0
>
>
> When a Shark job is done, the following error messages show up in the log:
> {code}
> INFO 22:10:41,635 SparkMaster: akka.tcp://sparkDriver@ip-172-31-11-204.us-west-1.compute.internal:57641 got disassociated, removing it.
>  INFO 22:10:41,640 SparkMaster: Removing app app-20141106221014-0000
>  INFO 22:10:41,687 SparkMaster: Removing application Shark::ip-172-31-11-204.us-west-1.compute.internal
>  INFO 22:10:41,710 SparkWorker: Asked to kill executor app-20141106221014-0000/0
>  INFO 22:10:41,712 SparkWorker: Runner thread for executor app-20141106221014-0000/0 interrupted
>  INFO 22:10:41,714 SparkWorker: Killing process!
> ERROR 22:10:41,738 SparkWorker: Error writing stream to file /var/lib/spark/work/app-20141106221014-0000/0/stdout
> ERROR 22:10:41,739 SparkWorker: java.io.IOException: Stream closed
> ERROR 22:10:41,739 SparkWorker: 	at java.io.BufferedInputStream.getBufIfOpen(BufferedInputStream.java:162)
> ERROR 22:10:41,740 SparkWorker: 	at java.io.BufferedInputStream.read1(BufferedInputStream.java:272)
> ERROR 22:10:41,740 SparkWorker: 	at java.io.BufferedInputStream.read(BufferedInputStream.java:334)
> ERROR 22:10:41,740 SparkWorker: 	at java.io.FilterInputStream.read(FilterInputStream.java:107)
> ERROR 22:10:41,741 SparkWorker: 	at org.apache.spark.util.logging.FileAppender.appendStreamToFile(FileAppender.scala:70)
> ERROR 22:10:41,741 SparkWorker: 	at org.apache.spark.util.logging.FileAppender$$anon$1$$anonfun$run$1.apply$mcV$sp(FileAppender.scala:39)
> ERROR 22:10:41,741 SparkWorker: 	at org.apache.spark.util.logging.FileAppender$$anon$1$$anonfun$run$1.apply(FileAppender.scala:39)
> ERROR 22:10:41,742 SparkWorker: 	at org.apache.spark.util.logging.FileAppender$$anon$1$$anonfun$run$1.apply(FileAppender.scala:39)
> ERROR 22:10:41,742 SparkWorker: 	at org.apache.spark.util.Utils$.logUncaughtExceptions(Utils.scala:1311)
> ERROR 22:10:41,742 SparkWorker: 	at org.apache.spark.util.logging.FileAppender$$anon$1.run(FileAppender.scala:38)
>  INFO 22:10:41,838 SparkMaster: Connected to Cassandra cluster: 4299
>  INFO 22:10:41,839 SparkMaster: Adding host 172.31.11.204 (Analytics)
>  INFO 22:10:41,840 SparkMaster: New Cassandra host /172.31.11.204:9042 added
>  INFO 22:10:41,841 SparkMaster: Adding host 172.31.11.204 (Analytics)
>  INFO 22:10:41,842 SparkMaster: Adding host 172.31.11.204 (Analytics)
>  INFO 22:10:41,852 SparkMaster: akka.tcp://sparkDriver@ip-172-31-11-204.us-west-1.compute.internal:57641 got disassociated, removing it.
>  INFO 22:10:41,853 SparkMaster: akka.tcp://sparkDriver@ip-172-31-11-204.us-west-1.compute.internal:57641 got disassociated, removing it.
>  INFO 22:10:41,853 SparkMaster: akka.tcp://sparkDriver@ip-172-31-11-204.us-west-1.compute.internal:57641 got disassociated, removing it.
>  INFO 22:10:41,857 SparkMaster: akka.tcp://sparkDriver@ip-172-31-11-204.us-west-1.compute.internal:57641 got disassociated, removing it.
>  INFO 22:10:41,862 SparkMaster: Adding host 172.31.11.204 (Analytics)
>  WARN 22:10:42,200 SparkMaster: Got status update for unknown executor app-20141106221014-0000/0
>  INFO 22:10:42,211 SparkWorker: Executor app-20141106221014-0000/0 finished with state KILLED exitStatus 143
> {code}
> /var/lib/spark/work/app-20141106221014-0000/0/stdout is on the disk. The appender is trying to write to a closed IO stream.
> The Spark worker shuts down the executor with the following code:
> {code}
>  private def killProcess(message: Option[String]) {
>     var exitCode: Option[Int] = None
>     logInfo("Killing process!")
>     process.destroy()
>     process.waitFor()
>     if (stdoutAppender != null) {
>       stdoutAppender.stop()
>     }
>     if (stderrAppender != null) {
>       stderrAppender.stop()
>     }
>     if (process != null) {
>       exitCode = Some(process.waitFor())
>     }
>     worker ! ExecutorStateChanged(appId, execId, state, message, exitCode)
>  }
> {code}
> But the stdoutAppender thread concurrently writes to the executor's stdout log file, which creates a race condition.
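
One way to make that shutdown sequence benign is sketched below. This is illustrative only, with made-up names, and is not necessarily the change that went into the Fix Versions above: the worker marks the appender as stopping before destroying the process, and the appender then treats an IOException from the dying stream as expected rather than logging it as an error.

{code}
// Sketch of a race-tolerant shutdown ordering (illustrative names, not Spark's API).
import java.io.{IOException, InputStream}

object ShutdownSketch {

  class ShutdownAwareAppender(in: InputStream, appendChunk: (Array[Byte], Int) => Unit) {
    @volatile private var markedForStop = false

    private val thread = new Thread("appender-sketch") {
      override def run(): Unit = {
        val buf = new Array[Byte](8192)
        try {
          var n = 0
          while (!markedForStop && n != -1) {
            n = in.read(buf)
            if (n > 0) appendChunk(buf, n)
          }
        } catch {
          // Once a shutdown is in progress, a failed read on the dying stream
          // is expected and not worth an ERROR in the worker log.
          case _: IOException if markedForStop => ()
          case e: IOException => System.err.println(s"Error writing stream: $e")
        }
      }
    }
    thread.setDaemon(true)
    thread.start()

    def markForStop(): Unit = { markedForStop = true }
    def awaitTermination(): Unit = thread.join()
  }

  // Worker-side ordering: mark the appenders first, then destroy the process
  // (which closes its stdout/stderr and unblocks any pending read), then wait
  // for the appender threads to drain before reporting the exit code.
  def killProcessSketch(process: Process, appenders: Seq[ShutdownAwareAppender]): Int = {
    appenders.foreach(_.markForStop())
    process.destroy()
    val exitCode = process.waitFor()
    appenders.foreach(_.awaitTermination())
    exitCode
  }
}
{code}

The key point is only the ordering: as long as the appender knows a stop is in progress before the process (and therefore its stdout) goes away, the "Stream closed" IOException stops being reported as an error.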



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)
