Posted to issues@spark.apache.org by "Sean Owen (JIRA)" <ji...@apache.org> on 2015/06/10 21:15:00 UTC

[jira] [Updated] (SPARK-8295) SparkContext shut down in Spark Project Streaming test suite

     [ https://issues.apache.org/jira/browse/SPARK-8295?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Sean Owen updated SPARK-8295:
-----------------------------
    Priority: Minor  (was: Blocker)

Definitely not a blocker, but worth tracking down if it can be reproduced elsewhere. The thing is, these tests appear to be passing elsewhere right now on Jenkins, including for Hadoop 2.4.
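For context, the error itself is generic: when a SparkContext is stopped, the DAGScheduler cancels every in-flight job with "Job cancelled because SparkContext was shut down", and in 1.3 StreamingContext.stop() stops the underlying SparkContext by default (stopSparkContext = true). A minimal sketch of that pattern, not the suite's actual code and not a deterministic reproduction, with illustrative names:

    import scala.collection.mutable
    import org.apache.spark.{SparkConf, SparkContext}
    import org.apache.spark.streaming.{Seconds, StreamingContext}

    object StopSharedContext {
      def main(args: Array[String]): Unit = {
        val sc = new SparkContext(
          new SparkConf().setMaster("local[2]").setAppName("stop-shared-context"))
        val ssc = new StreamingContext(sc, Seconds(1))

        // queueStream gives start() an output operation to schedule on sc.
        val queue = mutable.Queue(sc.makeRDD(1 to 100, 2))
        ssc.queueStream(queue).foreachRDD(rdd => println("count = " + rdd.count()))

        ssc.start()
        Thread.sleep(1500)

        // stop() here also stops sc; any job still running at this point is
        // cancelled with the exception seen in the log below. Passing
        // stopSparkContext = false would keep the SparkContext alive instead.
        ssc.stop()
      }
    }

So if the trace is telling the truth, a streaming job from one test was presumably still running when the suite tore the context down, which would point at a timing/flakiness issue rather than a functional bug. (Re-running just the streaming module, e.g. with Maven's -pl/-am flags, should make it quicker to iterate for anyone trying to reproduce.)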

> SparkContext shut down in Spark Project Streaming test suite
> ------------------------------------------------------------
>
>                 Key: SPARK-8295
>                 URL: https://issues.apache.org/jira/browse/SPARK-8295
>             Project: Spark
>          Issue Type: Bug
>          Components: Tests
>    Affects Versions: 1.3.1
>         Environment: * Ubuntu 12.04.05 LTS (GNU/Linux 3.13.0-53-generic x86_64) running on VM
> * java version 1.8.0_45
> * Spark 1.3.1, branch 1.3
>            Reporter: Francois-Xavier Lemire
>            Priority: Minor
>
> *Command to build Spark:*
> build/mvn -Pyarn -Phadoop-2.4 -Pspark-ganglia-lgpl -Dhadoop.version=2.4.0 -DskipTests clean package
> *Command to test Spark:*
> build/mvn -Pyarn -Phadoop-2.4 -Pspark-ganglia-lgpl -Dhadoop.version=2.4.0 test
> *Error:*
> Tests fail at 'Spark Project Streaming'. 
> *Log:*
> [...]
> - awaitTermination
> - awaitTermination after stop
> - awaitTermination with error in task
> - awaitTermination with error in job generation
> - awaitTerminationOrTimeout
> Exception in thread "Thread-1053" org.apache.spark.SparkException: Job cancelled because SparkContext was shut down
>         at org.apache.spark.scheduler.DAGScheduler$$anonfun$cleanUpAfterSchedulerStop$1.apply(DAGScheduler.scala:699)
>         at org.apache.spark.scheduler.DAGScheduler$$anonfun$cleanUpAfterSchedulerStop$1.apply(DAGScheduler.scala:698)
>         at scala.collection.mutable.HashSet.foreach(HashSet.scala:79)
>         at org.apache.spark.scheduler.DAGScheduler.cleanUpAfterSchedulerStop(DAGScheduler.scala:698)
>         at org.apache.spark.scheduler.DAGSchedulerEventProcessLoop.onStop(DAGScheduler.scala:1411)
>         at org.apache.spark.util.EventLoop.stop(EventLoop.scala:84)
>         at org.apache.spark.scheduler.DAGScheduler.stop(DAGScheduler.scala:1346)
>         at org.apache.spark.SparkContext.stop(SparkContext.scala:1398)
>         at org.apache.spark.streaming.StreamingContext.stop(StreamingContext.scala:580)
>         at org.apache.spark.streaming.StreamingContext.stop(StreamingContext.scala:555)
>         at org.apache.spark.streaming.testPackage.package$.test(StreamingContextSuite.scala:437)
>         at org.apache.spark.streaming.StreamingContextSuite$$anonfun$25.apply$mcV$sp(StreamingContextSuite.scala:332)
>         at org.apache.spark.streaming.StreamingContextSuite$$anonfun$25.apply(StreamingContextSuite.scala:332)
>         at org.apache.spark.streaming.StreamingContextSuite$$anonfun$25.apply(StreamingContextSuite.scala:332)
>         at org.scalatest.Transformer$$anonfun$apply$1.apply$mcV$sp(Transformer.scala:22)
>         at org.scalatest.OutcomeOf$class.outcomeOf(OutcomeOf.scala:85)
>         at org.scalatest.OutcomeOf$.outcomeOf(OutcomeOf.scala:104)
>         at org.scalatest.Transformer.apply(Transformer.scala:22)
>         at org.scalatest.Transformer.apply(Transformer.scala:20)
>         at org.scalatest.FunSuiteLike$$anon$1.apply(FunSuiteLike.scala:166)
>         at org.apache.spark.SparkFunSuite.withFixture(SparkFunSuite.scala:42)
>         at org.scalatest.FunSuiteLike$class.invokeWithFixture$1(FunSuiteLike.scala:163)
>         at org.scalatest.FunSuiteLike$$anonfun$runTest$1.apply(FunSuiteLike.scala:175)
>         at org.scalatest.FunSuiteLike$$anonfun$runTest$1.apply(FunSuiteLike.scala:175)
>         at org.scalatest.SuperEngine.runTestImpl(Engine.scala:306)
>         at org.scalatest.FunSuiteLike$class.runTest(FunSuiteLike.scala:175)
>         at org.apache.spark.streaming.StreamingContextSuite.org$scalatest$BeforeAndAfter$$super$runTest(StreamingContextSuite.scala:35)
>         at org.scalatest.BeforeAndAfter$class.runTest(BeforeAndAfter.scala:200)
>         at org.apache.spark.streaming.StreamingContextSuite.runTest(StreamingContextSuite.scala:35)
>         at org.scalatest.FunSuiteLike$$anonfun$runTests$1.apply(FunSuiteLike.scala:208)
>         at org.scalatest.FunSuiteLike$$anonfun$runTests$1.apply(FunSuiteLike.scala:208)
>         at org.scalatest.SuperEngine$$anonfun$traverseSubNodes$1$1.apply(Engine.scala:413)
>         at org.scalatest.SuperEngine$$anonfun$traverseSubNodes$1$1.apply(Engine.scala:401)
>         at scala.collection.immutable.List.foreach(List.scala:318)
>         at org.scalatest.SuperEngine.traverseSubNodes$1(Engine.scala:401)
>         at org.scalatest.SuperEngine.org$scalatest$SuperEngine$$runTestsInBranch(Engine.scala:396)
>         at org.scalatest.SuperEngine.runTestsImpl(Engine.scala:483)
>         at org.scalatest.FunSuiteLike$class.runTests(FunSuiteLike.scala:208)
>         at org.scalatest.FunSuite.runTests(FunSuite.scala:1555)
>         at org.scalatest.Suite$class.run(Suite.scala:1424)
>         at org.scalatest.FunSuite.org$scalatest$FunSuiteLike$$super$run(FunSuite.scala:1555)
>         at org.scalatest.FunSuiteLike$$anonfun$run$1.apply(FunSuiteLike.scala:212)
>         at org.scalatest.FunSuiteLike$$anonfun$run$1.apply(FunSuiteLike.scala:212)
>         at org.scalatest.SuperEngine.runImpl(Engine.scala:545)
>         at org.scalatest.FunSuiteLike$class.run(FunSuiteLike.scala:212)
>         at org.apache.spark.streaming.StreamingContextSuite.org$scalatest$BeforeAndAfter$$super$run(StreamingContextSuite.scala:35)
>         at org.scalatest.BeforeAndAfter$class.run(BeforeAndAfter.scala:241)
>         at org.apache.spark.streaming.StreamingContextSuite.run(StreamingContextSuite.scala:35)
>         at org.scalatest.Suite$class.callExecuteOnSuite$1(Suite.scala:1492)
>         at org.scalatest.Suite$$anonfun$runNestedSuites$1.apply(Suite.scala:1528)
>         at org.scalatest.Suite$$anonfun$runNestedSuites$1.apply(Suite.scala:1526)
>         at scala.collection.IndexedSeqOptimized$class.foreach(IndexedSeqOptimized.scala:33)
> at scala.collection.mutable.ArrayOps$ofRef.foreach(ArrayOps.scala:108)
>         at org.scalatest.Suite$class.runNestedSuites(Suite.scala:1526)
>         at org.scalatest.tools.DiscoverySuite.runNestedSuites(DiscoverySuite.scala:29)
>         at org.scalatest.Suite$class.run(Suite.scala:1421)
>         at org.scalatest.tools.DiscoverySuite.run(DiscoverySuite.scala:29)
>         at org.scalatest.tools.SuiteRunner.run(SuiteRunner.scala:55)
>         at org.scalatest.tools.Runner$$anonfun$doRunRunRunDaDoRunRun$3.apply(Runner.scala:2563)
>         at org.scalatest.tools.Runner$$anonfun$doRunRunRunDaDoRunRun$3.apply(Runner.scala:2557)
>         at scala.collection.immutable.List.foreach(List.scala:318)
>         at org.scalatest.tools.Runner$.doRunRunRunDaDoRunRun(Runner.scala:2557)
>         at org.scalatest.tools.Runner$$anonfun$runOptionallyWithPassFailReporter$2.apply(Runner.scala:1044)
>         at org.scalatest.tools.Runner$$anonfun$runOptionallyWithPassFailReporter$2.apply(Runner.scala:1043)
>         at org.scalatest.tools.Runner$.withClassLoaderAndDispatchReporter(Runner.scala:2722)
>         at org.scalatest.tools.Runner$.runOptionallyWithPassFailReporter(Runner.scala:1043)
>         at org.scalatest.tools.Runner$.main(Runner.scala:860)
>         at org.scalatest.tools.Runner.main(Runner.scala)
> - DStream and generated RDD creation sites
> CheckpointSuite:
> - basic rdd checkpoints + dstream graph checkpoint recovery
> - recovery of conf through checkpoints
> - recovery with map and reduceByKey operations
> - recovery with invertible reduceByKeyAndWindow operation
> - recovery with saveAsHadoopFiles operation
> - recovery with saveAsNewAPIHadoopFiles operation
> - recovery with saveAsHadoopFile inside transform operation
> - recovery with updateStateByKey operation
> Exception in thread "pool-438-thread-1" java.lang.Error: java.lang.InterruptedException
>         at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1148)
>         at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
>         at java.lang.Thread.run(Thread.java:745)
> Caused by: java.lang.InterruptedException
>         at java.lang.Object.wait(Native Method)
>         at java.lang.Object.wait(Object.java:502)
>         at org.apache.spark.scheduler.JobWaiter.awaitResult(JobWaiter.scala:73)
>         at org.apache.spark.scheduler.DAGScheduler.runJob(DAGScheduler.scala:513)
>         at org.apache.spark.SparkContext.runJob(SparkContext.scala:1484)
>         at org.apache.spark.SparkContext.runJob(SparkContext.scala:1502)
>         at org.apache.spark.SparkContext.runJob(SparkContext.scala:1516)
>         at org.apache.spark.SparkContext.runJob(SparkContext.scala:1530)
>         at org.apache.spark.rdd.RDD.collect(RDD.scala:813)
>         at org.apache.spark.streaming.TestOutputStream$$anonfun$$init$$1.apply(TestSuiteBase.scala:74)
>         at org.apache.spark.streaming.TestOutputStream$$anonfun$$init$$1.apply(TestSuiteBase.scala:73)
>         at org.apache.spark.streaming.dstream.ForEachDStream$$anonfun$1.apply$mcV$sp(ForEachDStream.scala:42)
>         at org.apache.spark.streaming.dstream.ForEachDStream$$anonfun$1.apply(ForEachDStream.scala:40)
>         at org.apache.spark.streaming.dstream.ForEachDStream$$anonfun$1.apply(ForEachDStream.scala:40)
>         at scala.util.Try$.apply(Try.scala:161)
> at org.apache.spark.streaming.scheduler.Job.run(Job.scala:32)
>         at org.apache.spark.streaming.scheduler.JobScheduler$JobHandler$$anonfun$run$1.apply$mcV$sp(JobScheduler.scala:180)
>         at org.apache.spark.streaming.scheduler.JobScheduler$JobHandler$$anonfun$run$1.apply(JobScheduler.scala:180)
>         at org.apache.spark.streaming.scheduler.JobScheduler$JobHandler$$anonfun$run$1.apply(JobScheduler.scala:180)
>         at scala.util.DynamicVariable.withValue(DynamicVariable.scala:57)
>         at org.apache.spark.streaming.scheduler.JobScheduler$JobHandler.run(JobScheduler.scala:179)
>         at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
>         ... 2 more
> - recovery with file input stream *** FAILED ***
>   Set(10, 1, 6, 28, 21, 3, 36, 15) did not equal Set(10, 1, 6, 28, 21, 45, 3, 36, 15) (CheckpointSuite.scala:477)
> FailureSuite:
> - multiple failures with map
> [...]
> - updateStateByKey
> - updateStateByKey - simple with initial value RDD
> - updateStateByKey - with initial value RDD
> - updateStateByKey - object lifecycle
> - slice
> - slice - has not been initialized
> - rdd cleanup - map and window
> - rdd cleanup - updateStateByKey
> Exception in thread "Thread-2975" org.apache.spark.SparkException: Job cancelled because SparkContext was shut down
>         at org.apache.spark.scheduler.DAGScheduler$$anonfun$cleanUpAfterSchedulerStop$1.apply(DAGScheduler.scala:699)
>         at org.apache.spark.scheduler.DAGScheduler$$anonfun$cleanUpAfterSchedulerStop$1.apply(DAGScheduler.scala:698)
>         at scala.collection.mutable.HashSet.foreach(HashSet.scala:79)
>         at org.apache.spark.scheduler.DAGScheduler.cleanUpAfterSchedulerStop(DAGScheduler.scala:698)
>         at org.apache.spark.scheduler.DAGSchedulerEventProcessLoop.onStop(DAGScheduler.scala:1411)
>         at org.apache.spark.util.EventLoop.stop(EventLoop.scala:84)
>         at org.apache.spark.scheduler.DAGScheduler.stop(DAGScheduler.scala:1346)
>         at org.apache.spark.SparkContext.stop(SparkContext.scala:1398)
>         at org.apache.spark.streaming.StreamingContext.stop(StreamingContext.scala:580)
>         at org.apache.spark.streaming.StreamingContext.stop(StreamingContext.scala:555)
>         at org.apache.spark.streaming.TestSuiteBase$class.withStreamingContext(TestSuiteBase.scala:250)
>         at org.apache.spark.streaming.BasicOperationsSuite.withStreamingContext(BasicOperationsSuite.scala:33)
>         at org.apache.spark.streaming.BasicOperationsSuite$$anonfun$33$$anonfun$apply$mcV$sp$20.apply(BasicOperationsSuite.scala:556)
>         at org.apache.spark.streaming.BasicOperationsSuite$$anonfun$33$$anonfun$apply$mcV$sp$20.apply(BasicOperationsSuite.scala:555)
>         at org.apache.spark.streaming.TestSuiteBase$class.withTestServer(TestSuiteBase.scala:264)
>         at org.apache.spark.streaming.BasicOperationsSuite.withTestServer(BasicOperationsSuite.scala:33)
>         at org.apache.spark.streaming.BasicOperationsSuite$$anonfun$33.apply$mcV$sp(BasicOperationsSuite.scala:555)
>         at org.apache.spark.streaming.BasicOperationsSuite$$anonfun$33.apply(BasicOperationsSuite.scala:555)
>         at org.apache.spark.streaming.BasicOperationsSuite$$anonfun$33.apply(BasicOperationsSuite.scala:555)
>         at org.scalatest.Transformer$$anonfun$apply$1.apply$mcV$sp(Transformer.scala:22)
>         at org.scalatest.OutcomeOf$class.outcomeOf(OutcomeOf.scala:85)
>         at org.scalatest.OutcomeOf$.outcomeOf(OutcomeOf.scala:104)
>         at org.scalatest.Transformer.apply(Transformer.scala:22)
>         at org.scalatest.Transformer.apply(Transformer.scala:20)
>         at org.scalatest.FunSuiteLike$$anon$1.apply(FunSuiteLike.scala:166)
>         at org.apache.spark.SparkFunSuite.withFixture(SparkFunSuite.scala:42)
>         at org.scalatest.FunSuiteLike$class.invokeWithFixture$1(FunSuiteLike.scala:163)
>         at org.scalatest.FunSuiteLike$$anonfun$runTest$1.apply(FunSuiteLike.scala:175)
>         at org.scalatest.FunSuiteLike$$anonfun$runTest$1.apply(FunSuiteLike.scala:175)
>         at org.scalatest.SuperEngine.runTestImpl(Engine.scala:306)
>         at org.scalatest.FunSuiteLike$class.runTest(FunSuiteLike.scala:175)
>         at org.apache.spark.streaming.BasicOperationsSuite.org$scalatest$BeforeAndAfter$$super$runTest(BasicOperationsSuite.scala:33)
>         at org.scalatest.BeforeAndAfter$class.runTest(BeforeAndAfter.scala:200)
> at org.apache.spark.streaming.BasicOperationsSuite.runTest(BasicOperationsSuite.scala:33)
>         at org.scalatest.FunSuiteLike$$anonfun$runTests$1.apply(FunSuiteLike.scala:208)
>         at org.scalatest.FunSuiteLike$$anonfun$runTests$1.apply(FunSuiteLike.scala:208)
>         at org.scalatest.SuperEngine$$anonfun$traverseSubNodes$1$1.apply(Engine.scala:413)
>         at org.scalatest.SuperEngine$$anonfun$traverseSubNodes$1$1.apply(Engine.scala:401)
>         at scala.collection.immutable.List.foreach(List.scala:318)
>         at org.scalatest.SuperEngine.traverseSubNodes$1(Engine.scala:401)
>         at org.scalatest.SuperEngine.org$scalatest$SuperEngine$$runTestsInBranch(Engine.scala:396)
>         at org.scalatest.SuperEngine.runTestsImpl(Engine.scala:483)
>         at org.scalatest.FunSuiteLike$class.runTests(FunSuiteLike.scala:208)
>         at org.scalatest.FunSuite.runTests(FunSuite.scala:1555)
>         at org.scalatest.Suite$class.run(Suite.scala:1424)
>         at org.scalatest.FunSuite.org$scalatest$FunSuiteLike$$super$run(FunSuite.scala:1555)
>         at org.scalatest.FunSuiteLike$$anonfun$run$1.apply(FunSuiteLike.scala:212)
>         at org.scalatest.FunSuiteLike$$anonfun$run$1.apply(FunSuiteLike.scala:212)
>         at org.scalatest.SuperEngine.runImpl(Engine.scala:545)
>         at org.scalatest.FunSuiteLike$class.run(FunSuiteLike.scala:212)
>         at org.apache.spark.streaming.BasicOperationsSuite.org$scalatest$BeforeAndAfter$$super$run(BasicOperationsSuite.scala:33)
>         at org.scalatest.BeforeAndAfter$class.run(BeforeAndAfter.scala:241)
>         at org.apache.spark.streaming.BasicOperationsSuite.run(BasicOperationsSuite.scala:33)
>         at org.scalatest.Suite$class.callExecuteOnSuite$1(Suite.scala:1492)
>         at org.scalatest.Suite$$anonfun$runNestedSuites$1.apply(Suite.scala:1528)
>         at org.scalatest.Suite$$anonfun$runNestedSuites$1.apply(Suite.scala:1526)
>         at scala.collection.IndexedSeqOptimized$class.foreach(IndexedSeqOptimized.scala:33)
>         at scala.collection.mutable.ArrayOps$ofRef.foreach(ArrayOps.scala:108)
>         at org.scalatest.Suite$class.runNestedSuites(Suite.scala:1526)
>         at org.scalatest.tools.DiscoverySuite.runNestedSuites(DiscoverySuite.scala:29)
>         at org.scalatest.Suite$class.run(Suite.scala:1421)
>         at org.scalatest.tools.DiscoverySuite.run(DiscoverySuite.scala:29)
>         at org.scalatest.tools.SuiteRunner.run(SuiteRunner.scala:55)
>         at org.scalatest.tools.Runner$$anonfun$doRunRunRunDaDoRunRun$3.apply(Runner.scala:2563)
>         at org.scalatest.tools.Runner$$anonfun$doRunRunRunDaDoRunRun$3.apply(Runner.scala:2557)
>         at scala.collection.immutable.List.foreach(List.scala:318)
>         at org.scalatest.tools.Runner$.doRunRunRunDaDoRunRun(Runner.scala:2557)
>         at org.scalatest.tools.Runner$$anonfun$runOptionallyWithPassFailReporter$2.apply(Runner.scala:1044)
>         at org.scalatest.tools.Runner$$anonfun$runOptionallyWithPassFailReporter$2.apply(Runner.scala:1043)
>         at org.scalatest.tools.Runner$.withClassLoaderAndDispatchReporter(Runner.scala:2722)
>         at org.scalatest.tools.Runner$.runOptionallyWithPassFailReporter(Runner.scala:1043)
>         at org.scalatest.tools.Runner$.main(Runner.scala:860)
>         at org.scalatest.tools.Runner.main(Runner.scala)
> - rdd cleanup - input blocks and persisted RDDs
> InputStreamsSuite:
> - socket input stream
> - binary records stream
> [...]


