Posted to issues@spark.apache.org by "Hyukjin Kwon (JIRA)" <ji...@apache.org> on 2019/05/21 04:23:44 UTC

[jira] [Updated] (SPARK-11943) Rapidly starting and stopping SparkContexts in local-cluster mode may cause JVM to exit

     [ https://issues.apache.org/jira/browse/SPARK-11943?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Hyukjin Kwon updated SPARK-11943:
---------------------------------
    Labels: bulk-closed  (was: )

> Rapidly starting and stopping SparkContexts in local-cluster mode may cause JVM to exit
> ---------------------------------------------------------------------------------------
>
>                 Key: SPARK-11943
>                 URL: https://issues.apache.org/jira/browse/SPARK-11943
>             Project: Spark
>          Issue Type: Bug
>          Components: Tests
>            Reporter: Josh Rosen
>            Priority: Minor
>              Labels: bulk-closed
>
> While trying to performance-profile the creation of SparkContexts in {{local-cluster}} mode, I ran into a weird bug that could cause {{System.exit()}} to be called. In {{local-cluster}} mode, the Master and Worker processes run in the driver JVM. Inside Master and Worker there are error-handling paths that call {{System.exit()}} in certain cases. If you start and stop SparkContexts very quickly in {{local-cluster}} mode, I think you can hit race conditions that trigger those {{System.exit()}} error-handling paths.
> Here's a test case which consistently reproduces the problem on my laptop:
> {code}
> // Assumed to live in Spark's own core test tree (package org.apache.spark), so that
> // SparkFunSuite, LocalSparkContext, and Logging resolve without explicit imports.
> package org.apache.spark
>
> class LocalClusterLaunchTimeBenchmark extends SparkFunSuite with LocalSparkContext with Logging {
>   test("benchmark local cluster launching and teardown") {
>     for (i <- 1 to 100) {
>       val startTime = System.currentTimeMillis()
>       val conf = new SparkConf()
>         .set("spark.ui.enabled", "false")
>         .set("spark.testing", "true")
>         .set("spark.master.rest.enabled", "false")
>       // local-cluster[2,1,1024]: 2 workers, each with 1 core and 1024 MB of memory
>       sc = new SparkContext("local-cluster[2,1,1024]", s"test-$i", conf)
>       log.info("--STOPPING--")
>       sc.stop()
>       log.info("--STOPPED--")
>       // Elapsed wall-clock time for this start/stop cycle, in milliseconds.
>       val duration = System.currentTimeMillis() - startTime
>       println(s"Duration: $duration ms")
>     }
>   }
> }
> {code}
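> One way such an unexpected {{System.exit()}} could be surfaced during a test run, rather than silently killing the JVM, is to install a {{java.lang.SecurityManager}} that turns exit attempts into exceptions. The sketch below is illustrative only; {{ExitTrappingSecurityManager}} is a made-up name, and the approach assumes a JVM that still allows replacing the security manager.
> {code}
> // Illustrative sketch: fail loudly on System.exit() instead of letting it kill the JVM.
> // ExitTrappingSecurityManager is a hypothetical helper, not part of Spark.
> class ExitTrappingSecurityManager extends java.lang.SecurityManager {
>   override def checkExit(status: Int): Unit = {
>     throw new SecurityException(s"Unexpected System.exit($status) during test")
>   }
>   // Permit everything else so the rest of the test run is unaffected.
>   override def checkPermission(perm: java.security.Permission): Unit = {}
> }
>
> // Install before the start/stop loop and restore afterwards.
> val previous = System.getSecurityManager
> System.setSecurityManager(new ExitTrappingSecurityManager)
> try {
>   // ... rapidly start and stop SparkContexts here ...
> } finally {
>   System.setSecurityManager(previous)
> }
> {code}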
> I'm not sure whether this underlying problem has led to flakiness in Jenkins. One test that would be susceptible to it is DistributedSuite's "local-cluster format" test, which winds up spinning up a series of SparkContexts just to validate the parsing of {{local-cluster}} master strings: https://github.com/apache/spark/blob/4021a28ac30b65cb61cf1e041253847253a2d89f/core/src/test/scala/org/apache/spark/DistributedSuite.scala#L52. We shouldn't be testing that parsing by creating new contexts, so this test should be refactored anyway, but the Master/Worker {{System.exit()}} calls would explain things if we have ever observed this test to fail.
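> As an illustrative sketch of how that parsing could be exercised without creating any contexts: the {{LocalClusterFormat}} object, its {{parse}} method, and the regex below are written for this example and are not Spark's internal implementation.
> {code}
> // Hypothetical sketch: validate local-cluster[numWorkers, coresPerWorker, memoryPerWorkerMB]
> // master strings without starting a SparkContext. The regex is illustrative, not Spark's own.
> object LocalClusterFormat {
>   private val Pattern = """local-cluster\[\s*(\d+)\s*,\s*(\d+)\s*,\s*(\d+)\s*\]""".r
>
>   def parse(master: String): Option[(Int, Int, Int)] = master match {
>     case Pattern(workers, cores, memory) => Some((workers.toInt, cores.toInt, memory.toInt))
>     case _ => None
>   }
> }
>
> // Example assertions, no contexts involved:
> assert(LocalClusterFormat.parse("local-cluster[2,1,1024]") == Some((2, 1, 1024)))
> assert(LocalClusterFormat.parse("local-cluster[ 2, 1, 1024 ]") == Some((2, 1, 1024)))
> assert(LocalClusterFormat.parse("local[2]").isEmpty)
> {code}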



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

---------------------------------------------------------------------
To unsubscribe, e-mail: issues-unsubscribe@spark.apache.org
For additional commands, e-mail: issues-help@spark.apache.org