Posted to issues@spark.apache.org by "Andrew Or (JIRA)" <ji...@apache.org> on 2014/12/08 00:53:13 UTC

[jira] [Comment Edited] (SPARK-4759) Deadlock in complex spark job in local mode with multiple cores

    [ https://issues.apache.org/jira/browse/SPARK-4759?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14237321#comment-14237321 ] 

Andrew Or edited comment on SPARK-4759 at 12/7/14 11:53 PM:
------------------------------------------------------------

Hey, I came up with a much smaller reproduction of this from your program.

1. Start spark-shell with --master local[8]
2. Copy and paste the following into your REPL:
{code}
    import org.apache.spark.SparkContext
    import org.apache.spark.rdd.RDD

    def makeMyRdd(sc: SparkContext): RDD[Int] = {
      sc.parallelize(1 to 100).repartition(4).cache()
    }

    def runMyJob(sc: SparkContext): Unit = {
      sc.setCheckpointDir("/tmp/spark-test")
      val rdd = makeMyRdd(sc)
      rdd.checkpoint()
      rdd.count()
      val rdd2 = makeMyRdd(sc)
      val newRdd = rdd.union(rdd2).coalesce(4).cache()
      newRdd.checkpoint()
      newRdd.count()
    }
{code}
3. Run runMyJob(sc)

It should be stuck at task 7/8.


was (Author: andrewor14):
Hey, I came up with a much smaller reproduction of this from your program.

1. Start spark-shell with --master local[8]
2. Copy and paste the following into your REPL:
{code}
    import org.apache.spark.SparkContext
    import org.apache.spark.rdd.RDD

    def makeMyRdd(sc: SparkContext): RDD[Int] = {
      sc.parallelize(1 to 100).repartition(sc.defaultParallelism).cache()
    }

    def runMyJob(sc: SparkContext): Unit = {
      sc.setCheckpointDir("/tmp/spark-test")
      val rdd = makeMyRdd(sc)
      rdd.checkpoint()
      rdd.count()
      val rdd2 = makeMyRdd(sc)
      val newRdd = rdd.union(rdd2).coalesce(sc.defaultParallelism).cache()
      newRdd.checkpoint()
      newRdd.count()
    }
{code}
3. Run runMyJob(sc)

It should be stuck at task 7/8.

> Deadlock in complex spark job in local mode with multiple cores
> ---------------------------------------------------------------
>
>                 Key: SPARK-4759
>                 URL: https://issues.apache.org/jira/browse/SPARK-4759
>             Project: Spark
>          Issue Type: Bug
>          Components: Spark Core
>    Affects Versions: 1.1.1, 1.2.0, 1.3.0
>         Environment: Java version "1.7.0_51"
> Java(TM) SE Runtime Environment (build 1.7.0_51-b13)
> Java HotSpot(TM) 64-Bit Server VM (build 24.51-b03, mixed mode)
> Mac OSX 10.10.1
> Using local spark context
>            Reporter: Davis Shepherd
>            Assignee: Andrew Or
>            Priority: Critical
>         Attachments: SparkBugReplicator.scala
>
>
> The attached test class runs two identical jobs that perform some iterative computation on an RDD[(Int, Int)]. This computation involves:
>   # taking new data and merging it with the previous result
>   # caching and checkpointing the new result
>   # rinse and repeat
> The first time the job is run, it runs successfully, and the spark context is shut down. The second time the job is run with a new spark context in the same process, the job hangs indefinitely, only having scheduled a subset of the necessary tasks for the final stage.
> I've been able to produce a test case that reproduces the issue, and I've added some comments where some knockout experimentation has left breadcrumbs as to where the issue might be.
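
The attached SparkBugReplicator.scala is not included in this message, so the following is only an illustrative sketch of the pattern described above, not the actual attachment: the object, method, and variable names are invented, and the driver simply runs the same iterative merge/cache/checkpoint job twice with two SparkContexts in the same process.

{code}
import org.apache.spark.{SparkConf, SparkContext}
import org.apache.spark.rdd.RDD

// Illustrative reconstruction of the reported pattern; not the actual
// SparkBugReplicator.scala attachment.
object IterativeCheckpointSketch {

  // The reported loop: merge new data into the previous result,
  // cache and checkpoint it, then materialize it with count().
  def runJob(sc: SparkContext): Unit = {
    sc.setCheckpointDir("/tmp/spark-test")
    var result: RDD[(Int, Int)] = sc.parallelize(Seq.empty[(Int, Int)])
    for (i <- 1 to 3) {
      val newData = sc.parallelize(1 to 100).map(x => (x, i))
      result = result.union(newData).coalesce(sc.defaultParallelism).cache()
      result.checkpoint()
      result.count()  // forces evaluation so the checkpoint actually runs
    }
  }

  def main(args: Array[String]): Unit = {
    def newContext() =
      new SparkContext(new SparkConf().setAppName("SPARK-4759-sketch").setMaster("local[8]"))

    // First run: completes, and the context is shut down.
    val sc1 = newContext()
    runJob(sc1)
    sc1.stop()

    // Second run with a fresh context in the same process: per the report,
    // this hangs with only a subset of the final stage's tasks scheduled.
    val sc2 = newContext()
    runJob(sc2)
    sc2.stop()
  }
}
{code}

Per the report, the hang would show up during the second runJob call, in the count() of the final stage.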



