Posted to issues@spark.apache.org by "Michael Armbrust (JIRA)" <ji...@apache.org> on 2014/11/03 23:25:36 UTC

[jira] [Updated] (SPARK-3267) Deadlock between ScalaReflectionLock and Data type initialization

     [ https://issues.apache.org/jira/browse/SPARK-3267?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Michael Armbrust updated SPARK-3267:
------------------------------------
    Target Version/s: 1.3.0  (was: 1.2.0)

> Deadlock between ScalaReflectionLock and Data type initialization
> -----------------------------------------------------------------
>
>                 Key: SPARK-3267
>                 URL: https://issues.apache.org/jira/browse/SPARK-3267
>             Project: Spark
>          Issue Type: Bug
>          Components: SQL
>    Affects Versions: 1.1.0
>            Reporter: Aaron Davidson
>            Priority: Critical
>
> Deadlock here:
> {code}
> "Executor task launch worker-0" daemon prio=10 tid=0x00007fab50036000 nid=0x27a in Object.wait() [0x00007fab60c2e000
> ]
>    java.lang.Thread.State: RUNNABLE
>         at org.apache.spark.sql.catalyst.expressions.codegen.CodeGenerator.defaultPrimitive(CodeGenerator.scala:565)
>         at org.apache.spark.sql.catalyst.expressions.codegen.CodeGenerator$$anonfun$1.applyOrElse(CodeGenerator.scal
> a:202)
>         at org.apache.spark.sql.catalyst.expressions.codegen.CodeGenerator$$anonfun$1.applyOrElse(CodeGenerator.scal
> a:195)
>         at scala.PartialFunction$Lifted.apply(PartialFunction.scala:218)
>         at scala.PartialFunction$Lifted.apply(PartialFunction.scala:214)
>         at org.apache.spark.sql.catalyst.expressions.codegen.CodeGenerator.expressionEvaluator(CodeGenerator.scala:4
> 93)
>         at org.apache.spark.sql.catalyst.expressions.codegen.CodeGenerator$Evaluate2$2.evaluateAs(CodeGenerator.scal
> a:175)
>         at org.apache.spark.sql.catalyst.expressions.codegen.CodeGenerator$$anonfun$1.applyOrElse(CodeGenerator.scal
> a:304)
>         at org.apache.spark.sql.catalyst.expressions.codegen.CodeGenerator$$anonfun$1.applyOrElse(CodeGenerator.scal
> a:195)
>         at scala.PartialFunction$Lifted.apply(PartialFunction.scala:218)
>         at scala.PartialFunction$Lifted.apply(PartialFunction.scala:214)
>         at org.apache.spark.sql.catalyst.expressions.codegen.CodeGenerator.expressionEvaluator(CodeGenerator.scala:4
> 93)
>         at org.apache.spark.sql.catalyst.expressions.codegen.CodeGenerator$$anonfun$1.applyOrElse(CodeGenerator.scal
> a:314)
>         at org.apache.spark.sql.catalyst.expressions.codegen.CodeGenerator$$anonfun$1.applyOrElse(CodeGenerator.scal
> a:195)
>         at scala.PartialFunction$Lifted.apply(PartialFunction.scala:218)
>         at scala.PartialFunction$Lifted.apply(PartialFunction.scala:214)
>         at org.apache.spark.sql.catalyst.expressions.codegen.CodeGenerator.expressionEvaluator(CodeGenerator.scala:4
> 93)
>         at org.apache.spark.sql.catalyst.expressions.codegen.CodeGenerator$$anonfun$1.applyOrElse(CodeGenerator.scal
> a:313)
>         at org.apache.spark.sql.catalyst.expressions.codegen.CodeGenerator$$anonfun$1.applyOrElse(CodeGenerator.scal
> a:195)
>         at scala.PartialFunction$Lifted.apply(PartialFunction.scala:218)
>         at scala.PartialFunction$Lifted.apply(PartialFunction.scala:214)
> ...
> {code}
> and
> {code}
> "Executor task launch worker-2" daemon prio=10 tid=0x00007fab100f0800 nid=0x27e in Object.wait() [0x00007fab0eeec000
> ]
>    java.lang.Thread.State: RUNNABLE
>         at org.apache.spark.sql.catalyst.expressions.Cast.cast$lzycompute(Cast.scala:250)
>         - locked <0x000000064e5d9a48> (a org.apache.spark.sql.catalyst.expressions.Cast)
>         at org.apache.spark.sql.catalyst.expressions.Cast.cast(Cast.scala:247)
>         at org.apache.spark.sql.catalyst.expressions.Cast.eval(Cast.scala:263)
>         at org.apache.spark.sql.parquet.ParquetTableScan$$anonfun$execute$2$$anonfun$6.apply(ParquetTableOperations.
> scala:139)
>         at org.apache.spark.sql.parquet.ParquetTableScan$$anonfun$execute$2$$anonfun$6.apply(ParquetTableOperations.
> scala:139)
>         at scala.collection.TraversableLike$$anonfun$map$1.apply(TraversableLike.scala:244)
>         at scala.collection.TraversableLike$$anonfun$map$1.apply(TraversableLike.scala:244)
>         at scala.collection.mutable.ResizableArray$class.foreach(ResizableArray.scala:59)
>         at scala.collection.mutable.ArrayBuffer.foreach(ArrayBuffer.scala:47)
>         at scala.collection.TraversableLike$class.map(TraversableLike.scala:244)
>         at scala.collection.AbstractTraversable.map(Traversable.scala:105)
>         at org.apache.spark.sql.parquet.ParquetTableScan$$anonfun$execute$2.apply(ParquetTableOperations.scala:139)
>         at org.apache.spark.sql.parquet.ParquetTableScan$$anonfun$execute$2.apply(ParquetTableOperations.scala:126)
>         at org.apache.spark.rdd.NewHadoopRDD$NewHadoopMapPartitionsWithSplitRDD.compute(NewHadoopRDD.scala:197)
>         at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:262)
>         at org.apache.spark.rdd.RDD.iterator(RDD.scala:229)
>         at org.apache.spark.rdd.MapPartitionsRDD.compute(MapPartitionsRDD.scala:35)
>         at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:262)
>         at org.apache.spark.rdd.RDD.iterator(RDD.scala:229)
>         at org.apache.spark.scheduler.ShuffleMapTask.runTask(ShuffleMapTask.scala:68)
>         at org.apache.spark.scheduler.ShuffleMapTask.runTask(ShuffleMapTask.scala:41)
>         at org.apache.spark.scheduler.Task.run(Task.scala:54)
>         at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:199)
>         at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
>         at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
>         at java.lang.Thread.run(Thread.java:724)
> {code}
> This only happens with code generation enabled.
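> A minimal standalone sketch of the suspected interleaving (GlobalLock and SomeType are hypothetical stand-ins for ScalaReflectionLock and a DataType companion object, not Spark's actual classes): one thread holds a global lock while forcing class initialization of an object whose static initializer also needs that lock. If a second thread has already started that initialization, each thread ends up waiting on the lock the other holds:
> {code}
> // Illustrative repro sketch only; names are made up for this example.
> object GlobalLock
>
> object SomeType {
>   // Static initializer takes the global lock, much as the type-tag
>   // computation does during DataType initialization.
>   val tag: String = GlobalLock.synchronized { "tag" }
> }
>
> object DeadlockDemo extends App {
>   val t1 = new Thread(new Runnable {
>     def run(): Unit = GlobalLock.synchronized {
>       Thread.sleep(100)     // let t2 begin SomeType's class initialization
>       println(SomeType.tag) // blocks on the class-init lock held by t2
>     }
>   })
>   val t2 = new Thread(new Runnable {
>     // t2 holds the JVM class-init lock for SomeType while its initializer
>     // waits on GlobalLock, which t1 holds: a classic lock-order inversion.
>     def run(): Unit = println(SomeType.tag)
>   })
>   t1.start(); Thread.sleep(50); t2.start()
>   t1.join(); t2.join()
> }
> {code}
> The same inversion would also explain why worker-2 sits in Cast.cast$lzycompute: a lazy val compiles to a block synchronized on the enclosing instance, so that thread holds the Cast monitor while the lazy initializer transitively waits on class initialization.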



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)
