Posted to issues@crail.apache.org by "Michael Kaufmann (JIRA)" <ji...@apache.org> on 2018/05/18 15:01:00 UTC

[jira] [Created] (CRAIL-37) Map failed

Michael Kaufmann created CRAIL-37:
-------------------------------------

             Summary: Map failed
                 Key: CRAIL-37
                 URL: https://issues.apache.org/jira/browse/CRAIL-37
             Project: Apache Crail
          Issue Type: Bug
            Reporter: Michael Kaufmann


Crail fails to clean up its cache directory (I assume) when an application crashes. This needs a proper fix: it happens every now and then in normal use, and very often for me because I am debugging other, unrelated issues. As it stands, I think this makes it impossible to use Crail in a production environment.
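Until Crail cleans up after itself, one mitigation would be a sweep of the cache directory on startup that removes files left behind by a crashed run. The sketch below is hypothetical: the `CacheSweep` class, the directory layout, and the temp-dir path are assumptions for illustration, not Crail's actual code; in a real deployment the directory would come from the configured cache path in crail-site.conf.

```java
import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Path;
import java.util.stream.Stream;

public class CacheSweep {
    // Delete any files left behind in the cache directory by a crashed run.
    // Returns how many stale entries were removed.
    static int sweep(Path cacheDir) throws IOException {
        if (!Files.isDirectory(cacheDir)) {
            return 0; // nothing to do if the cache dir does not exist yet
        }
        int removed = 0;
        try (Stream<Path> entries = Files.list(cacheDir)) {
            for (Path p : (Iterable<Path>) entries::iterator) {
                Files.delete(p);
                removed++;
            }
        }
        return removed;
    }

    public static void main(String[] args) throws IOException {
        // Simulate leftovers from a crashed application in a temp directory.
        Path dir = Files.createTempDirectory("crail-cache");
        Files.createFile(dir.resolve("hugepage-0"));
        Files.createFile(dir.resolve("hugepage-1"));
        System.out.println("removed " + sweep(dir) + " stale file(s)");
    }
}
```

Running such a sweep before the cache is (re)created would keep stale mappings from exhausting the backing store and triggering the mmap failure below.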

As a side note, "Map failed" is not a very useful error message, and without your own personal [~pepperjo] you may really get stuck there. As a first step, I would expect an understandable error message that indicates what the problem is and how it can be resolved.
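A more actionable message could be produced by catching the JVM's terse mmap failure and rethrowing it with context. This is only a sketch of what such a wrapper might look like; the helper name, the call site, and the cache path shown are assumptions for illustration, not Crail's real code paths.

```java
import java.io.IOException;

public class MapFailedExample {
    // Hypothetical wrapper: turn the JVM's terse "Map failed" into a message
    // that names the likely cause and a concrete remedy, keeping the original
    // exception attached as the cause.
    static IOException describeMapFailure(IOException cause, String cachePath) {
        return new IOException(
            "mmap of a cache file under " + cachePath + " failed ('"
            + cause.getMessage() + "'). This usually means the cache directory "
            + "still holds files from a crashed application and no space is "
            + "left to map new ones. Remove stale files from " + cachePath
            + " and restart.", cause);
    }

    public static void main(String[] args) {
        IOException raw = new IOException("Map failed");
        // "/dev/hugepages/cache" is an assumed example path, not a Crail default.
        IOException wrapped = describeMapFailure(raw, "/dev/hugepages/cache");
        System.out.println(wrapped.getMessage());
    }
}
```

Even without fixing the cleanup bug, a message like this would tell the user what state the system is in and what to do about it.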

 

{code:java}
Exception in thread "main" java.util.concurrent.ExecutionException: java.io.IOException: Map failed
        at org.apache.crail.core.CoreMetaDataOperation.get(CoreMetaDataOperation.java:97)
        at org.apache.spark.storage.CrailDispatcher.org$apache$spark$storage$CrailDispatcher$$init(CrailDispatcher.scala:127)
        at org.apache.spark.storage.CrailDispatcher$.get(CrailDispatcher.scala:613)
        at org.apache.spark.shuffle.crail.CrailShuffleManager.registerShuffle(CrailShuffleManager.scala:52)
        at org.apache.spark.ShuffleDependency.<init>(Dependency.scala:90)
        at org.apache.spark.rdd.CoGroupedRDD$$anonfun$getDependencies$1.apply(CoGroupedRDD.scala:106)
        at org.apache.spark.rdd.CoGroupedRDD$$anonfun$getDependencies$1.apply(CoGroupedRDD.scala:100)
        at scala.collection.TraversableLike$$anonfun$map$1.apply(TraversableLike.scala:234)
        at scala.collection.TraversableLike$$anonfun$map$1.apply(TraversableLike.scala:234)
        at scala.collection.immutable.List.foreach(List.scala:381)
        at scala.collection.TraversableLike$class.map(TraversableLike.scala:234)
        at scala.collection.immutable.List.map(List.scala:285)
        at org.apache.spark.rdd.CoGroupedRDD.getDependencies(CoGroupedRDD.scala:100)
        at org.apache.spark.rdd.RDD$$anonfun$dependencies$2.apply(RDD.scala:239)
        at org.apache.spark.rdd.RDD$$anonfun$dependencies$2.apply(RDD.scala:237)
        at scala.Option.getOrElse(Option.scala:121)
        at org.apache.spark.rdd.RDD.dependencies(RDD.scala:237)
        at org.apache.spark.rdd.CoGroupedRDD$$anonfun$getPartitions$1$$anonfun$apply$mcVI$sp$1.apply(CoGroupedRDD.scala:118)
        at org.apache.spark.rdd.CoGroupedRDD$$anonfun$getPartitions$1$$anonfun$apply$mcVI$sp$1.apply(CoGroupedRDD.scala:116)
        at scala.collection.TraversableLike$$anonfun$map$1.apply(TraversableLike.scala:234)
        at scala.collection.TraversableLike$$anonfun$map$1.apply(TraversableLike.scala:234)
        at scala.collection.immutable.List.foreach(List.scala:381)
        at scala.collection.TraversableLike$class.map(TraversableLike.scala:234)
        at scala.collection.immutable.List.map(List.scala:285)
        at org.apache.spark.rdd.CoGroupedRDD$$anonfun$getPartitions$1.apply$mcVI$sp(CoGroupedRDD.scala:116)
        at scala.collection.immutable.Range.foreach$mVc$sp(Range.scala:160)
        at org.apache.spark.rdd.CoGroupedRDD.getPartitions(CoGroupedRDD.scala:114)
        at org.apache.spark.rdd.RDD$$anonfun$partitions$2.apply(RDD.scala:252)
        at org.apache.spark.rdd.RDD$$anonfun$partitions$2.apply(RDD.scala:250)
        at scala.Option.getOrElse(Option.scala:121)
        at org.apache.spark.rdd.RDD.partitions(RDD.scala:250)
        at org.apache.spark.rdd.MapPartitionsRDD.getPartitions(MapPartitionsRDD.scala:35)
        at org.apache.spark.rdd.RDD$$anonfun$partitions$2.apply(RDD.scala:252)
        at org.apache.spark.rdd.RDD$$anonfun$partitions$2.apply(RDD.scala:250)
{code}



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)