Posted to issues@spark.apache.org by "Hyukjin Kwon (JIRA)" <ji...@apache.org> on 2017/07/24 02:37:01 UTC
[jira] [Resolved] (SPARK-21269) MetadataFetchFailedException: Missing an output location for shuffle 0
[ https://issues.apache.org/jira/browse/SPARK-21269?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]
Hyukjin Kwon resolved SPARK-21269.
----------------------------------
Resolution: Cannot Reproduce
I am resolving this per https://github.com/apache/spark/pull/18490#issuecomment-312387651
> MetadataFetchFailedException: Missing an output location for shuffle 0
> ----------------------------------------------------------------------
>
> Key: SPARK-21269
> URL: https://issues.apache.org/jira/browse/SPARK-21269
> Project: Spark
> Issue Type: Bug
> Components: Spark Core
> Affects Versions: 2.3.0
> Reporter: Yuming Wang
>
> This reproduces on a Spark *cluster* but not in *local* mode:
> 1. Start a Spark context with {{spark.reducer.maxReqSizeShuffleToMem=1K}} and {{spark.serializer=org.apache.spark.serializer.KryoSerializer}}:
> {code:bash}
> $ spark-shell --conf spark.reducer.maxReqSizeShuffleToMem=1K --conf spark.serializer=org.apache.spark.serializer.KryoSerializer
> {code}
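> For completeness, the same settings can be applied programmatically rather than through spark-shell flags. A minimal sketch, assuming a plain SparkConf-based setup (the app name is a placeholder; the master comes from spark-submit):
> {code:scala}
> import org.apache.spark.{SparkConf, SparkContext}
>
> // Mirror the spark-shell flags above in a SparkConf.
> val conf = new SparkConf()
>   .setAppName("spark-21269-repro") // placeholder name
>   .set("spark.reducer.maxReqSizeShuffleToMem", "1K")
>   .set("spark.serializer", "org.apache.spark.serializer.KryoSerializer")
>
> val sc = new SparkContext(conf)
> {code}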
> 2. Run a job with a shuffle:
> {code:scala}
> scala> sc.parallelize(0 until 3000000, 10).repartition(2001).count()
> {code}
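> The 2001 in {{repartition(2001)}} is deliberate: once a shuffle has more than 2000 map output partitions, Spark tracks the outputs with {{HighlyCompressedMapStatus}} instead of {{CompressedMapStatus}} (the 2000 cutoff is hard-coded in {{MapStatus.apply}} in this Spark version). A sketch that makes the threshold explicit, reusing {{sc}} from step 1:
> {code:scala}
> // repartition(2001) crosses the 2000-partition cutoff above which
> // Spark switches to HighlyCompressedMapStatus for map output tracking.
> val rdd = sc.parallelize(0 until 3000000, 10).repartition(2001)
> assert(rdd.getNumPartitions == 2001)
> rdd.count()
> {code}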
> The resulting error message:
> {noformat}
> 17/06/30 21:33:29 WARN TaskSetManager: Lost task 117.0 in stage 1.0 (TID 127, jqhadoop-test47-27.int.yihaodian.com, executor 140): FetchFailed(null, shuffleId=0, mapId=-1, reduceId=117, message=
> org.apache.spark.shuffle.MetadataFetchFailedException: Missing an output location for shuffle 0
> at org.apache.spark.MapOutputTracker$$anonfun$convertMapStatuses$2.apply(MapOutputTracker.scala:808)
> at org.apache.spark.MapOutputTracker$$anonfun$convertMapStatuses$2.apply(MapOutputTracker.scala:804)
> at scala.collection.TraversableLike$WithFilter$$anonfun$foreach$1.apply(TraversableLike.scala:733)
> at scala.collection.IndexedSeqOptimized$class.foreach(IndexedSeqOptimized.scala:33)
> at scala.collection.mutable.ArrayOps$ofRef.foreach(ArrayOps.scala:186)
> at scala.collection.TraversableLike$WithFilter.foreach(TraversableLike.scala:732)
> at org.apache.spark.MapOutputTracker$.convertMapStatuses(MapOutputTracker.scala:804)
> at org.apache.spark.MapOutputTrackerWorker.getMapSizesByExecutorId(MapOutputTracker.scala:618)
> at org.apache.spark.shuffle.BlockStoreShuffleReader.read(BlockStoreShuffleReader.scala:49)
> at org.apache.spark.rdd.ShuffledRDD.compute(ShuffledRDD.scala:105)
> at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:323)
> at org.apache.spark.rdd.RDD.iterator(RDD.scala:287)
> at org.apache.spark.rdd.CoalescedRDD$$anonfun$compute$1.apply(CoalescedRDD.scala:100)
> at org.apache.spark.rdd.CoalescedRDD$$anonfun$compute$1.apply(CoalescedRDD.scala:99)
> at scala.collection.Iterator$$anon$12.nextCur(Iterator.scala:434)
> at scala.collection.Iterator$$anon$12.hasNext(Iterator.scala:440)
> at scala.collection.Iterator$$anon$11.hasNext(Iterator.scala:408)
> at org.apache.spark.util.Utils$.getIteratorSize(Utils.scala:1802)
> at org.apache.spark.rdd.RDD$$anonfun$count$1.apply(RDD.scala:1159)
> at org.apache.spark.rdd.RDD$$anonfun$count$1.apply(RDD.scala:1159)
> at org.apache.spark.SparkContext$$anonfun$runJob$5.apply(SparkContext.scala:2065)
> at org.apache.spark.SparkContext$$anonfun$runJob$5.apply(SparkContext.scala:2065)
> at org.apache.spark.scheduler.ResultTask.runTask(ResultTask.scala:87)
> at org.apache.spark.scheduler.Task.run(Task.scala:108)
> at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:341)
> at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
> at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
> at java.lang.Thread.run(Thread.java:745)
> {noformat}
--
This message was sent by Atlassian JIRA
(v6.4.14#64029)