Posted to reviews@spark.apache.org by "ukby1234 (via GitHub)" <gi...@apache.org> on 2023/08/12 04:04:03 UTC

[GitHub] [spark] ukby1234 commented on a diff in pull request #42296: [SPARK-44635][CORE] Handle shuffle fetch failures in decommissions

ukby1234 commented on code in PR #42296:
URL: https://github.com/apache/spark/pull/42296#discussion_r1292045322


##########
core/src/main/scala/org/apache/spark/MapOutputTracker.scala:
##########
@@ -1288,6 +1288,30 @@ private[spark] class MapOutputTrackerWorker(conf: SparkConf) extends MapOutputTr
     mapSizesByExecutorId.iter
   }
 
+  def getMapOutputLocationWithRefresh(
+      shuffleId: Int,
+      mapId: Long,
+      prevLocation: BlockManagerId): BlockManagerId = {
+    // Try to get the cached location first in case other concurrent tasks
+    // fetched the fresh location already
+    var currentLocationOpt = getMapOutputLocation(shuffleId, mapId)
+    if (currentLocationOpt.isDefined && currentLocationOpt.get == prevLocation) {
+      // Address in the cache unchanged. Try to clean cache and get a fresh location
+      unregisterShuffle(shuffleId)
+      currentLocationOpt = getMapOutputLocation(shuffleId, mapId)
+    }
+    if (currentLocationOpt.isEmpty) {
+      throw new MetadataFetchFailedException(shuffleId, -1,
+        message = s"Failed to get map output location for shuffleId $shuffleId, mapId $mapId")
+    }
+    currentLocationOpt.get

Review Comment:
   When shuffle fallback storage is enabled, this `currentLocationOpt` can be the `FALLBACK_BLOCK_MANAGER_ID`, and `DeferFetchRequestResult` below doesn't handle this special case.
   So we should either 1) check the FetchRequest for the fallback-storage special ID, or 2) rewrite the RPC address to localhost so we read the blocks from the fallback storage, roughly as in the sketch below.
   
   
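   A minimal sketch of option 2, not the PR's actual change: it rewrites a fallback-storage location to a local `BlockManagerId` so the blocks are read from fallback storage instead of being fetched from an executor that no longer exists. The `FALLBACK_BLOCK_MANAGER_ID` value is assumed to mirror `org.apache.spark.storage.FallbackStorage`, and `localBlockManagerId` is a hypothetical stand-in for wherever the caller resolves its local block manager.
   
   ```scala
   import org.apache.spark.storage.BlockManagerId
   
   object FallbackAwareLocation {
     // Sentinel used for shuffle blocks migrated to fallback storage
     // (assumed to mirror FallbackStorage.FALLBACK_BLOCK_MANAGER_ID).
     private val FALLBACK_BLOCK_MANAGER_ID: BlockManagerId =
       BlockManagerId("fallback", "remote", 7337)
   
     /**
      * If the refreshed location is the fallback-storage sentinel, point the
      * fetch at the local block manager (hypothetical `localBlockManagerId`)
      * so the blocks are served from fallback storage rather than over RPC
      * to a decommissioned executor.
      */
     def resolve(
         refreshed: BlockManagerId,
         localBlockManagerId: BlockManagerId): BlockManagerId = {
       if (refreshed == FALLBACK_BLOCK_MANAGER_ID) localBlockManagerId else refreshed
     }
   }
   ```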



-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
users@infra.apache.org


---------------------------------------------------------------------
To unsubscribe, e-mail: reviews-unsubscribe@spark.apache.org
For additional commands, e-mail: reviews-help@spark.apache.org