Posted to reviews@spark.apache.org by GitBox <gi...@apache.org> on 2021/04/02 16:28:51 UTC

[GitHub] [spark] viirya commented on a change in pull request #32033: [SPARK-34939][CORE] Throw fetch failure exception when unable to deserialize broadcasted map statuses

viirya commented on a change in pull request #32033:
URL: https://github.com/apache/spark/pull/32033#discussion_r606312533



##########
File path: core/src/main/scala/org/apache/spark/MapOutputTracker.scala
##########
@@ -953,13 +959,19 @@ private[spark] object MapOutputTracker extends Logging {
       case DIRECT =>
         deserializeObject(bytes, 1, bytes.length - 1).asInstanceOf[Array[MapStatus]]
       case BROADCAST =>
-        // deserialize the Broadcast, pull .value array out of it, and then deserialize that
-        val bcast = deserializeObject(bytes, 1, bytes.length - 1).
-          asInstanceOf[Broadcast[Array[Byte]]]
-        logInfo("Broadcast mapstatuses size = " + bytes.length +
-          ", actual size = " + bcast.value.length)
-        // Important - ignore the DIRECT tag ! Start from offset 1
-        deserializeObject(bcast.value, 1, bcast.value.length - 1).asInstanceOf[Array[MapStatus]]
+        try {
+          // deserialize the Broadcast, pull .value array out of it, and then deserialize that
+          val bcast = deserializeObject(bytes, 1, bytes.length - 1).
+            asInstanceOf[Broadcast[Array[Byte]]]
+          logInfo("Broadcast mapstatuses size = " + bytes.length +
+            ", actual size = " + bcast.value.length)

Review comment:
       This is needed to write the test case. In the test, if we call `getStatuses`, the map output tracker worker will ask the tracker master for a new broadcast value, so we cannot reproduce the situation we need to test.
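The hunk quoted above is cut off at the comment anchor, so the matching `catch` block is not shown. As a rough, self-contained sketch of the wrap-and-rethrow pattern this PR applies (all names here are hypothetical stand-ins, not the actual Spark classes): the deserialization error is caught and rethrown as a fetch-failure exception that carries the original error as its `cause`, so the scheduler can treat a corrupt broadcast like any other fetch failure while the root stack trace survives.

```scala
import java.io.IOException

// Hypothetical stand-in for MetadataFetchFailedException, just to show the
// pattern: the message describes the failure, and `cause` chains the
// underlying deserialization error.
class MetadataFetchFailed(
    val shuffleId: Int,
    message: String,
    cause: Throwable = null)
  extends Exception(message, cause)

object BroadcastStatuses {
  // Stand-in for deserializing the broadcast payload; an empty array plays
  // the role of a corrupt or unavailable broadcast value.
  private def deserialize(bytes: Array[Byte]): Array[String] =
    if (bytes.isEmpty) throw new IOException("corrupt broadcast payload")
    else bytes.map(b => s"status-$b")

  def getStatuses(shuffleId: Int, bytes: Array[Byte]): Array[String] =
    try {
      deserialize(bytes)
    } catch {
      case e: IOException =>
        // Rethrow as a fetch failure, keeping `e` as the cause.
        throw new MetadataFetchFailed(
          shuffleId, "Unable to deserialize broadcasted map statuses", e)
    }
}
```

With this shape, a caller sees one exception type for "map statuses could not be fetched," and the logged "Caused by:" section still shows where the deserialization actually failed.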

##########
File path: core/src/main/scala/org/apache/spark/shuffle/FetchFailedException.scala
##########
@@ -68,5 +68,6 @@ private[spark] class FetchFailedException(
 private[spark] class MetadataFetchFailedException(
     shuffleId: Int,
     reduceId: Int,
-    message: String)
-  extends FetchFailedException(null, shuffleId, -1L, -1, reduceId, message)
+    message: String,
+    cause: Throwable = null)

Review comment:
       Sure, I just want to keep the original stack trace.
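A tiny, generic illustration (hypothetical class name, not Spark's) of why the added `cause: Throwable = null` parameter keeps the original stack trace: `java.lang.Throwable` stores the cause, and it is printed under "Caused by:" whenever the wrapping exception is logged. The `null` default also keeps existing call sites source-compatible.

```scala
import java.io.IOException

// Hypothetical exception mirroring the signature change above: callers that
// have a root cause can chain it; callers that don't are unaffected.
class MetadataFailure(message: String, cause: Throwable = null)
  extends Exception(message, cause)

val root = new IOException("stream corrupted")
val chained = new MetadataFailure("failed to fetch map statuses", root)
// chained.getCause is `root`, so the original frames appear under
// "Caused by:" when `chained` is logged or printed.
```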




-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
users@infra.apache.org



---------------------------------------------------------------------
To unsubscribe, e-mail: reviews-unsubscribe@spark.apache.org
For additional commands, e-mail: reviews-help@spark.apache.org