Posted to reviews@spark.apache.org by "Ngone51 (via GitHub)" <gi...@apache.org> on 2023/02/01 07:43:04 UTC

[GitHub] [spark] Ngone51 commented on a diff in pull request #39459: [SPARK-41497][CORE] Fixing accumulator undercount in the case of the retry task with rdd cache

Ngone51 commented on code in PR #39459:
URL: https://github.com/apache/spark/pull/39459#discussion_r1092858364


##########
core/src/main/scala/org/apache/spark/storage/BlockManager.scala:
##########
@@ -1325,14 +1325,47 @@ private[spark] class BlockManager(
     blockInfoManager.releaseAllLocksForTask(taskAttemptId)
   }
 
+  /**
+   * Retrieve the given rdd block if it exists and is visible, otherwise call the provided
+   * `makeIterator` method to compute the block, persist it, and return its values.
+   *
+   * @return either a BlockResult if the block was successfully cached, or an iterator if the block
+   *         could not be cached.
+   */
+  def getOrElseUpdateRDDBlock[T](
+      taskId: Long,
+      blockId: RDDBlockId,
+      level: StorageLevel,
+      classTag: ClassTag[T],
+      makeIterator: () => Iterator[T]): Either[BlockResult, Iterator[T]] = {
+    val isCacheVisible = isRDDBlockVisible(blockId)
+    var computed: Boolean = false
+    val getIterator = () => {
+      computed = true
+      makeIterator()
+    }
+
+    val res = getOrElseUpdate(blockId, level, classTag, getIterator)
+    if (res.isLeft && !isCacheVisible) {
+      if (!computed) {
+        // Loaded from cache, re-compute to update accumulators.
+        makeIterator()
+      }

Review Comment:
   > We do not disable speculative execution for indeterminate computation currently - and data generated from two task attempts can vary (which could be cached, and so differ for same partition)
   
   Right, in this case the block locations for different data can be attached to the same RDD block ID. So a reader could fetch different data for the same RDD block, which makes the cached RDD block data itself indeterminate.
   
    > We do not invalidate previously cached data, when there is a stage re-execution due to failures for an indeterminate computation,
   
   This seems to be a missing piece in the indeterminate-computation framework. @cloud-fan could you help confirm?
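   For context, the accumulator-undercount scenario the diff above addresses can be sketched in plain Scala. This is a hypothetical, simplified model (none of these names are Spark's actual classes): a task computes a partition and caches it, then fails before its accumulator updates are counted; the retry finds the cached block and, without the fix, would skip the computation and silently undercount. Mirroring the shape of `getOrElseUpdateRDDBlock`, the sketch replays `makeIterator` purely for its side effects when the cached block is not yet "visible".

   ```scala
   // Hypothetical sketch of the undercount-and-replay logic, not Spark code.
   object AccumulatorUndercountSketch {
     var accumulator: Long = 0L                                   // stands in for a Spark accumulator
     val cache = scala.collection.mutable.Map[Int, Seq[Int]]()    // blockId -> cached data
     val visible = scala.collection.mutable.Set[Int]()            // blocks from counted (successful) tasks

     // The computation whose side effect is the accumulator update.
     def makeIterator(data: Seq[Int]): Iterator[Int] = {
       accumulator += data.size
       data.iterator
     }

     // If the cached block is not yet visible (its writer's accumulator updates
     // were lost), re-run the computation once for the side effects only.
     def getOrElseUpdateBlock(blockId: Int, data: Seq[Int]): Seq[Int] = {
       var computed = false
       val res = cache.getOrElseUpdate(blockId, { computed = true; makeIterator(data).toSeq })
       if (!visible(blockId)) {
         if (!computed) makeIterator(data).foreach(_ => ())       // replay for accumulators only
         visible += blockId
       }
       res
     }

     def main(args: Array[String]): Unit = {
       val data = Seq(1, 2, 3)

       // Attempt 1: computes and caches the block, then "fails" -- its
       // accumulator updates are discarded, but the cached block survives.
       cache(0) = makeIterator(data).toSeq
       accumulator = 0L

       // Attempt 2 (retry): the block is cached but not visible, so the
       // computation is replayed to restore the accumulator value.
       val res = getOrElseUpdateBlock(0, data)
       println(s"accumulator = $accumulator, block = $res")
     }
   }
   ```

   Without the `visible` check, the retry would return the cached block with the accumulator still at 0, which is exactly the undercount this PR fixes.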



-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: reviews-unsubscribe@spark.apache.org

For queries about this service, please contact Infrastructure at:
users@infra.apache.org

