Posted to reviews@spark.apache.org by "anishshri-db (via GitHub)" <gi...@apache.org> on 2023/09/01 06:22:54 UTC

[GitHub] [spark] anishshri-db commented on a diff in pull request #42742: [SPARK-45025] Allow block manager memory store iterator to handle thread interrupt and perform task completion gracefully

anishshri-db commented on code in PR #42742:
URL: https://github.com/apache/spark/pull/42742#discussion_r1312619455


##########
core/src/main/scala/org/apache/spark/storage/memory/MemoryStore.scala:
##########
@@ -220,7 +220,8 @@ private[spark] class MemoryStore(
     }
 
     // Unroll this block safely, checking whether we have exceeded our threshold periodically
-    while (values.hasNext && keepUnrolling) {
+    // and if no thread interrupts have been received.
+    while (values.hasNext && keepUnrolling && !Thread.currentThread().isInterrupted) {

Review Comment:
   @mridulm - if we put this check inside the `if` condition below, it seems we will only set the `interrupted` flag the next time the memory check period expires? The default value for the check period is 16, so if the interrupt is received right after the first element is unrolled, we would unroll 15 more elements and do extra work in the interim that we probably have to dispose of later anyway. So it might be better to check within the `while` loop and exit earlier, as we are doing in this PR?
   
   ```
         valuesHolder.storeValue(values.next())
         if (elementsUnrolled % memoryCheckPeriod == 0) {
           val currentSize = valuesHolder.estimatedSize()
           // If our vector's size has exceeded the threshold, request more memory
   ```
   
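   To illustrate the difference, here is a hypothetical sketch (the names `memoryCheckPeriod` and `elementsUnrolled` mirror `MemoryStore`, but the loop is heavily simplified and the interrupt is modeled as a counter threshold rather than a real `Thread.interrupt()`). It counts how many *extra* elements get unrolled after the interrupt arrives, for the two check placements:
   
   ```scala
   object InterruptCheckSketch {
     val memoryCheckPeriod = 16
   
     // Placement A: check only inside the periodic `if`. The interrupt is
     // observed at the next period boundary, so up to memoryCheckPeriod - 1
     // extra elements are unrolled first.
     def extraElementsPeriodicCheck(interruptAt: Int): Int = {
       var elementsUnrolled = 0
       var interrupted = false
       while (!interrupted) {
         elementsUnrolled += 1
         if (elementsUnrolled % memoryCheckPeriod == 0) {
           interrupted = elementsUnrolled >= interruptAt
         }
       }
       elementsUnrolled - interruptAt
     }
   
     // Placement B: check in the `while` condition itself. The loop exits
     // before unrolling the next element.
     def extraElementsLoopCheck(interruptAt: Int): Int = {
       var elementsUnrolled = 0
       while (elementsUnrolled < interruptAt) {
         elementsUnrolled += 1
       }
       elementsUnrolled - interruptAt
     }
   
     def main(args: Array[String]): Unit = {
       // Interrupt arrives after the first element is unrolled.
       println(extraElementsPeriodicCheck(1)) // 15 extra elements
       println(extraElementsLoopCheck(1))     // 0 extra elements
     }
   }
   ```
   
   With an interrupt after element 1, placement A still unrolls through element 16 before noticing, while placement B stops immediately.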



-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: reviews-unsubscribe@spark.apache.org

For queries about this service, please contact Infrastructure at:
users@infra.apache.org


---------------------------------------------------------------------
For additional commands, e-mail: reviews-help@spark.apache.org