Posted to reviews@spark.apache.org by GitBox <gi...@apache.org> on 2020/09/13 21:40:20 UTC

[GitHub] [spark] ankurdave commented on a change in pull request #29744: [SPARK-32872][CORE] Prevent BytesToBytesMap from exceeding growth threshold near MAX_CAPACITY

ankurdave commented on a change in pull request #29744:
URL: https://github.com/apache/spark/pull/29744#discussion_r487580646



##########
File path: core/src/main/java/org/apache/spark/unsafe/map/BytesToBytesMap.java
##########
@@ -816,6 +816,10 @@ public boolean append(Object kbase, long koff, int klen, Object vbase, long voff
           } catch (SparkOutOfMemoryError oom) {
             canGrowArray = false;
           }
+        } else if (numKeys >= growthThreshold && longArray.size() / 2 >= MAX_CAPACITY) {
+          // The map needs to grow, but this would cause it to exceed MAX_CAPACITY. Prevent the map
+          // from accepting any more new elements.
+          canGrowArray = false;

Review comment:
       Hmm, perhaps the comment's wording is confusing. I intended to convey that if the map were to grow, its capacity would exceed MAX_CAPACITY. Therefore it cannot grow, and we must prevent it from accepting any more new elements.

   I reworded the comment to clarify this.
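   To illustrate the invariant behind this branch: each entry in `longArray` occupies two longs (an encoded record address plus the key's hash), so the map's capacity is `longArray.size() / 2`, and growing always doubles it. If the current capacity has already reached MAX_CAPACITY, doubling would exceed the limit, so the patched branch refuses further inserts instead. The sketch below is a minimal, hypothetical model of that guard, not Spark's actual BytesToBytesMap; the class and field names are illustrative, and MAX_CAPACITY is shrunk for the demo (the real value is 1 << 29).

```java
// Hypothetical sketch of the SPARK-32872 guard; not Spark's real class.
public class GrowthGuardSketch {
  // Spark's real MAX_CAPACITY is 1 << 29; tiny value here for illustration.
  static final int MAX_CAPACITY = 8;

  long arraySize;       // stands in for longArray.size(): 2 longs per entry
  int numKeys;
  int growthThreshold;  // the real map grows at ~0.5 * capacity
  boolean canGrowArray = true;

  GrowthGuardSketch(int capacity) {
    arraySize = 2L * capacity;        // capacity == arraySize / 2
    growthThreshold = capacity / 2;
  }

  // Mirrors the shape of the patched branch in append():
  void maybeGrow() {
    if (numKeys >= growthThreshold && arraySize / 2 < MAX_CAPACITY) {
      arraySize *= 2;                 // growing doubles the capacity
      growthThreshold *= 2;
    } else if (numKeys >= growthThreshold && arraySize / 2 >= MAX_CAPACITY) {
      // Growing would push capacity past MAX_CAPACITY, so the map
      // stops accepting new elements rather than growing.
      canGrowArray = false;
    }
  }

  public static void main(String[] args) {
    GrowthGuardSketch m = new GrowthGuardSketch(MAX_CAPACITY);
    m.numKeys = m.growthThreshold;    // simulate hitting the growth threshold
    m.maybeGrow();
    System.out.println(m.canGrowArray);   // false: already at MAX_CAPACITY
    System.out.println(m.arraySize / 2);  // capacity unchanged
  }
}
```

   Note the guard fires only when the map is both over its growth threshold and already at maximum capacity; below MAX_CAPACITY, the map doubles as usual.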




----------------------------------------------------------------
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
users@infra.apache.org


