Posted to reviews@spark.apache.org by GitBox <gi...@apache.org> on 2020/09/14 22:40:59 UTC

[GitHub] [spark] ankurdave commented on a change in pull request #29753: [SPARK-32872][CORE][2.4] Prevent BytesToBytesMap at MAX_CAPACITY from exceeding growth threshold

ankurdave commented on a change in pull request #29753:
URL: https://github.com/apache/spark/pull/29753#discussion_r488276308



##########
File path: core/src/main/java/org/apache/spark/unsafe/map/BytesToBytesMap.java
##########
@@ -763,7 +763,7 @@ public boolean append(Object kbase, long koff, int klen, Object vbase, long voff
           if (longArray.size() / 2 < MAX_CAPACITY) {
             try {
               growAndRehash();
-            } catch (SparkOutOfMemoryError oom) {
+            } catch (OutOfMemoryError oom) {

Review comment:
       Makes sense. In addition to the CI tests, I ran the manual test from the PR description both before and after applying this fix, and confirmed that the bug was present in 2.4 and that this PR fixes it.
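
       For context, a minimal, self-contained sketch of the pattern the diff above adjusts (this is not the Spark 2.4 source; the class name, fields, and constants below are illustrative assumptions). Since SparkOutOfMemoryError extends OutOfMemoryError, widening the catch still covers the original case while also handling a plain OutOfMemoryError thrown from the grow path, so a failed growAndRehash() disables further growth instead of escaping append():

           // Illustrative sketch only; names and values are assumptions, not the Spark sources.
           public class GrowOnAppendSketch {

             private static final long MAX_CAPACITY = 1L << 29;  // mirrors the role of the real constant
             private long arraySize = 1L << 20;                   // current backing array size (two entries per key)
             private boolean canGrowArray = true;

             // Stand-in for growAndRehash(); simulates the allocator refusing to provide more memory.
             private void growAndRehash() {
               throw new OutOfMemoryError("Unable to acquire memory for a larger array");
             }

             public boolean append() {
               if (canGrowArray && arraySize / 2 < MAX_CAPACITY) {
                 try {
                   growAndRehash();
                 } catch (OutOfMemoryError oom) {
                   // Catching OutOfMemoryError (the superclass of SparkOutOfMemoryError) means a
                   // plain OOM from the grow path is handled too: growth is disabled and the
                   // error does not propagate out of append().
                   canGrowArray = false;
                 }
               }
               return true;  // the append itself still succeeds; the map simply stops growing
             }

             public static void main(String[] args) {
               GrowOnAppendSketch map = new GrowOnAppendSketch();
               System.out.println("append succeeded: " + map.append());
               System.out.println("canGrowArray after failed grow: " + map.canGrowArray);
             }
           }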



