Posted to issues@bookkeeper.apache.org by GitBox <gi...@apache.org> on 2022/02/17 20:17:00 UTC

[GitHub] [bookkeeper] merlimat commented on a change in pull request #3061: Fix memory leak: Support shrinking in ConcurrentLongLongPairHashMap

merlimat commented on a change in pull request #3061:
URL: https://github.com/apache/bookkeeper/pull/3061#discussion_r809394336



##########
File path: bookkeeper-server/src/main/java/org/apache/bookkeeper/util/collections/ConcurrentLongLongPairHashMap.java
##########
@@ -376,7 +380,16 @@ private boolean remove(long key1, long key2, long value1, long value2, int keyHa
                 }
 
             } finally {
-                unlockWrite(stamp);
+                if (size < resizeThresholdBelow) {

Review comment:
       > I would move this part inside the try block
   
    @eolivelli  In other parts of this class this check was placed in the finally block because the method has multiple exit points, and it was simpler to put it there so that it gets executed on all of those code paths.
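
For readers following the thread, here is a minimal, self-contained sketch of the pattern being discussed: a stand-in class (not the actual BookKeeper code) where the shrink check lives in the finally block so it runs on every return path. Only the names `size`, `resizeThresholdBelow`, `capacity` and `rehash` echo the diff above; the shrink step `rehash(capacity / 2)` and the field values are assumptions for illustration.

```java
import java.util.concurrent.locks.StampedLock;

// Illustrative stand-in, not the real ConcurrentLongLongPairHashMap section.
class ShrinkInFinallySketch extends StampedLock {
    private long[] table = new long[64];   // 4 longs per bucket: key1, key2, value1, value2
    private int capacity = 16;
    private int size = 0;
    private int resizeThresholdBelow = 4;  // hypothetical shrink threshold

    boolean remove(long key1, long key2) {
        long stamp = writeLock();
        try {
            // ... probe the open-addressed table; several branches return from here ...
            return false;
        } finally {
            // Placed in finally so it executes on every exit path of the method.
            if (size < resizeThresholdBelow) {
                try {
                    rehash(capacity / 2);  // shrink step; the actual factor is up to the patch
                } finally {
                    unlockWrite(stamp);
                }
            } else {
                unlockWrite(stamp);
            }
        }
    }

    private void rehash(int newCapacity) {
        // Reallocate and re-insert the live entries (elided); just track the new capacity here.
        capacity = newCapacity;
        table = new long[newCapacity * 4];
    }
}
```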

##########
File path: bookkeeper-server/src/main/java/org/apache/bookkeeper/util/collections/ConcurrentLongLongPairHashMap.java
##########
@@ -388,6 +401,18 @@ private void cleanBucket(int bucket) {
                 table[bucket + 2] = ValueNotFound;
                 table[bucket + 3] = ValueNotFound;
                 --usedBuckets;
+
+                // Reduce unnecessary rehash

Review comment:
       ```suggestion
                // Clean up all the buckets that were in `DeletedKey` state, so that we can reduce unnecessary expansions
   ```
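
To make the suggested wording concrete, here is a hedged, self-contained illustration of the kind of cleanup being described: once a bucket has been reset to empty, neighbouring buckets left in the `DeletedKey` (tombstone) state can be reset as well, so stale tombstones no longer count towards `usedBuckets` and trigger needless expansions. The sentinel values, the backward scan direction, and the helper name are assumptions for the example, not the exact patch.

```java
// Illustrative only; the real class stores 4 longs per bucket and defines its own sentinels.
class TombstoneCleanupSketch {
    private static final long EmptyKey = -1L;    // placeholder sentinel for the example
    private static final long DeletedKey = -2L;  // placeholder tombstone sentinel

    /**
     * Resets the run of DeletedKey buckets preceding a bucket that was just set
     * back to EmptyKey, and returns how many buckets were reclaimed so the caller
     * can decrement usedBuckets. Assumes table.length is a power of two (as in the
     * real class) so the mask wraps correctly, and that at least one non-DeletedKey
     * bucket exists (true in practice, since the caller just emptied one), so the
     * loop always terminates.
     */
    static int cleanPrecedingTombstones(long[] table, int bucket) {
        int reclaimed = 0;
        int prev = (bucket - 4) & (table.length - 1);
        while (table[prev] == DeletedKey) {
            table[prev] = EmptyKey;
            table[prev + 1] = EmptyKey;
            reclaimed++;
            prev = (prev - 4) & (table.length - 1);
        }
        return reclaimed;
    }
}
```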

##########
File path: bookkeeper-server/src/main/java/org/apache/bookkeeper/util/collections/ConcurrentLongLongPairHashMap.java
##########
@@ -336,9 +339,10 @@ boolean put(long key1, long key2, long value1, long value2, int keyHash, boolean
                     bucket = (bucket + 4) & (table.length - 1);
                 }
             } finally {
-                if (usedBuckets > resizeThreshold) {
+                if (usedBuckets > resizeThresholdUp) {
                     try {
-                        rehash();
+                        // Expand the hashmap
+                        rehash(capacity * 2);

Review comment:
       The existing "double the size" strategy was probably not the best one for all use cases either, as it can end up wasting a considerable amount of memory in empty buckets. 
   
    We could make it configurable (both the step up and the step down) to accommodate different needs.
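
As a sketch of what "make it configurable" could look like, the snippet below derives both thresholds from configurable expand and shrink factors. The names `expandFactor`/`shrinkFactor` and the overall shape are hypothetical; only `MapFillFactor`, `MapIdleFactor`, `resizeThresholdUp` and `resizeThresholdBelow` echo the patch. A real implementation would also round the new capacity to a power of two so the bucket masking keeps working.

```java
// Hypothetical sketch of configurable resize steps, not the API that was merged.
class ResizeStepSketch {
    private static final float MapFillFactor = 0.66f;
    private static final float MapIdleFactor = 0.25f;   // value from the patch; see the discussion below

    private final float expandFactor;   // 2.0f reproduces the current "double the size" behaviour
    private final float shrinkFactor;   // 2.0f halves the table when it is mostly idle
    private int capacity;
    private int resizeThresholdUp;
    private int resizeThresholdBelow;

    ResizeStepSketch(int initialCapacity, float expandFactor, float shrinkFactor) {
        this.expandFactor = expandFactor;
        this.shrinkFactor = shrinkFactor;
        updateThresholds(initialCapacity);
    }

    private void updateThresholds(int newCapacity) {
        // A real implementation would align newCapacity to a power of two here.
        capacity = newCapacity;
        resizeThresholdUp = (int) (capacity * MapFillFactor);
        resizeThresholdBelow = (int) (capacity * MapIdleFactor);
    }

    void onInsert(int usedBuckets) {
        if (usedBuckets > resizeThresholdUp) {
            // Smaller factors (e.g. 1.5f) trade a few extra rehashes for less slack memory.
            updateThresholds((int) (capacity * expandFactor));
        }
    }

    void onRemove(int size) {
        if (size < resizeThresholdBelow) {
            updateThresholds(Math.max(1, (int) (capacity / shrinkFactor)));
        }
    }
}
```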

##########
File path: bookkeeper-server/src/main/java/org/apache/bookkeeper/util/collections/ConcurrentLongLongPairHashMap.java
##########
@@ -48,6 +48,7 @@
     private static final long ValueNotFound = -1L;
 
     private static final float MapFillFactor = 0.66f;
+    private static final float MapIdleFactor = 0.25f;

Review comment:
      I think we should be careful to avoid constantly flickering between shrink & expand. We could use a smaller threshold here to limit that. Maybe 0.15?
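
To put rough numbers on the flicker concern, here is a back-of-the-envelope sketch. It assumes the map doubles when it crosses the fill threshold, and it ignores that expansion is driven by `usedBuckets` (which also counts tombstones) while shrinking is driven by `size`.

```java
public class ThresholdFlickerSketch {
    public static void main(String[] args) {
        final float fillFactor = 0.66f;
        final int capacity = 1024;
        // Entry count that triggers a doubling (ignoring tombstones for simplicity).
        final int entriesAtExpand = (int) (capacity * fillFactor);   // ~675
        final int newCapacity = capacity * 2;

        for (float idleFactor : new float[] {0.25f, 0.15f}) {
            int shrinkThreshold = (int) (newCapacity * idleFactor);
            int removalsBeforeShrink = entriesAtExpand - shrinkThreshold;
            System.out.printf(
                    "idleFactor=%.2f: after expanding, a shrink fires once size < %d, i.e. after removing %d of %d entries%n",
                    idleFactor, shrinkThreshold, removalsBeforeShrink, entriesAtExpand);
        }
        // With 0.25, removing roughly a quarter of the entries re-triggers a shrink,
        // while 0.15 requires removing about half of them, giving more hysteresis.
    }
}
```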




-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
