Posted to dev@orc.apache.org by GitBox <gi...@apache.org> on 2021/07/17 23:37:35 UTC

[GitHub] [orc] kbendick commented on a change in pull request #751: ORC-848: Recycle Internal Buffer in StringHashTableDictionary

kbendick commented on a change in pull request #751:
URL: https://github.com/apache/orc/pull/751#discussion_r671756394



##########
File path: java/core/src/java/org/apache/orc/impl/StringHashTableDictionary.java
##########
@@ -75,18 +75,19 @@ public StringHashTableDictionary(int initialCapacity, float loadFactor) {
     this.capacity = initialCapacity;
     this.loadFactor = loadFactor;
     this.keyOffsets = new DynamicIntArray(initialCapacity);
-    initHashBuckets(initialCapacity);
+    initHashBuckets(false);
     this.threshold = (int)Math.min(initialCapacity * loadFactor, MAX_ARRAY_SIZE + 1);
   }
 
-  private void initHashBuckets(int capacity) {
-    DynamicIntArray[] buckets = new DynamicIntArray[capacity];
-    for (int i = 0; i < capacity; i++) {
+  private void initHashBuckets(final boolean reinit) {
+    final DynamicIntArray[] newBuckets =
+        (reinit) ? this.hashBuckets : new DynamicIntArray[this.capacity];

Review comment:
       Do we need to call some kind of `clear`, or null out the current elements, to ensure they can be GC'd as soon as possible? I've noticed that this new `hash` dictionary uses more memory than the `rbtree` dictionary in some tests I'm working on.
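
       For context, a minimal sketch of the kind of reset being asked about is below. It is not the PR's actual code: it assumes DynamicIntArray exposes a clear() method (as other ORC dictionary code uses) and that constructor-time bucket creation looks roughly like the original loop; the real bucket constructor may take an initial size.

           private void initHashBuckets(final boolean reinit) {
             if (reinit) {
               // Reuse the existing bucket array, but drop the old offsets so the
               // previously stored entries no longer pin memory.
               for (DynamicIntArray bucket : this.hashBuckets) {
                 bucket.clear();  // assumed API; resets the bucket's contents
               }
               return;
             }
             // First-time setup: allocate and populate a fresh bucket array.
             final DynamicIntArray[] newBuckets = new DynamicIntArray[this.capacity];
             for (int i = 0; i < newBuckets.length; i++) {
               newBuckets[i] = new DynamicIntArray();  // illustrative; real code may size each bucket
             }
             this.hashBuckets = newBuckets;
           }

       Whether clear() alone is enough to release memory promptly depends on how DynamicIntArray manages its backing chunks, which is the question the comment raises.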




-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: dev-unsubscribe@orc.apache.org

For queries about this service, please contact Infrastructure at:
users@infra.apache.org