Posted to reviews@iotdb.apache.org by GitBox <gi...@apache.org> on 2021/02/20 03:41:18 UTC

[GitHub] [iotdb] JackieTien97 commented on a change in pull request #2689: [ISSUE-2688] LRULinkedHashMap does not work as an LRU Cache

JackieTien97 commented on a change in pull request #2689:
URL: https://github.com/apache/iotdb/pull/2689#discussion_r579588087



##########
File path: server/src/main/java/org/apache/iotdb/db/engine/cache/LRULinkedHashMap.java
##########
@@ -21,71 +21,56 @@
 
 import org.apache.iotdb.tsfile.common.cache.Accountable;
 
-import java.util.Iterator;
 import java.util.LinkedHashMap;
-import java.util.Map.Entry;
-import java.util.Set;
+import java.util.Map;
 
 /** This class is an LRU cache. <b>Note: It's not thread safe.</b> */
-public abstract class LRULinkedHashMap<K extends Accountable, V> {
+public abstract class LRULinkedHashMap<K extends Accountable, V> extends LinkedHashMap<K, V> {
 
   private static final float LOAD_FACTOR_MAP = 0.75f;
   private static final int INITIAL_CAPACITY = 128;
-  private static final float RETAIN_PERCENT = 0.9f;
   private static final int MAP_ENTRY_SIZE = 40;
 
-  private final LinkedHashMap<K, V> linkedHashMap;
-
   /** maximum memory threshold. */
   private final long maxMemory;
   /** current used memory. */
   private long usedMemory;
 
-  /** memory size we need to retain while the cache is full */
-  private final long retainMemory;
-
   protected int count = 0;
   protected long averageSize = 0;
 
   public LRULinkedHashMap(long maxMemory) {
-    this.linkedHashMap = new LinkedHashMap<>(INITIAL_CAPACITY, LOAD_FACTOR_MAP);
+    super(INITIAL_CAPACITY, LOAD_FACTOR_MAP, true);
     this.maxMemory = maxMemory;
-    this.retainMemory = (long) (maxMemory * RETAIN_PERCENT);
   }
 
+  @Override
   public V put(K key, V value) {
     long size = calEntrySize(key, value) + MAP_ENTRY_SIZE;
     key.setRamSize(size);
     usedMemory += size;
-    V v = linkedHashMap.put(key, value);
-    if (usedMemory > maxMemory) {
-      Iterator<Entry<K, V>> iterator = linkedHashMap.entrySet().iterator();
-      while (usedMemory > retainMemory && iterator.hasNext()) {
-        Entry<K, V> entry = iterator.next();
-        usedMemory -= entry.getKey().getRamSize();
-        iterator.remove();
-      }
-    }
-    return v;
+    return super.put(key, value);
   }
 
-  public V get(K key) {
-    return linkedHashMap.get(key);
-  }
-
-  public boolean containsKey(K key) {
-    return linkedHashMap.containsKey(key);
+  @Override
+  protected boolean removeEldestEntry(Map.Entry<K, V> eldest) {
+    boolean shouldRemove = usedMemory > maxMemory;
+    if (shouldRemove) {
+      usedMemory -= eldest.getKey().getRamSize();
+    }
+    return shouldRemove;

Review comment:
       The previous logic was: when the LRUMap is full, evict least recently used entries until about 10% of the memory budget has been freed (i.e. retain 90% of maxMemory). Your implementation removes only one eldest entry per insertion, which is inconsistent with that.
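
       To make that concrete, here is a minimal sketch (not the PR's code) of one way to keep the access-ordered LinkedHashMap from this change while restoring the old batch eviction down to 90% of maxMemory. It assumes the removed RETAIN_PERCENT / retainMemory members are reinstated, and the calEntrySize signature is inferred from its call site.

import org.apache.iotdb.tsfile.common.cache.Accountable;

import java.util.Iterator;
import java.util.LinkedHashMap;
import java.util.Map;

/** Illustrative sketch only; not thread safe, mirroring the class above. */
public abstract class LRULinkedHashMap<K extends Accountable, V> extends LinkedHashMap<K, V> {

  private static final float LOAD_FACTOR_MAP = 0.75f;
  private static final int INITIAL_CAPACITY = 128;
  private static final float RETAIN_PERCENT = 0.9f;
  private static final int MAP_ENTRY_SIZE = 40;

  /** maximum memory threshold. */
  private final long maxMemory;
  /** memory size to retain after an eviction pass. */
  private final long retainMemory;
  /** current used memory. */
  private long usedMemory;

  protected LRULinkedHashMap(long maxMemory) {
    // accessOrder = true keeps iteration order from least to most recently used
    super(INITIAL_CAPACITY, LOAD_FACTOR_MAP, true);
    this.maxMemory = maxMemory;
    this.retainMemory = (long) (maxMemory * RETAIN_PERCENT);
  }

  @Override
  public V put(K key, V value) {
    long size = calEntrySize(key, value) + MAP_ENTRY_SIZE;
    key.setRamSize(size);
    usedMemory += size;
    V previous = super.put(key, value);
    if (usedMemory > maxMemory) {
      // Evict least recently used entries until usage drops back to retainMemory,
      // i.e. free roughly 10% of the budget in one pass, as the old code did.
      Iterator<Map.Entry<K, V>> iterator = entrySet().iterator();
      while (usedMemory > retainMemory && iterator.hasNext()) {
        Map.Entry<K, V> entry = iterator.next();
        usedMemory -= entry.getKey().getRamSize();
        iterator.remove();
      }
    }
    return previous;
  }

  /** Assumed signature, inferred from the call in put(). */
  protected abstract long calEntrySize(K key, V value);
}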

##########
File path: server/src/main/java/org/apache/iotdb/db/engine/cache/LRULinkedHashMap.java
##########
@@ -21,71 +21,56 @@
 
 import org.apache.iotdb.tsfile.common.cache.Accountable;
 
-import java.util.Iterator;
 import java.util.LinkedHashMap;
-import java.util.Map.Entry;
-import java.util.Set;
+import java.util.Map;
 
 /** This class is an LRU cache. <b>Note: It's not thread safe.</b> */
-public abstract class LRULinkedHashMap<K extends Accountable, V> {
+public abstract class LRULinkedHashMap<K extends Accountable, V> extends LinkedHashMap<K, V> {

Review comment:
       I think we should keep the LinkedHashMap as a field (composition) instead of extending it, because we don't need to expose all of LinkedHashMap's methods to callers of this cache.
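
       As a rough illustration of that composition approach (a sketch, not a proposed patch): the map can stay a private field constructed with accessOrder = true, so the ISSUE-2688 ordering bug is still fixed, while only the cache methods callers actually need are exposed. The memory accounting and eviction details are omitted here for brevity.

import org.apache.iotdb.tsfile.common.cache.Accountable;

import java.util.LinkedHashMap;

/** Illustrative sketch only; not thread safe. */
public abstract class LRULinkedHashMap<K extends Accountable, V> {

  private static final float LOAD_FACTOR_MAP = 0.75f;
  private static final int INITIAL_CAPACITY = 128;

  // accessOrder = true gives LRU iteration order without inheriting the whole Map API
  private final LinkedHashMap<K, V> linkedHashMap =
      new LinkedHashMap<>(INITIAL_CAPACITY, LOAD_FACTOR_MAP, true);

  public V put(K key, V value) {
    // memory accounting and batch eviction would go here, as in the sketch above
    return linkedHashMap.put(key, value);
  }

  public V get(K key) {
    return linkedHashMap.get(key);
  }

  public boolean containsKey(K key) {
    return linkedHashMap.containsKey(key);
  }

  /** Assumed signature, mirroring the usage in the diff above. */
  protected abstract long calEntrySize(K key, V value);
}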




----------------------------------------------------------------
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
users@infra.apache.org