Posted to dev@hbase.apache.org by "stack (JIRA)" <ji...@apache.org> on 2009/06/01 02:19:07 UTC

[jira] Commented: (HBASE-1460) Concurrent LRU Block Cache

    [ https://issues.apache.org/jira/browse/HBASE-1460?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=12714944#action_12714944 ] 

stack commented on HBASE-1460:
------------------------------

CachedBlock needs a class comment, as does CachedBlockQueue (looks like the constructor comment there could serve as the class comment)... and so does ConcurrentLRUBlockCache.

Why this in CachedBlock?

+  public void setBuffer(ByteBuffer buf) {
+    this.buf = buf;
+  }

Can it not be done in constructor?
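
Not a big deal, but for illustration, here is a minimal sketch of a constructor-only version (field names other than buf are my assumptions, not what is in the patch):

{code}
// Sketch only: the buffer is handed in at construction time, so no
// setBuffer() mutator is needed and the field can be final.
import java.nio.ByteBuffer;

public class CachedBlock {
  private final String blockName;  // assumed identifier field, for illustration
  private final ByteBuffer buf;    // immutable once constructed

  public CachedBlock(String blockName, ByteBuffer buf) {
    this.blockName = blockName;
    this.buf = buf;
  }

  public ByteBuffer getBuffer() {
    return this.buf;
  }
}
{code}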

That align method looks good; seems like it belongs in the Bytes or HeapSize classes?
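
For reference, the kind of helper I mean usually just rounds a byte count up to the JVM's 8-byte object alignment; a minimal sketch (class name and body are my assumption, not the patch's actual align method):

{code}
// Hypothetical utility: round a heap-size estimate up to 8-byte alignment.
public class Alignment {
  private static final int ALIGN = 8;

  /** Round num up to the nearest multiple of 8 bytes. */
  public static long align(long num) {
    return ((num + ALIGN - 1) / ALIGN) * ALIGN;
  }
}
{code}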

When would this be used?

+  public CachedBlock [] get() {

Should we remove LRUBlockCache and just rename ConcurrentLRUBlockCache as LRUBlockCache?  I don't see any point in keeping two caches around.

Remove the below rather than commenting it out, I'd say:

{code}
+
+///**
+//* Constructor.
+//* @param maxBytes maximum bytes this cache can hold
+//* @param conf hbase configuration
+//*/
{code}

You added a test!!!!  Two of them!!

Patch looks great.

Here are comments carried over from HBASE-1192 that were ignored (smile):

+ Can we make the block cache size a percentage of heap rather than a hard number?  I suppose that means the cache needs to be able to grow as the heap grows.
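
A rough sketch of the heap-fraction idea, for what it's worth (class and method names and the example fraction are illustrative only, not from any patch):

{code}
// Sketch: size the block cache as a fraction of the JVM max heap so that
// raising -Xmx grows the cache without touching the configuration.
public class BlockCacheSizing {
  public static long cacheSizeBytes(float heapFraction) {
    long maxHeap = Runtime.getRuntime().maxMemory();  // the -Xmx ceiling
    return (long) (maxHeap * heapFraction);
  }

  public static void main(String[] args) {
    // e.g. give the block cache 20% of the heap
    System.out.println(cacheSizeBytes(0.2f));
  }
}
{code}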

> Concurrent LRU Block Cache
> --------------------------
>
>                 Key: HBASE-1460
>                 URL: https://issues.apache.org/jira/browse/HBASE-1460
>             Project: Hadoop HBase
>          Issue Type: Improvement
>          Components: io
>            Reporter: Jonathan Gray
>            Assignee: Jonathan Gray
>             Fix For: 0.20.0
>
>         Attachments: HBASE-1460-v1.patch
>
>
> The LRU-based block cache that will be committed in HBASE-1192 is thread-safe but contains a big lock on the hash map.  Under high load, the block cache will be hit very heavily from a number of threads, so it needs to be built to handle massive concurrency.
> This issue aims to implement a new block cache with LRU eviction, but backed by a ConcurrentHashMap and a separate eviction thread.  Influence will be drawn from Solr's ConcurrentLRUCache; however, there are major differences because Solr treats all cached elements as equal size, whereas we depend on our HeapSize interface for realistic (though approximate) heap usage.
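
For anyone reading along, a very rough sketch of the design described above: a ConcurrentHashMap for contention-free gets and puts, an approximate byte counter standing in for HeapSize accounting, and a background thread doing LRU eviction.  Every name below is illustrative only; this is not the patch.

{code}
import java.nio.ByteBuffer;
import java.util.ArrayList;
import java.util.Comparator;
import java.util.List;
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.atomic.AtomicLong;

public class SimpleConcurrentLruCache {

  static class Entry {
    final ByteBuffer buf;
    volatile long lastAccess;
    Entry(ByteBuffer buf) { this.buf = buf; touch(); }
    void touch() { lastAccess = System.nanoTime(); }
    long heapSize() { return buf.capacity(); }  // crude stand-in for HeapSize
  }

  private final ConcurrentHashMap<String, Entry> map = new ConcurrentHashMap<>();
  private final AtomicLong size = new AtomicLong();
  private final long maxBytes;

  public SimpleConcurrentLruCache(long maxBytes) {
    this.maxBytes = maxBytes;
    // Simplified lifecycle: a real cache would manage this thread more carefully.
    Thread evicter = new Thread(this::evictLoop, "lru-evicter");
    evicter.setDaemon(true);
    evicter.start();
  }

  public void cacheBlock(String name, ByteBuffer buf) {
    Entry e = new Entry(buf);
    if (map.putIfAbsent(name, e) == null) {
      size.addAndGet(e.heapSize());
    }
  }

  public ByteBuffer getBlock(String name) {
    Entry e = map.get(name);
    if (e == null) {
      return null;
    }
    e.touch();  // record the access so eviction keeps recently used blocks
    return e.buf;
  }

  private void evictLoop() {
    while (!Thread.currentThread().isInterrupted()) {
      if (size.get() > maxBytes) {
        // Snapshot the entries, oldest access first, and remove until under the limit.
        List<Map.Entry<String, Entry>> entries = new ArrayList<>(map.entrySet());
        entries.sort(Comparator.comparingLong(en -> en.getValue().lastAccess));
        for (Map.Entry<String, Entry> en : entries) {
          if (size.get() <= maxBytes) {
            break;
          }
          if (map.remove(en.getKey(), en.getValue())) {
            size.addAndGet(-en.getValue().heapSize());
          }
        }
      }
      try {
        Thread.sleep(100);  // eviction cadence; tune as needed
      } catch (InterruptedException ie) {
        return;
      }
    }
  }
}
{code}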

-- 
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.