Posted to dev@hbase.apache.org by "stack (JIRA)" <ji...@apache.org> on 2008/02/04 22:09:09 UTC

[jira] Commented: (HBASE-288) Add in-memory caching of data

    [ https://issues.apache.org/jira/browse/HBASE-288?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=12565529#action_12565529 ] 

stack commented on HBASE-288:
-----------------------------

Tom, I backed out the hbase component of this patch temporarily. The notion is that we get hbase fixed up over in its new svn home, then we branch. We want the branch to go against hadoop-0.16.0. Once the branch is done, we'll put this patch back into hbase TRUNK (with this patch in place, hbase requires post-0.16.0 hadoop).

> Add in-memory caching of data
> -----------------------------
>
>                 Key: HBASE-288
>                 URL: https://issues.apache.org/jira/browse/HBASE-288
>             Project: Hadoop HBase
>          Issue Type: Bug
>            Reporter: Jim Kellerman
>            Priority: Trivial
>         Attachments: commons-collections-3.2.jar, hadoop-blockcache-v2.patch, hadoop-blockcache-v3.patch, hadoop-blockcache-v4.1.patch, hadoop-blockcache-v4.patch, hadoop-blockcache-v5.patch, hadoop-blockcache-v6.patch, hadoop-blockcache.patch
>
>
> Bigtable provides two in-memory caches: one for row/column data and one for disk block caches.
> The size of each cache should be configurable, data should be loaded lazily, and the cache managed by an LRU mechanism.
> One complication of the block cache is that all data is read through a SequenceFile.Reader, which ultimately reads data off of disk via an RPC proxy for ClientProtocol. This would imply that the block caching would have to be pushed down to either the DFSClient or SequenceFile.Reader.
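
The LRU-managed, size-bounded block cache described above can be sketched as follows. This is a minimal illustration only, not the actual patch (the attached patches use commons-collections' LRUMap); the class name, the byte-count capacity bound, and keying blocks by file offset are all assumptions for the sake of the example. It uses an access-order java.util.LinkedHashMap, whose iteration order runs from least- to most-recently-used.

```java
import java.util.LinkedHashMap;

// Illustrative sketch of an LRU block cache with a configurable byte capacity.
// Blocks are loaded lazily by the caller on a miss (get() returns null).
public class BlockCache {
    private final long capacityBytes;   // configurable cache size in bytes
    private long currentBytes = 0;

    // accessOrder=true makes iteration order least- to most-recently-used
    private final LinkedHashMap<Long, byte[]> blocks =
        new LinkedHashMap<Long, byte[]>(16, 0.75f, true);

    public BlockCache(long capacityBytes) {
        this.capacityBytes = capacityBytes;
    }

    // Returns the cached block, or null on a miss (caller then reads from DFS).
    public synchronized byte[] get(long blockOffset) {
        return blocks.get(blockOffset);
    }

    // Inserts a block, evicting least-recently-used blocks until under capacity.
    public synchronized void put(long blockOffset, byte[] data) {
        byte[] old = blocks.put(blockOffset, data);
        if (old != null) currentBytes -= old.length;
        currentBytes += data.length;
        while (currentBytes > capacityBytes && !blocks.isEmpty()) {
            Long eldest = blocks.keySet().iterator().next(); // LRU entry
            currentBytes -= blocks.remove(eldest).length;
        }
    }

    public synchronized int size() {
        return blocks.size();
    }
}
```

Under this sketch, pushing caching down to the DFSClient or SequenceFile.Reader would mean consulting such a cache before issuing the ClientProtocol RPC, and populating it with each block read back from disk.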

-- 
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.