Posted to dev@hbase.apache.org by "Jim Kellerman (JIRA)" <ji...@apache.org> on 2008/02/05 01:03:07 UTC

[jira] Reopened: (HBASE-288) Add in-memory caching of data

     [ https://issues.apache.org/jira/browse/HBASE-288?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Jim Kellerman reopened HBASE-288:
---------------------------------

      Assignee: Jim Kellerman

Reopening issue. Patch was not fully backed out of HBase.

> Add in-memory caching of data
> -----------------------------
>
>                 Key: HBASE-288
>                 URL: https://issues.apache.org/jira/browse/HBASE-288
>             Project: Hadoop HBase
>          Issue Type: Bug
>            Reporter: Jim Kellerman
>            Assignee: Jim Kellerman
>            Priority: Trivial
>         Attachments: commons-collections-3.2.jar, hadoop-blockcache-v2.patch, hadoop-blockcache-v3.patch, hadoop-blockcache-v4.1.patch, hadoop-blockcache-v4.patch, hadoop-blockcache-v5.patch, hadoop-blockcache-v6.patch, hadoop-blockcache.patch
>
>
> Bigtable provides two in-memory caches: one for row/column data and one for disk blocks.
> The size of each cache should be configurable, data should be loaded lazily, and each cache should be managed by an LRU mechanism (a minimal sketch follows after the quoted description).
> One complication of the block cache is that all data is read through a SequenceFile.Reader, which ultimately reads data off of disk via an RPC proxy for ClientProtocol. This would imply that the block caching would have to be pushed down to either the DFSClient or the SequenceFile.Reader.

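For context, below is a minimal, illustrative Java sketch of the kind of size-bounded, LRU-evicting block cache the description calls for. The class name (SimpleLruBlockCache), the String key, the byte[] value type, and the entry-count bound are assumptions made purely for illustration; this is not the patch attached to this issue, which the reader should consult for the actual implementation.

import java.util.LinkedHashMap;
import java.util.Map;

/**
 * Illustrative sketch only: a size-bounded block cache whose entries are
 * evicted in least-recently-used order. Names and types are assumptions
 * for the example, not the HBASE-288 patch.
 */
public class SimpleLruBlockCache {
    private final Map<String, byte[]> cache;

    public SimpleLruBlockCache(final int maxEntries) {
        // accessOrder=true makes LinkedHashMap reorder entries on get(),
        // so the eldest entry is always the least recently used one.
        this.cache = new LinkedHashMap<String, byte[]>(16, 0.75f, true) {
            @Override
            protected boolean removeEldestEntry(Map.Entry<String, byte[]> eldest) {
                // Evict the LRU entry once the configured cap is exceeded.
                return size() > maxEntries;
            }
        };
    }

    /** Returns the cached block, or null if the caller must read it from disk (lazy load). */
    public synchronized byte[] get(String blockKey) {
        return cache.get(blockKey);
    }

    /** Caches a block after it has been read, possibly evicting the LRU block. */
    public synchronized void put(String blockKey, byte[] block) {
        cache.put(blockKey, block);
    }
}

In a real cache of this kind, entries would more likely be keyed by (file, block offset) and bounded by total bytes rather than entry count, with the bound exposed as a configuration property, as the quoted description suggests.
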
-- 
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.