Posted to common-dev@hadoop.apache.org by "Jim Kellerman (JIRA)" <ji...@apache.org> on 2008/01/24 00:46:34 UTC
[jira] Assigned: (HADOOP-1398) Add in-memory caching of data
[ https://issues.apache.org/jira/browse/HADOOP-1398?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]
Jim Kellerman reassigned HADOOP-1398:
-------------------------------------
Assignee: Tom White
> Add in-memory caching of data
> -----------------------------
>
> Key: HADOOP-1398
> URL: https://issues.apache.org/jira/browse/HADOOP-1398
> Project: Hadoop Core
> Issue Type: New Feature
> Components: contrib/hbase
> Reporter: Jim Kellerman
> Assignee: Tom White
> Priority: Trivial
> Attachments: commons-collections-3.2.jar, hadoop-blockcache-v2.patch, hadoop-blockcache-v3.patch, hadoop-blockcache-v4.1.patch, hadoop-blockcache-v4.patch, hadoop-blockcache.patch
>
>
> Bigtable provides two in-memory caches: one for row/column data and one for disk blocks.
> The size of each cache should be configurable, data should be loaded lazily, and the cache managed by an LRU mechanism.
> One complication of the block cache is that all data is read through a SequenceFile.Reader, which ultimately reads data off of disk via an RPC proxy for ClientProtocol. This implies that the block caching would have to be pushed down to either the DFSClient or SequenceFile.Reader.
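The cache the description asks for (configurable size, lazy loading, LRU eviction) can be sketched in a few lines of Java. This is a hypothetical illustration only, not the attached patch: the class name BlockCache, the byte[] block values, and the loader callback are all assumptions; it uses java.util.LinkedHashMap's access-order mode rather than the commons-collections LRUMap attached to the issue.

```java
import java.util.LinkedHashMap;
import java.util.Map;
import java.util.function.LongFunction;

// Hypothetical sketch of an LRU block cache keyed by block offset,
// with a configurable capacity, as described in the issue.
public class BlockCache {
    private final int capacity;
    private final Map<Long, byte[]> blocks;

    public BlockCache(int capacity) {
        this.capacity = capacity;
        // accessOrder=true: iteration order is least-recently-used first,
        // so removeEldestEntry evicts the LRU block when over capacity.
        this.blocks = new LinkedHashMap<Long, byte[]>(16, 0.75f, true) {
            @Override
            protected boolean removeEldestEntry(Map.Entry<Long, byte[]> eldest) {
                return size() > BlockCache.this.capacity;
            }
        };
    }

    // Lazy loading: return the cached block, or invoke the loader
    // (e.g. a read through the underlying reader) and cache the result.
    public byte[] getBlock(long offset, LongFunction<byte[]> loader) {
        return blocks.computeIfAbsent(offset, loader::apply);
    }

    public int size() { return blocks.size(); }
    public boolean contains(long offset) { return blocks.containsKey(offset); }
}
```

In a real integration the loader would be the point where the cache is "pushed down" to the DFSClient or SequenceFile.Reader, as the description notes.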
--
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.