Posted to issues@hbase.apache.org by "Lars Hofhansl (JIRA)" <ji...@apache.org> on 2013/12/03 20:43:38 UTC

[jira] [Commented] (HBASE-10052) use HDFS advisory caching to avoid caching HFiles that are not going to be read again (because they are being compacted)

    [ https://issues.apache.org/jira/browse/HBASE-10052?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13838082#comment-13838082 ] 

Lars Hofhansl commented on HBASE-10052:
---------------------------------------

That is a good idea. I just checked the API: since we can set this on an existing InputStream (we open all files ahead of time), we can simply set it on the old files before we run a compaction.
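A minimal sketch of that pattern, assuming a hypothetical helper and list of already-open store-file streams (the names here are illustrative, not the actual HBase code paths; the Hadoop class exposing the hint is FSDataInputStream):

```java
// Hedged sketch: "markForDropBehind" and "storeFileStreams" are hypothetical
// names for illustration; they show the FSDataInputStream#setDropBehind call
// pattern described above, applied to old files before compacting them.
import java.io.IOException;
import java.util.List;

import org.apache.hadoop.fs.FSDataInputStream;

public class DropBehindBeforeCompaction {
  /**
   * Hint that these already-open streams will not be re-read, so the OS
   * can drop their pages from cache after use (advisory dropbehind).
   */
  static void markForDropBehind(List<FSDataInputStream> storeFileStreams) {
    for (FSDataInputStream in : storeFileStreams) {
      try {
        in.setDropBehind(true); // advisory only; reads still succeed
      } catch (UnsupportedOperationException | IOException e) {
        // The underlying stream may not support the hint; safe to ignore.
      }
    }
  }
}
```

Since the hint is purely advisory, a failure to apply it only costs cache efficiency, never correctness, which is why the exception can be swallowed here.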

> use HDFS advisory caching to avoid caching HFiles that are not going to be read again (because they are being compacted)
> ------------------------------------------------------------------------------------------------------------------------
>
>                 Key: HBASE-10052
>                 URL: https://issues.apache.org/jira/browse/HBASE-10052
>             Project: HBase
>          Issue Type: Improvement
>            Reporter: Colin Patrick McCabe
>             Fix For: 0.98.0
>
>
> HBase can benefit from doing dropbehind during compaction, since compacted files are not read again.  HDFS advisory caching, introduced in HDFS-4817, can help here.  The right API here is {{FSDataInputStream#setDropBehind}}.



--
This message was sent by Atlassian JIRA
(v6.1#6144)