Posted to dev@hbase.apache.org by "stack (JIRA)" <ji...@apache.org> on 2008/10/03 18:54:44 UTC

[jira] Resolved: (HBASE-911) Minimize filesystem footprint

     [ https://issues.apache.org/jira/browse/HBASE-911?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

stack resolved HBASE-911.
-------------------------

    Resolution: Invalid

Resolving as invalid.

> Minimize filesystem footprint
> -----------------------------
>
>                 Key: HBASE-911
>                 URL: https://issues.apache.org/jira/browse/HBASE-911
>             Project: Hadoop HBase
>          Issue Type: Improvement
>            Reporter: stack
>
> This issue is about looking into how much filesystem space HBase uses.  Daniel Ploeg suggests that HBase is profligate in its use of space in HDFS.  Given that block sizes default to 64MB, and that every time HBase writes a store file it is accompanied by an index file and a very small metadata file, that's 3*64MB even if the files are nearly empty (TODO: Prove this).  The situation is aggravated by the fact that HBase flushes whatever is in memory every 30 minutes to minimize loss in the absence of appends; this latter behavior makes for lots of small files.
> The solution to the above is to implement append, so the optional flush is no longer necessary, and a file format that aggregates info, index, and data in one file.  Short-term, we should set the block size on the info/metadata file down to 4k or some similarly small size, and look into doing likewise for the mapfile index.
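
For illustration, here is the back-of-the-envelope arithmetic behind the description above, taking its worst-case assumption at face value (that each of the three files per flush is charged a full 64MB block; whether HDFS actually charges a full block for a small file is exactly the TODO in the description):

```python
# Sketch of the space arithmetic described in the issue, under its stated
# (unproven) assumption that every file occupies at least one full block.
BLOCK_SIZE_MB = 64        # default HDFS block size cited above
FILES_PER_FLUSH = 3       # store file + index file + metadata file
FLUSH_INTERVAL_MIN = 30   # optional periodic flush interval cited above

flushes_per_day = 24 * 60 // FLUSH_INTERVAL_MIN          # 48 flushes/day
footprint_per_flush_mb = FILES_PER_FLUSH * BLOCK_SIZE_MB  # 192 MB/flush
daily_footprint_mb = flushes_per_day * footprint_per_flush_mb

print(daily_footprint_mb)  # 9216 MB/day per flushing region, pre-compaction
```

Even before compactions reclaim anything, an idle region flushing on the timer alone would appear to consume roughly 9GB/day under this assumption, which is why the short-term fix of a small (e.g. 4k) block size on the tiny info/index files looked attractive.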

-- 
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.