Posted to issues@hbase.apache.org by "Zhihong Yu (Issue Comment Edited) (JIRA)" <ji...@apache.org> on 2012/02/07 11:14:59 UTC

[jira] [Issue Comment Edited] (HBASE-5074) support checksums in HBase block cache

    [ https://issues.apache.org/jira/browse/HBASE-5074?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13202213#comment-13202213 ] 

Zhihong Yu edited comment on HBASE-5074 at 2/7/12 10:13 AM:
------------------------------------------------------------

@Dhruba:
Your explanation of CRC algorithm selection makes sense. 
                
      was (Author: zhihyu@ebaysf.com):
    @Dhruba:
Your explanation of CDC algorithm selection makes sense. 
                  
> support checksums in HBase block cache
> --------------------------------------
>
>                 Key: HBASE-5074
>                 URL: https://issues.apache.org/jira/browse/HBASE-5074
>             Project: HBase
>          Issue Type: Improvement
>          Components: regionserver
>            Reporter: dhruba borthakur
>            Assignee: dhruba borthakur
>         Attachments: D1521.1.patch, D1521.1.patch, D1521.2.patch, D1521.2.patch, D1521.3.patch, D1521.3.patch
>
>
> The current implementation of HDFS stores the data in one block file and the metadata (checksum) in another block file. This means that every read into the HBase block cache actually consumes two disk iops, one to the data file and one to the checksum file. This is a major problem for scaling HBase, because HBase is usually bottlenecked on the number of random disk iops that the storage hardware offers.
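
For illustration only, and not the HBASE-5074 patch itself: a minimal sketch of the idea behind inline block checksums, where the checksum is stored alongside the block data so a read can verify the payload without a second iop to a separate checksum file. It uses java.util.zip.CRC32; the class and method names below are hypothetical.

    import java.nio.ByteBuffer;
    import java.util.zip.CRC32;

    // Hypothetical sketch: keep a CRC32 inline with each block's payload so
    // verification needs no extra read against a separate checksum file.
    public class InlineChecksumSketch {

        // Append a 4-byte CRC32 of 'data' to the serialized block.
        static byte[] writeBlockWithChecksum(byte[] data) {
            CRC32 crc = new CRC32();
            crc.update(data, 0, data.length);
            ByteBuffer out = ByteBuffer.allocate(data.length + 4);
            out.put(data);
            out.putInt((int) crc.getValue());   // low 32 bits of the checksum
            return out.array();
        }

        // Verify the trailing checksum on read; return the payload if it matches.
        static byte[] readBlockVerifyingChecksum(byte[] block) {
            ByteBuffer in = ByteBuffer.wrap(block);
            byte[] data = new byte[block.length - 4];
            in.get(data);
            int stored = in.getInt();
            CRC32 crc = new CRC32();
            crc.update(data, 0, data.length);
            if ((int) crc.getValue() != stored) {
                throw new IllegalStateException("Checksum mismatch on block read");
            }
            return data;
        }

        public static void main(String[] args) {
            byte[] block = writeBlockWithChecksum("example block payload".getBytes());
            byte[] payload = readBlockVerifyingChecksum(block);
            System.out.println(new String(payload)); // prints the original payload
        }
    }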

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira