Posted to common-dev@hadoop.apache.org by "Raghu Angadi (JIRA)" <ji...@apache.org> on 2007/05/11 23:10:15 UTC

[jira] Commented: (HADOOP-1134) Block level CRCs in HDFS

    [ https://issues.apache.org/jira/browse/HADOOP-1134?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#action_12495168 ] 

Raghu Angadi commented on HADOOP-1134:
--------------------------------------

When a block's data does not start or end at a bytesPerChecksum ({{bpc}}) boundary, I am planning to do the following in the Datanode (a rough sketch of the arithmetic follows the list):

Assume the block's data is divided like this: {{x + n * bpc + y}}, where {{x}} and {{y}} are each less than {{bpc}}.

# Using DFSClient, fetch {{bpc-x}} bytes before the block and {{bpc-y}} bytes after the block.
# Just as for the rest of the blocks, fetch {{.crc}} file data for the block as if the block had {{bpc + n*bpc + bpc}} bytes (i.e. the DN fetches crc data from multiple replicas and selects the majority that match).
# Read the actual block, verify that the data matches, and generate new CRC data with the same {{bpc}}.
# When there is a mismatch, we could store a wrong CRC just for the affected range, or we could store a null checksum for the entire block. I am leaning toward a wrong CRC value just for the affected range, since it does not increase the code by much.
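
Here is a minimal sketch of the boundary arithmetic behind steps 1 and 2, assuming the block's starting offset within the file is available. The class and method names are hypothetical, not actual Datanode code:

{code:java}
// Hypothetical sketch: computes how many extra bytes to fetch around a
// block whose data is laid out as x + n*bpc + y within the file.
public class BlockCrcAlignment {

    /**
     * Bytes to fetch before the block. The leading partial chunk inside
     * the block has x = bpc - (start % bpc) bytes, so bpc - x = start % bpc
     * bytes of that chunk lie before the block (zero on a boundary).
     */
    static long bytesBefore(long blockStartOffsetInFile, int bpc) {
        return blockStartOffsetInFile % bpc;
    }

    /**
     * Bytes to fetch after the block. The trailing partial chunk inside
     * the block has y = (start + len) % bpc bytes, so bpc - y bytes of
     * that chunk lie after the block (zero when the block ends on a boundary).
     */
    static long bytesAfter(long blockStartOffsetInFile, long blockLen, int bpc) {
        long y = (blockStartOffsetInFile + blockLen) % bpc;
        return (y == 0) ? 0 : bpc - y;
    }

    public static void main(String[] args) {
        int bpc = 512;
        long start = 1000;  // block starts mid-chunk: x = 512 - (1000 % 512) = 24
        long len = 2000;    // 2000 = 24 + 3*512 + 440, so y = 440
        System.out.println("fetch before: " + bytesBefore(start, bpc));      // 488
        System.out.println("fetch after:  " + bytesAfter(start, len, bpc));  // 72
    }
}
{code}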



> Block level CRCs in HDFS
> ------------------------
>
>                 Key: HADOOP-1134
>                 URL: https://issues.apache.org/jira/browse/HADOOP-1134
>             Project: Hadoop
>          Issue Type: New Feature
>          Components: dfs
>            Reporter: Raghu Angadi
>         Assigned To: Raghu Angadi
>
> Currently CRCs are handled at the FileSystem level and are transparent to core HDFS. See the recent improvement HADOOP-928 (which can add checksums to a given filesystem) for more about it. Though this has served us well, there are a few disadvantages:
> 1) This doubles the namespace in HDFS (or other filesystem implementations). In many cases, it nearly doubles the number of blocks. Taking the namenode out of CRCs would nearly double namespace performance, both in terms of CPU and memory.
> 2) Since CRCs are transparent to HDFS, it can not actively detect corrupted blocks. With block level CRCs, the Datanode can periodically verify the checksums and report corruptions to the namenode so that new replicas can be created.
> We propose to have CRCs maintained for all HDFS data in much the same way as in GFS. I will update the jira with detailed requirements and design. This will include the same guarantees provided by the current implementation and will include an upgrade of existing data.
>  
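
As a concrete illustration of the periodic verification mentioned in point 2 above, here is a minimal sketch that assumes one 4-byte CRC32 per {{bpc}}-byte chunk, stored big-endian in a companion checksum file. The file layout, names, and reporting hook are assumptions for illustration, not the actual on-disk format:

{code:java}
import java.io.DataInputStream;
import java.io.FileInputStream;
import java.io.IOException;
import java.util.zip.CRC32;

// Hypothetical sketch of chunk-by-chunk block verification; the layout
// (one big-endian 4-byte CRC32 per bpc-byte chunk) is an assumption.
public class BlockVerifier {

    /** Returns true iff every chunk of the block matches its stored CRC. */
    static boolean verify(String blockFile, String crcFile, int bpc)
            throws IOException {
        try (FileInputStream block = new FileInputStream(blockFile);
             DataInputStream crcs =
                     new DataInputStream(new FileInputStream(crcFile))) {
            byte[] chunk = new byte[bpc];
            CRC32 crc = new CRC32();
            int read;
            while ((read = block.read(chunk)) > 0) {
                crc.reset();
                crc.update(chunk, 0, read);   // last chunk may be short
                int stored = crcs.readInt();  // 4 bytes, big-endian
                if ((int) crc.getValue() != stored) {
                    return false;  // corruption: report to the namenode here
                }
            }
            return true;
        }
    }
}
{code}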

-- 
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.