Posted to common-dev@hadoop.apache.org by "Wendy Chien (JIRA)" <ji...@apache.org> on 2007/01/10 02:56:27 UTC

[jira] Updated: (HADOOP-855) HDFS should repair corrupted files

     [ https://issues.apache.org/jira/browse/HADOOP-855?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Wendy Chien updated HADOOP-855:
-------------------------------

    Attachment: hadoop-855-5.patch

> HDFS should repair corrupted files
> ----------------------------------
>
>                 Key: HADOOP-855
>                 URL: https://issues.apache.org/jira/browse/HADOOP-855
>             Project: Hadoop
>          Issue Type: Bug
>          Components: dfs
>            Reporter: Wendy Chien
>         Assigned To: Wendy Chien
>         Attachments: hadoop-855-5.patch
>
>
> While reading, if we discover a mismatch between a block and its checksum, we want to report it back to the namenode so that the corrupted block or crc can be deleted.
> To implement this, we need to do the following:
> DFSInputStream
> 1. move DFSInputStream out of DFSClient
> 2. add a member variable to keep track of the current datanode (the chosen node)
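> A minimal sketch of item 2 (all names here are illustrative stand-ins, not necessarily what the patch uses): the stream records which block it is reading and from which datanode, so a later checksum failure can name the exact block/datanode pair.
>
>     // Illustrative sketch only; simplified stand-ins for the real HDFS types.
>     class DatanodeInfo {
>       final String hostPort;
>       DatanodeInfo(String hostPort) { this.hostPort = hostPort; }
>     }
>
>     class Block {
>       final long blockId;
>       Block(long blockId) { this.blockId = blockId; }
>     }
>
>     // New bookkeeping in DFSInputStream: remember the "chosen node" and
>     // current block every time the stream switches to a new block.
>     class DFSInputStreamSketch {
>       private DatanodeInfo chosenNode;  // datanode currently being read from
>       private Block currentBlock;       // block currently being read
>
>       void blockSeekTo(Block block, DatanodeInfo node) {
>         this.currentBlock = block;
>         this.chosenNode = node;
>       }
>
>       DatanodeInfo getCurrentDatanode() { return chosenNode; }
>       Block getCurrentBlock() { return currentBlock; }
>     }
>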
> DistributedFileSystem
> 1. change the reportChecksumFailure parameter crc from int to FSInputStream (needed to be able to delete it)
> 2. determine the specific block and datanode from the DFSInputStream passed to reportChecksumFailure
> 3. call the namenode to delete the block/crc via DFSClient
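> A sketch of the reworked reportChecksumFailure, reusing the stand-in types above (the helper interface and the exact signature are assumptions for illustration, not the patch's actual API). Since the client cannot tell whether the data block or the crc block is the corrupt one, this version reports both:
>
>     import java.io.IOException;
>
>     // Illustrative sketch only; the real method lives in DistributedFileSystem.
>     class DistributedFileSystemSketch {
>       // Hypothetical stand-in for the DFSClient -> namenode path.
>       interface BadBlockReporter {
>         void deleteBlockOnDatanode(Block b, DatanodeInfo d) throws IOException;
>       }
>
>       private final BadBlockReporter client;
>       DistributedFileSystemSketch(BadBlockReporter client) { this.client = client; }
>
>       // crc is now a stream (not an int), so its bad block can be located too.
>       void reportChecksumFailure(DFSInputStreamSketch data,
>                                  DFSInputStreamSketch crc) throws IOException {
>         client.deleteBlockOnDatanode(data.getCurrentBlock(),
>                                      data.getCurrentDatanode());
>         client.deleteBlockOnDatanode(crc.getCurrentBlock(),
>                                      crc.getCurrentDatanode());
>       }
>     }
>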
> ClientProtocol
> 1. add a method to ask the namenode to delete certain blocks on a specific datanode
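> The new protocol method might look roughly like this (name and javadoc are illustrative; the actual RPC added by the patch may differ):
>
>     import java.io.IOException;
>
>     // Hypothetical addition to the client-namenode protocol.
>     interface ClientProtocolSketch {
>       /**
>        * Ask the namenode to delete the given blocks on one specific
>        * datanode; good replicas elsewhere are left intact.
>        */
>       void deleteBlocksOnDatanode(Block[] blocks, DatanodeInfo node)
>           throws IOException;
>     }
>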
> Namenode
> 1. add the ability to delete certain blocks on a specific datanode
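> One plausible shape for the namenode side (heavily simplified: the real namenode keeps this state alongside its block map and hands deletions to the datanode in a heartbeat reply; none of the names below are taken from the patch):
>
>     import java.util.*;
>
>     // Illustrative sketch: queue per-datanode deletions and drain them
>     // when that datanode next checks in.
>     class NamenodeSketch {
>       private final Map<String, List<Block>> pendingDeletes = new HashMap<>();
>
>       synchronized void deleteBlocksOnDatanode(Block[] blocks, DatanodeInfo node) {
>         pendingDeletes.computeIfAbsent(node.hostPort, k -> new ArrayList<>())
>                       .addAll(Arrays.asList(blocks));
>       }
>
>       // Called on heartbeat: return and clear this datanode's pending work.
>       synchronized List<Block> getBlocksToDelete(DatanodeInfo node) {
>         List<Block> work = pendingDeletes.remove(node.hostPort);
>         return (work == null) ? Collections.<Block>emptyList() : work;
>       }
>     }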

-- 
This message is automatically generated by JIRA.
-
If you think it was sent incorrectly contact one of the administrators: https://issues.apache.org/jira/secure/Administrators.jspa
-
For more information on JIRA, see: http://www.atlassian.com/software/jira