Posted to common-dev@hadoop.apache.org by "dhruba borthakur (JIRA)" <ji...@apache.org> on 2008/03/13 23:10:24 UTC
[jira] Commented: (HADOOP-3013) fsck to show (checksum) corrupted files
[ https://issues.apache.org/jira/browse/HADOOP-3013?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=12578512#action_12578512 ]
dhruba borthakur commented on HADOOP-3013:
------------------------------------------
We can enhance the Datanode Block verifier to persistently remember corrupted blocks. This information could be collected by the namenode (through block reports).
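To make that concrete, here is a minimal sketch of the datanode side; CorruptBlockLog and everything in it are hypothetical names for illustration, not existing Hadoop classes. The verifier would call recordCorrupt() on a checksum mismatch, and the persisted set would survive a datanode restart:

{code}
import java.io.*;
import java.util.*;

/**
 * Hypothetical append-only log a datanode could use to persistently
 * remember blocks that failed periodic checksum verification.
 */
public class CorruptBlockLog {
  private final File logFile;
  private final Set<Long> corruptBlockIds = new HashSet<Long>();

  public CorruptBlockLog(File logFile) throws IOException {
    this.logFile = logFile;
    load();
  }

  // Reload previously recorded block ids so the set survives a restart.
  private void load() throws IOException {
    if (!logFile.exists()) {
      return;
    }
    BufferedReader in = new BufferedReader(new FileReader(logFile));
    try {
      String line;
      while ((line = in.readLine()) != null) {
        corruptBlockIds.add(Long.parseLong(line.trim()));
      }
    } finally {
      in.close();
    }
  }

  // Called by the block verifier on a checksum mismatch; appends to
  // the log only the first time a block id is seen.
  public synchronized void recordCorrupt(long blockId) throws IOException {
    if (corruptBlockIds.add(blockId)) {
      FileWriter out = new FileWriter(logFile, true);
      try {
        out.write(blockId + "\n");
      } finally {
        out.close();
      }
    }
  }

  // What the datanode would piggyback on its next block report, so the
  // namenode can mark these replicas corrupt.
  public synchronized Set<Long> getCorruptBlockIds() {
    return new HashSet<Long>(corruptBlockIds);
  }
}
{code}

An append-only file keeps recordCorrupt() cheap on the verification path; the log could be compacted whenever the namenode has acknowledged the corrupt replicas.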
> fsck to show (checksum) corrupted files
> ---------------------------------------
>
> Key: HADOOP-3013
> URL: https://issues.apache.org/jira/browse/HADOOP-3013
> Project: Hadoop Core
> Issue Type: Improvement
> Components: dfs
> Reporter: Koji Noguchi
>
> Currently, the only way to find files with all replicas corrupt is to read those files.
> Instead, can we have fsck report those?
> (Using the corrupted blocks found by the periodic verification...?)
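On the fsck side, the namenode would then have enough information to flag a file as corrupt whenever any of its blocks has every replica marked corrupt. A rough sketch under that assumption (all class and method names here are made up for illustration, not part of Hadoop):

{code}
import java.util.*;

/** Hypothetical fsck helper: find files with a block whose replicas are all corrupt. */
public class CorruptFileReporter {
  private final Map<Long, Integer> replicaCount;        // blockId -> live replica count
  private final Map<Long, Integer> corruptReplicaCount; // blockId -> replicas reported corrupt
  private final Map<String, List<Long>> fileBlocks;     // file path -> its block ids

  public CorruptFileReporter(Map<Long, Integer> replicaCount,
                             Map<Long, Integer> corruptReplicaCount,
                             Map<String, List<Long>> fileBlocks) {
    this.replicaCount = replicaCount;
    this.corruptReplicaCount = corruptReplicaCount;
    this.fileBlocks = fileBlocks;
  }

  // A file is unreadable if any one of its blocks has all replicas corrupt.
  public List<String> corruptFiles() {
    List<String> result = new ArrayList<String>();
    for (Map.Entry<String, List<Long>> e : fileBlocks.entrySet()) {
      for (long blockId : e.getValue()) {
        Integer total = replicaCount.get(blockId);
        Integer corrupt = corruptReplicaCount.get(blockId);
        if (total != null && corrupt != null && corrupt >= total) {
          result.add(e.getKey());
          break;
        }
      }
    }
    return result;
  }
}
{code}

fsck could print the resulting list alongside its usual output, so corruption is surfaced proactively instead of only being discovered at read time.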