Posted to common-dev@hadoop.apache.org by "Koji Noguchi (JIRA)" <ji...@apache.org> on 2008/03/13 23:06:25 UTC
[jira] Created: (HADOOP-3013) fsck to show (checksum) corrupted files
fsck to show (checksum) corrupted files
---------------------------------------
Key: HADOOP-3013
URL: https://issues.apache.org/jira/browse/HADOOP-3013
Project: Hadoop Core
Issue Type: Improvement
Components: dfs
Reporter: Koji Noguchi
Currently, the only way to find files with all replicas corrupt is to read those files.
Instead, can we have fsck report them?
(Using the corrupted blocks found by the periodic verification...?)
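The request above — have fsck surface files whose every replica is corrupt, rather than discovering them only on read — can be sketched as a scan of fsck-style output for a CORRUPT marker. This is an illustrative sketch only: the sample report text and its exact line format are invented, not the real output of `hadoop fsck`.

```python
# Hypothetical fsck-style report; the real fsck output format may differ.
SAMPLE_REPORT = """\
/user/koji/data/part-00000: CORRUPT block blk_123
/user/koji/data/part-00001: OK
/user/koji/logs/app.log: MISSING 1 blocks of total size 1048576 B
/user/koji/data/part-00002: CORRUPT block blk_456
"""

def corrupt_files(report: str) -> list[str]:
    """Return the paths of lines flagged CORRUPT in an fsck-style report."""
    paths = []
    for line in report.splitlines():
        if ": CORRUPT" in line:
            # The path is everything before the first colon.
            paths.append(line.split(":", 1)[0])
    return paths

print(corrupt_files(SAMPLE_REPORT))
```

The point is that once fsck emits a per-file CORRUPT marker, operators can find fully corrupt files with a simple scan instead of reading every file.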
--
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.
[jira] Commented: (HADOOP-3013) fsck to show (checksum) corrupted files
Posted by "Hudson (JIRA)" <ji...@apache.org>.
[ https://issues.apache.org/jira/browse/HADOOP-3013?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=12597462#action_12597462 ]
Hudson commented on HADOOP-3013:
--------------------------------
Integrated in Hadoop-trunk #493 (See [http://hudson.zones.apache.org/hudson/job/Hadoop-trunk/493/])
[jira] Updated: (HADOOP-3013) fsck to show (checksum) corrupted files
Posted by "lohit vijayarenu (JIRA)" <ji...@apache.org>.
[ https://issues.apache.org/jira/browse/HADOOP-3013?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]
lohit vijayarenu updated HADOOP-3013:
-------------------------------------
Release Note: fsck reports corrupt blocks in the system.
Status: Patch Available (was: Open)
[jira] Commented: (HADOOP-3013) fsck to show (checksum) corrupted files
Posted by "dhruba borthakur (JIRA)" <ji...@apache.org>.
[ https://issues.apache.org/jira/browse/HADOOP-3013?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=12578512#action_12578512 ]
dhruba borthakur commented on HADOOP-3013:
------------------------------------------
We can enhance the Datanode Block verifier to persistently remember corrupted blocks. This information could be collected by the namenode (through block reports).
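The idea of having the datanode's block verifier persistently remember corrupted blocks, so the namenode can collect them through block reports, might look roughly like the toy sketch below. Everything here is hypothetical: a newline-delimited file stands in for the datanode's persistent store, and `CorruptBlockLog` is not an actual Hadoop class.

```python
import os

class CorruptBlockLog:
    """Toy persistent record of corrupt block IDs.

    A newline-delimited file stands in for durable datanode state; the
    set survives a restart because the file is re-read on construction.
    """

    def __init__(self, path: str):
        self.path = path
        self.blocks: set[str] = set()
        if os.path.exists(path):
            with open(path) as f:
                self.blocks = {line.strip() for line in f if line.strip()}

    def record(self, block_id: str) -> None:
        """Remember a corrupt block, appending to the file once per block."""
        if block_id not in self.blocks:
            self.blocks.add(block_id)
            with open(self.path, "a") as f:
                f.write(block_id + "\n")

    def report(self) -> list[str]:
        """What a datanode might include in a block report to the namenode."""
        return sorted(self.blocks)
```

Because the set is reloaded from disk on startup, corruption found by the periodic verifier is not lost across datanode restarts, which is the property the comment above is after.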
[jira] Assigned: (HADOOP-3013) fsck to show (checksum) corrupted files
Posted by "lohit vijayarenu (JIRA)" <ji...@apache.org>.
[ https://issues.apache.org/jira/browse/HADOOP-3013?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]
lohit vijayarenu reassigned HADOOP-3013:
----------------------------------------
Assignee: lohit vijayarenu
[jira] Commented: (HADOOP-3013) fsck to show (checksum) corrupted files
Posted by "Raghu Angadi (JIRA)" <ji...@apache.org>.
[ https://issues.apache.org/jira/browse/HADOOP-3013?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=12597214#action_12597214 ]
Raghu Angadi commented on HADOOP-3013:
--------------------------------------
+1. Patch looks good.
[jira] Commented: (HADOOP-3013) fsck to show (checksum) corrupted files
Posted by "Hadoop QA (JIRA)" <ji...@apache.org>.
[ https://issues.apache.org/jira/browse/HADOOP-3013?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=12597083#action_12597083 ]
Hadoop QA commented on HADOOP-3013:
-----------------------------------
+1 overall. Here are the results of testing the latest attachment
http://issues.apache.org/jira/secure/attachment/12382096/HADOOP-3013-1.patch
against trunk revision 656525.
+1 @author. The patch does not contain any @author tags.
+1 tests included. The patch appears to include 4 new or modified tests.
+1 javadoc. The javadoc tool did not generate any warning messages.
+1 javac. The applied patch does not increase the total number of javac compiler warnings.
+1 findbugs. The patch does not introduce any new Findbugs warnings.
+1 release audit. The applied patch does not increase the total number of release audit warnings.
+1 core tests. The patch passed core unit tests.
+1 contrib tests. The patch passed contrib unit tests.
Test results: http://hudson.zones.apache.org/hudson/job/Hadoop-Patch/2478/testReport/
Findbugs warnings: http://hudson.zones.apache.org/hudson/job/Hadoop-Patch/2478/artifact/trunk/build/test/findbugs/newPatchFindbugsWarnings.html
Checkstyle results: http://hudson.zones.apache.org/hudson/job/Hadoop-Patch/2478/artifact/trunk/build/test/checkstyle-errors.html
Console output: http://hudson.zones.apache.org/hudson/job/Hadoop-Patch/2478/console
This message is automatically generated.
[jira] Updated: (HADOOP-3013) fsck to show (checksum) corrupted files
Posted by "Chris Douglas (JIRA)" <ji...@apache.org>.
[ https://issues.apache.org/jira/browse/HADOOP-3013?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]
Chris Douglas updated HADOOP-3013:
----------------------------------
Resolution: Fixed
Hadoop Flags: [Reviewed]
Status: Resolved (was: Patch Available)
I just committed this. Thanks, Lohit
[jira] Updated: (HADOOP-3013) fsck to show (checksum) corrupted files
Posted by "lohit vijayarenu (JIRA)" <ji...@apache.org>.
[ https://issues.apache.org/jira/browse/HADOOP-3013?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]
lohit vijayarenu updated HADOOP-3013:
-------------------------------------
Attachment: HADOOP-3013-1.patch
Attached patch now reports corrupt blocks via fsck. I think we should not have a special option, but rather show them by default, as is done for MISSING blocks. Included a test case to check the reporting of such blocks.
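The behavior described above — corrupt blocks reported by default, alongside MISSING ones, with a test asserting on the report — can be exercised in miniature like this. The report lines and the `summarize` helper are invented for illustration and do not reflect the patch's actual code.

```python
def summarize(report_lines: list[str]) -> dict[str, int]:
    """Count CORRUPT and MISSING markers in fsck-style output lines.

    Both markers are counted unconditionally, i.e. corrupt blocks are
    part of the default report rather than hidden behind an option.
    """
    counts = {"CORRUPT": 0, "MISSING": 0}
    for line in report_lines:
        for marker in counts:
            if marker in line:
                counts[marker] += 1
    return counts

# A test in the spirit of the one described: the default summary must
# include corrupt blocks, just as it already includes missing ones.
lines = [
    "/a/file1: CORRUPT block blk_1",
    "/a/file2: MISSING 1 blocks",
    "/a/file3: OK",
]
assert summarize(lines) == {"CORRUPT": 1, "MISSING": 1}
```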