Posted to common-dev@hadoop.apache.org by "zhangwei (JIRA)" <ji...@apache.org> on 2009/01/13 11:01:02 UTC

[jira] Updated: (HADOOP-5019) add querying block's info in the fsck facility

     [ https://issues.apache.org/jira/browse/HADOOP-5019?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

zhangwei updated HADOOP-5019:
-----------------------------

    Attachment: HADOOP-5019.patch

If the path argument starts with "blk_", fsck parses the block id that follows and ignores the generation stamp.
It then looks up the block's inode via blocksMap.getINode(b),
walks the inode's parent links recursively to reconstruct the full path string,
and finally prints the full path, the datanode locations, and the permission status.


Otherwise, if the path does not start with that prefix, fsck proceeds with the normal check.
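The lookup described in the patch can be sketched roughly as follows. This is a minimal, self-contained illustration, not the patch itself: INode here is a simplified stand-in for the namenode's internal class, and parseBlockId/fullPath are hypothetical helper names, not Hadoop API.

```java
import java.util.ArrayList;
import java.util.List;

public class BlockQuerySketch {
    // Simplified stand-in for the namenode's INode: a name plus a parent link.
    static class INode {
        final String name;
        final INode parent;
        INode(String name, INode parent) { this.name = name; this.parent = parent; }
    }

    // Extract the numeric block id from "blk_<id>" or "blk_<id>_<genstamp>",
    // ignoring the generation stamp as the patch description says.
    static long parseBlockId(String arg) {
        if (!arg.startsWith("blk_")) {
            throw new IllegalArgumentException("not a block id: " + arg);
        }
        String rest = arg.substring("blk_".length());
        int underscore = rest.indexOf('_');
        if (underscore >= 0) {
            rest = rest.substring(0, underscore); // drop the generation stamp
        }
        return Long.parseLong(rest);
    }

    // Rebuild the full path by following parent links up to the root,
    // analogous to walking the inode's parents recursively.
    static String fullPath(INode inode) {
        List<String> parts = new ArrayList<>();
        for (INode n = inode; n != null && !n.name.isEmpty(); n = n.parent) {
            parts.add(0, n.name);
        }
        return "/" + String.join("/", parts);
    }

    public static void main(String[] args) {
        INode root = new INode("", null);
        INode dir = new INode("user", root);
        INode file = new INode("data.txt", dir);
        System.out.println(parseBlockId("blk_28622148_1001")); // 28622148
        System.out.println(fullPath(file)); // /user/data.txt
    }
}
```

In the real patch the inode would come from blocksMap.getINode(b) rather than being constructed by hand, and the datanode locations and permissions would be printed alongside the path.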

> add querying block's info in the fsck facility
> ----------------------------------------------
>
>                 Key: HADOOP-5019
>                 URL: https://issues.apache.org/jira/browse/HADOOP-5019
>             Project: Hadoop Core
>          Issue Type: New Feature
>          Components: dfs
>            Reporter: zhangwei
>            Priority: Minor
>         Attachments: HADOOP-5019.patch
>
>   Original Estimate: 24h
>  Remaining Estimate: 24h
>
> fsck already works pretty well, but when a developer runs into a log message such as "Block blk_28622148 is not valid",
> we want to know which file the block belongs to and which datanodes hold it. This can be answered by running "bin/hadoop fsck -files -blocks -locations / | grep <blockid>", but as mentioned earlier in HADOOP-4945, that is not an efficient approach on a big production cluster.
> So maybe we could do something to make fsck more convenient here.

-- 
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.