Posted to mapreduce-issues@hadoop.apache.org by "Jun Jin (JIRA)" <ji...@apache.org> on 2013/01/05 08:50:13 UTC

[jira] [Created] (MAPREDUCE-4917) Multiple BlockFixers should be supported to improve scalability and reduce the load on a single BlockFixer

Jun Jin created MAPREDUCE-4917:
----------------------------------

             Summary: Multiple BlockFixers should be supported to improve scalability and reduce the load on a single BlockFixer
                 Key: MAPREDUCE-4917
                 URL: https://issues.apache.org/jira/browse/MAPREDUCE-4917
             Project: Hadoop Map/Reduce
          Issue Type: Improvement
          Components: contrib/raid
    Affects Versions: 0.22.0
            Reporter: Jun Jin
            Assignee: Jun Jin
             Fix For: 0.22.0


The current implementation can only run a single BlockFixer, because the fsck call (in RaidDFSUtil.getCorruptFiles) checks the whole DFS file system. If multiple BlockFixers were launched, they would all do the same work and try to fix the same files.
The change will mainly be in BlockFixer.java and RaidDFSUtil.getCorruptFiles(), to let fsck check only the paths defined in a separate Raid.xml for each RaidNode/BlockFixer, as sketched below.
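
A minimal sketch of the direction (illustrative only, not the actual patch): the helper signature getCorruptFiles(conf, root) is hypothetical, and it assumes DistributedFileSystem.listCorruptFileBlocks(Path) is available in the target version; the real change would live in RaidDFSUtil and BlockFixer.

import java.io.IOException;
import java.util.ArrayList;
import java.util.List;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.fs.RemoteIterator;
import org.apache.hadoop.hdfs.DistributedFileSystem;

public class ScopedCorruptFileLister {

  /**
   * Sketch of a path-scoped corrupt-file lookup: return corrupt files under
   * {@code root} only, instead of scanning the whole DFS namespace. Each
   * BlockFixer would call this with the root path(s) taken from its own
   * Raid.xml policy, so BlockFixers configured with disjoint paths do not
   * try to fix the same files.
   */
  public static List<Path> getCorruptFiles(Configuration conf, Path root)
      throws IOException {
    // Assumes the path lives on HDFS, so the FileSystem is a DistributedFileSystem.
    DistributedFileSystem dfs =
        (DistributedFileSystem) root.getFileSystem(conf);
    List<Path> corrupt = new ArrayList<Path>();
    // listCorruptFileBlocks restricts the corrupt-block listing to the given
    // path, which is the per-BlockFixer scoping this issue asks fsck to do.
    RemoteIterator<Path> it = dfs.listCorruptFileBlocks(root);
    while (it.hasNext()) {
      corrupt.add(it.next());
    }
    return corrupt;
  }
}

With something like this, each RaidNode/BlockFixer would only see corrupt files under the paths its own Raid.xml covers, instead of the whole namespace returned by a global fsck.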
