Posted to hdfs-dev@hadoop.apache.org by "Wellington Chevreuil (JIRA)" <ji...@apache.org> on 2017/07/21 14:46:00 UTC

[jira] [Created] (HDFS-12182) BlockManager.metaSave does not distinguish between "under replicated" and "corrupt" blocks

Wellington Chevreuil created HDFS-12182:
-------------------------------------------

             Summary: BlockManager.metaSave does not distinguish between "under replicated" and "corrupt" blocks
                 Key: HDFS-12182
                 URL: https://issues.apache.org/jira/browse/HDFS-12182
             Project: Hadoop HDFS
          Issue Type: Improvement
          Components: hdfs
            Reporter: Wellington Chevreuil
            Priority: Trivial
             Fix For: 3.0.0-alpha3


Currently, the *BlockManager.metaSave* method (which backs the "-metasave" dfsadmin CLI command) reports both "under replicated" and "corrupt" blocks under the same metric, *Metasave: Blocks waiting for reconstruction:*, as shown in the code snippet below:

{noformat}
    synchronized (neededReconstruction) {
      out.println("Metasave: Blocks waiting for reconstruction: "
          + neededReconstruction.size());
      for (Block block : neededReconstruction) {
        dumpBlockMeta(block, out);
      }
    }
{noformat}

*neededReconstruction* is an instance of *LowRedundancyBlocks*, which currently wraps five priority queues. Four of these queues hold blocks in different under replicated scenarios, while the fifth is dedicated to corrupt blocks.
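For reference, a rough outline of those priority levels is below (the constant names follow the pre-rename *UnderReplicatedBlocks* class and are only illustrative, so the exact identifiers in *LowRedundancyBlocks* may differ):

{noformat}
// Priority queues wrapped by neededReconstruction (names illustrative,
// taken from the pre-rename UnderReplicatedBlocks class):
//   QUEUE_HIGHEST_PRIORITY           = 0  // e.g. only one live replica left
//   QUEUE_VERY_UNDER_REPLICATED      = 1  // far below the target replication
//   QUEUE_UNDER_REPLICATED           = 2  // mildly under replicated
//   QUEUE_REPLICAS_BADLY_DISTRIBUTED = 3  // enough replicas, poor placement
//   QUEUE_WITH_CORRUPT_BLOCKS        = 4  // all replicas corrupt
//   LEVEL                            = 5  // total number of queues
{noformat}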

Thus, the metasave report may suggest that some corrupt blocks are merely under replicated. This can be misleading for admins and operators trying to track block corruption issues, or other issues related to *BlockManager* metrics.

I would like to propose a patch with trivial changes that would report corrupt blocks separately, along the lines of the sketch below.
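A minimal sketch of what that could look like (assuming *LowRedundancyBlocks* still exposes a corrupt-block counter along the lines of the *getCorruptBlockSize()* accessor its predecessor *UnderReplicatedBlocks* had; the accessor name and message text below are placeholders, not the final patch):

{noformat}
    synchronized (neededReconstruction) {
      // Report blocks that only need more replicas separately from
      // blocks whose replicas are all corrupt.
      // NOTE: getCorruptBlockSize() is assumed here; the real accessor
      // on LowRedundancyBlocks may be named differently.
      out.println("Metasave: Blocks waiting for reconstruction: "
          + (neededReconstruction.size()
              - neededReconstruction.getCorruptBlockSize()));
      out.println("Metasave: Blocks with all replicas corrupt: "
          + neededReconstruction.getCorruptBlockSize());
      for (Block block : neededReconstruction) {
        dumpBlockMeta(block, out);
      }
    }
{noformat}

The point is simply that the corrupt-block queue gets its own line in the report instead of being folded into the under replicated count.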



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

---------------------------------------------------------------------
To unsubscribe, e-mail: hdfs-dev-unsubscribe@hadoop.apache.org
For additional commands, e-mail: hdfs-dev-help@hadoop.apache.org