Posted to hdfs-dev@hadoop.apache.org by "ZanderXu (Jira)" <ji...@apache.org> on 2022/05/25 02:50:00 UTC

[jira] [Created] (HDFS-16593) Correct inaccurate BlocksRemoved metric on DataNode side

ZanderXu created HDFS-16593:
-------------------------------

             Summary: Correct inaccurate BlocksRemoved metric on DataNode side
                 Key: HDFS-16593
                 URL: https://issues.apache.org/jira/browse/HDFS-16593
             Project: Hadoop HDFS
          Issue Type: Bug
            Reporter: ZanderXu
            Assignee: ZanderXu


While tracing the root cause of a production issue, I found that the BlocksRemoved metric on the DataNode side was inaccurate.

{code:java}
case DatanodeProtocol.DNA_INVALIDATE:
      //
      // Some local block(s) are obsolete and can be 
      // safely garbage-collected.
      //
      Block toDelete[] = bcmd.getBlocks();
      try {
        // using global fsdataset
        dn.getFSDataset().invalidate(bcmd.getBlockPoolId(), toDelete);
      } catch(IOException e) {
        // Exceptions caught here are not expected to be disk-related.
        throw e;
      }
      dn.metrics.incrBlocksRemoved(toDelete.length);
      break;
{code}

The count is inaccurate because even if the invalidate method throws an exception, some of the blocks may already have been deleted internally. Since the IOException is rethrown before incrBlocksRemoved is called, those successfully removed blocks are never counted, so the metric undercounts.
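
One possible fix (a minimal sketch, not the committed patch) is to invalidate the blocks one at a time and count only the ones that were actually deleted. It reuses the existing FSDataset#invalidate(String, Block[]) signature with a single-element array per call:

{code:java}
case DatanodeProtocol.DNA_INVALIDATE:
      Block toDelete[] = bcmd.getBlocks();
      int removed = 0;
      try {
        for (Block b : toDelete) {
          // Invalidate one block at a time so a failure on a later
          // block does not hide the blocks already deleted.
          dn.getFSDataset().invalidate(bcmd.getBlockPoolId(),
              new Block[] { b });
          removed++;
        }
      } finally {
        // Count only the blocks that were actually deleted, even if
        // an IOException is propagated for a later block.
        dn.metrics.incrBlocksRemoved(removed);
      }
      break;
{code}

Alternatively, the increment could be moved into FsDatasetImpl#invalidate itself and bumped once per block that is successfully removed; that keeps the batched call and avoids the per-block invocation overhead of the sketch above.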




--
This message was sent by Atlassian Jira
(v8.20.7#820007)

---------------------------------------------------------------------
To unsubscribe, e-mail: hdfs-dev-unsubscribe@hadoop.apache.org
For additional commands, e-mail: hdfs-dev-help@hadoop.apache.org