Posted to hdfs-issues@hadoop.apache.org by "ASF GitHub Bot (Jira)" <ji...@apache.org> on 2022/06/07 14:36:00 UTC

[jira] [Work logged] (HDFS-16593) Correct inaccurate BlocksRemoved metric on DataNode side

     [ https://issues.apache.org/jira/browse/HDFS-16593?focusedWorklogId=779132&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-779132 ]

ASF GitHub Bot logged work on HDFS-16593:
-----------------------------------------

                Author: ASF GitHub Bot
            Created on: 07/Jun/22 14:35
            Start Date: 07/Jun/22 14:35
    Worklog Time Spent: 10m 
      Work Description: ZanderXu commented on PR #4353:
URL: https://github.com/apache/hadoop/pull/4353#issuecomment-1148761554

   @Hexiaoqiao Could you help me review this patch? The failed UTs were not caused by this modification and have been resolved in other Jiras.




Issue Time Tracking
-------------------

    Worklog Id:     (was: 779132)
    Time Spent: 0.5h  (was: 20m)

> Correct inaccurate BlocksRemoved metric on DataNode side
> --------------------------------------------------------
>
>                 Key: HDFS-16593
>                 URL: https://issues.apache.org/jira/browse/HDFS-16593
>             Project: Hadoop HDFS
>          Issue Type: Bug
>            Reporter: ZanderXu
>            Assignee: ZanderXu
>            Priority: Minor
>              Labels: pull-request-available
>          Time Spent: 0.5h
>  Remaining Estimate: 0h
>
> While tracing the root cause of a production issue, I found that the BlocksRemoved metric on the DataNode side was inaccurate.
> {code:java}
> case DatanodeProtocol.DNA_INVALIDATE:
>       //
>       // Some local block(s) are obsolete and can be 
>       // safely garbage-collected.
>       //
>       Block toDelete[] = bcmd.getBlocks();
>       try {
>         // using global fsdataset
>         dn.getFSDataset().invalidate(bcmd.getBlockPoolId(), toDelete);
>       } catch(IOException e) {
>         // Exceptions caught here are not expected to be disk-related.
>         throw e;
>       }
>       dn.metrics.incrBlocksRemoved(toDelete.length);
>       break;
> {code}
> The metric is inaccurate because dn.metrics.incrBlocksRemoved(toDelete.length) is skipped whenever invalidate throws, even though some of the blocks may already have been deleted successfully inside that call before the exception was raised.
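
For illustration only, here is a minimal sketch of one way the metric could be kept accurate; it is not the actual change in PR #4353, which may take a different approach (for example, counting failures inside invalidate itself). The per-block loop and the deleted counter are hypothetical additions; the existing FsDatasetSpi.invalidate(String, Block[]) call normally takes the whole array at once.
{code:java}
case DatanodeProtocol.DNA_INVALIDATE:
      Block toDelete[] = bcmd.getBlocks();
      int deleted = 0; // hypothetical counter, not in the current code
      try {
        for (Block b : toDelete) {
          // Invalidate one block at a time so a failure on one block
          // does not hide the blocks already removed before it.
          dn.getFSDataset().invalidate(bcmd.getBlockPoolId(),
              new Block[] { b });
          deleted++;
        }
      } finally {
        // Increment by the number of blocks actually deleted, even
        // when an exception propagates out of the loop above.
        dn.metrics.incrBlocksRemoved(deleted);
      }
      break;
{code}
Note that per-block invalidation would change how often FsDatasetImpl acquires its locks, so this sketch only demonstrates the metric-accuracy idea, not a drop-in fix.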



--
This message was sent by Atlassian Jira
(v8.20.7#820007)

---------------------------------------------------------------------
To unsubscribe, e-mail: hdfs-issues-unsubscribe@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-help@hadoop.apache.org