Posted to hdfs-issues@hadoop.apache.org by "Xiaoqiao He (Jira)" <ji...@apache.org> on 2022/09/05 11:38:00 UTC
[jira] [Resolved] (HDFS-16593) Correct inaccurate BlocksRemoved metric on DataNode side
[ https://issues.apache.org/jira/browse/HDFS-16593?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]
Xiaoqiao He resolved HDFS-16593.
--------------------------------
Fix Version/s: 3.4.0
Hadoop Flags: Reviewed
Resolution: Fixed
Committed to trunk.
> Correct inaccurate BlocksRemoved metric on DataNode side
> --------------------------------------------------------
>
> Key: HDFS-16593
> URL: https://issues.apache.org/jira/browse/HDFS-16593
> Project: Hadoop HDFS
> Issue Type: Bug
> Reporter: ZanderXu
> Assignee: ZanderXu
> Priority: Minor
> Labels: pull-request-available
> Fix For: 3.4.0
>
> Time Spent: 50m
> Remaining Estimate: 0h
>
> When tracing the root cause of a production issue, I found that the BlocksRemoved metric on the DataNode side was inaccurate.
> {code:java}
> case DatanodeProtocol.DNA_INVALIDATE:
>   //
>   // Some local block(s) are obsolete and can be
>   // safely garbage-collected.
>   //
>   Block toDelete[] = bcmd.getBlocks();
>   try {
>     // using global fsdataset
>     dn.getFSDataset().invalidate(bcmd.getBlockPoolId(), toDelete);
>   } catch (IOException e) {
>     // Exceptions caught here are not expected to be disk-related.
>     throw e;
>   }
>   dn.metrics.incrBlocksRemoved(toDelete.length);
>   break;
> {code}
> The metric is inaccurate because even if the invalidate method throws an exception, some blocks may already have been deleted successfully inside it; in that case the rethrow skips incrBlocksRemoved entirely, so those removals go uncounted.
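> A minimal, self-contained sketch of the per-block counting pattern that avoids this inaccuracy (illustrative only, not necessarily the committed patch; the names BlockRemovalSketch and deleteFromDisk are hypothetical): increment the counter as each deletion succeeds, instead of adding toDelete.length in one step after the batch call returns.
> {code:java}
> import java.io.IOException;
> import java.util.List;
> import java.util.concurrent.atomic.AtomicLong;
>
> class BlockRemovalSketch {
>   // Stands in for the DataNode's BlocksRemoved metric.
>   static final AtomicLong blocksRemoved = new AtomicLong();
>
>   // Stand-in for a batch invalidate: deletes each block, counting only
>   // the deletions that actually succeed, and reports failures at the end.
>   static void invalidate(List<String> blocks) throws IOException {
>     int failures = 0;
>     for (String b : blocks) {
>       try {
>         deleteFromDisk(b);               // hypothetical per-block delete
>         blocksRemoved.incrementAndGet(); // counted only on success
>       } catch (IOException e) {
>         failures++;                      // keep deleting the rest
>       }
>     }
>     if (failures > 0) {
>       throw new IOException("Failed to delete " + failures + " block(s)");
>     }
>   }
>
>   static void deleteFromDisk(String block) throws IOException {
>     if (block.isEmpty()) {               // simulated disk failure
>       throw new IOException("cannot delete block with empty name");
>     }
>   }
>
>   public static void main(String[] args) {
>     try {
>       invalidate(List.of("blk_1", "", "blk_3"));
>     } catch (IOException e) {
>       // invalidate threw, yet two blocks were removed and counted.
>       System.out.println(e.getMessage());
>     }
>     System.out.println("BlocksRemoved = " + blocksRemoved.get()); // prints 2
>   }
> }
> {code}
> Counting inside the delete loop keeps the metric consistent with on-disk state even when the batch call ultimately throws: a batch in which one deletion fails still records the blocks that were actually removed.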
--
This message was sent by Atlassian Jira
(v8.20.10#820010)