Posted to hdfs-dev@hadoop.apache.org by "lei w (Jira)" <ji...@apache.org> on 2023/12/11 13:32:00 UTC
[jira] [Resolved] (HDFS-16102) Remove redundant iteration in BlockManager#removeBlocksAssociatedTo(...) to save time
[ https://issues.apache.org/jira/browse/HDFS-16102?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]
lei w resolved HDFS-16102.
--------------------------
Resolution: Invalid
> Remove redundant iteration in BlockManager#removeBlocksAssociatedTo(...) to save time
> --------------------------------------------------------------------------------------
>
> Key: HDFS-16102
> URL: https://issues.apache.org/jira/browse/HDFS-16102
> Project: Hadoop HDFS
> Issue Type: Improvement
> Components: namenode
> Reporter: lei w
> Assignee: lei w
> Priority: Minor
> Attachments: HDFS-16102.001.patch
>
>
> The current logic in removeBlocksAssociatedTo(...) is as follows:
> {code:java}
> void removeBlocksAssociatedTo(final DatanodeDescriptor node) {
>   providedStorageMap.removeDatanode(node);
>   for (DatanodeStorageInfo storage : node.getStorageInfos()) {
>     final Iterator<BlockInfo> it = storage.getBlockIterator();
>     // Add the BlockInfos to a new collection, as the
>     // returned iterator is not modifiable.
>     Collection<BlockInfo> toRemove = new ArrayList<>();
>     while (it.hasNext()) {
>       toRemove.add(it.next()); // First iteration: copy the blocks to another collection.
>     }
>     for (BlockInfo b : toRemove) {
>       removeStoredBlock(b, node); // Second iteration: remove the blocks.
>     }
>   }
>   // ......
> }
> {code}
> In fact, we could remove each block directly during the first iteration, so should we drop the redundant second pass to save time and memory?
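For context on why the copy-then-remove pattern appears in the snippet above (and why the issue may have been resolved as Invalid): the existing comment notes that the returned iterator is not modifiable, so removal cannot happen mid-iteration. The same hazard exists for ordinary java.util collections, where mutating a list while a live iterator walks it throws ConcurrentModificationException. A minimal, self-contained sketch (generic Java, not HDFS code; the block names and list type are illustrative assumptions):

```java
import java.util.ArrayList;
import java.util.ConcurrentModificationException;
import java.util.List;

public class Main {
    // Returns true if removing elements from the list while iterating
    // over it triggers ConcurrentModificationException.
    static boolean removalDuringIterationFails() {
        // Illustrative block IDs, not real HDFS block names.
        List<String> blocks = new ArrayList<>(List.of("blk_1", "blk_2", "blk_3"));
        try {
            for (String b : blocks) {
                // Mutates the list that the enhanced-for loop's live
                // iterator is walking: the fail-fast iterator detects
                // the structural modification on the next step.
                blocks.remove(b);
            }
        } catch (ConcurrentModificationException e) {
            return true;
        }
        return false;
    }

    public static void main(String[] args) {
        System.out.println(removalDuringIterationFails()); // prints "true"
    }
}
```

Copying into a separate collection first (as the current code does) sidesteps this entirely, at the cost of the extra pass the issue questions.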
--
This message was sent by Atlassian Jira
(v8.20.10#820010)
---------------------------------------------------------------------
To unsubscribe, e-mail: hdfs-dev-unsubscribe@hadoop.apache.org
For additional commands, e-mail: hdfs-dev-help@hadoop.apache.org