Posted to hdfs-dev@hadoop.apache.org by "lei w (Jira)" <ji...@apache.org> on 2021/06/30 09:01:00 UTC

[jira] [Created] (HDFS-16102) Remove redundant iteration in BlockManager#removeBlocksAssociatedTo(...) to save time

lei w created HDFS-16102:
----------------------------

             Summary: Remove redundant iteration in BlockManager#removeBlocksAssociatedTo(...) to save time 
                 Key: HDFS-16102
                 URL: https://issues.apache.org/jira/browse/HDFS-16102
             Project: Hadoop HDFS
          Issue Type: Improvement
          Components: namenode
            Reporter: lei w
            Assignee: lei w


The current logic in removeBlocksAssociatedTo(...) is as follows:
{code:java}
  void removeBlocksAssociatedTo(final DatanodeDescriptor node) {
    providedStorageMap.removeDatanode(node);
    for (DatanodeStorageInfo storage : node.getStorageInfos()) {
      final Iterator<BlockInfo> it = storage.getBlockIterator();
      //add the BlockInfos to a new collection as the
      //returned iterator is not modifiable.
      Collection<BlockInfo> toRemove = new ArrayList<>();
      while (it.hasNext()) {
        toRemove.add(it.next()); // First iteration: copy blocks into a separate collection
      }

      for (BlockInfo b : toRemove) {
        removeStoredBlock(b, node); // Second iteration: remove each block
      }
    }
  // ......
  }
{code}
In fact, the blocks could be removed during the first iteration itself, so should we drop the redundant second pass to save time? A sketch of the idea follows.
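Below is a minimal sketch of the proposed single-pass variant, for discussion only. It assumes removeStoredBlock(...) can run safely while the storage's block iterator is live; the copy in the current code exists precisely because the returned iterator is not modifiable, so whether the underlying block list tolerates removal during iteration is the point to verify before making this change.
{code:java}
  void removeBlocksAssociatedTo(final DatanodeDescriptor node) {
    providedStorageMap.removeDatanode(node);
    for (DatanodeStorageInfo storage : node.getStorageInfos()) {
      final Iterator<BlockInfo> it = storage.getBlockIterator();
      while (it.hasNext()) {
        // Single pass: remove each block as it is visited, with no
        // intermediate collection. Assumes removeStoredBlock(...) does
        // not invalidate the live iterator.
        removeStoredBlock(it.next(), node);
      }
    }
  // ......
  }
{code}
If removing a block mutates the same structure the iterator walks, this could skip blocks or throw, so the saving is only real if the iterator's semantics permit removal during traversal.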


