Posted to common-issues@hadoop.apache.org by "zhangshuyan0 (via GitHub)" <gi...@apache.org> on 2023/06/20 10:00:57 UTC

[GitHub] [hadoop] zhangshuyan0 commented on a diff in pull request #5760: HDFS-17054. Erasure coding: optimize checkReplicaOnStorage method to avoid regarding all replicas on one datanode as corrupt repeatedly.

zhangshuyan0 commented on code in PR #5760:
URL: https://github.com/apache/hadoop/pull/5760#discussion_r1235035757


##########
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/BlockManager.java:
##########
@@ -4544,25 +4546,32 @@ public NumberReplicas countNodes(BlockInfo b) {
   NumberReplicas countNodes(BlockInfo b, boolean inStartupSafeMode) {
     NumberReplicas numberReplicas = new NumberReplicas();
     Collection<DatanodeDescriptor> nodesCorrupt = corruptReplicas.getNodes(b);
+    HashSet<DatanodeDescriptor> haveComputedAsCorrupted = null;
     if (b.isStriped()) {
+      haveComputedAsCorrupted = new HashSet<>();
       countReplicasForStripedBlock(numberReplicas, (BlockInfoStriped) b,
-          nodesCorrupt, inStartupSafeMode);
+          nodesCorrupt, inStartupSafeMode, haveComputedAsCorrupted);
     } else {
       for (DatanodeStorageInfo storage : blocksMap.getStorages(b)) {
         checkReplicaOnStorage(numberReplicas, b, storage, nodesCorrupt,
-            inStartupSafeMode);
+            inStartupSafeMode, haveComputedAsCorrupted);
       }
     }
     return numberReplicas;
   }
 
   private StoredReplicaState checkReplicaOnStorage(NumberReplicas counters,
       BlockInfo b, DatanodeStorageInfo storage,
-      Collection<DatanodeDescriptor> nodesCorrupt, boolean inStartupSafeMode) {
+      Collection<DatanodeDescriptor> nodesCorrupt, boolean inStartupSafeMode,
+      HashSet<DatanodeDescriptor> haveComputedAsCorrupted) {
     final StoredReplicaState s;
     if (storage.getState() == State.NORMAL) {
       final DatanodeDescriptor node = storage.getDatanodeDescriptor();
-      if (nodesCorrupt != null && nodesCorrupt.contains(node)) {
+      if (nodesCorrupt != null && nodesCorrupt.contains(node) &&
+          (haveComputedAsCorrupted == null || !haveComputedAsCorrupted.contains(node))) {
+        if (haveComputedAsCorrupted != null) {
+          haveComputedAsCorrupted.add(node);

Review Comment:
   If I understand your code correctly, when the same block group has two internal blocks on the same datanode, only one of them will be counted as corrupt. IMO, the current implementation of `CorruptReplicasMap` does not record which specific internal block on the datanode is corrupt, so how can you confirm that only one internal block is corrupt?
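   
   To make the concern concrete, here is a minimal, self-contained sketch of the counting difference being discussed. This is plain Java, not Hadoop code: `StorageSketch`, the `dn1`/`dn2` names, and the plain `Set<String>` standing in for `CorruptReplicasMap` are all hypothetical illustrations; the real map is keyed by block and datanode and does not say which internal block on the node is corrupt.
   
   ```java
   import java.util.*;
   
   public class DedupCountSketch {
   
     // Hypothetical stand-in: one storage on a datanode holding one internal block.
     record StorageSketch(String datanode, int internalBlockIndex) {}
   
     public static void main(String[] args) {
       // Block group with two internal blocks placed on the same datanode dn1.
       List<StorageSketch> storages = List.of(
           new StorageSketch("dn1", 0),
           new StorageSketch("dn1", 3),
           new StorageSketch("dn2", 1));
   
       // CorruptReplicasMap-style record: only the datanode is known to hold a
       // corrupt replica of this block group; the internal block index is not.
       Set<String> nodesCorrupt = Set.of("dn1");
   
       // Counting with a per-datanode dedup set, as in the patch above.
       Set<String> haveComputedAsCorrupted = new HashSet<>();
       int corruptWithDedup = 0;
       for (StorageSketch s : storages) {
         if (nodesCorrupt.contains(s.datanode())
             && haveComputedAsCorrupted.add(s.datanode())) {
           corruptWithDedup++;
         }
       }
   
       // Counting without the dedup set (current behaviour).
       long corruptWithoutDedup = storages.stream()
           .filter(s -> nodesCorrupt.contains(s.datanode()))
           .count();
   
       // With dedup: 1 corrupt replica counted; without: 2. If both internal
       // blocks on dn1 really are corrupt, the deduplicated count is too low.
       System.out.println("with dedup = " + corruptWithDedup
           + ", without dedup = " + corruptWithoutDedup);
     }
   }
   ```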



-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: common-issues-unsubscribe@hadoop.apache.org

For queries about this service, please contact Infrastructure at:
users@infra.apache.org

