Posted to common-issues@hadoop.apache.org by "zhangshuyan0 (via GitHub)" <gi...@apache.org> on 2023/06/28 03:43:22 UTC

[GitHub] [hadoop] zhangshuyan0 commented on a diff in pull request #5759: HDFS-17052. Erasure coding reconstruction failed when num of storageT…

zhangshuyan0 commented on code in PR #5759:
URL: https://github.com/apache/hadoop/pull/5759#discussion_r1244628377


##########
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/BlockPlacementPolicyRackFaultTolerant.java:
##########
@@ -192,11 +192,19 @@ private void chooseEvenlyFromRemainingRacks(Node writer,
       } finally {
         excludedNodes.addAll(newExcludeNodes);
       }
+      if (numResultsOflastChoose == results.size()) {
+        Map<String, Integer> nodesPerRack = new HashMap<>();
+        for (DatanodeStorageInfo dsInfo : results) {
+          String rackName = dsInfo.getDatanodeDescriptor().getNetworkLocation();
+          nodesPerRack.merge(rackName, 1, Integer::sum);
+        }
+        bestEffortMaxNodesPerRack = Collections.max(nodesPerRack.values());

Review Comment:
   Is it possible to introduce an infinite loop here? If each rack already has one chosen node and `bestEffortMaxNodesPerRack` is 2, and no datanode can be chosen now, then `bestEffortMaxNodesPerRack` will change to 1 after line 201, which may cause an infinite loop. So the calculation of the maximum value should take the old value of `bestEffortMaxNodesPerRack` into account to ensure it increases.
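   The suggestion could be sketched roughly as follows. This is a minimal standalone illustration, not actual Hadoop code: `nextBestEffortMax` is a hypothetical helper, and the rack names are stand-ins for `getNetworkLocation()` results.

   ```java
   import java.util.Collections;
   import java.util.HashMap;
   import java.util.List;
   import java.util.Map;

   public class RackLimitSketch {
       // Recompute the per-rack limit from the racks of the already-chosen
       // nodes, but never let it fall to or below the previous limit, so a
       // retry that made no progress cannot repeat with the same (or a
       // smaller) limit forever.
       static int nextBestEffortMax(List<String> chosenRacks, int previousMax) {
           Map<String, Integer> nodesPerRack = new HashMap<>();
           for (String rack : chosenRacks) {
               nodesPerRack.merge(rack, 1, Integer::sum);
           }
           int observedMax = nodesPerRack.isEmpty()
                   ? 0 : Collections.max(nodesPerRack.values());
           // The patch as written would return observedMax alone, which can be
           // *lower* than previousMax (e.g. one node per rack while the old
           // limit is 2), re-triggering the same failed choose in a loop.
           return Math.max(observedMax, previousMax + 1);
       }
   }
   ```

   For example, with one chosen node on each of three racks and an old limit of 2, the observed maximum is 1, so taking the old value into account yields 3 rather than dropping back to 1.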



-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: common-issues-unsubscribe@hadoop.apache.org

For queries about this service, please contact Infrastructure at:
users@infra.apache.org


For additional commands, e-mail: common-issues-help@hadoop.apache.org