Posted to common-issues@hadoop.apache.org by "whbing (via GitHub)" <gi...@apache.org> on 2023/06/30 06:51:12 UTC

[GitHub] [hadoop] whbing commented on a diff in pull request #5759: HDFS-17052. Improve BlockPlacementPolicyRackFaultTolerant to avoid choose nodes failed when no enough Rack.

whbing commented on code in PR #5759:
URL: https://github.com/apache/hadoop/pull/5759#discussion_r1247501492


##########
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/BlockPlacementPolicyRackFaultTolerant.java:
##########
@@ -192,11 +192,23 @@ private void chooseEvenlyFromRemainingRacks(Node writer,
       } finally {
         excludedNodes.addAll(newExcludeNodes);
       }
+      if (numResultsOflastChoose == results.size()) {
+        Map<String, Integer> nodesPerRack = new HashMap<>();
+        for (DatanodeStorageInfo dsInfo : results) {
+          String rackName = dsInfo.getDatanodeDescriptor().getNetworkLocation();
+          nodesPerRack.merge(rackName, 1, Integer::sum);
+        }
+        for (int numNodes : nodesPerRack.values()) {
+          if (numNodes > bestEffortMaxNodesPerRack) {
+            bestEffortMaxNodesPerRack = numNodes;
+          }

Review Comment:
   Lines 201~203 compute the max value; we could use the following code to make it clearer and more concise.
   ```java
           bestEffortMaxNodesPerRack =
               Math.max(bestEffortMaxNodesPerRack, Collections.max(nodesPerRack.values()));
   ```
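
   A minimal, self-contained sketch (with hypothetical rack counts; `MaxPerRackDemo` and the sample map values are not from the patch) showing that the one-liner suggested above produces the same result as the manual loop in the diff:
   ```java
   import java.util.Collections;
   import java.util.HashMap;
   import java.util.Map;

   public class MaxPerRackDemo {
     public static void main(String[] args) {
       // Hypothetical per-rack node counts, standing in for nodesPerRack.
       Map<String, Integer> nodesPerRack = new HashMap<>();
       nodesPerRack.put("/rack-a", 2);
       nodesPerRack.put("/rack-b", 4);
       nodesPerRack.put("/rack-c", 1);

       int bestEffortMaxNodesPerRack = 3;

       // Manual loop, as written in the patch under review.
       int viaLoop = bestEffortMaxNodesPerRack;
       for (int numNodes : nodesPerRack.values()) {
         if (numNodes > viaLoop) {
           viaLoop = numNodes;
         }
       }

       // Equivalent one-liner from the review suggestion.
       int viaMax = Math.max(bestEffortMaxNodesPerRack,
           Collections.max(nodesPerRack.values()));

       System.out.println(viaLoop + " " + viaMax); // prints "4 4"
     }
   }
   ```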



-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: common-issues-unsubscribe@hadoop.apache.org

For queries about this service, please contact Infrastructure at:
users@infra.apache.org


---------------------------------------------------------------------
For additional commands, e-mail: common-issues-help@hadoop.apache.org