Posted to issues@ozone.apache.org by GitBox <gi...@apache.org> on 2020/03/23 06:12:48 UTC

[GitHub] [hadoop-ozone] timmylicheng commented on a change in pull request #668: HDDS-3139 Pipeline placement should max out pipeline usage

timmylicheng commented on a change in pull request #668: HDDS-3139 Pipeline placement should max out pipeline usage
URL: https://github.com/apache/hadoop-ozone/pull/668#discussion_r396229970
 
 

 ##########
 File path: hadoop-hdds/server-scm/src/main/java/org/apache/hadoop/hdds/scm/pipeline/PipelinePlacementPolicy.java
 ##########
 @@ -315,6 +314,50 @@ DatanodeDetails fallBackPickNodes(
     return results;
   }
 
+  private DatanodeDetails randomPick(List<DatanodeDetails> healthyNodes) {
+    DatanodeDetails datanodeDetails;
+    int firstNodeNdx = getRand().nextInt(healthyNodes.size());
+    int secondNodeNdx = getRand().nextInt(healthyNodes.size());
+
+    // Both indexes may be the same;
+    // if so, we just return that node.
+    if (firstNodeNdx == secondNodeNdx) {
+      datanodeDetails = healthyNodes.get(firstNodeNdx);
+    } else {
+      DatanodeDetails firstNodeDetails = healthyNodes.get(firstNodeNdx);
+      DatanodeDetails secondNodeDetails = healthyNodes.get(secondNodeNdx);
+      datanodeDetails = nodeManager.getPipelinesCount(firstNodeDetails)
+          >= nodeManager.getPipelinesCount(secondNodeDetails)
+          ? secondNodeDetails : firstNodeDetails;
+    }
+    return datanodeDetails;
+  }
+
+  private List<DatanodeDetails> getLowerLoadNodes(
+      List<DatanodeDetails> nodes, int num) {
+    int maxPipelineUsage = nodes.size() * heavyNodeCriteria /
+        HddsProtos.ReplicationFactor.THREE.getNumber();
+    return nodes.stream()
+        // Skip the nodes which exceed the load limit.
+        .filter(p -> nodeManager.getPipelinesCount(p) < num - maxPipelineUsage)
+        .collect(Collectors.toList());
+  }
+
+  private DatanodeDetails lowerLoadPick(List<DatanodeDetails> healthyNodes) {
+    int curPipelineCounts =  stateManager
+        .getPipelines(HddsProtos.ReplicationType.RATIS).size();
+    DatanodeDetails datanodeDetails;
+    List<DatanodeDetails> nodes = getLowerLoadNodes(
+        healthyNodes, curPipelineCounts);
+    if (nodes.isEmpty()) {
+      // Randomly pick a node if all nodes' load is at the same level.
+      datanodeDetails = randomPick(healthyNodes);
+    } else {
+      datanodeDetails = nodes.stream().findFirst().get();
 
 Review comment:
  Sorting the node list would be expensive for a large cluster. That's why I chose to do this 'water mark' filter for selecting nodes with lower load.
  
  I can definitely do a findAny() kinda thing for the random pick.
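  
  For reference, the two ideas in the diff (two-choice random sampling in randomPick, and the O(n) 'water mark' filter instead of a sort in getLowerLoadNodes) can be sketched in isolation. This is a minimal standalone sketch, not the Ozone code: `pipelineCounts` here is a hypothetical stand-in for `nodeManager.getPipelinesCount(dn)`, and node names are illustrative.
  
  ```java
  import java.util.Arrays;
  import java.util.HashMap;
  import java.util.List;
  import java.util.Map;
  import java.util.Random;
  import java.util.stream.Collectors;
  
  public class TwoChoicePickSketch {
  
      // Hypothetical stand-in for nodeManager.getPipelinesCount(dn):
      // the number of pipelines each datanode currently carries.
      static final Map<String, Integer> pipelineCounts = new HashMap<>();
      static {
          pipelineCounts.put("dn1", 5);
          pipelineCounts.put("dn2", 1);
          pipelineCounts.put("dn3", 3);
      }
  
      // Power-of-two-choices: sample two nodes at random and keep the
      // one carrying fewer pipelines. No sorting of the full list needed.
      static String randomPick(List<String> healthyNodes, Random rand) {
          String first = healthyNodes.get(rand.nextInt(healthyNodes.size()));
          String second = healthyNodes.get(rand.nextInt(healthyNodes.size()));
          // If both samples land on the same node, this still returns that node.
          return pipelineCounts.get(first) >= pipelineCounts.get(second)
              ? second : first;
      }
  
      // 'Water mark' filter: keep only nodes below the load limit.
      // This is a single O(n) pass instead of the O(n log n) a sort would cost.
      static List<String> getLowerLoadNodes(List<String> nodes, int limit) {
          return nodes.stream()
              .filter(n -> pipelineCounts.get(n) < limit)
              .collect(Collectors.toList());
      }
  
      public static void main(String[] args) {
          List<String> nodes = Arrays.asList("dn1", "dn2", "dn3");
          System.out.println("picked=" + randomPick(nodes, new Random()));
          System.out.println("belowLimit=" + getLowerLoadNodes(nodes, 4));
      }
  }
  ```
  
  The two-choice pick gives most of the load-balancing benefit of a full sort at a fraction of the cost, which matches the reviewer's point about large clusters.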

----------------------------------------------------------------
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
users@infra.apache.org


With regards,
Apache Git Services

---------------------------------------------------------------------
To unsubscribe, e-mail: ozone-issues-unsubscribe@hadoop.apache.org
For additional commands, e-mail: ozone-issues-help@hadoop.apache.org