Posted to issues@ozone.apache.org by GitBox <gi...@apache.org> on 2020/09/29 03:38:11 UTC

[GitHub] [hadoop-ozone] GlenGeng commented on a change in pull request #1274: HDDS-3810. Add the logic to distribute open containers among the pipelines of a datanode.

GlenGeng commented on a change in pull request #1274:
URL: https://github.com/apache/hadoop-ozone/pull/1274#discussion_r496348325



##########
File path: hadoop-hdds/server-scm/src/main/java/org/apache/hadoop/hdds/scm/node/DatanodeInfo.java
##########
@@ -40,6 +43,8 @@
   private long lastStatsUpdatedTime;
 
   private List<StorageReportProto> storageReports;
+  private List<StorageContainerDatanodeProtocolProtos.

Review comment:
       may replace `StorageContainerDatanodeProtocolProtos.MetadataStorageReportProto` with `MetadataStorageReportProto`
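For illustration, the shorter name works once the nested type is imported directly rather than through its outer class. A self-contained analogue using a JDK nested type (`Map.Entry` standing in for `MetadataStorageReportProto`, since the Ozone proto classes are not needed to show the point):

```java
import java.util.List;
import java.util.Map;
// Importing the nested type directly lets declarations use the short name,
// which is the reviewer's point about MetadataStorageReportProto.
import java.util.Map.Entry;

public class NestedImportSketch {
    // Without the direct import this field would read List<Map.Entry<String, Integer>>.
    private List<Entry<String, Integer>> entries = List.of(Map.entry("a", 1));

    public int size() {
        return entries.size();
    }

    public static void main(String[] args) {
        System.out.println(new NestedImportSketch().size());
    }
}
```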

##########
File path: hadoop-hdds/server-scm/src/main/java/org/apache/hadoop/hdds/scm/node/DatanodeInfo.java
##########
@@ -94,6 +100,22 @@ public void updateStorageReports(List<StorageReportProto> reports) {
     }
   }
 
+  /**
+   * Updates the datanode storage reports.

Review comment:
      stale javadoc; suggestion: Updates the datanode metadata storage reports.

##########
File path: hadoop-hdds/server-scm/src/main/java/org/apache/hadoop/hdds/scm/node/SCMNodeManager.java
##########
@@ -530,6 +540,43 @@ public int getNumHealthyVolumes(List<DatanodeDetails> dnList) {
     return Collections.max(volumeCountList);
   }
 
+  /**
+   * Returns the pipeline limit for the datanode.
+   * If the datanode pipeline limit is set, that is used as the max
+   * pipeline limit.
+   * Otherwise, the max pipeline limit is based on the number of raft
+   * log volumes reported, provided the datanode has at least one
+   * healthy data volume.
+   */
+  @Override
+  public int maxPipelineLimit(DatanodeDetails dn) {
+    try {
+      if (heavyNodeCriteria > 0) {
+        return heavyNodeCriteria;
+      } else if (nodeStateManager.getNode(dn).getHealthyVolumeCount() > 0) {
+        return numPipelinesPerRaftLogDisk *
+            nodeStateManager.getNode(dn).getRaftLogVolumeCount();
+      }
+    } catch (NodeNotFoundException e) {
+      LOG.warn("Cannot generate NodeStat, datanode {} not found.",
+          dn.getUuid());
+    }
+    return 0;
+  }
+
+  /**
+   * Returns the pipeline limit for set of datanodes.
+   */
+  @Override
+  public int maxPipelineLimit(List<DatanodeDetails> dnList) {

Review comment:
      method name is `maxPipelineLimit`, but the logic calculates the min, which is a little bit weird. How about `minPipelineLimit`?
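The rename fits the semantics: the list variant takes the minimum of the per-datanode limits, since a set of datanodes can only form as many pipelines as its most constrained member allows. A minimal, self-contained sketch of that aggregation (class and method names here are illustrative; the real code resolves each node's limit through `nodeStateManager`):

```java
import java.util.List;

// Illustrative sketch: aggregating per-datanode pipeline limits.
// The limits are passed in directly to keep the example runnable;
// in SCMNodeManager each one comes from maxPipelineLimit(dn).
public class PipelineLimits {

    /** The limit for a set of datanodes is the minimum of the members' limits. */
    public static int minPipelineLimit(List<Integer> perNodeLimits) {
        return perNodeLimits.stream()
                .mapToInt(Integer::intValue)
                .min()
                .orElse(0); // empty set: no pipelines can be formed
    }

    public static void main(String[] args) {
        // A 3-node set where one node can host only 2 pipelines.
        System.out.println(minPipelineLimit(List.of(4, 2, 5)));
    }
}
```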

##########
File path: hadoop-hdds/common/src/main/java/org/apache/hadoop/hdds/scm/ScmConfigKeys.java
##########
@@ -308,6 +308,10 @@
       OZONE_SCM_KEY_VALUE_CONTAINER_DELETION_CHOOSING_POLICY =
       "ozone.scm.keyvalue.container.deletion-choosing.policy";
 
+  public static final String OZONE_SCM_PIPELINE_PER_RAFT_LOG_DISK =

Review comment:
      How about `OZONE_SCM_PIPELINE_PER_METADATA_DISK`?
  Both "raft log disk" and "metadata storage report" appear in the surrounding code; since they refer to similar things, keeping both names introduces some redundancy.
  BTW, the former name may lead to misunderstanding. One might think that each raft log disk contains one raft log, which would be straightforward; in fact a raft log disk and its raft logs are in a one-to-many relationship.
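For reference, the rename would look like the fragment below. Both the property string and the default value shown here are illustrative placeholders, not the values in the PR:

```java
// Illustrative sketch of the renamed config key the reviewer proposes.
// The property string and default are placeholders; check ScmConfigKeys
// in the PR for the actual values.
final class ScmConfigKeysSketch {
    static final String OZONE_SCM_PIPELINE_PER_METADATA_DISK =
        "ozone.scm.pipeline.per.metadata.disk";
    static final int OZONE_SCM_PIPELINE_PER_METADATA_DISK_DEFAULT = 2;
}
```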

##########
File path: hadoop-hdds/server-scm/src/main/java/org/apache/hadoop/hdds/scm/container/SCMContainerManager.java
##########
@@ -101,12 +103,23 @@ public SCMContainerManager(
     this.numContainerPerVolume = conf
         .getInt(ScmConfigKeys.OZONE_SCM_PIPELINE_OWNER_CONTAINER_COUNT,
             ScmConfigKeys.OZONE_SCM_PIPELINE_OWNER_CONTAINER_COUNT_DEFAULT);
+    this.numPipelinesPerRaftLogDisk = conf
+        .getInt(ScmConfigKeys.OZONE_SCM_PIPELINE_PER_RAFT_LOG_DISK,
+            ScmConfigKeys.OZONE_SCM_PIPELINE_PER_RAFT_LOG_DISK_DEFAULT);
 
     loadExistingContainers();
 
     scmContainerManagerMetrics = SCMContainerManagerMetrics.create();
   }
 
+  private int getOpenContainerCountPerPipeline(Pipeline pipeline) {
+    int totalContainerCountPerDn = numContainerPerVolume *
+        pipelineManager.getNumHealthyVolumes(pipeline);

Review comment:
       `pipelineManager.getMaxHealthyVolumeNum(pipeline) / pipelineManager.getMinPipelineLimit(pipeline)` may be more expressive.
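Spelled out, the suggestion divides a datanode's total open-container budget across the pipelines that share it. A self-contained sketch of that arithmetic (method and parameter names mirror the review suggestion but are illustrative; in `SCMContainerManager` the two inputs would come from `pipelineManager`):

```java
// Illustrative sketch of the suggested calculation: the per-pipeline open
// container count is the per-datanode budget divided by the number of
// pipelines sharing it.
public class ContainerBudget {

    public static int openContainerCountPerPipeline(
            int numContainerPerVolume,
            int maxHealthyVolumeNum,  // cf. pipelineManager.getMaxHealthyVolumeNum(pipeline)
            int minPipelineLimit) {   // cf. pipelineManager.getMinPipelineLimit(pipeline)
        int totalContainerCountPerDn = numContainerPerVolume * maxHealthyVolumeNum;
        // Guard against division by zero when the pipeline limit is unknown.
        return minPipelineLimit > 0
                ? totalContainerCountPerDn / minPipelineLimit
                : totalContainerCountPerDn;
    }

    public static void main(String[] args) {
        // e.g. 5 containers per volume, 4 healthy volumes, 2 pipelines -> 10
        System.out.println(openContainerCountPerPipeline(5, 4, 2));
    }
}
```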




----------------------------------------------------------------
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
users@infra.apache.org



---------------------------------------------------------------------
To unsubscribe, e-mail: ozone-issues-unsubscribe@hadoop.apache.org
For additional commands, e-mail: ozone-issues-help@hadoop.apache.org