Posted to common-issues@hadoop.apache.org by GitBox <gi...@apache.org> on 2022/11/26 05:11:04 UTC

[GitHub] [hadoop] lfxy commented on a diff in pull request #5143: HDFS-16846. EC: Only EC blocks should be affected by max-streams-hard-limit configuration

lfxy commented on code in PR #5143:
URL: https://github.com/apache/hadoop/pull/5143#discussion_r1032739690


##########
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/DatanodeManager.java:
##########
@@ -1825,28 +1825,48 @@ public DatanodeCommand[] handleHeartbeat(DatanodeRegistration nodeReg,
     // Allocate _approximately_ maxTransfers pending tasks to DataNode.
     // NN chooses pending tasks based on the ratio between the lengths of
     // replication and erasure-coded block queues.
-    int totalReplicateBlocks = nodeinfo.getNumberOfReplicateBlocks();
-    int totalECBlocks = nodeinfo.getNumberOfBlocksToBeErasureCoded();
-    int totalBlocks = totalReplicateBlocks + totalECBlocks;
+    int replicationBlocks = nodeinfo.getNumberOfReplicateBlocks();
+    int ecReplicatedBlocks = nodeinfo.getNumberOfECReplicatedBlocks();
+    int ecReconstructedBlocks = nodeinfo.getNumberOfBlocksToBeErasureCoded();
+    int totalBlocks = replicationBlocks + ecReplicatedBlocks + ecReconstructedBlocks;
     if (totalBlocks > 0) {
-      int maxTransfers;
+      int maxReplicationTransfers = blockManager.getMaxReplicationStreams()
+              - xmitsInProgress;
+      int maxECReplicatedTransfers;
+      int maxECReconstructedTransfers;

Review Comment:
   Yes, it's right.
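
The diff above splits the DataNode's pending-transfer budget per queue so that only EC reconstruction work is capped by the hard limit, while plain replication uses the soft limit. A minimal Java sketch of that allocation idea follows; the method and parameter names are hypothetical stand-ins, not the actual patch:

```java
// Illustrative sketch (not the Hadoop patch): split the pending-transfer
// budget so that only EC reconstruction is capped by the hard limit,
// while replication work is capped by the soft limit.
public class TransferAllocationSketch {

    // maxReplicationStreams : soft limit (replication transfers)
    // hardLimit             : hard limit (EC reconstruction transfers)
    // xmitsInProgress       : transfers already running on the DataNode
    static int allocate(int maxReplicationStreams, int hardLimit,
                        int xmitsInProgress,
                        int replicationBlocks, int ecReconstructedBlocks) {
        // Remaining headroom under each limit, never negative.
        int maxReplication = Math.max(0, maxReplicationStreams - xmitsInProgress);
        int maxEcReconstruction = Math.max(0, hardLimit - xmitsInProgress);
        // Schedule at most as many tasks as each queue has headroom for.
        return Math.min(replicationBlocks, maxReplication)
             + Math.min(ecReconstructedBlocks, maxEcReconstruction);
    }

    public static void main(String[] args) {
        // Soft limit 2, hard limit 4, one transfer already in progress:
        // 1 replication slot + 3 EC reconstruction slots = 4 tasks.
        System.out.println(allocate(2, 4, 1, 5, 5)); // prints 4
    }
}
```

The point of the split is that a large backlog of EC reconstruction work can no longer starve (or be starved by) ordinary replication, because each queue is budgeted against its own limit.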



-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: common-issues-unsubscribe@hadoop.apache.org

For queries about this service, please contact Infrastructure at:
users@infra.apache.org


---------------------------------------------------------------------
For additional commands, e-mail: common-issues-help@hadoop.apache.org