Posted to common-issues@hadoop.apache.org by "hfutatzhanghb (via GitHub)" <gi...@apache.org> on 2023/06/26 03:19:30 UTC

[GitHub] [hadoop] hfutatzhanghb commented on a diff in pull request #5776: HDFS-17058. Some statements in testChooseReplicaToDelete method seems useless.

hfutatzhanghb commented on code in PR #5776:
URL: https://github.com/apache/hadoop/pull/5776#discussion_r1241467041


##########
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/blockmanagement/TestReplicationPolicy.java:
##########
@@ -977,7 +977,6 @@ public void testChooseReplicaToDelete() throws Exception {
 
     //Even if this node has the most space, because the storage[5] has
     //the lowest it should be chosen in case of block delete.
-    storages[4].setRemainingForTests(100 * 1024 * 1024);

Review Comment:
   Hi, @zhangshuyan0. Thanks a lot for reviewing. In the method TestReplicationPolicy#getDatanodeDescriptors, datanodes[5] has only two storages: "storages[5]" and "storages[5]-extra". It does not have storages[4]. So I think
   `storages[4].setRemainingForTests(100 * 1024 * 1024);` has no effect here. If we want to make datanodes[5]'s total remaining space the largest, we could set "storages[5]-extra" instead. What's your opinion?
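   To illustrate the point being discussed, here is a minimal, self-contained sketch (not Hadoop's actual implementation; `Storage` and `chooseReplicaToDelete` are hypothetical stand-ins) of the heuristic the test exercises: among candidate replicas, the one on the storage with the lowest remaining space is chosen for deletion, even if its datanode has the most total space overall.

```java
import java.util.Comparator;
import java.util.List;

// Toy model of the deletion heuristic in testChooseReplicaToDelete.
// Storage is a hypothetical stand-in for DatanodeStorageInfo: a name
// plus remaining bytes.
class ChooseReplicaSketch {
    record Storage(String name, long remaining) {}

    // Pick the replica on the storage with the least remaining space.
    static Storage chooseReplicaToDelete(List<Storage> candidates) {
        return candidates.stream()
                .min(Comparator.comparingLong(Storage::remaining))
                .orElseThrow();
    }

    public static void main(String[] args) {
        long mb = 1024 * 1024;
        List<Storage> candidates = List.of(
                new Storage("storages[1]", 2 * mb),
                new Storage("storages[5]", 512 * 1024),      // lowest remaining
                new Storage("storages[5]-extra", 200 * mb)); // makes datanodes[5] largest in total
        // storages[5] has the lowest remaining space, so it is chosen for
        // deletion even though its datanode has the most total space.
        System.out.println(chooseReplicaToDelete(candidates).name()); // → prints "storages[5]"
    }
}
```

   Under this model, setting a storage on a *different* datanode (as the removed `storages[4]` line does) cannot influence which of datanodes[5]'s storages is picked, which is the point of the review comment.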



-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: common-issues-unsubscribe@hadoop.apache.org

For queries about this service, please contact Infrastructure at:
users@infra.apache.org


---------------------------------------------------------------------
For additional commands, e-mail: common-issues-help@hadoop.apache.org