Posted to common-issues@hadoop.apache.org by GitBox <gi...@apache.org> on 2022/01/14 02:43:13 UTC

[GitHub] [hadoop] Jackson-Wang-7 commented on a change in pull request #3880: HDFS-16420. Avoid deleting unique data blocks when deleting redundancy striped blocks.

Jackson-Wang-7 commented on a change in pull request #3880:
URL: https://github.com/apache/hadoop/pull/3880#discussion_r784470983



##########
File path: hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/blockmanagement/TestReplicationPolicy.java
##########
@@ -1018,6 +1018,69 @@ public void testChooseReplicaToDelete() throws Exception {
     assertEquals(chosen, storages[1]);
   }
 
+  /**
+   * Test that chooseReplicaToDelete works correctly under the
+   * EC (striped) block placement policy.
+   */
+  @Test
+  public void testStripedChooseReplicaToDelete() throws Exception {
+    List<DatanodeStorageInfo> replicaList = new ArrayList<>();
+    List<DatanodeStorageInfo> candidate = new ArrayList<>();
+    final Map<String, List<DatanodeStorageInfo>> rackMap
+        = new HashMap<>();
+
+    replicaList.add(storages[0]);
+    replicaList.add(storages[1]);
+    replicaList.add(storages[2]);
+    replicaList.add(storages[4]);
+
+    candidate.add(storages[0]);
+    candidate.add(storages[2]);
+    candidate.add(storages[4]);
+
+    // Refresh the last update time for all the datanodes
+    for (int i = 0; i < dataNodes.length; i++) {
+      DFSTestUtil.resetLastUpdatesWithOffset(dataNodes[i], 0);
+    }
+
+    List<DatanodeStorageInfo> first = new ArrayList<>();
+    List<DatanodeStorageInfo> second = new ArrayList<>();
+    stripedPolicy.splitNodesWithRack(replicaList, candidate, rackMap, first,
+        second);
+    // Only storages[0] is in the first set, since its rack holds two
+    // replicas; storages[2] and storages[4] are in the second set.
+    assertEquals(1, first.size());
+    assertEquals(2, second.size());
+    List<StorageType> excessTypes = new ArrayList<>();
+    {
+      // test returning null
+      excessTypes.add(StorageType.SSD);
+      assertNull(((BlockPlacementPolicyDefault) stripedPolicy)
+          .chooseReplicaToDelete(first, second, excessTypes, rackMap));
+    }
+    excessTypes.add(StorageType.DEFAULT);
+    DatanodeStorageInfo chosen = ((BlockPlacementPolicyDefault) stripedPolicy)
+        .chooseReplicaToDelete(first, second, excessTypes, rackMap);
+    // Within all storages, storages[5] with least free space

Review comment:
       I have modified it. Please review.
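
For readers following along without the Hadoop tree, here is a minimal
plain-Java sketch of the selection logic this test exercises. Storage and its
fields are hypothetical stand-ins, not Hadoop types; the real implementation
lives in BlockPlacementPolicyDefault and, as the test's refresh of datanode
update times hints, also weighs heartbeat age and the excessTypes filter. The
sketch models only the rack split and the least-free-space comparison.

    import java.util.*;

    // Hypothetical stand-in for DatanodeStorageInfo: a rack and remaining space.
    record Storage(String id, String rack, long remainingBytes) {}

    public class ChooseReplicaSketch {

      // Split candidates into those whose rack already holds more than one
      // replica (preferred for deletion) and those alone on their rack,
      // in the spirit of splitNodesWithRack above.
      static void splitByRack(List<Storage> replicas, List<Storage> candidates,
          Map<String, List<Storage>> rackMap,
          List<Storage> moreThanOne, List<Storage> exactlyOne) {
        for (Storage s : replicas) {
          rackMap.computeIfAbsent(s.rack(), r -> new ArrayList<>()).add(s);
        }
        for (Storage c : candidates) {
          if (rackMap.getOrDefault(c.rack(), List.of()).size() > 1) {
            moreThanOne.add(c);
          } else {
            exactlyOne.add(c);
          }
        }
      }

      // Prefer deleting from a rack holding multiple replicas; within the
      // chosen pool, pick the storage with the least remaining space. An
      // empty pool yields null, mirroring the "test returning null" branch,
      // where no candidate matches the excess storage type.
      static Storage chooseReplicaToDelete(List<Storage> moreThanOne,
          List<Storage> exactlyOne) {
        List<Storage> pool = moreThanOne.isEmpty() ? exactlyOne : moreThanOne;
        return pool.stream()
            .min(Comparator.comparingLong(Storage::remainingBytes))
            .orElse(null);
      }

      public static void main(String[] args) {
        // Mirrors the test layout: s0 and s1 share a rack; s2 and s4 are
        // alone on theirs. Candidates are s0, s2 and s4.
        List<Storage> replicas = List.of(
            new Storage("s0", "/r1", 100L), new Storage("s1", "/r1", 200L),
            new Storage("s2", "/r2", 300L), new Storage("s4", "/r3", 50L));
        List<Storage> candidates =
            List.of(replicas.get(0), replicas.get(2), replicas.get(3));
        List<Storage> first = new ArrayList<>();
        List<Storage> second = new ArrayList<>();
        splitByRack(replicas, candidates, new HashMap<>(), first, second);
        System.out.println(first.size() + " / " + second.size());      // 1 / 2
        System.out.println(chooseReplicaToDelete(first, second).id()); // s0
      }
    }

In the real test, the excessTypes list gates all of this: with only
StorageType.SSD in the list no candidate qualifies and the method returns
null, while adding StorageType.DEFAULT lets the free-space comparison run.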




-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: common-issues-unsubscribe@hadoop.apache.org

For queries about this service, please contact Infrastructure at:
users@infra.apache.org


